Saturday, 2018-12-15

00:01 <tristanC> pabelanger: clarkb: shouldn't we report each RETRY_LIMIT to the SQL reporter for Paul's use case?
00:01 <clarkb> tristanC: we report the last failure as RETRY_LIMIT. The problem is the earlier failures, which may or may not result in a retry limit
00:02 <tristanC> I meant each failure resulting in a retry limit, yes
00:03 <clarkb> by default you get 3 attempts, so if the first two fail and the third passes you won't get that information
00:03 <clarkb> with OpenStack we do sort of get that through the tracking with Elasticsearch, but Paul doesn't have that
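For context, the retry behavior under discussion is controlled by Zuul's per-job `attempts` attribute. A minimal sketch (job name hypothetical):

```yaml
# If a job fails in a way Zuul considers retryable (e.g. a pre-run
# playbook failure), it is rerun up to `attempts` times. Only when the
# final attempt also fails is the build reported as RETRY_LIMIT, so the
# earlier failed attempts never reach the SQL reporter on their own.
- job:
    name: example-integration-test   # hypothetical job name
    attempts: 3                      # Zuul's default, shown explicitly
```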
00:06 <tristanC> about Elasticsearch, we would like to propose a logstash role as described in this story: https://tree.taiga.io/project/morucci-software-factory/us/2103
00:06 <tristanC> that is, to replace the one we are currently using: https://review.openstack.org/537847
00:08 <clarkb> tristanC: it would be helpful if we could track those things upstream. I agree having a more "normal" logstash submission role would be useful
00:08 <clarkb> tristanC: I do think there is merit to the existing system (not having to block on it, since it runs fully async in the background, is a big one)
00:08 <clarkb> but offering both would be good
00:11 <clarkb> tristanC: maybe make it a toggle on the submit-log-logstash job role. The default would be to write to a TCP input (or similar), and a flag could switch to submitting a gearman job as OpenStack does
00:11 <clarkb> one issue is there are a lot of logstash input options, but TCP is likely a "safe" one here
00:12 <tristanC> I agree, these could be implemented in a single role
00:13 <clarkb> another option is separate roles, which might help avoid confusion
00:13 <clarkb> (why is there this gearman option?)
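A rough sketch of the single-role-with-a-toggle idea; every name here (role layout, variables, helper scripts) is hypothetical rather than an existing role's API:

```yaml
# roles/submit-logstash-logs/defaults/main.yaml (hypothetical role)
logstash_submit_method: tcp        # 'tcp' (default) or 'gearman'
logstash_server: logstash.example.com
logstash_tcp_port: 9999

# roles/submit-logstash-logs/tasks/main.yaml
- name: Submit log events directly to a logstash TCP input
  # Assumes a helper script that writes JSON events to the socket and a
  # matching `tcp { codec => json_lines }` input on the logstash side.
  command: >-
    submit-logs-tcp.py --server {{ logstash_server }}
    --port {{ logstash_tcp_port }}
  when: logstash_submit_method == 'tcp'

- name: Submit a gearman job so processing stays fully async, as OpenStack does
  # Script name is illustrative of OpenStack's gearman-based log processing.
  command: >-
    submit-log-processor-jobs.py --gearman-server {{ logstash_server }}
  when: logstash_submit_method == 'gearman'
```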
00:20 <tristanC> another improvement would be to report role usage information. It seems like this is already part of job-output.json, so we could annotate each logstash event with the role name
00:22 <tristanC> and if the console stream could include that information, we could really prettify the stream web page by grouping the lines per role
00:23 <clarkb> right now it only has the play and task info, right?
00:23 <tristanC> clarkb: there is a role dictionary attached to each task
00:25 <tristanC> granted, it doesn't tell you the source context, but it would still be useful if we could filter kibana queries with it :)
00:30 <tristanC> e.g. curl http://logs.openstack.org/72/599472/8/check/tox-pep8/91380a0/job-output.json.gz | gzip -d | grep '"role":' -A 3 | grep '"name":'
00:31 <clarkb> ya, the info is available but not exposed in the console stream iirc. I do agree that it would be helpful though
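For reference, the shape of the data that pipeline extracts, rendered as YAML for readability (job-output.json itself is JSON, and fields other than the role's "name" are assumptions based on the grep above):

```yaml
- play:
    name: localhost
  tasks:
    - task:
        name: Run tox            # illustrative task
      role:
        name: tox                # the role name tristanC proposes tagging events with
        path: /path/to/zuul-jobs/roles/tox   # assumed field
```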
02:24 *** persia has quit IRC
02:24 *** persia has joined #zuul
03:03 <mrhillsman> i got some jobs running where i know 100% the change is not touching anything relevant, and i want to kill them
03:03 <SpamapS> hrm.. seeing more weirdness in post jobs
03:03 <mrhillsman> not sure if dequeue is the right command, but last time i used it things went haywire
03:03 <SpamapS> in theory dequeue works.
03:04 <mrhillsman> ok, will try it again :)
03:04 <SpamapS> In reality, killing the ansible-playbook process is the one I've used in the past.
03:04 * mrhillsman crosses fingers
03:06 <mrhillsman> hrm... not sure i trust the client
03:06 <mrhillsman> zuul show fails
03:06 <mrhillsman> guess i'll just let it ride
03:11 <SpamapS> Hm.. not sure I understand how branch matchers work with github push events.
03:12 <SpamapS> Also variants. I have a job with 'branches: master' and another similar job with 'branches: prod', and it seems like the *post* playbooks for both branches run
03:12 <SpamapS> http://zuullogsgdmnyco.s3-website-us-west-2.amazonaws.com/db/db4d83d8b94528015a66153e910970c98486e3ef/post/gmapidev-post/0905c1c/job-output.txt
03:14 <SpamapS> actually the pre's run that way too
03:14 <SpamapS> but the vars are all from the 'master' version of the job
03:14 * SpamapS so confused
03:22 <mrhillsman> not sure i can help, being so new to using zuul
03:23 <mrhillsman> nor do i know what you are trying to do that you are not able to
03:24 <mrhillsman> i do not actually see a question in your text, but again that could be because i am new and do not have the expected context to see/understand
03:44 *** bhavikdbavishi has joined #zuul
03:57 *** sean-k-mooney has quit IRC
04:04 *** sean-k-mooney has joined #zuul
04:47 *** bhavikdbavishi has quit IRC
04:54 *** fbo has quit IRC
05:13 *** bhavikdbavishi has joined #zuul
05:18 *** bhavikdbavishi has quit IRC
05:21 <SpamapS> mrhillsman: thanks... I'm mostly just bitching. ;)
05:21 <mrhillsman> hehe ok
05:52 *** bhavikdbavishi has joined #zuul
06:04 *** bhavikdbavishi has quit IRC
06:08 *** bhavikdbavishi has joined #zuul
06:41 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool master: Implement an OpenShift resource provider  https://review.openstack.org/570667
07:35 *** bhavikdbavishi has quit IRC
08:00 *** pcaruana has joined #zuul
08:06 <SpamapS> hrm, this really does just make no sense at all to me. :-P
08:36 *** bhavikdbavishi has joined #zuul
09:30 *** bhavikdbavishi has quit IRC
10:58 *** bhavikdbavishi has joined #zuul
11:34 *** bhavikdbavishi has quit IRC
11:54 *** dkehn has quit IRC
12:20 *** bhavikdbavishi has joined #zuul
12:23 *** dkehn has joined #zuul
12:44 *** goern has quit IRC
13:28 *** bhavikdbavishi has quit IRC
14:28 *** ssbarnea|rover has quit IRC
14:30 *** ssbarnea|rover has joined #zuul
15:40 *** bhavikdbavishi has joined #zuul
15:58 *** bhavikdbavishi has quit IRC
16:14 *** goern has joined #zuul
16:34 *** bhavikdbavishi has joined #zuul
16:46 *** bhavikdbavishi has quit IRC
23:48 <SpamapS> So I guess what I'm experiencing has to do with branch matching. I have two identical copies of the job, one in prod, one in master, and as such, both are running against the push to master. So.. do I have to remove the job from the `prod` branch? That seems... very unexpected.
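To make that concrete, a hypothetical reconstruction of the two copies (the job name comes from the log URL above; playbooks and vars are invented):

```yaml
# On the master branch of the config repo:
- job:
    name: gmapidev-post
    branches: master
    post-run: playbooks/post-master.yaml   # hypothetical playbook
    vars:
      deploy_target: dev                   # hypothetical var

# On the prod branch of the same repo:
- job:
    name: gmapidev-post
    branches: prod
    post-run: playbooks/post-prod.yaml
    vars:
      deploy_target: prod
```

With distinct explicit matchers like these, only the variant matching the item's branch should apply to a push; the behavior SpamapS describes, where both variants' pre and post playbooks run but the master copy's vars win, is exactly what the question above is trying to untangle.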
