Tuesday, 2018-07-03

tristanCcorvus: i worry that fixing the GET request on '/zuul/', or even '/', for multi-tenant is going to be tricky. It seems like we need to change the routing strategy to call a speculative 'api/info' endpoint so we can check whether it's a white-label tenant or not00:53
tristanCit's either that (an extra http call), or we document that multi-tenant deployments need to redirect /zuul/ to /zuul/t/tenants.html00:54
tristanCi would prefer the latter, so doing a 3.1.1 release with 57941800:54
tristanCi mean, i already switched back to master tracking in order to get the min_hdd_avail sensor and other fixes... so i don't mind waiting longer for a release, but either way that's not ideal00:57
tristanCanother strategy would be to drop the http path <-> component relationship and switch to what storyboard is doing, e.g.: redirect everything to a generic index.html, and manage navigation using the "#!" anchor00:58
tristanCwell there is another option, we could make the entrypoint redirect to t/tenants.html by default, and white-label setups can be configured to redirect to status.html...01:14
tristanClet me start a zuul-discuss thread01:14
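For concreteness, here is a minimal cherrypy sketch of the entrypoint-redirect option tristanC describes; the RootRedirect class and the white_label flag are hypothetical illustrations, not zuul-web's actual code.

    import cherrypy

    class RootRedirect:
        def __init__(self, white_label=False):
            # white-label deployments serve a single tenant, so land on the
            # status page; multi-tenant deployments land on the tenant list
            self.target = 'status.html' if white_label else 't/tenants.html'

        @cherrypy.expose
        def index(self):
            raise cherrypy.HTTPRedirect(self.target)

    cherrypy.quickstart(RootRedirect(), '/zuul')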
*** hwoarang has quit IRC02:10
*** bhavik1 has joined #zuul04:04
*** threestrands_ has quit IRC04:25
*** bhavik1 has quit IRC05:11
*** hwoarang has joined #zuul05:14
*** Rohaan has joined #zuul05:23
tobiashcorvus: yesterday we had a case where zuul tried to execute a post playbook that had been added to a trusted base job by an unmerged PR; other jobs then tried to run it and failed because they couldn't find the playbook06:24
tobiashI have no clue how this could happen and the logs don't seem to help me either06:24
*** nchakrab has joined #zuul06:24
tobiashcould it be that we still have a subtle bug in the caching and the PR modified the master version of the base job?06:25
*** gtema has joined #zuul06:33
*** hashar has joined #zuul06:39
*** Rohaan has quit IRC06:48
*** openstackgerrit has quit IRC06:49
*** gtema has quit IRC06:59
*** gtema has joined #zuul07:00
*** openstackgerrit has joined #zuul07:28
openstackgerritMerged openstack-infra/nodepool master: launcher: add pool quota debug and log information  https://review.openstack.org/57904807:28
*** jpena|off is now known as jpena07:52
tobiashcorvus: according to the logs we have at least one case where a non-merged base job description got into the active layout, even in a different tenant08:00
*** tobiash has quit IRC08:14
*** tobiash has joined #zuul08:16
*** electrofelix has joined #zuul08:26
*** Rohaan has joined #zuul08:34
tobiashcorvus: I can think of two ways this could happen: either a bug in the shared config caching, or maybe some side effect of concurrently running cat and execute jobs on the executor.09:01
tobiashcorvus: at least right before that happened I see a cat job for that repo in the scheduler log09:01
tristanCon a similar topic, we also had a weird issue where a job was running twice on the same nodeset, perhaps a bug in the ansible retry code that leaked the first thread or something...09:48
tobiashtristanC: was that github or gerrit?09:50
tobiashAh nodeset not pipeline?09:50
tristanCit was with a gerrit change, but i don't think it was related09:51
tristanCtobiash: yes, a single executor picked the job, and it failed with odd anomalies; in journald there was clearly a parallel execution of the same run playbook09:51
tristanCthe nodeset's messages file was: https://softwarefactory-project.io/logs/87/12787/1/gate/sf-ci-functional-minimal/6330df5/logs/managesf.sfdomain.com/var/log/messages , each ansible log is repeated twice with a 5 second delay09:57
*** jpena is now known as jpena|lunch11:29
*** elyezer has quit IRC12:07
*** jpena|lunch is now known as jpena12:28
*** rlandy has joined #zuul12:30
*** elyezer has joined #zuul12:39
*** Rohaan has quit IRC12:53
openstackgerritTobias Henkel proposed openstack-infra/zuul master: Fix zuul startup with inexisting project template and gate  https://review.openstack.org/57985912:55
*** nchakrab_ has joined #zuul13:27
*** nchakrab_ has quit IRC13:27
*** nchakrab_ has joined #zuul13:28
*** nchakrab has quit IRC13:31
*** nchakrab has joined #zuul13:32
*** nchakrab_ has quit IRC13:32
openstackgerritTobias Henkel proposed openstack-infra/zuul master: Fix zuul startup with inexisting project template and gate  https://review.openstack.org/57985913:39
*** nchakrab has quit IRC13:45
*** nchakrab has joined #zuul13:45
*** elyezer has quit IRC13:47
openstackgerritTobias Henkel proposed openstack-infra/zuul master: Fix zuul startup with inexisting project template and gate  https://review.openstack.org/57985913:57
openstackgerritTobias Henkel proposed openstack-infra/zuul master: Tolerate missing project  https://review.openstack.org/57987213:57
openstackgerritDavid Moreau Simard proposed openstack-infra/zuul-jobs master: DNM: test persistent-firewall job  https://review.openstack.org/57987414:00
openstackgerritDavid Moreau Simard proposed openstack-infra/zuul-jobs master: DNM: test persistent-firewall job  https://review.openstack.org/57987414:01
*** jiapei has joined #zuul14:01
corvustristanC: why is the /zuul/ url an issue with the angular change?14:14
mordredcorvus: it's related to the routing stuff14:17
*** ianychoi has quit IRC14:18
corvusmordred: ack14:19
mordredtristanC: I actually have been planning on adding the GET /info call you mentioned and just haven't quite gotten to it yet14:19
*** ianychoi has joined #zuul14:19
mordredtristanC: because of where the routing table is declared, I've gotten myself stuck thinking about the 'right' way to accomplish that14:19
corvustobiash: there's a lock in the executor so it shouldn't run a cat job (or any merger job) at the same time it's cloning out of the cache for a job14:19
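The serialization corvus describes might look roughly like this illustrative sketch (one lock per repo shared by merger operations and job-workspace clones; the names and structure are hypothetical, not Zuul's actual code):

    import threading
    from collections import defaultdict

    repo_locks = defaultdict(threading.Lock)

    def run_cat_job(project):
        with repo_locks[project]:
            # read file contents out of the cached repo
            ...

    def clone_for_job(project, workspace):
        with repo_locks[project]:
            # clone from the cache into the job workspace; this cannot
            # interleave with a cat job on the same repo due to the lock
            ...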
mordredtristanC: because I *want* the call to be defined in the ZuulService class, but we don't have access to the injectable instance of that class (I don't think) in the right place to be able to call it to get the answer for that routing table14:21
mordredbut that's one of the main reasons for adding that /info call - so that the dashboard could make a call against it and not have to guess so many things based on the urls and whatnot14:22
mordredcorvus: oh - also - there's a thing in the /info call that either never worked properly or was broken by the cherrypy patches14:23
mordredcorvus: if you look at http://zuul.openstack.org/api/info - you'll see that capabilities.job_history is false - even though we have the sql driver enabled and thus should have job history support14:24
Shrews /c 214:24
Shrewsdoh14:24
mordredcorvus: I think the original _intent_ was that *something* in the driver would set the flag to true14:24
mordredcorvus: I don't know if that was working properly in the old version of the code - but it certainly isn't being set by anything now - and on a brief look I wasn't 100% sure the best way to fix14:25
corvusmordred: yep, i broke it14:26
corvusit was being set as a side effect of getting the sql handler14:27
mordredcorvus: cool!14:28
corvusmordred: job_history was global, but should be tenant-specific14:28
mordredoh - hrm. that's very interesting14:29
mordredcorvus: I suppose the most correct version would be based on whether or not the tenant has any pipelines that are configured to report to the sql driver14:30
mordredcorvus: although - just _having_ the sql driver enabled globally will make the sql queries in the rest api work - even though a given tenant might not have any data if it's not reporting to the db14:31
corvusmordred: right.  that's how the /builds api call works now.  it always exists, but if you call it and there's no pipeline configured with a sql reporter, then it raises an exception (so it should return 500)14:32
corvus(no pipeline configured with a sql reporter in that tenant)14:32
mordredyah14:32
corvuswe could have it return [] in that case.  or, we could only add job_history to the tenant info endpoint.14:34
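A hedged sketch of the two options just discussed: have builds return [] instead of raising, and compute the job_history capability per tenant. has_sql_reporter() and query_builds() are hypothetical helpers, not Zuul's real API.

    import cherrypy

    class TenantAPI:
        def __init__(self, tenant):
            self.tenant = tenant

        @cherrypy.expose
        @cherrypy.tools.json_out()
        def builds(self):
            if not self.tenant.has_sql_reporter():
                return []  # rather than raising and returning a 500
            return self.tenant.query_builds()

        @cherrypy.expose
        @cherrypy.tools.json_out()
        def info(self):
            # job_history reflects this tenant's pipelines, not a global flag
            return {'info': {'capabilities': {
                'job_history': self.tenant.has_sql_reporter()}}}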
mordredI think the question from the dashboard pov would be whether or not we want to show the builds link in the navbar if the tenant doesn't have history14:35
mordredbecause we can certainly have the builds page just have no data - or a 'this tenant doesn't have history' message14:36
corvusyeah, for that, i think we need it to hit the tenant info api endpoint.  does it do that if you're non-whitelabel?  or does it only ever hit one info endpoint?14:36
mordredright now it hits no info endpoints14:36
mordredbut we need to get it to - so we can define the right thing for it to do14:36
corvusack14:37
*** hashar is now known as hasharAway14:37
corvustobiash: have you seen any config contamination issues within the same tenant?14:37
corvus(i should say, "within a single tenant")14:38
tobiashcorvus: I don't think it's related to any particular tenant; all tenants were contaminated14:56
corvustobiash: sorry, my question was trying to get at whether a multi-tenant environment is required to trigger the bug14:57
*** nchakrab has quit IRC14:57
corvusi asked it poorly14:57
tobiashcorvus: I still have no clue how this happens but I don't think a multi-tenant env is necessary14:57
corvustobiash: if you have any logs you can share, maybe i can help narrow down hypotheses14:59
tobiashI wasn't able to create a test case to reproduce so far and I couldn't spot an issue by looking at cache handling and executor locks14:59
tobiashcorvus: unfortunately I don't have the executor logs of that timeframe but I saw a cat job to that repo before the error started to happen15:01
tobiashit broke the base job and a manually triggered reconfiguration fixed it15:01
tobiashone of the broken jobs contains this:15:02
tobiash2018-07-02 15:48:12,789 DEBUG zuul.layout: Variant <Job base-cilib branches: None source: codecraft/zuul-conf-global/zuul.d/jobs.yaml@master#123> matched <Change 0x7f1d52094c50 16,b64a68d2622968dc9947a2002aade1101aa41931>15:02
tobiashwhere the source line indicates that it matched the version of the PR and not master15:03
tobiash(the pr version of that repo)15:12
*** weshay|ruck is now known as weshay15:29
*** sshnaidm is now known as sshnaidm|rover15:29
corvustobiash: we got a report last week that the source contexts of our base jobs are wrong.  eg: http://logs.openstack.org/59/579859/3/check/tox-py35/7108831/zuul-info/inventory.yaml15:34
corvustobiash: 'base' is not defined in zuul-jobs.yaml, it's in jobs.yaml15:35
corvustobiash: but so far, i haven't seen an indication that the jobs themselves are affected.  this may be related, or it may be that you've seen two separate problems.15:36
corvusoh, i guess it's job.start_mark, not the source context that gives us those line numbers15:42
corvuser, no it's both.  the line number comes from the start_mark, but the rest comes from the source_context15:46
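A toy illustration of corvus's point about where the pieces of the "<Job ...> matched ..." log line come from: the project, file, and branch come from the source_context while the line number comes from the YAML start_mark. These are simplified stand-ins, not Zuul's real classes.

    class SourceContext:
        def __init__(self, project, branch, path):
            self.project = project
            self.branch = branch
            self.path = path

    def describe_variant(job_name, source_context, start_mark_line):
        # A stale source_context combined with a fresh start_mark yields a
        # message with the right line number but the wrong filename, which
        # matches the symptom observed here.
        return '<Job %s source: %s/%s@%s#%d>' % (
            job_name, source_context.project, source_context.path,
            source_context.branch, start_mark_line)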
tobiashcorvus: oh, that's interesting information15:56
tobiashcorvus: because at least in our case I'm pretty sure that the source context was correct (in terms of line number) but the config itself wasn't15:57
tobiashcorvus: that might indicate these issues are the same15:58
corvustobiash: it looks like the errors we're seeing are that the line number is correct (start_mark) but the filename is wrong (source_context)15:59
openstackgerritLogan V proposed openstack-infra/zuul-jobs master: bindep: Ensure virtualenv is present  https://review.openstack.org/57990616:00
tobiashoh, ok, that seems different16:00
tobiashcorvus: it would be interesting to see whether your source context changes after a full reconfiguration16:01
corvustobiash: in both cases, we're observing correct line numbers.  then in my case, i'm observing an incorrect filename and you're observing incorrect content.  right?16:02
corvusi'm looking in our logs for instances of the *correct* filenames, and i see some, but far fewer than the incorrect ones.  the one i'm looking at right now appears to be for a project-config change which encountered a config error but did not report it because it didn't think the source context matched: https://review.openstack.org/57969016:02
tobiashyes16:02
corvusi'm concerned that our config-errors api endpoint is not returning data16:09
corvusthat is to say, the request has not returned after several minutes16:10
tobiashthe config-errors api doesn't return?16:13
tobiashcorvus: is that a new api that needs a scheduler restart?16:19
*** yolanda has joined #zuul16:24
*** elyezer has joined #zuul17:11
Shrewscorvus: ah ha. i see the issue from yesterday. the shade api is being used improperly. tl;dr we should not be sending it a dict; we should just call get_image_by_id() instead17:14
Shrewscorvus: within shade itself, if you send get_image() an *object* that has an 'id' attribute, it assumes that object is already the thing you want17:15
Shrewscorvus: we send it a dict() assuming it will do the lookup by id. it only does that if the thing you send it "looks like" a uuid17:17
Shrewsso i think my original fix may do what we intend. validating...17:17
Shrewscorvus: confirmed, but it requires 'use_direct_get: true' to be set in our clouds.yaml.17:25
Shrewsotherwise it does a list search as normal17:25
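A simplified sketch of the shade lookup behavior Shrews describes; this paraphrases the logic rather than quoting shade's actual _get_entity code, and the callables passed in are hypothetical.

    def get_entity(name_or_id, use_direct_get, looks_like_uuid,
                   get_by_id, search):
        if hasattr(name_or_id, 'id'):
            # An object with an 'id' attribute is assumed to already be the
            # entity you want. A plain dict has no attributes, so a
            # dict(id=...) falls through to the lookups below.
            return name_or_id
        if use_direct_get and looks_like_uuid(name_or_id):
            # Only with use_direct_get: true and a uuid-looking value does
            # this do a direct GET /images/{id}.
            return get_by_id(name_or_id)
        # Otherwise: list everything (GET /images) and filter locally.
        return search(name_or_id)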
mordredShrews: yes - and we don't want use_direct_get: true in our clouds.yaml because batching17:29
Shrewsand we do not set that clouds option. might be better to just change nodepool to call get_image_by_id directly17:29
Shrewsmordred: so much batching17:30
mordredShrews: fwiw, I would expect get_image to work with a dict with an id key - I feel like I fixed that as a bug at some point recently17:30
Shrewsmordred: it does not17:30
Shrewsmordred: hasattr does not work on a plain dict17:30
mordredlike, get_image(dict(id='asdf')) should return the dict you pass in without making any remote calls17:30
mordredso that's a bug17:30
Shrewsmordred: i don't think we want that though17:30
mordredwhat are we looking for? I may be missing context17:31
Shrewswhat good is that dict() to the calling code?17:31
mordreddepends on what we're trying to do?17:31
Shrewsmordred: that code is an "optimization" saying "you've already sent me the thing you're looking for"17:31
mordredyah17:31
mordredthat is true17:32
Shrewsif we send it a full Image() object, great17:32
Shrewsif we send it an empty dict(), we don't have the full object17:32
Shrewsso returning it back is useless17:32
mordredwell - it depends on where it is in the flow - but I agree with you that if what you are looking for is the full object and you have an id, then returning a dict with only the id is typically useless17:33
Shrewsmordred: i feel like that's only useful within shade itself. for users of shade, they need use_direct_get17:33
*** electrofelix has quit IRC17:34
mordredit's useful in other places - create_server('foo', image=dict(id='asdf')) is useful if you know you have an id and you don't want the create_server call to do a lookup for you to find the image id to pass to nova17:35
*** jpena is now known as jpena|off17:35
mordredbut I'm not sure what use_direct_get has to do with it? use_direct_get just controls GET /images | local_filter vs GET /images/{id} ... but I'm totally jumping in half-way and am not sure what problem you're working on solving?17:36
* mordred is likely being unhelpful17:36
Shrewsmordred: actually, for the nodepool use, it would work as you describe if we did just return the dict()... it only wants the image id17:37
mordredShrews: ah - https://review.openstack.org/#/c/579664 right?17:37
Shrewsmordred: yeah17:37
mordredyah. the external property is returning something suitable for passing to create_server without an extra lookup17:38
mordredwell - it would be if the dict thing was working17:38
Shrewsyeah17:38
mordredyou could work around it in nodepool for the moment by returning a munch instead of a dict from that method17:38
mordredbut I do think we should fix the code in shade/sdk17:39
Shrewsmordred: _get_entity() would be the thing to fix in shade. but yeah, we can just pass it an object instead of dict in nodepool17:39
mordredyup to both17:40
Shrewsno need to get munch involved as a dependency17:40
mordredShrews: something like this:17:40
mordred    if (hasattr(name_or_id, 'id')17:41
mordred            or (isinstance(name_or_id, dict) and 'id' in name_or_id)):17:41
mordredright?17:41
Shrewsmordred: exactly17:41
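A self-contained version of the two-line fix mordred pastes above: treat a dict carrying an 'id' key the same as an object with an 'id' attribute, so callers like nodepool can pass dict(id=...) without triggering a remote lookup. The helper name is hypothetical.

    def already_resolved(name_or_id):
        return (hasattr(name_or_id, 'id')
                or (isinstance(name_or_id, dict) and 'id' in name_or_id))

    assert already_resolved({'id': 'asdf'})   # dict with id: no lookup needed
    assert not already_resolved('asdf')       # plain string: do the lookup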
Shrewsi might just have nodepool call get_image_by_id() directly. this whole passing a dict() vs. str stuff is a wonky API that causes confusion17:43
mordredShrews: just do it at the top and cache the result?17:45
Shrewsoh don't even need that call, actually17:46
*** gtema has quit IRC17:48
mordredShrews: well, you do if caching isn't turned on - if you pass that image-id into create_server's image argument without wrapping it in a dict you lose the fact that it's an id and create_server will helpfully do a roundtrip it doesn't need17:49
Shrewsbah17:49
mordredyah17:50
openstackgerritTobias Henkel proposed openstack-infra/zuul master: Tolerate missing project  https://review.openstack.org/57987217:50
Shrewsmordred: i mean, i was going to change that too17:50
Shrewsbasically steal your _is_uuid_like() code from shade17:51
mordredShrews: well ... you can't quite do that though17:51
mordredShrews: glance does not enforce that image ids are uuid iirc17:52
Shrewsmordred: so shade would be similarly broken17:52
mordredI think your idea of doing a get_image_by_id call is a good one - as long as you do it at a $time when you can cache the results17:53
Shrewsmordred: that's not a thing we need to do though. we just need the id (which we already have)17:54
Shrewswe're just using shade as a way to say "yep, you have an id already"17:54
Shrewswe could do that ourselves17:54
mordredright - but you don't need is_uuid_like for that17:54
Shrewstrue17:55
mordredwe know, because the config setting is 'image-id'17:55
Shrewswe can get that based... yah17:55
mordredShrews: do you want to stab me yet?17:55
Shrewsmordred: nope, i got to that same place, just microseconds behind you17:56
mordredShrews: darn. I'll try harder next time17:56
corvustobiash: yep.  we somehow restarted between the change adding support for loading with config errors and the change which exposed them through the api.  which means we're blind.  :(18:05
tobiashwhen is your next scheduler restart planned?18:10
corvustobiash: i'll do it in about 2 hours since i think i need that in order to continue looking at this bug (or bugs)18:10
*** yolanda_ has joined #zuul18:18
tobiashcorvus: what do you think about limiting the history of the repos on the mergers and executors?18:19
corvustobiash: we should remove the zuul refs and let git gc take care of it18:20
tobiashone of my users wants to add a repo that has 2.5 million commits and takes 3gb and 25 minutes to clone (most of the time spent resolving deltas)18:20
corvusoh, like shallow clones?18:20
tobiashyes18:20
*** yolanda has quit IRC18:20
tobiashcurrently the merger has a hard coded limit of 5 minutes for clone operations18:21
tobiashthat repo cannot be handled currently with zuul18:21
tobiashand tbh I don't want to raise that timeout to 30 minutes18:21
corvustobiash: it's part of the design that the repo that appears in the job is what you would get if you clone.  without the full repo on the executor, that won't happen.18:21
corvustobiash: this will take some thought.  i have to run now though.18:22
tobiashok18:22
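As an illustration of what tobiash is floating, a shallow clone with plain git via subprocess; this helper is hypothetical, Zuul does not do this today, and as corvus notes it would break the guarantee that the repo a job sees matches a full clone.

    import subprocess

    def shallow_clone(url, dest, depth=100):
        # --depth truncates history to the last <depth> commits, avoiding
        # the multi-minute delta resolution on a 2.5M-commit repo.
        subprocess.check_call(['git', 'clone', '--depth', str(depth),
                               url, dest])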
*** yolanda__ has joined #zuul18:27
*** yolanda_ has quit IRC18:30
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool master: Add test for referencing cloud image by ID  https://review.openstack.org/57970219:33
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool master: Fix for referencing cloud image by ID  https://review.openstack.org/57966419:33
Shrewscorvus: mordred: verified 579702 using my personal vexxhost account so appears to fix the problem19:43
corvusShrews: that's the one that just adds the test?19:44
Shrewscorvus: oh, the other one19:44
mordredShrews: it fixed it with current shade?19:45
Shrewsmordred: yes19:45
Shrewsmordred: i can put up the shade/sdk fix next if you haven't already19:46
mordredShrews: I think I :q!-ed it ... so if you've got it, awesome :)19:46
corvusShrews: so the createServer part worked, but not getImage?19:47
Shrewscorvus: right19:48
corvusokay i think that all makes sense to me now :)19:49
Shrewscorvus: the labelReady change fixes the problem reported in storyboard. the handler changes just copy the pattern19:49
Shrewsnow, who can i send my vexxhost bill to??  ;)19:51
*** AJaeger has joined #zuul20:00
*** elyezer has quit IRC20:02
openstackgerritJames E. Blair proposed openstack-infra/zuul master: WIP add repl  https://review.openstack.org/57996220:08
mordredShrews: I always just bug mnaser ^^20:08
mnaserhi20:09
mnasereverytime i see a highlight in #zuul i assume javascript20:09
mordred"vexxhost: used to diagnose, test and fix all shade/openstacksdk issues"20:09
mordredmnaser: hehe20:09
Shrewsmordred: mnaser: it's all good. probably only cost me a few cents20:12
*** yolanda_ has joined #zuul20:14
mnaseri always worry that y'all test against our stuff20:14
mnaserand we have something broken somehow20:14
mnaserhaha20:14
Shrewsmnaser: we test against yours b/c it works the best20:15
mnaserwoo20:15
Shrewsi mean, who wants to find MORE bugs while fixing bugs?20:15
mnaseri have 11 hours of planes tomorrow so maybe dashboard hacking20:16
*** yolanda has joined #zuul20:16
*** yolanda__ has quit IRC20:18
*** yolanda_ has quit IRC20:18
*** yolanda_ has joined #zuul20:19
*** yolanda has quit IRC20:22
pabelangerfinger urls :)20:23
*** yolanda__ has joined #zuul20:24
mnaseroooh20:24
mnaserthat might be a fun one20:24
*** yolanda_ has quit IRC20:24
corvustobiash: okay, restarted and confirmed our broken config is running; that explains some of the behavior i was seeing earlier, and confirms that there's a bug somewhere that caused us not to report on a breaking config error.  i strongly suspect it's related to the source_context not being correct (since that would cause zuul to suppress a config error report)20:26
corvusi will try to confirm that next, and continue to follow leads to try to find a root cause for that, also with the hope that there's a suggestion as to how that could run the wrong content20:27
corvussince the restart, *only* the wrong path has shown up in the logs.  never the correct path.  i find that interesting and surprising.  but also, it gives me hope.  :)20:27
mordredmnaser: also - check out the mailing list message from tristanC and my response to it ... there's a "fun" task related to setting up angular routing differently based on the results of api/info20:28
mordredmnaser: I'm going to take a stab at it - but if you got bored and did it instead I wouldn't complain :)20:28
mordredalthough also pabelanger's finger urls would be pretty awesome20:28
tobiashcorvus: that's really surprising. I thought it would be correct after the restart20:29
corvusme too.  but maybe it means it's more reproducible in our env20:30
tobiashcorvus: is that repl thing something you want to build into zuul permanently or is it just for debugging now20:31
corvustobiash: if we're going to merge it, i need to check with the author of some of that code about licensing.20:33
tobiashthat would be an interesting and powerful tool for debugging20:33
corvusit is.  it's also very dangerous and should be disabled by default20:33
corvus(like, telnet in and ask zuul for decrypted secrets)20:33
tobiashAgreed :)20:33
corvusokay i went ahead and fired off an email about licensing20:38
openstackgerritJames E. Blair proposed openstack-infra/zuul master: DNM: Break Zuul's config  https://review.openstack.org/57998621:00
corvustobiash: well, i have found one problem, which is how we ended up breaking our config.  it's because the load-with-broken-config system allows us to remove a job that's still in use in another project.21:11
corvusfbo: ^21:11
corvustobiash: unfortunately, that suggests that it's not related to the source_context being wrong.  and even less to the issue you saw of running with the wrong content.21:12
corvusfbo: all of openstack's projects are gated by zuul, but we still broke zuul's config by removing a job which was still in use in another repo.21:13
corvusfbo: that's because if you remove a job, and it's not in use in the repo in which it's defined, then the only error messages are the ones from the other repos which use it.  since they aren't the current repo of the change, they are suppressed, so we don't report them to the user.  even though they really are caused by the current change.21:14
corvusi wonder if we could key all the errors by the source_context+start_mark (project+branch+file+line) and say that if any of the new configuration error keys don't appear in the current configuration error keys, we report the error.21:19
corvusthat would probably uniquely identify errors enough to avoid most cases like this.21:19
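A minimal sketch of the keying corvus proposes: identify each config error by (project, branch, file, line) and report any error whose key does not already appear in the running configuration's errors. The field names are hypothetical.

    def error_key(error):
        return (error.project, error.branch, error.path, error.line)

    def errors_to_report(new_errors, current_errors):
        known = {error_key(e) for e in current_errors}
        return [e for e in new_errors if error_key(e) not in known]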
corvusoh, it also looks like if zuul accumulates too many errors, it won't report on any more errors21:45
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Split test_dynamic_conf_on_broken_config  https://review.openstack.org/57999622:20
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Report config errors when removing cross-repo jobs  https://review.openstack.org/57999722:20
corvusfbo: ^22:20
*** dtruong has quit IRC22:30
gundalowHi all. mattclay and I have been thinking more about softwarefactory-zuul for ansible/ansible. We are considering creating gh/ansible/zuul-config, which will be very generic, like https://github.com/ansible-network/zuul-config And rather than having gh/ansible/zuul-jobs, we are considering putting all of that in gh/ansible/ansible so everything is versioned together. In the past we've faced issues with other CI frameworks where certain things22:56
gundalowhave been versioned independently of others.22:56
clarkbgundalow: probably one thing to keep in mind when figuring out if things should be split or not is the limitations placed on trusted config projects22:57
clarkbgundalow: trusted config projects cannot be tested pre-merge, and if you end up needing to make gh/ansible/ansible trusted then you'd not have self-testing config changes for the subset that could be self-tested22:57
gundalowinteresting. https://softwarefactory-project.io/r/gitweb?p=config.git;a=blob;f=zuul/ansible_networking.yaml;h=fc7e43ceb305453f5ec7cb3b2f95c9cf5c4f8682;hb=HEAD#l7 contains the config I'm using to test gh/ansible-network; according to that, only `ansible-network/zuul-config` is trusted (which defines very little), so I think that's OK22:58
gundalowWhat may cause me to want to make gh/ansible/ansible trusted?22:59
clarkbif you've got gh/ansible/zuul-config for that then possibly nothing23:00
gundalowwoot23:00
gundalowThanks. I'll create a PR and see how it goes23:00
clarkbcorvus: for identifying errors, the downside to the method used in your change above is that you can introduce a new error in a child change on the same line that would go unreported until the parent error was fixed?23:03
clarkbcorvus: could we address that by comparing the lines (either directly or via a hash)23:03
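A tiny variant of the keying sketch above along the lines clarkb suggests (which, as noted just below, the change already implements by hashing the line): fold a hash of the offending line's content into the key so a new error on the same line still gets a distinct key. Field names are hypothetical again.

    import hashlib

    def error_key_hashed(error):
        digest = hashlib.sha256(
            error.line_content.encode('utf-8')).hexdigest()
        return (error.project, error.branch, error.path, error.line, digest)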
*** pwhalen has quit IRC23:06
*** pwhalen has joined #zuul23:09
clarkboh it does hash the line if it is supplied, nevermind23:12
clarkbcorvus: I left some comments but I don't think they are -1 worthy23:58
clarkbmostly thinking out loud about a couple things23:58
