Thursday, 2018-01-18

*** threestrands_ has quit IRC00:48
*** threestrands has joined #zuul00:48
*** persia has joined #zuul00:49
openstackgerritTristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: zk: check for client in properties  https://review.openstack.org/53498801:24
openstackgerritTristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Add a plugin interface for drivers  https://review.openstack.org/52462001:25
*** harlowja has quit IRC01:55
*** mattclay has quit IRC01:55
*** robcresswell has quit IRC01:55
*** gothicmindfood has quit IRC01:55
*** mnaser has quit IRC01:55
*** fdegir has quit IRC01:55
*** gothicmindfood has joined #zuul01:55
*** mattclay has joined #zuul01:56
*** mnaser has joined #zuul01:56
*** robcresswell has joined #zuul01:56
*** fdegir has joined #zuul01:56
mrhillsmanwant to test if environment behind vpn will work, nodepool and zuul-executor seem to be talking to zookeeper and gearman respectively, is there a way to trigger a job run for this particular executor to confirm things are working?02:21
mrhillsmanadditionally, nodepool is not creating any images/servers, is this because the scheduler needs to be able to hit nodepool rather than communicate this via zookeeper?02:22
mrhillsmanzookeeper and gearman are hosted outside of the environment02:22
pabelanger2018-01-18 02:34:20,571 WARNING kazoo.client: Failed connecting to Zookeeper within the connection retry policy.02:34
pabelangertristanC: Shrews: ^did we have a patch up to address that? keep trying zookeeper forever02:35
pabelangerright now, port to zookeeper is closed by firewall02:35
pabelangerbecause we haven't opened it yet, but we'd want nodepool-builder to keep trying until it does get open02:35
tristanCpabelanger: the patch is https://review.openstack.org/523640, but it's for retrying commands after the connection is established; retrying connection attempts would need setting the "connection_retry" client argument02:42
openstackgerritTristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: zk: automatically retry command when connection is lost  https://review.openstack.org/52364002:43
pabelangertristanC: okay, I'll check back in morning with Shrews on how best to proceed02:44
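The behaviour pabelanger wants (nodepool-builder keeps trying zookeeper until the firewall port opens) is what kazoo's retry settings provide; stripped of the kazoo specifics, the core idea is just an unlimited retry loop with exponential backoff. This is a generic sketch of that idea, not the actual nodepool or kazoo code:

```python
import time


def retry_forever(func, delay=0.5, max_delay=300.0, sleep=time.sleep):
    """Call func until it succeeds, sleeping with exponential backoff.

    This mirrors what a retry policy with unlimited tries does: a
    closed firewall port just means another round of waiting, never a
    fatal error.
    """
    while True:
        try:
            return func()
        except Exception:
            sleep(delay)
            delay = min(delay * 2, max_delay)
```

The `sleep` parameter is injectable so the loop can be exercised in tests without real waiting.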
*** TheJulia has quit IRC03:30
*** timrc has quit IRC03:30
*** jamielennox has quit IRC03:30
*** timrc has joined #zuul03:30
*** TheJulia has joined #zuul03:31
*** jamielennox has joined #zuul03:37
*** jkt has quit IRC03:51
SpamapSmrhillsman: if you have min-ready: set higher than 0, you should have servers. And your labels should all be built and uploaded, regardless of whether zuul has asked for any capacity.03:55
SpamapSor rather, your images03:55
SpamapSmrhillsman: I'd check the logs, there are likely things failing.03:55
mrhillsmanmin-ready set to 103:59
mrhillsmanmax-servers for the pool set to 5 and concurrency set to 503:59
mrhillsmangot debug on and no errors at all04:00
mrhillsman:(04:00
mrhillsmanmaybe i'll change some stuff to see if anything acts up04:01
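For reference, a nodepool v3 config using the settings discussed here (min-ready of 1, max-servers of 5) might look roughly like this; the cloud, label, and image names are illustrative, not mrhillsman's actual config:

```yaml
labels:
  - name: ubuntu-xenial
    min-ready: 1          # keep one node booted and ready at all times

providers:
  - name: vpn-cloud       # must be unique across nodepools sharing a zookeeper
    cloud: vpn-cloud
    diskimages:
      - name: ubuntu-xenial
    pools:
      - name: main
        max-servers: 5    # cap on concurrent instances for this pool
        labels:
          - name: ubuntu-xenial
            diskimage: ubuntu-xenial
            min-ram: 8000

diskimages:
  - name: ubuntu-xenial
    elements:
      - ubuntu-minimal
      - vm
```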
clarkbimages happen first so start there04:02
mrhillsmanlet me try to build an image; dib logs are set to debug and are not saying much of anything04:06
mrhillsmanhttp://paste.openstack.org/show/646808/ < launcher04:06
mrhillsmanhttp://paste.openstack.org/show/646809/ < builder04:07
*** bhavik1 has joined #zuul04:09
*** bhavik1 has quit IRC04:13
clarkbwhat does your builder config look like?04:16
mrhillsmansec04:20
SpamapSIs there a known bug in the streaming plugins that makes command: ignore no_log: true?04:20
clarkbSpamapS: I want to say last I checked the no logging stuff resulted in no logs but that may have regressed04:21
SpamapSI just had our artifactory passwords leaked into log files04:22
mrhillsmanhttp://paste.openstack.org/show/646812/04:22
SpamapSafter decrypting them to stdout with no_log: true .. the stdout was logged.04:22
SpamapSSo, guessing it doesn't respect the flag.04:22
* SpamapS has reworked the code to output to a temp file and slurp it.04:23
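SpamapS's workaround (write the secret to a temp file and slurp it, so nothing secret crosses stdout) might look roughly like this in a playbook; the decrypt command and paths are hypothetical:

```yaml
- name: Decrypt credentials to a file instead of stdout   # hypothetical command
  shell: decrypt-secrets > /tmp/creds.json
  no_log: true

- name: Read the credentials without streaming them through the console log
  slurp:
    src: /tmp/creds.json
  register: creds
  no_log: true

- name: Remove the plaintext file again
  file:
    path: /tmp/creds.json
    state: absent
```

Note that slurp returns the file content base64-encoded in `creds.content`, so it has to be decoded before use.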
clarkbmrhillsman: that looks correct to me. And nodepool image-list shows no images? other than it thinking its work is done I'm stumped for now04:25
mrhillsmanok thx04:26
mrhillsmanimage-list shows images04:26
mrhillsmanjust nothing local04:26
mrhillsmanwhich is strange to me since i removed information about the other environments04:27
clarkbwell if it thinks its work is done for that cloud it will idle04:27
openstackgerritTristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: zk: use kazoo retry facilities  https://review.openstack.org/52364004:27
mrhillsmanbut i figured it was reading that from zookeeper04:27
SpamapSif it shows images, that means they're at least built04:27
mrhillsmanit shows images, just from other environments04:27
tristanCpabelanger: Shrews: i've rewritten the retry logic patch; apparently KazooClient expects a dictionary of retry options instead of a retry object04:27
mrhillsmanmaybe cached?04:27
mrhillsmani copied the config from working environment and modified it04:28
tristanCpabelanger: Shrews: then i removed the nodepool local suspended logic. it does work as expected (e.g. nodepool survives when zookeeper is turned off randomly)04:29
clarkbmrhillsman: do they use the same zookeeper? and talk to the same cloud(s)04:30
tristanCSpamapS: i think zuul_stream is ignoring no_log yes, fixed it with https://review.openstack.org/#/c/533509/04:31
SpamapStristanC: thanks!04:35
mrhillsmanclarkb they use the same zookeeper yes04:37
mrhillsmani do not have them all talking to the same clouds though04:37
mrhillsmanthis particular one does not allow ingress without vpn04:37
mrhillsmanso my understanding, which could be wrong, was that i needed at least nodepool(builder+launcher) and zuul-executor inside, with zookeeper and gearman outside04:38
mrhillsmanso dib works fine it appears04:39
mrhillsmandisk-image-create ubuntu vm gave me image.qcow204:40
mrhillsmanso will keep digging04:41
clarkbmrhillsman: if the providers and pools are named the same in the configs they will be treated as the same regardless of network connectivity04:48
clarkbas long as those are unique it should be fine04:48
clarkbalso the builder doesn't need access to where the VMs get IPs, only to the cloud apis and artifacts/resources to build the images themselves04:49
mrhillsmanok so i need to set the new environment in the nodepool builder config on the public server04:50
mrhillsmanso it can put in zookeeper information about what should be available there04:51
mrhillsmanand the local builder will read that and do its thing04:51
mrhillsmanit seems that is the case because i do not think the local nodepool config is affecting what is in zookeeper04:52
mrhillsmani'm sure it is late for you, i'll draw a picture and drop in here tomorrow for some feedback04:53
mrhillsmanthe issue it would seem to me is that zookeeper does not have any info about the new environment, so when builder checks it, nothing happens; at least on startup that is what the logs suggest04:54
tobiashmrhillsman: if you share zookeeper between several unrelated nodepools you need to take care to use unique provider names04:56
tobiashThe truth of the nodepool state is saved in zk and the provider name is a key for accessing that04:57
mrhillsmanthe provider names are all unique04:57
mrhillsmani see it creating ProviderManager for local < new provider/pool04:58
mrhillsmanand i see it reading info on the other providers04:58
mrhillsmanand their images04:58
mrhillsmanand nodepool image-list confirms04:58
tobiashYou probably also need to name the images unique04:59
mrhillsmanok04:59
mrhillsmanwill try that04:59
tobiashNodepool has entities for local and uploaded images05:00
tobiashYou can query the local images with nodepool dib-image-list05:01
mrhillsmanthat was it05:01
mrhillsmanthe image names05:01
tobiashSo if you have the same image names the following will happen05:01
mrhillsmani change image and label05:02
mrhillsmanand i see builder trying to build images now05:02
tobiashNodepool1 builds and uploads the image, nodepool2 sees the local image and requests an upload, but no builder feels responsible for that local image / target cloud combination05:04
tobiashBecause only the original builder can upload it but doesn't know the target05:05
mrhillsmani understand what you are getting at in a sense though not entirely clear05:07
mrhillsmanwill have to work it out in my head but i get what you mean05:07
mrhillsmanok, had to talk it through to myself hehe, got it, thanks for clarity05:08
tobiashYou also could use the chroot setting:05:09
tobiashhttps://docs.openstack.org/infra/nodepool/feature/zuulv3/configuration.html#zookeeper-servers05:09
tobiashThat separates their namespaces and inhibits such name collisions05:09
tobiashBut zuul needs to use the same chroot, have to check if zuul supports that05:10
*** jaianshu has joined #zuul05:12
tobiashHrm, looks like zuul doesn't support that yet05:12
tobiashChroot is currently only used in the test suite of nodepool where we would have the same problems05:13
tobiashSo once zuul is enabled for this you should chroot this05:13
tobiashUntil then you have the choice to keep an eye on the naming or use separate zk clusters05:14
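The chroot setting tobiash links to lives in the zookeeper-servers section of the nodepool config; a sketch with illustrative values (note that, per the discussion above, zuul itself did not yet support a chroot at this point):

```yaml
zookeeper-servers:
  - host: zk.example.com
    port: 2181
    chroot: /nodepool-env1   # namespaces this nodepool's keys under /nodepool-env1
```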
mrhillsmanthat makes sense05:20
mrhillsmanthanks!05:20
SpamapSI am not sure it's wise to share zookeepers without chrooting inside the zookeeper.05:20
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Link to zuul-base-jobs docs from User's Guide  https://review.openstack.org/53191205:21
mrhillsmani can use chroot05:21
mrhillsmanjust was not aware05:21
mrhillsmanstill new to using zuul as a whole and everything was all in one environment05:22
mrhillsmanwill implement it now05:22
tobiashSpamapS: it's not (at least for production)05:31
*** flepied_ has quit IRC06:38
*** flepied has joined #zuul06:39
*** flepied_ has joined #zuul06:50
*** flepied has quit IRC06:53
*** flepied_ has quit IRC07:01
*** flepied_ has joined #zuul07:01
*** ianw has quit IRC07:02
*** flepied_ has quit IRC07:03
*** flepied has joined #zuul07:04
*** flepied has quit IRC07:16
*** Wei_Liu has joined #zuul07:18
*** flepied has joined #zuul07:22
*** flepied has quit IRC07:23
*** ankkumar has joined #zuul07:27
*** threestrands has quit IRC08:03
openstackgerritTristan Cacqueray proposed openstack-infra/zuul-jobs master: Add buildset-artifacts-location and fetch roles  https://review.openstack.org/53067908:37
openstackgerritTristan Cacqueray proposed openstack-infra/zuul-jobs master: Add linters job and role  https://review.openstack.org/53068208:37
openstackgerritTristan Cacqueray proposed openstack-infra/zuul-jobs master: Add ansible-lint job  https://review.openstack.org/53208308:37
openstackgerritTristan Cacqueray proposed openstack-infra/zuul-jobs master: Add ansible-import-to-galaxy job  https://review.openstack.org/53208408:37
openstackgerritTristan Cacqueray proposed openstack-infra/zuul-jobs master: Add ansible-spec job  https://review.openstack.org/53208508:37
openstackgerritTristan Cacqueray proposed openstack-infra/zuul-jobs master: Add ansible-review job  https://review.openstack.org/53522308:37
*** jpena|off is now known as jpena08:46
*** sshnaidm is now known as sshnaidm|afk08:55
*** ianw has joined #zuul08:58
*** saop has joined #zuul11:05
saoptristanC, My ansible is failing to add the logserver to the inventory; we are trying to run ssh-agent there with this command: "eval `ssh-agent -s`" but it keeps failing11:06
saoptristanC, we tried many methods of running ssh-agent but nothing works11:07
saoptristanC, any suggestion?11:07
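A likely cause (an assumption, not something saop confirms here): every Ansible task runs in a fresh shell, so environment variables exported by eval `ssh-agent -s` vanish when the task ends. One workaround is to persist the agent's environment to a file and source it in later tasks; paths and key names below are illustrative:

```yaml
- name: Start ssh-agent once and save its environment to a file
  shell: ssh-agent -s > /tmp/agent.env

- name: Use the agent in a later task by sourcing that environment
  shell: . /tmp/agent.env && ssh-add ~/.ssh/id_rsa && ssh user@logserver true
```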
*** tobiash has quit IRC11:19
*** tobiash has joined #zuul11:22
*** jkilpatr has joined #zuul11:52
*** jkilpatr has quit IRC11:53
*** jkilpatr_ has joined #zuul11:54
*** jpena is now known as jpena|lunch11:59
openstackgerritMerged openstack-infra/zuul-jobs master: Remove testr and stestr specific roles  https://review.openstack.org/52934012:43
*** jpena|lunch is now known as jpena13:06
*** jaianshu has quit IRC13:27
*** rlandy has joined #zuul13:27
*** weshay is now known as weshay|rover13:52
*** sshnaidm|afk is now known as sshnaidm13:57
*** sshnaidm is now known as sshnaidm|afk14:06
*** sshnaidm|afk is now known as sshnaidm|mtg14:07
*** saop has quit IRC14:15
*** ankkumar has quit IRC14:17
tobiashShrews, corvus: do you know about a state 'init' in nodepool?14:48
tobiashI have a node in this state without server id14:48
tobiashah, maybe it was killed during creation of the node14:50
*** dkranz has joined #zuul14:52
pabelangertristanC: Shrews: thoughts on adding zuul.d support to nodepool (nodepool.d)? I think it would help to manage the size of our yaml files15:48
Shrewsi have no preference either way15:50
clarkbcorvus: do you want to rereview https://review.openstack.org/#/c/533771/9 so we can get that bugfix in before merging branches today?16:10
corvuspabelanger: i could see nodepool.d being useful, yeah.16:14
pabelangercool, I want to say tristanC might have some existing patches in SF nodepool. Hopefully he has more info16:16
corvusclarkb: i see 2 changes: useBuilder and the labels, yeah?  lgtm.16:17
clarkbcorvus: yup, the labels were to fix an actual issue with the test (avoids a race) and useBuilder was a base test class api change16:18
*** dkranz has quit IRC16:26
mrhillsmanok, so i got builder to build images now and instances have spawned, next of course is to see if the scheduler hosted outside of this environment can actually use it for jobs, i have zuul-executor running on the same server as nodepool (builder+launcher), is there a way to trigger a job run to this executor?16:27
mrhillsmanor should it simply pick up jobs as it is able to from having access to gearman16:28
clarkbit should pick up jobs as it is able via gearman16:29
openstackgerritMonty Taylor proposed openstack-infra/zuul-jobs master: Make subunit processing more resilient  https://review.openstack.org/53443116:35
mrhillsmanok cool16:38
*** dkranz has joined #zuul16:38
corvusclarkb: it looks like no one approved 533372 so i just did16:54
*** dkranz has quit IRC16:54
*** sshnaidm|mtg has quit IRC16:56
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Handle nodesets in branches  https://review.openstack.org/53497417:00
*** dkranz has joined #zuul17:07
* SpamapS kind of wishes nodepool built images by submitting a job to the zuul-executor so it could be shown in builds.17:17
corvusthat's probably not infeasible with things like static nodes or cloud-provider images.17:18
clarkbThe upside to nodepool builders running externally is you can use it as an image build service on its own. I know there is a non-zero number of people that are or were doing this.17:19
corvusa bit inceptiony17:19
corvusclarkb: oh of course, i would never suggest that as the exclusive way.  just an additional way.17:20
corvuss/never/probably never unless drunk/17:20
corvusi like a toolkit with options though :)17:21
clarkbya, to make that more simple we might want to add a nodepool image build command that doesn't use a daemon at all17:21
clarkbjustbuilds and image and optionally uploads it17:22
SpamapSThe way it is now the debugging process for "why did my job break" is quite different than "why did my image build break" .. so while I respect nodepool-doesnt-need-zuul as an argument, I also think maybe nodepool does need {something} and I like zuul for that something. :)17:22
clarkbSpamapS: the other downside (and you can kinda hack around this with gearman a bit) is being at the mercy of job demand for image freshness17:23
openstackgerritMerged openstack-infra/nodepool feature/zuulv3: Only fail requests if no cloud can service them  https://review.openstack.org/53337217:24
clarkbwaiting on new images to fix a problem takes forever because everyone is rechecking - that sort of issue17:24
corvusalso, exposing nodepool build logs through a web api could help with "why did my build break?"17:24
SpamapSYeah, that's what SUBMIT_JOB_HIGH is for :)17:25
corvusoh *that's* what it means by "high"17:25
corvusno wonder my jobs are taking so long.  i was so confused.17:25
SpamapScorvus: yeah I do actually have a pretty extensive Fluentd/ElasticSearch cluster available to shove them into. But it's the iterating-on-fixing-it part that is a bit weird.17:25
corvusi thought... nevermind.17:26
SpamapS:)17:26
SpamapSI was gonna build you a new image... but then somebody SUBMIT_JOB_HIGH'd17:26
clarkbSpamapS: I do that locally fwiw. Its just way easier to stick a bash call into dib elements and debug from there or mount the image and poke around or boot it17:26
SpamapSclarkb: because you run Linux locally. :)17:27
clarkbyes, I highly recommend it to everyone else too :)17:28
clarkb(that said I have a vm I do all my dib in because dib is rooty so anyone should be able to do it regardless of platform as long as they are willing to run a linux VM)17:30
*** hashar has joined #zuul17:31
*** hashar is now known as hasharDinner17:31
*** sshnaidm|mtg has joined #zuul17:31
*** sshnaidm|mtg is now known as sshnaidm17:32
SpamapSYeah I dib in a clean VM too ;)17:33
SpamapSBut that sort of feels like what zuul is good at :)17:33
SpamapShence we've come full circle :)17:34
openstackgerritMerged openstack-infra/nodepool feature/zuulv3: Add test_launcher test  https://review.openstack.org/53377117:34
corvusclarkb: ^17:35
corvusokay, how about we start this merge thing?17:35
clarkb++17:35
corvuslet's call the repos frozen now17:35
clarkbelectrofelix had some comments on the commands to run17:35
corvusi'll record a list of all the changes on feature/zuulv3, then abandon them all with a nice message17:35
clarkbI'm not entirely sure how they are different than what I suggested since aiui ours means always use the new stuff17:35
SpamapS\o/17:35
clarkbor s/new/our/17:35
corvusclarkb: i think electrofelix solved the 'manually delete files' step...17:36
clarkbah17:36
clarkbits like 4 files so not a huge deal17:36
corvusactually, let's do an etherpad for this: https://etherpad.openstack.org/p/3mSln6ATMP17:36
corvusi think that's got it, that look okay to everyone?17:41
clarkblgtm17:42
mordredyay merging!17:43
-corvus- Zuul and Nodepool feature/zuulv3 branches are now frozen while we merge them into master. Please don't upload or approve any changes to those branches.17:43
corvusdo folks see the notice i just made ^ ?17:43
dmsimardcorvus: yes17:43
corvuscool.  it's nice and red-highlighted on my client;  just wanted to make sure it's visible17:43
clarkbI see it17:43
mordredyup. I see it17:44
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: WIP: Add provider info command  https://review.openstack.org/53542317:44
dmsimardtobiash, tristanC: btw I had an idea for the zuul web dashboard.. a schedule of periodic jobs (like a queue that shows which jobs will be triggered when)17:44
tobiashdmsimard: that sounds cool17:45
dmsimardperiodic jobs are easy when you have just one pipeline that triggers every 4hrs but if you have multiple periodic pipelines that trigger on different timers it gets confusing17:45
Shrewspabelanger: 535423 ^^ will give us a tool to remove vanilla's zookeeper data w/o having to manually do that17:45
dmsimardtobiash: I'll create a story for it17:45
corvusShrews: ^ we're freezing the feature/zuulv3 branches17:45
tobiashalso the current number of changes in pipelines in the tenant view would be useful17:45
Shrewscorvus: that's fine. i can re-home it later, it's not ready yet17:46
corvusi think this etherpad is mostly a one-person thing, so i'll plan on doing most of it.17:46
Shrewsi did not see your notice until just now  :)17:46
corvusmaybe someone wants to start looking at item #7 -- update configs where we point to feature/zuulv3 ?17:47
mordredcorvus: that should mostly be in system-config I think, right?17:47
mordredor, rather - I'll take system-config17:48
pabelangerShrews: great, thanks17:49
clarkbpabelanger: ^ your recent change to test-requirements in project-config made me think of number 7 above17:50
clarkbbut system-cofnig has it too for the puppet based git repo checkouts and pip installs17:51
* clarkb updates system-config17:51
pabelangerah, good point17:51
mordredhow about topic 'zuulv3-merge' ?17:51
clarkbwfm17:52
clarkbhttps://review.openstack.org/535429 is up for system-config17:56
mordredok. I added a list of all of the places feature/zuulv3 happens across all our repos to the etherpad17:57
dmsimardtobiash, tristanC: https://storyboard.openstack.org/#!/story/200148617:57
mordredclarkb: jinx. we made the same patch - but slightly differently17:57
clarkbmordred: oh I totally missed you saying you would do system config :P17:58
clarkbI can abandon mine as you were first17:58
corvushttp://paste.openstack.org/show/646946/18:00
tobiashdmsimard: nice idea18:00
corvushow's that look for the list of open changes?18:00
tobiashwow, even with dependencies18:01
clarkbcorvus: if you abandon all of them and gerrit lets us delete the branch then you know you got them all :)18:01
mordredlooks great18:01
tobiashlike it18:01
mordredclarkb: perhaps if we land https://review.openstack.org/#/c/534458/ real quick, when people re-propose all of their patches we won't burn the VMs on the cover jobs18:02
corvustobiash: it's a screenshot from gertty18:02
mordredgah18:02
mordredcorvus: ^^18:02
corvusmordred: or if we land that asap after the merge :)18:02
mordredcorvus: either way :)18:02
corvuslet's do it post-merge, so we don't have to pause here for 15 minutes :)18:03
corvusokay, if it lands it lands :)18:04
openstackgerritMonty Taylor proposed openstack-infra/zuul-base-jobs master: Remove feature/zuulv3 from pip depend  https://review.openstack.org/53543218:05
corvusactually -- we should probably perform the merge first, *then* abandon all the changes.18:06
clarkbmordred: re system-config the reason I explicitly switched feature/zuulv3 -> master was I didn't want to have to cross check against where we put in the default to old master tags for things like openstackci18:07
clarkbmordred: I think chances are what you did is fine, but it may install zuulv2 in some cases18:07
corvusclarkb, mordred: ^ should we merge the branches first, then abandon the changes?18:08
mordredcorvus: I'm good with that18:09
corvusalso, i put in a proposed 'abandon' message for that step, does that look good?18:09
mordredclarkb: let's do your version18:09
clarkbI think we can abandon changes whenever. The branch delete is the tricky one.18:09
mordredyah. we have a few tox.ini files and required-projects lines referencing it - I thinkn we should merge, abandon, then land the changes updating things to pointto master, then delete branch18:10
corvuskk18:11
openstackgerritJames E. Blair proposed openstack-infra/nodepool master: Replace master with feature/zuulv3  https://review.openstack.org/53543518:14
Shrewshrm, the sitemap.xml in openstack-manuals seems to be regenerated rather than manually edited. anyone know offhand how that's done?18:14
corvusShrews: ask in #openstack-infra ?18:15
corvusor #openstack-doc18:15
corvusi've updated the steps in 3 to reflect what i'm doing on a fresh clone of the repos18:15
mordredcorvus: we have a chicken-and-egg issue18:16
mordredcorvus: oh, no we don't - nevermind18:16
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Replace master with feature/zuulv3  https://review.openstack.org/53543618:17
mordredcorvus: you may want to make a follow-up that removes the feature/zuulv3 entries from README18:17
corvusmordred: yep18:17
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Stop running tox-cover job  https://review.openstack.org/53445818:18
mordredoh - although we do have an issue with the .zuul.yaml file18:18
corvus535436 is failing due to the issue i'm fixing in 53497418:18
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Replace master with feature/zuulv3  https://review.openstack.org/53543618:19
mordredcorvus: should we land 534974 first? or land a feature/zuulv3 patch removing the .zuul.yaml ?18:19
corvusmordred: 534974 needs careful review and a restart18:19
corvusmordred: i've modified 535436 to use a different nodeset name18:20
mordrednod. wfm18:20
corvusso if you download that patch and diff the two branches, you should see .gitreview and .zuul.yaml are different.18:20
AJaegerwe also need to change some jobs, like http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/jobs.yaml for landing the change18:20
AJaegerand http://git.openstack.org/cgit/openstack-infra/zuul-base-jobs/tree/tox.ini#n3818:21
mordredAJaeger: yah - we have a topic up - zuul-merge18:21
mordredor zuulv3-merge rather, sorry18:21
clarkbmordred: should I unabandon mine?18:21
clarkband ya I don't think there is a chicken and egg, we just have to get the merge onto master first then do everything else before deleting the branch18:21
mordredclarkb: yah - I think you make a good point about being explicit for now18:21
corvusi've updated my local copy of the change list to remove the cover job.18:21
corvuser, the cover job change18:22
AJaegermordred: good, I see it's covered already18:22
clarkbmordred: https://review.openstack.org/#/c/535429/ is restored18:22
mordredclarkb: +2 - abandoned mine18:24
corvusclarkb, mordred, tobiash: 535435 and 535436 are ready for reviews... in whatever way we think is best to review those.  :)18:27
mordredcorvus: +2 on both from me18:28
tobiashthe nodepool change looks surprisingly small18:31
corvustobiash: the builder part of nodepool v3 is already on master18:31
clarkbya I'm not sure the best way to review that. When I did it locally I trusted git and tox for the most part18:32
clarkbI suppose I can reproduce the merge and see if it looks different18:32
corvusmordred: my merge did not include your tox-cover change18:33
tobiashare you sure that's correct? I don't see driver-related stuff in there.18:33
corvusi guess i will redo it.18:33
mordredcorvus: piddle. oh well18:34
Shrewsoh, hrm, i'm not seeing the nodepool/launcher.py file in that change18:34
corvustobiash: be aware that gerrit is not very good at rendering merge commit diffs.18:34
corvusShrews: ^18:34
mordredtobiash: for merge commits gerrit only shows you conflicts18:34
mordredand their resolutions18:34
tobiashah, that might explain my confusion18:34
corvusgertty will show you the whole diff18:35
Shrewsah18:35
corvusbut the best way is probably to check it out locally and verify the result looks correct18:35
tobiashthat's what I just was going to do18:35
corvusanyway, hold off on zuul for now.  proceed with nodepool.18:35
Shrewsyeah, checking out the change and visually inspecting looks good18:36
tobiashok, the checkout looks familiar18:37
tobiash:)18:37
tobiashso +2 from me18:37
clarkbcorvus: electrofelix's steps and mine do result in slightly different diffs. Things like tools/check_devstack_plugin.sh in nodepool appear to have been updated on master but not on feature/zuulv318:37
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Replace master with feature/zuulv3  https://review.openstack.org/53543618:38
clarkband we've got stuff like precise instead of xenial in the docs on the feature/zuulv3 side that appear to have been updated on the master side18:39
corvusapparently master wasn't as frozen as we thought it was?18:39
clarkbya I guess not18:39
corvusclarkb: is the diff large?18:40
clarkbhttp://paste.openstack.org/show/646948/18:40
clarkb- is yours + is mine18:40
clarkber did I get that backwards /me double checks18:40
tobiashah, diff against parent 2 is what I want to look at18:40
clarkboh I may have gotten it backwards - is mine + is yours18:41
clarkbmy read on that is your diff is safe and only tests should be affected but we will be running tests for master now so that should be fine18:42
corvusya, i mean, we've also been running tests for feature/zuulv3 :)18:42
corvusi'm inclined to just overwrite master with v3; i think the little we lose is okay and easily fixable.  if anyone disagrees, you're welcome to merge master into v3 and then v3 back into master.18:44
clarkbI've +2'd the change to indicate I think its fine18:44
corvusclarkb: are you going to check and review the zuul change too?18:45
mordredcorvus: I agree with your inclination18:45
*** jpena is now known as jpena|off18:46
corvusand does anyone else want to review either of the changes, or should we go aheand and approve them?18:46
mordredcorvus: I vote approve.18:48
pabelangercorvus: no objection here18:48
tobiashzuul failed with test failures I've seen already several times on unrelated changes18:52
clarkbcorvus: I can if you like I'm less concerned about it after seeing nodepools since I think zuul was frozen harder18:53
mordredtobiash: I think that means the merge worked :)18:53
tobiashyes ;)18:54
openstackgerritFabien Boucher proposed openstack-infra/zuul feature/zuulv3: Do not call merger:cat when all config items are excluded  https://review.openstack.org/53545019:02
corvusfbo: ^ feature/zuulv3 is frozen for the merge into master; you'll need to repropose that19:03
AJaegerok to merge the infra-manual change already?19:04
corvusi've gone ahead and approved the zuul change; it timed out on a test, but tobiash rechecked19:05
corvusi'm going to afk until after lunch; hopefully those will have merged by then.  if folks could help push things along if you see any issues, i'd appreciate it.19:05
mordredcorvus: ++19:06
AJaegerso, zuul change is in gate - passed check queue..19:17
clarkbnodepool has a couple longer check tests but once it gets to gate should go quickly19:18
SpamapSvery exciting19:34
* AJaeger rechecked the zuul merge change again19:43
tobiashso again check and gate :-/19:44
AJaegertobiash: yeah ;(19:44
clarkbhrm should we be concerned the merge is causing that instability or is this normal?19:44
AJaegertobiash: hope we see it before we call it a day ;)19:44
tobiashdoes nodepool really take soo long or is it hanging?19:45
AJaegerclarkb: the py35 job is not stable AFAIK ;(19:45
SpamapShaven't seen too many non-deterministic fails, but it does happen19:45
SpamapSthere are a few "wait a second" tests that are susceptible to cloud blips I think19:46
corvusthe two test failures are slow and busy tests; probably made slightly less stable by the recent addition of a bunch of new tests throwing off the scheduling.19:46
clarkbtobiash: it takes a long time, it's building an image and booting it all without nested virt19:47
clarkbtobiash: also without a warm cache for image builds19:47
clarkbtobiash: I think ~40 minutes is normal19:47
tobiashAJaeger: actually I called it a day some hours ago but lurking around here in my free time a bit ;)19:47
tobiashclarkb: the current nodepool job is running since 90min19:48
AJaegertobiash: ;)19:48
clarkbtobiash: hrm it actually looks like it never finished making the opnstack cloud with devstack19:49
tobiashthe log is lying... 'running devstack now this takes 10 - 15 minutes'19:52
*** harlowja has joined #zuul19:52
clarkbthat almost looks like the node "went away"19:58
clarkbwe'll have to check logs19:58
tobiashno luck today20:13
tobiashnodepool also failed the gate20:13
tobiashrechecked20:13
AJaegerand zuul merge failing again ;(20:15
tobiashlooks like a recheck party20:18
pabelangerwhat failed? test or something on network?20:18
* AJaeger passes on the recheck torch to the next person - if needed20:19
AJaegerpabelanger: test ;(20:19
pabelangerpossible something in merge is creating race?20:20
pabelangeror existing failure20:20
pabelangerhaven't looked yet myself20:20
clarkbpeople seemed to think it's due to extra tests that were recently added (and not due to the merge)20:20
AJaegerI saw these on zuul-jobs already in the last days...20:20
* clarkb wanders off to find food too while we wait20:22
pabelangerah, see that in backscroll now20:22
* tobiash gives up and raises eod20:25
Shrewsclarkb: seems nodepool failed on your new test20:25
Shrewsmust still be a race there somewhere?20:25
clarkbshrews have a link? I'll look after lunch20:27
Shrewsclarkb: http://logs.openstack.org/35/535435/1/gate/tox-py35/d58d61e/testr_results.html.gz20:27
Shrewsnot entirely sure what has happened there20:28
Shrewsclarkb: oh, i think maybe it's the map() call. getNode() can return None if the id you give it has gone away by the time you call it20:31
clarkbah20:31
clarkbya that will do it20:31
clarkbso we need if node and node.provider ==20:31
Shrewsyup20:31
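The race Shrews and clarkb diagnose above can be sketched in a few lines. This is a hypothetical stand-in, not the actual nodepool test code: the names (`getNodes`, `getNode`, `provider`) mirror the chat, and the guard is the `if node and node.provider ==` fix clarkb proposes.

```python
# Sketch of the race discussed above: a node can be deleted between the
# moment its ID is listed and the moment getNode() fetches it, so getNode()
# may return None and an unguarded attribute access blows up.

class Node:
    def __init__(self, provider):
        self.provider = provider

class FakeZK:
    """Minimal stand-in for the ZooKeeper-backed node store (hypothetical)."""
    def __init__(self, nodes):
        self._nodes = nodes  # id -> Node, or None if the node vanished

    def getNodes(self):
        return list(self._nodes)

    def getNode(self, node_id):
        # Returns None when the node went away after its ID was listed.
        return self._nodes.get(node_id)

zk = FakeZK({'0001': Node('fake-provider'), '0002': None})

# Racy version: raises AttributeError when getNode() returns None.
# nodes = [n for n in map(zk.getNode, zk.getNodes())
#          if n.provider == 'fake-provider']

# Guarded version, as suggested in the chat:
nodes = [n for n in map(zk.getNode, zk.getNodes())
         if n and n.provider == 'fake-provider']
print(len(nodes))  # only the node that survived the race
```

The guard simply tolerates the deleted node instead of assuming every listed ID still resolves.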
mordredcorvus, Shrews, clarkb, pabelanger: I got sucked in to fixing a bug caused by something getting released when I wasn't expecting it - looks like we're still waiting on the recheck dance, but ping me if I seem absent - I'm trying to bounce back and forth though20:40
openstackgerritMerged openstack-infra/zuul master: Replace master with feature/zuulv3  https://review.openstack.org/53543620:47
AJaegerparty time ;)20:47
mrhillsman++20:59
corvusnodepool just moved to gate21:00
AJaegernow time to merge the job changes from https://review.openstack.org/#/q/topic:zuulv3-merge ?21:00
corvusAJaeger: the ones for zuul should be safe to merge, yes21:01
corvus535429 has some nodepool stuff in it and so should wait21:02
* AJaeger already gave +2 - so need second reviewer...21:02
corvusapproved 53543221:02
corvusand 53543321:03
corvusand 53543421:03
AJaegeryeah, those were the job changes - thanks21:04
AJaegerquite a large branch merge:21:05
AJaeger995 files changed, 58394 insertions(+), 19548 deletions(-)21:05
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Remove feature/zuulv3 references from README  https://review.openstack.org/53548121:06
*** maxamillion has quit IRC21:08
fungicorvus: are we considering master unfrozen just for ^ i guess?21:09
fungi(and maybe any other potentially misleading v3-related references that need cleaning up before merge)21:10
AJaegerfungi: we're after merge...21:10
openstackgerritMerged openstack-infra/nodepool master: Replace master with feature/zuulv3  https://review.openstack.org/53543521:10
fungioh!21:10
fungii missed the event21:10
AJaegerfungi: 20:4721:10
fungithought it had gotten suspiciously quiet in #openstack-infra21:10
corvusand there's the other event!21:10
AJaegerfungi: you're here just in time for the nodepool party ;)21:10
fungiheh21:11
corvusapproving 535429 now21:11
corvusall zuulv3-merge changes are approved21:11
clarkbI've eaten /me is back now21:11
corvushow does my abandon message look in https://etherpad.openstack.org/p/3mSln6ATMP  section 5?21:11
fungii should have checked in here earlier in the day, forgot to consider the great merging would probably get coordinated here and not in the other channel21:12
AJaegercorvus: friendly ;)21:12
corvussorry, i guess it's about a 50/50 split we have going on21:12
*** maxamillion has joined #zuul21:12
clarkbcorvus: lgtm21:12
AJaegercorvus: change "is being merged into" to "has merged"21:13
fungiapologies if i seemed strangely absent21:13
corvusAJaeger: ++21:13
openstackgerritMerged openstack-infra/zuul-base-jobs master: Remove feature/zuulv3 from pip depend  https://review.openstack.org/53543221:13
*** maxamillion has quit IRC21:13
*** maxamillion has joined #zuul21:13
fungiabandon message in step #5 lgtm21:13
clarkbI'm going to get a patch up for the bug that shrews pointed out in my test21:14
AJaeger#success Zuul and nodepool feature/zuulv3 branches have merged into master21:14
corvusokay, gertty is chewing through that list now21:15
corvus"Sync: 114"21:15
AJaegersuccess not working here?21:15
fungiAJaeger: looks like openstackstatus may be out to lunch21:15
fungii'll check on it21:15
AJaegerit's in #openstack-infra, I'll do it there...21:16
AJaegerfungi, openstackstatus is not run here according to system-config21:17
fungiyep, am nearly done writing the commit message21:18
openstackgerritClark Boylan proposed openstack-infra/nodepool master: Fix node data retrieval race in test_failed_provider  https://review.openstack.org/53548421:20
clarkbShrews: ^ I think that will do it21:20
AJaegerenough party - /me calls it a day21:23
clarkbAJaeger: good night21:23
fungithanks AJaeger!21:23
fungi<proxying openstackgerrit> https://review.openstack.org/535485 Have statusbot join #zuul21:24
*** weshay|rover is now known as weshay|rover|den21:24
*** weshay|rover|den is now known as weshay|dentist21:24
corvusemail to zuul-discuss about the merge and abandoned changes sent21:25
corvusstill waiting for gertty to confirm that all were abandoned (i think it's done abandoning and it's just resyncing them now for confirmation)21:25
corvusand it just finished21:25
corvusnow we should wait until the zuulv3-merge changes have merged before deleting the branches21:26
*** openstackgerrit has quit IRC21:33
mordredcorvus: ++21:34
*** sshnaidm is now known as sshnaidm|off21:36
fungilooks like the last of them just merged21:38
clarkbwe should probably wait for ansible puppet to idle (if not already) before deleting the branch?21:39
corvusclarkb, mordred: some of the items in #7 don't have changes -- is it the case that they don't need any?  are we ready to delete the branch once the puppetmaster idles as clarkb suggests?21:40
clarkblooks like it is running right now21:40
fungior are there more needed for the remaining subparts under step #7 in the etherpad?21:40
mordredcorvus: they were just less-high priority21:41
mordredthe openstack-ansible-* ones are plentiful - but it's all a mention in a comment21:41
clarkbTIL infra-core has core on ansible-role-zuul21:41
fungithe one listed for system-config (535426) seems to have been unceremoniously abandoned too21:41
mordredfungi: oh - sorry - clarkb made one too that has been approved21:41
fungiokay, cool, just making sure21:42
clarkbwe wrote two patches both of them at the same time because i am a derp21:42
fungino worries21:42
mordredbut then clarkb's wound up being nicer21:42
clarkbI don't see changes to ansible-role-zuul or nodepool21:42
clarkbpabelanger: ^ is that urgent for you? I've never used or read the roles before so hesitate to start doing surgery on them21:42
fungiclarkb: which one did you say was still running? i don't see any open under https://review.openstack.org/#/q/is:open+topic:zuulv3-merge21:43
clarkbfungi: ansible puppet on puppetmaster21:44
fungioh, _that_21:44
fungiyup21:44
clarkbI think it may have just stopped and will start a new run in a minute (which will pull updated code which should be fine)21:44
*** hasharDinner has quit IRC21:45
clarkbya it started again so I think we are clear on that front21:47
clarkbso just ansible-role-zuul and ansible-role-nodepool21:47
*** rlandy is now known as rlandy|brb21:48
corvusi guess let's give pabelanger a few minutes to weigh in21:49
*** threestrands has joined #zuul21:56
*** threestrands has quit IRC21:56
*** threestrands has joined #zuul21:56
Shrewsthe openstack-manuals change is slightly more complicated since the sitemap.xml is generated by walking docs.o.o (according to AJaeger), so I guess that one needs to wait21:57
clarkboh that brings up an interesting problem. For feature branches do we manually delete their docs?21:58
clarkbor just let them live on because lazy21:59
clarkbactually we get ref updated events for branch deletes we could have jobs unpublish stuff21:59
fungii suppose we could add redirects in case anyone had bookmarked some deep link21:59
corvusi'd be fine just deleting it.  it's a feature branch :)22:01
clarkb++22:01
fungiwfm22:01
corvusshall we go ahead and delete the branches now?22:02
fungidoesn't seem to be any other blockers in the pad22:02
corvusi don't think we need to eol-tag these branches since they were merged.22:04
mordredcorvus: agree. and also think deleting is a great idea22:04
fungiagreed22:04
corvusand the merge commit second parent should be sufficient recording of the sha for posterity22:04
corvusdone22:05
clarkbya you can follow the code back from the merge commit so I agree22:05
corvusi believe that concludes this operation :)22:06
mordred\o/22:06
Shrewsend of an era22:06
mordredsrrsly22:06
corvusbeginning of a new one?22:06
*** rlandy|brb is now known as rlandy22:09
fungitime to create the feature/zuulv4 branch! ;)22:12
* fungi ducks22:13
*** openstackgerrit has joined #zuul22:18
openstackgerritMerged openstack-infra/nodepool master: Fix node data retrieval race in test_failed_provider  https://review.openstack.org/53548422:18
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Handle secrets in branches  https://review.openstack.org/53550122:25
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Handle nodesets in branches  https://review.openstack.org/53550222:25
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Normalize semaphore branch handling  https://review.openstack.org/53550322:25
corvusi'm going to abandon the original github integration changes with a nice message22:32
SpamapSthis is not the end. This is not the beginning of the end. But perhaps, this is the end of the beginning of github support. ;)22:33
corvusthat's, erm, shorter than what i wrote.22:33
corvusthat's 97->75 open changes22:34
*** dkranz has quit IRC22:36
dmsimardHow do you drain all the nodes from nodepool again ? need to set max-servers to -1 right ? does that kill ongoing jobs ?22:37
dmsimard(This is for RDO)22:37
corvusdmsimard: does not kill jobs.  max servers can be 0, or you can drop the label from the providers.22:37
corvus-1 also works22:38
corvusi'm planning on using this message for changes that look like they are no longer relevant: http://paste.openstack.org/show/646960/22:39
corvushow does that sound?22:39
dmsimardiirc there was a difference between max-servers 0 and -122:40
mordredcorvus: sounds good22:40
dmsimardI think 0 lets the servers drain over time (i.e, after they're consumed nodepool doesn't respawn more nodes) while -1 would kill "ready" nodes for example22:41
corvusdmsimard: the difference was whether images are deleted22:41
corvusmy preference is for the new method which is just to remove things from the config which you don't want.  that seems clear to me.  :)22:41
corvus(though, if you remove the provider entirely, then obviously it can't delete things, so sometimes you want a provider with an empty label list)22:42
corvusdmsimard: just 'nodepool delete' the ready nodes after you've quiesced the provider22:42
dmsimardlast time I put max-servers 0 it would just say in the logs "woops I'm over quota" but wouldn't actively delete things22:42
dmsimardcorvus: yeah that sounds like a plan22:43
corvusi don't believe anything will delete ready nodes other than a timeout22:43
openstackgerritFabien Boucher proposed openstack-infra/zuul master: Do not call merger:cat when all config items are excluded  https://review.openstack.org/53550922:43
corvus(*automatically)22:43
openstackgerritFabien Boucher proposed openstack-infra/zuul master: Make Zuul able to start with a broken config  https://review.openstack.org/53551122:45
dmsimardcorvus: yeah that's fine, I just thought -1 did it automatically22:46
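corvus's recommended drain procedure above (stop launches via config, then delete leftover ready nodes by hand) might look roughly like this. The provider and pool names are illustrative, not from any real deployment:

```yaml
# Hypothetical nodepool provider snippet for draining a provider:
providers:
  - name: example-cloud
    pools:
      - name: main
        max-servers: 0   # stop launching new nodes; running jobs finish
        labels: []       # or drop the labels so nothing can be requested
```

With launches quiesced, any remaining ready nodes are removed manually (e.g. `nodepool delete <node-id>`), since per the chat nothing deletes ready nodes automatically other than a timeout.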
corvusokay, i did a pass through open changes and removed a bunch that looked like they no longer applied.  i'm sure there's more, but the remainders may take a little more thought.22:49
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Normalize semaphore branch handling  https://review.openstack.org/53550322:53
openstackgerritIan Wienand proposed openstack-infra/zuul master: Remove zuul._projects  https://review.openstack.org/53551823:10
openstackgerritMerged openstack-infra/zuul master: Remove feature/zuulv3 references from README  https://review.openstack.org/53548123:15
openstackgerritIan Wienand proposed openstack-infra/zuul master: Remove zuul._projects  https://review.openstack.org/53551823:46

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!