Tuesday, 2019-04-16

00:24 *** rlandy|ruck has quit IRC
00:37 *** mattw4 has quit IRC
01:37 *** jamesmcarthur has joined #zuul
01:45 *** bhavikdbavishi has joined #zuul
01:47 *** threestrands has joined #zuul
01:53 <openstackgerrit> Merged openstack-infra/zuul master: Fix dynamic loading of trusted layouts  https://review.openstack.org/652787
01:55 <pabelanger> clarkb: ^
01:56 <clarkb> yup, I'll be restarting our scheduler soon
01:56 <pabelanger> other 2 look good so far
02:08 *** jamesmcarthur has quit IRC
02:10 <openstackgerrit> Merged openstack-infra/zuul master: Config errors should not affect config-projects  https://review.openstack.org/652788
02:10 <clarkb> just one to go now
02:13 *** bhavikdbavishi1 has joined #zuul
02:14 *** bhavikdbavishi has quit IRC
02:14 <clarkb> I think the release note change will need a recheck
02:16 <pabelanger> looks like quickstart is waiting for a node
02:17 <clarkb> it is waiting in the image build job
02:18 *** bhavikdbavishi1 has quit IRC
02:18 <clarkb> which I think must be in limestone and failing because docker can't ipv6
02:19 <pabelanger> clarkb: is there no way to force ipv4 on docker jobs atm?
02:20 <pabelanger> looks like rax
02:20 <pabelanger> started now
02:20 *** bhavikdbavishi has joined #zuul
02:21 <clarkb> no, we have ipv6-only clouds and don't have ipv4 labels
02:21 <clarkb> it's frustrating because docker shouldn't break on this, it's just a regex input check they do
02:22 <pabelanger> ah, right
02:22 <pabelanger> forgot about ipv6 only
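clarkb's complaint above — that docker rejects IPv6-only endpoints because of a regex input check rather than any real limitation — can be sketched like this. The pattern below is purely illustrative (it is not docker's actual validation code); it shows how a naive hostname regex rejects a bracketed IPv6 address that the URL machinery handles fine:

```python
import re
from urllib.parse import urlparse

# Illustrative naive hostname check -- NOT docker's real validation.
# It accepts plain DNS names with an optional port, but chokes on the
# '[' and extra ':' characters of a bracketed IPv6 literal.
NAIVE_HOST_RE = re.compile(r"^[A-Za-z0-9.-]+(:[0-9]+)?$")

def naive_is_valid(netloc):
    return bool(NAIVE_HOST_RE.match(netloc))

url = "https://[2001:db8::1]:5000"
assert naive_is_valid("registry.example.org:5000")   # plain names pass
assert not naive_is_valid(urlparse(url).netloc)      # IPv6 literal fails the regex
assert urlparse(url).hostname == "2001:db8::1"       # yet the URL parses fine
```

The point is that the URL is perfectly usable; only the input check refuses it.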
02:24 *** bhavikdbavishi1 has joined #zuul
02:24 *** bhavikdbavishi has quit IRC
02:24 *** bhavikdbavishi1 is now known as bhavikdbavishi
02:26 <clarkb> oh cool so ya now that it is in rax it appears to be finishing
02:26 <clarkb> I don't know if it will merge before puppet runs but I'm not worried about restarting on just the release note change
02:27 <pabelanger> ++
02:29 <clarkb> yup puppet just ran
02:29 <openstackgerrit> Merged openstack-infra/zuul master: Add release note for broken trusted config loading fix  https://review.openstack.org/652793
02:39 *** jamesmcarthur has joined #zuul
02:46 *** jamesmcarthur has quit IRC
02:49 *** weshay_pto is now known as weshay
03:02 *** jamesmcarthur has joined #zuul
03:39 *** jamesmcarthur has quit IRC
03:40 *** jamesmcarthur has joined #zuul
03:45 *** jamesmcarthur has quit IRC
04:11 *** jamesmcarthur has joined #zuul
04:18 *** jamesmcarthur has quit IRC
04:35 <openstackgerrit> Merged openstack-infra/nodepool master: Fix for orphaned DELETED nodes  https://review.openstack.org/652729
05:21 *** swest has joined #zuul
05:22 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul master: Support fail-fast in project pipelines  https://review.openstack.org/652764
05:49 *** quiquell|off is now known as quiquell|rover
05:56 *** bjackman has joined #zuul
05:59 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul master: Support fail-fast in project pipelines  https://review.openstack.org/652764
06:21 *** saneax has joined #zuul
06:25 *** saneax has quit IRC
06:26 *** pcaruana has joined #zuul
06:26 *** saneax has joined #zuul
06:28 *** kmalloc has quit IRC
06:30 *** kmalloc has joined #zuul
06:32 *** hogepodge has quit IRC
06:40 *** kmalloc has quit IRC
06:45 <bjackman> Would be great to get a review of https://review.openstack.org/#/c/649900/ if anyone has time - it fixes a bug in the Zuul/Gerrit interface
06:51 *** gtema has joined #zuul
06:53 *** kmalloc has joined #zuul
06:55 *** quiquell|rover is now known as quique|rover|brb
06:56 *** hogepodge has joined #zuul
06:57 *** hashar has joined #zuul
07:09 *** quique|rover|brb is now known as quiquell|rover
07:28 *** bhavikdbavishi has quit IRC
07:29 *** bhavikdbavishi has joined #zuul
07:29 <openstackgerrit> Simon Westphahl proposed openstack-infra/zuul master: Ensure correct state in MQTT connection  https://review.openstack.org/652932
07:45 *** gtema has quit IRC
07:46 *** gtema has joined #zuul
07:55 *** dkehn has quit IRC
07:58 *** threestrands has quit IRC
08:37 *** bhavikdbavishi1 has joined #zuul
08:38 *** bhavikdbavishi has quit IRC
08:38 *** bhavikdbavishi1 is now known as bhavikdbavishi
09:31 <openstackgerrit> Brendan proposed openstack-infra/zuul master: gerrit: Add support for 'oldValue' comment-added field  https://review.openstack.org/649900
09:39 *** saneax has quit IRC
09:49 <openstackgerrit> Brendan proposed openstack-infra/zuul master: gerrit: Add support for 'oldValue' comment-added field  https://review.openstack.org/649900
09:57 *** hashar has quit IRC
10:27 <openstackgerrit> Brendan proposed openstack-infra/zuul master: gerrit: Add support for 'oldValue' comment-added field  https://review.openstack.org/649900
11:05 *** panda is now known as panda|lunch
11:17 *** pcaruana has quit IRC
11:17 *** sshnaidm has quit IRC
11:18 <openstackgerrit> Brendan proposed openstack-infra/zuul master: gerrit: Add support for 'oldValue' comment-added field  https://review.openstack.org/649900
11:25 *** quiquell|rover is now known as quique|rover|eat
11:26 *** sshnaidm has joined #zuul
11:37 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul master: Support fail-fast in project pipelines  https://review.openstack.org/652764
11:40 *** gtema has quit IRC
11:43 *** gtema has joined #zuul
12:05 *** pcaruana has joined #zuul
12:05 <Shrews> corvus: tobiash: the cache is probably a decent guess. November is when we added I4834a7ea722cf2ac7df79455ce077832ae966e63 (Add second level cache of nodes)
12:07 *** quique|rover|eat is now known as quiquell|rover
12:19 *** rlandy has joined #zuul
12:20 *** rlandy is now known as rlandy|ruck
12:21 *** jamesmcarthur has joined #zuul
12:22 *** panda|lunch is now known as panda
12:30 *** jamesmcarthur has quit IRC
12:31 *** bhavikdbavishi has quit IRC
12:35 *** bhavikdbavishi has joined #zuul
12:36 <Shrews> pabelanger: 652778 lgtm. probably would have been cleaner to use `in ('ssh', 'network_cli')` in the connection_type comparisons, but still ok
12:40 <pabelanger> Shrews: ah, didn't think of it
12:40 <pabelanger> Shrews: also thanks!
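Shrews' review suggestion in code form: a membership test reads better than chained equality checks when matching several connection types. The function name below is illustrative (not nodepool's actual code), mirroring the host-key-gathering change under review:

```python
# Illustrative sketch of the style Shrews suggests for 652778:
# prefer a membership test over chained '==' comparisons.
def needs_host_keys(connection_type):
    # instead of:
    #   connection_type == 'ssh' or connection_type == 'network_cli'
    return connection_type in ('ssh', 'network_cli')

assert needs_host_keys('ssh')
assert needs_host_keys('network_cli')
assert not needs_host_keys('winrm')
```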
12:46 <pabelanger> Shrews: do you think there will be a release of nodepool this week?
12:48 *** jamesmcarthur has joined #zuul
12:51 <Shrews> pabelanger: i don't think one is planned. normal procedure is to run it in production infra for a while before a release, so we'd probably have to schedule a restart first and run for a while before releasing with your change.
12:52 <pabelanger> +1
12:52 <pabelanger> maybe after we stand down from the zuul release :)
12:54 <Shrews> yeah. i think the task manager removal stuff has merged, but we aren't running that yet. so that may or may not be a big deal to contend with.
12:54 *** gtema has quit IRC
12:54 <Shrews> mordred may want to be around to help watch that when we do
12:54 <pabelanger> ack
12:55 <Shrews> then again, nl01 was restarted yesterday, so it may not be an issue
12:56 <Shrews> or were all of them restarted?
12:56 <Shrews> "2019-04-15 23:20:07 UTC restarted nodepool-launcher on nl01 and nl02 due to OOM; restarted n-l on nl03 due to limited memory"
12:57 * Shrews keeps fingers crossed
12:57 <pabelanger> oh right
12:57 <pabelanger> we did it last night
12:58 <Shrews> launcher is not running on nl04
12:58 <Shrews> moving to #-infra
12:58 <openstackgerrit> Merged openstack-infra/nodepool master: Gather host keys for connection-type network_cli  https://review.openstack.org/652778
12:58 *** jamesmcarthur has quit IRC
13:00 *** bjackman has quit IRC
13:00 *** bjackman has joined #zuul
13:03 <tristanC> mordred: so, could we get the zuul-operator project created?
13:05 <tristanC> And has anyone here used the executor zone feature yet?
13:07 *** pcaruana has quit IRC
13:10 <mordred> tristanC: sorry - I still need to write the spec - I've been slammed with RH internal things recently. I promise I'll get that done today or tomorrow
13:11 <tobiash> Shrews: so your launchers are already running the taskmanager changes?
13:11 <Shrews> tobiash: they are now
13:11 <tobiash> I wasn't sure if I wanted to be the first to try ;)
13:12 <pabelanger> tristanC: not yet, I tested it locally a few weeks ago, but I'm hoping to try it out in the next month or so
13:13 <tristanC> pabelanger: iiuc, the zone scoping only happens for the execute method, wouldn't it make sense to disable merge jobs for zoned executors?
13:14 <mordred> Shrews: and they seem to be not crashing? that's great!
13:15 <pabelanger> tristanC: maybe, I cannot remember if we discussed that or not on IRC as an improvement. It was more to make sure executors were closer to testing, but I could see why you'd want that for mergers too
13:15 <pabelanger> and use private IPs only
13:17 <tristanC> pabelanger: i mean, not do merge jobs for zoned executors because they are not related to most of our git sources
13:18 <mordred> Shrews: speaking of ... there is a (very large) patch up for review in sdk that's about getting the image code to use the underlying resource primitives: https://review.openstack.org/#/c/651534 - that should probably get decently deep review
13:23 *** bhavikdbavishi has quit IRC
13:26 <Shrews> mordred: i think there might be an issue with openstacksdk... or maybe nodepool
13:26 <Shrews> mordred: we've been trying to delete a server from ovh-gra1 for a long time that does not exist (10a646d6-fd03-4cca-ab97-ca137fafa745)
13:27 *** bjackman has quit IRC
13:27 <Shrews> mordred: it doesn't seem that nodepool is getting what it expects from get_server:  http://paste.openstack.org/show/749354/
13:28 <pabelanger> tristanC: yah, could see that, however in the use case i was thinking about, I didn't mind all mergers merging all git repos, it was more about the push from executor to test node
13:28 <Shrews> mordred: we aren't throwing exceptions.NotFound() in cleanupServer there
13:28 <pabelanger> tristanC: I think you could propose a patch and see what others say
13:28 <Shrews> so we are always trying to delete it
13:29 <mordred> Shrews: hrm. what ARE we getting back, I wonder?
13:29 <Shrews> mordred: yeah, that's the question
13:30 <mordred> blerg. we're throwing openstack.exceptions.ResourceNotFound
13:30 <mordred> lemme go look why
13:31 *** tobiash has quit IRC
13:31 <Shrews> mordred: oh yay. we don't handle that in nodepool
13:31 <mordred> well - we shouldn't be throwing it
13:31 <pabelanger> tristanC: I think there is a topic, say if the repo was marked private.  Today we don't really support that in a multi-tenant setup. All mergers can see the content of all tenants, but there might be a use case where you want to limit the scope a merger / executor could see.  Thankfully for now, that isn't the case for us.
13:32 *** tobiash has joined #zuul
13:32 *** lennyb has quit IRC
13:33 *** pcaruana has joined #zuul
13:33 <Shrews> mordred: i don't think we are, actually
13:34 <mordred> Shrews: I just ran get_server_by_id against ovh gra1 with that id and it threw that exception at me
13:34 <Shrews> mordred: what i see in the log is nodepool.exceptions.ServerDeleteException, which happens when we wait for server deletion
13:34 <mordred> Shrews: if we're not throwing that in the nodepool code it's even weirder :)
13:34 <mordred> Shrews: WEIRD
13:34 <Shrews> i'm confused
13:36 <Shrews> mordred: what happens if you call get_server(server_id)
13:37 <mordred> checking
13:37 <mordred> Shrews: I properly get None
13:37 <Shrews> wow. now i'm more confused
13:38 <mordred> Shrews: remote:   https://review.openstack.org/652995 Return None from get_server_by_id
13:38 <mordred> Shrews: that should fix the get_server_by_id bug
13:38 <mordred> hrm. that probably wants a release note
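A minimal sketch of the fix mordred pushed as 652995: catch the sdk's ResourceNotFound and return None, so callers such as nodepool's cleanup loop treat a missing server as already deleted instead of retrying forever. The stand-in exception class and `_fetch` helper below are placeholders for the real openstacksdk internals, not the actual implementation:

```python
# Stand-in for openstack.exceptions.ResourceNotFound.
class ResourceNotFound(Exception):
    pass

# Placeholder for the sdk call that 404s against the cloud.
def _fetch(server_id):
    raise ResourceNotFound(server_id)

def get_server_by_id(server_id):
    # The fix discussed above: translate "not found" into None,
    # matching the behavior get_server already had.
    try:
        return _fetch(server_id)
    except ResourceNotFound:
        return None

assert get_server_by_id('10a646d6-fd03-4cca-ab97-ca137fafa745') is None
```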
13:42 <Shrews> mordred: can you share your test code with me so i don't have to hack that up real quick
13:42 <Shrews> want to run it on nl04
13:43 *** bjackman has joined #zuul
13:45 *** lennyb has joined #zuul
13:45 *** bjackman_ has joined #zuul
13:48 *** bjackman has quit IRC
13:50 <Shrews> nm. hacked one up
13:54 <Shrews> wow, this code is really not acting the way i'm expecting
13:56 <Shrews> cleanupNode() should be throwing a NotFound() exception, which would prevent waitForNodeCleanup() from being called. That doesn't appear to be the case.
13:56 <mordred> Shrews: yeah - i just ran c = openstack.connect() ; c.get_server_by_id() in a repl
14:01 <Shrews> mordred: nodepool clearly thinks that server exists for some reason
14:13 *** bjackman_ has quit IRC
14:14 *** bjackman_ has joined #zuul
14:22 <mordred> Shrews: is it that it's in zk - and since get_server_by_id is misbehaving it never gets cleaned out of zk?
14:23 *** bjackman_ has quit IRC
14:24 <Shrews> mordred: it's in zk, but it's calling get_server, which doesn't appear to be returning None here but does in my test
14:25 <Shrews> I have to afk for a bit
14:26 <mordred> Shrews: eek. get_server not returning None seems extra weird
14:29 *** gtema has joined #zuul
14:41 *** quiquell|rover is now known as quiquell|off
14:42 <fungi> yikes, so i finally got tox -e py37 running on my workstation for zuul last night (by deleting the web/build symlink shipped in the git repo) and after running many, many tests it finally dies because of a too-long subunit packet, or so i think based on the traceback...
14:42 <fungi> File "/home/fungi/src/openstack/openstack-infra/zuul/.tox/py37/lib/python3.7/site-packages/subunit/v2.py", line 208, in _write_packet
14:42 <fungi> raise ValueError("Length too long: %r" % base_length)
14:42 <fungi> ValueError: Length too long: 5375957
14:43 <fungi> that's one massive subunit packet
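A hedged reconstruction of the failure mode in fungi's traceback: subunit v2 caps how large a single packet's length field may be, and the writer raises ValueError when a test attaches more output than fits. The cap below is an assumption for illustration only — it is not verified against subunit's actual constant, and `write_packet` is a simplified stand-in for subunit's `_write_packet`:

```python
# Assumed cap, chosen only so the 5375957-byte packet from the
# traceback above exceeds it; NOT subunit's verified limit.
ASSUMED_MAX = 4 * 1024 * 1024

def write_packet(payload):
    # Simplified stand-in for subunit.v2._write_packet's length check.
    base_length = len(payload)
    if base_length > ASSUMED_MAX:
        raise ValueError("Length too long: %r" % base_length)
    return payload

try:
    write_packet(b"x" * 5375957)  # the size from the traceback
except ValueError as e:
    assert "5375957" in str(e)
```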
14:43 <clarkb> I had those too until I got zk, mysql and postgres running
14:44 <fungi> it starts its own services for those as fixtures though, right?
14:44 <fungi> or maybe i just dreamt that
14:44 <SpamapS> you did
14:45 <fungi> so... to run zuul unit tests on my workstation i need to have already-running database and zookeeper services (with some predetermined configuration)?
14:45 <SpamapS> fungi: tools/test-setup.sh ftw ;)
14:45 <fungi> that script is frightening unless i run it on a throwaway vm
14:45 <SpamapS> fungi: actually if you docker, `cd tools && docker-compose up`
14:45 <fungi> it adds packages from random places which are not my distro
14:46 <fungi> heh... "docker-compose: command not found"
14:46 <fungi> time for me to install and learn about docker
14:47 <SpamapS> right, it's an add-on
14:49 <SpamapS> fungi: docker doesn't come with compose by default, it's a separate tool, but this does at least avoid polluting your dev box.
14:50 <corvus> i think it's a good fit for this case
14:50 <fungi> yep, noted
14:50 <clarkb> I run zk out of its tarball but I should convert it to a docker container since I have mysql and postgres running out of docker containers now too
14:50 <corvus> i will tag dc9347c1223e3c7eb0399889d03c5de9e854a836 as zuul 3.8.0 -- does that look good?
14:50 <corvus> clarkb, pabelanger, mordred, tobiash: ^
14:51 <corvus> (i believe clarkb restarted opendev's zuul last night one commit behind that, and it seems to work)
14:51 <pabelanger> corvus: wfm
14:51 <clarkb> corvus: I did but only the scheduler
14:51 <tobiash> corvus: ++
14:51 <clarkb> and web
14:52 <mordred> corvus: ++
14:52 <corvus> i don't think there are any changes which require restarts of other components, so under the circumstances (urgent release required) i think we're good
14:52 <clarkb> and ya, the sha1 lgtm so ++ from me
14:53 <corvus> pushed
14:54 <clarkb> I'll send the email to the announce list. I expect I'll get moderated, so you can let that through when the release is on pypi?
14:54 *** bjackman_ has joined #zuul
14:54 <corvus> clarkb: yep
14:56 <clarkb> sent
14:56 *** jamesmcarthur has joined #zuul
14:59 *** bjackman_ has quit IRC
15:05 <fungi> and we should switch the story to public at ~ the same time
15:06 <fungi> oh, looks like that's already done. cool
15:07 <fungi> i guess that was the same time the fixes were pushed to gerrit, so it makes sense
15:10 <clarkb> https://pypi.org/project/zuul/#history has 3.8.0. Now just waiting for release notes to update?
15:10 <clarkb> https://zuul-ci.org/docs/zuul/releasenotes.html#relnotes-3-8-0 that just updated
15:10 <clarkb> corvus: ^ I think we are good to send the email?
15:12 <corvus> clarkb: done!
15:13 <corvus> clarkb, pabelanger: thanks!
15:13 <pabelanger> thanks all. starting upgrade now
15:27 *** pcaruana has quit IRC
15:30 *** pcaruana has joined #zuul
15:40 *** pcaruana has quit IRC
15:43 <pabelanger> Zuul version: 3.8.0 :)
15:43 <pabelanger> <3 xterm.js
15:44 <Shrews> mordred: umm, wow. this is what nodepool is getting back: http://paste.openstack.org/raw/749375/
15:44 <Shrews> 'status': 'BUILD'
15:45 <Shrews> 'created': '2018-11-04T06:15:47Z'
15:45 <Shrews> why is nodepool getting that but our scripts are not?
15:46 *** pcaruana has joined #zuul
15:46 <openstackgerrit> Fabien Boucher proposed openstack-infra/zuul master: WIP - Pagure driver - https://pagure.io/pagure/  https://review.openstack.org/604404
15:48 <clarkb> Shrews: could it be an instance that is in a building state that has effectively leaked in the cloud?
15:48 <clarkb> or are your scripts listing the same instance?
15:48 <clarkb> if ^ then maybe double check you are using the correct account
15:49 <Shrews> http://paste.openstack.org/show/749377/
15:49 <Shrews> that prints None
15:49 *** gtema has quit IRC
15:49 <Shrews> using clouds.yaml from the nodepool home directory
15:50 <clarkb> cache maybe? though you restarted the process, and the cache isn't persistent unless that changed
15:52 <Shrews> oh! the correct param is 'region_name', not 'region'
15:52 <Shrews> now i get it returned
15:53 <Shrews> so apparently deleting an instance in the BUILD state does not ever work
15:53 <Shrews> at least in this case
15:53 <pabelanger> Shrews: yah, you likely need to toggle the state, but you need to be an openstack admin to do that
15:54 <pabelanger> which usually means something cloud-side messed up
15:54 <pabelanger> this is the exact issue rdocloud has, but they've given the nodepool user admin access to the tenant
15:54 <Shrews> pabelanger: clarkb: so we need to contact someone at ovh to clean that up for us?
15:55 <pabelanger> Shrews: Yup, I've done that in the past
15:55 *** pcaruana has quit IRC
15:55 <pabelanger> usually email them, and they are very good at helping
15:56 <Shrews> what's the contact email? does it have to come from someone they know?
15:57 <pabelanger> Shrews: I can send you it, 1 sec
15:57 <clarkb> amorin on IRC is who we've been pinging lately. I'll PM you an email addr
15:58 <pabelanger> ah, clarkb is doing it
16:00 *** bhavikdbavishi has joined #zuul
16:02 *** pcaruana has joined #zuul
16:03 <mordred> Shrews: wow. we just don't know how to type region_name?
16:04 <mordred> Shrews: we're awesome
16:04 <Shrews> i haz shame
16:04 *** jamesmcarthur has quit IRC
16:04 * SpamapS is excited for 3.8.0 as well
16:04 <SpamapS> xterm.js is so nice
16:04 <mordred> Shrews: MAYBE we should consider updating sdk to accept both region and region_name ... I get this wrong and I'm core on the project
16:05 <mordred> Shrews: especially since passing region *works* but doesn't do what the user thinks it does
16:05 <Shrews> mordred: i would unobject to such a thing so hard
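An illustrative sketch (not the openstacksdk API) of the trap Shrews hit and the mitigation mordred proposes: a `connect()` that swallows `**kwargs` silently drops a misspelled `region` argument, so the call "works" but queries the default region; folding the common misspelling into the real parameter removes the silent failure. All names below are assumptions for illustration:

```python
# Hypothetical connect() showing mordred's "accept both" idea.
def connect(cloud=None, region_name=None, **kwargs):
    # In the misbehaving pattern, 'region' lands in kwargs and is
    # silently dropped; here we fold it into the real parameter.
    if region_name is None:
        region_name = kwargs.pop("region", None)
    return {"cloud": cloud, "region_name": region_name}

assert connect(cloud="ovh", region="GRA1")["region_name"] == "GRA1"
assert connect(cloud="ovh", region_name="GRA1")["region_name"] == "GRA1"
assert connect(cloud="ovh")["region_name"] is None
```

The alternative design is to reject unknown kwargs outright, which also surfaces the typo instead of letting it pass.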
16:06 <Shrews> mordred: it's also a bit odd that 'openstack server list' does not return it
16:06 <Shrews> maybe they filter based on state
16:08 *** pcaruana has quit IRC
16:12 <Shrews> oh, it does return it. helps to use the right credentials
16:13 <clarkb> yesterday when trying to make a symlink I kept running ls -s and wondering why it wasn't working
16:13 <corvus> clarkb: you probably meant "ls -n"
16:13 *** jamesmcarthur has joined #zuul
16:14 <clarkb> ha
16:36 *** pcaruana has joined #zuul
16:39 *** mattw4 has joined #zuul
16:48 *** jamesmcarthur_ has joined #zuul
16:52 *** jamesmcarthur has quit IRC
16:55 *** bhavikdbavishi1 has joined #zuul
16:56 *** bhavikdbavishi has quit IRC
16:56 *** bhavikdbavishi1 is now known as bhavikdbavishi
16:57 *** yolanda_ has quit IRC
17:22 *** bjackman_ has joined #zuul
17:23 *** bhavikdbavishi has quit IRC
17:25 *** bhavikdbavishi has joined #zuul
17:32 *** bhavikdbavishi has quit IRC
17:40 *** bjackman_ has quit IRC
18:00 *** mattw4 has quit IRC
18:04 <clarkb> https://review.openstack.org/#/c/652727/ is a simple fix for the tox cover target in zuul
18:13 *** electrofelix has quit IRC
18:25 *** jamesmcarthur_ has quit IRC
18:26 *** jamesmcarthur has joined #zuul
18:35 *** jamesmcarthur has quit IRC
18:39 *** irclogbot_0 has quit IRC
18:40 *** jamesmcarthur has joined #zuul
18:42 *** irclogbot_1 has joined #zuul
18:57 <clarkb> hrm, that failed on the py35 issues
18:58 <clarkb> we probably need to dig into why it is so unstable (and either pin it down to $cloud(s) or something else)
18:58 <clarkb> http://logs.openstack.org/27/652727/1/gate/tox-py35/3852b93/testr_results.html.gz that doesn't seem to be zk or db related
18:59 <pabelanger> guess we still need py35 for xenial
18:59 <clarkb> I'm not sure the errors are python-version specific
18:59 <clarkb> we just happen to roll the dice ineffectively on those nodes more often?
19:06 <pabelanger> yah, we hit a good streak last night
19:38 <openstackgerrit> Paul Belanger proposed openstack-infra/zuul-jobs master: Add test_setup_reset_connection setting  https://review.openstack.org/653130
19:39 <pabelanger> I am not sure if we want to do ^, but that is something I do in local jobs to pick up group changes when using tools/setup.sh
20:00 *** jamesmcarthur has quit IRC
20:14 *** weshay has quit IRC
20:15 *** weshay has joined #zuul
20:19 *** pcaruana has quit IRC
21:01 *** jamesmcarthur has joined #zuul
21:19 <SpamapS> Hm, the zuul upgrade broke us
21:19 <SpamapS> not necessarily 3.8's fault
21:19 <SpamapS> But the multi-ansible is clearly not working
21:19 <SpamapS> http://paste.openstack.org/show/749395/
21:21 <SpamapS> 2.5 works, but 2.6 and 2.7 throw this weird optparse sys.exit problem
21:21 <mordred> SpamapS: yeah ... that's ... what?
21:22 <SpamapS> I can't figure it out
21:22 <mordred> SpamapS: I don't see any smoking guns in that paste WRT bad paths or anything
21:22 <SpamapS> They were installed with zuul-manage-ansible during our docker build
21:23 <SpamapS> I'm setting the default ansible version to 2.5
21:24 <SpamapS> Oh that's annoying... so...
21:25 <SpamapS> ansible seems to still use system python when doing localhost tasks
21:25 <SpamapS> so my boto3 installed in all the venvs does nothing :-P
21:25 <mordred> SpamapS: *ugh*
21:26 <mordred> SpamapS: fwiw - I can't find DEFAULT_LOCAL_TMP in ansible/constants.py
21:27 <SpamapS> mordred: yeah, I think there are some weird interactions going on
21:27 <mordred> yeah
21:27 <SpamapS> Also it seems that while it did print that out
21:27 <SpamapS> it went ahead and used 2.7 just fine
21:27 <mordred> well - also - it's not in 2.5 - so that might not be a thing
21:27 <mordred> ok
21:27 <mordred> yeah
21:27 <mordred> I think magic somewhere might set that
21:29 <mordred> SpamapS: I'm trying to think of a way of setting ansible_python_interpreter to the venv python for localhost - without creating a localhost entry in the inventory
21:29 <mordred> SpamapS: so far I am not succeeding in having such an idea
21:29 <mnaser> SpamapS: use vars in the task/play?
21:30 <mnaser> or you can use a block too
21:30 <mordred> SpamapS: terrible idea ... what if we were to bind-mount $venv/bin/python in as /usr/bin/python in the bubblewrap
21:30 <mnaser> https://github.com/openstack/openstack-ansible-os_nova/blob/master/tasks/nova_service_setup.yml we delegate operations to the cloud to a specific host (mostly localhost)
21:30 <mnaser> nova_service_setup_host_python_interpreter: "{{ openstack_service_setup_host_python_interpreter | default((nova_service_setup_host == 'localhost') | ternary(ansible_playbook_python, ansible_python['executable'])) }}"
21:30 <mnaser> it's funky but it works.
21:31 <mordred> mnaser: I think the thing is that the allocation of which python to use in localhost tasks with multi-ansible in zuul is stuff that really shouldn't be in playbooks or roles
21:31 <mnaser> ah, I see
21:31 <mnaser> that's more of a job definition thing?
21:31 <SpamapS> I already set ansible_python_interpreter=/usr/bin/python3
21:31 <SpamapS> as a site var
21:31 <SpamapS> because I don't use any images that have a /usr/bin/python
21:32 <mnaser> that's a nice feeling
21:32 <mordred> yah - but that's not the python that has boto installed in it
21:32 <SpamapS> It is now. ;)
21:32 * SpamapS just rebuilt images w/ it
21:32 <mordred> hahaha
21:32 <mordred> SpamapS: fair
21:32 <SpamapS> It was actually before too
21:32 <SpamapS> I removed it thinking they'd somehow magically use their own python
21:32 <mordred> I'm still stumped by your traceback though
21:32 <SpamapS> I always forget that Ansible doesn't do that
21:32 <corvus> that's what we do for gear, etc -- just install them systemwide
21:34 <corvus> but yeah, there's some weird dichotomy here since the executor has a bunch of ansible/python stuff it uses, yet also happens to serve dual-duty as a "remote" system for ansible which receives ansiballz
21:35 <SpamapS> Seems to only cause problems in the --version check
21:35 <SpamapS> so that was a red herring
21:35 <SpamapS> my real problem was the lack of a system-wide boto3
21:38 <SpamapS> still.. nice to see colors in my log viewer
21:38 <SpamapS> I wish I had something that would make the static logs colorized too
21:38 <SpamapS> lots of folks complain about the color codes while looking for errors
21:48 <SpamapS> would be nice if there was some de-duping in the venvs. My zuul-executor image is now 700+MB
21:49 <SpamapS> oh, I see, that's because I'm doing it wrong and re-running zuul-manage-ansible
21:49 <SpamapS> the ones built right are only 312MB
22:40 <fungi> corvus et al: we want zuul deliverables moved to a zuul namespace during the opendev maintenance on friday, right?
22:41 <corvus> fungi: yes!  want me to make a list?
22:41 <corvus> i think http://git.zuul-ci.org/cgit is it
22:41 <fungi> corvus: please do! i can hard-code it into my script
22:41 <fungi> oh, right, i can pull it that way too
22:42 <fungi> it's just one more file to ingest
22:42 <fungi> i'll do it that way
22:42 <corvus> ok, probably just as easy :)
22:42 <fungi> yep!
22:43 <corvus> fungi: fyi there's an existing zuul/project-config which does not need to move
22:44 <fungi> yep, that's covered just fine. the only things forced to move if they *aren't* listed anywhere are things in the openstack namespace
22:44 <corvus> but that's the only thing that we pre-emptively created in the zuul namespace
22:57 <SpamapS> Hm, the Buildsets tab seems kind of..
22:57 <SpamapS> pointless?
22:59 <pabelanger> I've been using it to quickly see which projects are pushing up PRs
23:04 <pabelanger> could I get a review on https://review.openstack.org/650880/ it works around some python2.6 issues with the validate-host role
23:05 <pabelanger> https://review.openstack.org/648815/ also fixes some stale files with prepare-workspace on static nodes
23:06 *** jamesmcarthur has quit IRC
23:07 *** jamesmcarthur has joined #zuul
23:11 *** jamesmcarthur has quit IRC
23:22 *** rlandy|ruck is now known as rlandy|ruck|biab
23:50 *** sshnaidm is now known as sshnaidm|afk
23:55 *** rlandy|ruck|biab is now known as rlandy|ruck

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!