Wednesday, 2018-04-11

*** JasonCL has quit IRC00:00
*** JasonCL has joined #zuul00:39
*** odyssey4me has quit IRC00:47
*** JasonCL has quit IRC00:47
*** odyssey4me has joined #zuul00:47
*** JasonCL has joined #zuul01:13
*** harlowja has quit IRC01:14
dmsimardHad never looked at what GitLab CI was01:25
dmsimardI'll stay with Zuul01:25
dmsimardhttps://docs.gitlab.com/ee/ci/yaml/01:25
*** JasonCL has quit IRC01:41
*** JasonCL has joined #zuul01:49
tristanCbitbucket also comes with its own integrated ci: https://bitbucket.org/product/features/pipelines01:49
*** JasonCL has quit IRC01:55
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: mqtt: add basic reporter  https://review.openstack.org/53554302:09
*** bhavik1 has joined #zuul02:13
*** bhavik1 has quit IRC02:16
*** Wei_Liu has joined #zuul02:39
mrhillsmanthe default role available for uploading logs does not allow changing the ssh port, true?03:04
mrhillsmanpersonally i do not think it does but am not 100% sure03:04
clarkbmrhillsman: ya, doesn't look like the add-fileserver role takes a port argument03:13
mrhillsmanok thx03:13
clarkbyou could likely extend it pretty simply, especially since it already takes a bundle of data in the secret03:14
mrhillsman++03:14
mrhillsmanty sir03:14
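
A hedged sketch of clarkb's suggestion above. The variable names here (fileserver, ssh_port) are assumptions, not the actual zuul-jobs interface; the idea is just to carry an optional port in the same secret bundle the role already consumes:

    # Hypothetical add-fileserver extension: read an optional port
    # out of the secret, defaulting to the usual 22.
    - name: Add fileserver to ssh config
      blockinfile:
        path: "{{ ansible_user_dir }}/.ssh/config"
        create: yes
        mode: 0600
        block: |
          Host {{ fileserver.fqdn }}
            Port {{ fileserver.ssh_port | default(22) }}
            User {{ fileserver.ssh_username }}
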
tobiashcorvus: re 535680: it works and we've been using it in production for a few months04:16
tobiashcorvus: basically it's WIP because I haven't had time yet to add tests to it04:17
*** harlowja has joined #zuul04:23
*** rlandy has quit IRC04:31
*** swest has joined #zuul05:01
openstackgerritTobias Henkel proposed openstack-infra/zuul master: Also prefix the alembic_version table  https://review.openstack.org/56026305:18
tobiashcorvus, clarkb: just noticed that to correctly support table prefixes I missed an important fix ^05:19
tobiashwithout that, prefixed tables won't work correctly as soon as more than one deployment shares the same db05:20
tobiashunfortunately I cannot think of a way to fix that (if someone is already using prefixes) without manual intervention (aka renaming the version table by hand before upgrading)05:22
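
For anyone already on a prefixed deployment, the manual step tobiash mentions would look roughly like this (MySQL syntax, assuming a prefix of myprefix_; run it before upgrading):

    -- rename the shared alembic version table to the prefixed name
    RENAME TABLE alembic_version TO myprefix_alembic_version;
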
*** elyezer has quit IRC05:25
*** elyezer has joined #zuul05:29
openstackgerritTobias Henkel proposed openstack-infra/zuul master: Also prefix the alembic_version table  https://review.openstack.org/56026305:31
openstackgerritTobias Henkel proposed openstack-infra/zuul master: Add postgresql release note  https://review.openstack.org/56007905:33
*** harlowja has quit IRC05:35
openstackgerritTobias Henkel proposed openstack-infra/zuul master: Add regex support to project stanzas  https://review.openstack.org/53571305:36
openstackgerritTobias Henkel proposed openstack-infra/zuul master: DNM: test  https://review.openstack.org/56026505:41
openstackgerritMerged openstack-infra/zuul master: Return CORS headers on all requests  https://review.openstack.org/55502705:56
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: gerrit: recognize project-created event  https://review.openstack.org/56027406:14
*** jhesketh_ is now known as jhesketh06:29
*** hashar has joined #zuul06:36
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: mqtt: add basic reporter  https://review.openstack.org/53554306:49
*** xinliang has quit IRC07:42
*** xinliang has joined #zuul07:42
*** jpena|off is now known as jpena07:55
*** electrofelix has joined #zuul08:40
*** hashar is now known as hasharAway09:48
*** Wei_Liu has quit IRC09:52
*** Wei_Liu has joined #zuul09:52
kklimondadoes Depends-On work in the gate pipeline for inter-project dependencies, where two projects don't share a queue?10:16
tobiashkklimonda: yes it should, but the second won't be enqueued automatically10:43
kklimondatobiash: hmm, not sure what you mean10:44
kklimondaboth will still have to have a correct set of votes to be enqueued; what I'm mostly interested in is whether both changes will be merged at the ~same time10:45
tobiashThe gate will ensure that the dependency is there but they won't be enqueued into the gate at the same time10:45
tobiashBut the order of merging is ensured10:46
kklimondaright, so there is no guarantee that change A will be merged before change B (which depends on A) at about the same time, but if change B gets its votes before change A, they should enter the gate around the same time10:48
AJaegerkklimonda: If B has "depends-on: A" then A will always merge first. That is guaranteed11:05
*** xinliang has quit IRC11:11
kklimondaAJaeger: but whether A and B are merged together is dependent on whether they share a queue, right?11:15
AJaegerkklimonda: if they share a queue, they get merged one after the other (if they pass tests;)11:16
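
For reference, the cross-project dependency being discussed is declared in the footer of change B's commit message; the review URL below is a made-up example:

    Add feature X to project-b

    This needs the new API that project-a introduces.

    Depends-On: https://review.openstack.org/123456
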
*** xinliang has joined #zuul11:22
*** xinliang has quit IRC11:22
*** xinliang has joined #zuul11:22
*** jpena is now known as jpena|lunch11:46
*** hasharAway has quit IRC12:05
*** odyssey4me has quit IRC12:14
*** odyssey4me has joined #zuul12:14
*** rlandy has joined #zuul12:30
*** dkranz has joined #zuul12:36
*** jpena|lunch is now known as jpena12:50
*** _ari_|conf is now known as _ari_13:17
*** JasonCL has joined #zuul13:22
*** iamcxl001 has joined #zuul13:26
*** iamcxl001 has left #zuul13:29
*** ttx has quit IRC13:55
*** ttx has joined #zuul13:57
corvusmordred: can you +3 https://review.openstack.org/560090 pls?14:09
*** dkranz has quit IRC14:34
openstackgerritJeremy Stanley proposed openstack-infra/zuul master: Add instructions for reporting vulnerabilities  https://review.openstack.org/55435214:42
fungicorvus: ^ thanks for pointing out i should have updated that after we got the new docs site14:47
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Add installation/deprecation documentation  https://review.openstack.org/56046315:04
corvusi encourage everyone to read that change, as it outlines a deprecation policy for zuul-jobs ^15:05
corvusmordred, dmsimard, AJaeger, pabelanger, tristanC, tobiash: ^15:06
tobiashhrm, got a strange exception on the executor: http://paste.openstack.org/show/718944/15:17
tobiashI've never seen that before15:17
tobiashI just tried to add a hello world job with playbook15:17
*** dkranz has joined #zuul15:22
corvustobiash: that seems to indicate that /tmp/eefa1daac14348c88cad40da7e1a0748/ansible/playbook_0/role_0 already exists15:42
corvustobiash: are there any log lines before that which might suggest a symlink was already created there?15:42
tobiashI can't find anything in the logs15:45
tobiashthe project containing that at least has a roles path in it15:46
corvustobiash: running tip of master?15:46
tobiashbut I have to retest with keep to check what's already symlinked there15:46
tobiashnot yet15:46
tobiash2-3 weeks old15:46
corvustobiash: oh, so this doesn't have the role checkout changes15:46
tobiashso these might fix it15:47
tobiash?15:47
corvustobiash: no idea.  there's a good chance the bug may still be present, but everything about that has changed, so it's probably worth upgrading before debugging further.15:47
tobiashgood point15:48
tobiashmy current prod deployment is based on fddf8f9d15:49
corvustobiash: while you have it in this configuration, i'd go ahead and set keep to find out what the symlink is, just for later reference.15:49
corvustobiash: but then after that, i'd suggest upgrading, and if it happens again, let's look at the debug lines from before then to see what roles it previously prepared.15:49
tobiashstrange, it prepares an empty dir15:51
tobiashit's not even a symlink15:51
corvustobiash: oh, hrm, that might be the expected behavior15:52
corvusi think our documentation of the jobdir layout may be misleading15:53
corvustobiash: is there any chance there is...15:54
corvustobiash: oh i see it right in the output you sent15:54
corvustobiash: Prepare zuul role for {'target_name': ''15:54
corvustobiash: the target name gets appended to the role path root15:54
corvustobiash: does the job definition say "roles: [{name: '', zuul: '...'}]" ?15:55
tobiashhttp://paste.openstack.org/show/718949/15:56
tobiashcorvus: that's the job definition15:56
tobiashit works within a different project, but not the project in question15:56
corvustarget_name should be 'ansible' there15:58
corvusah15:58
corvuswe don't allow roles named 'ansible'15:58
corvusthere's a regex for that in configloader15:59
tobiashthe repo is named ansible15:59
tobiashand that's probably what the target_name would be15:59
corvusbecause we're trying to replace repo names like "ansible_role_foo" with "foo"15:59
*** patriciadomin has quit IRC16:00
corvustobiash: this is in _makeImplicitRole in configloader16:00
*** patriciadomin has joined #zuul16:01
corvustobiash: we should probably ensure that leaves us with *something*.  if it's empty, maybe just use the pre-regex value.16:01
tobiashcorvus: yah16:02
tobiashthanks for finding this :)16:02
tobiashI probably would have searched for hours16:02
corvussorry i didn't see it right away.  this is definitely something that's present in the new code too16:03
tobiashyou saw it amazingly fast :)16:04
*** harlowja has joined #zuul16:05
openstackgerritTobias Henkel proposed openstack-infra/zuul master: WIP: Fix implicit role for repos named ansible  https://review.openstack.org/56055416:12
tobiashcorvus: so I think that should fix it ^16:12
tobiash(needs test case though)16:12
tobiashwill add tests later or tomorrow16:16
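
A minimal sketch of the fallback corvus suggested. The regex mirrors what _makeImplicitRole uses in configloader, but the exact names here are written from memory and may differ from the tree:

    import re

    # Strips repo-name noise so e.g. "ansible-role-foo" becomes "foo".
    ANSIBLE_ROLE_RE = re.compile(r'^(ansible[-_.+]*)*(role[-_.+]*)*')

    def implicit_role_name(project_name):
        name = project_name.split('/')[-1]
        stripped = ANSIBLE_ROLE_RE.sub('', name)
        # If the regex consumed everything (a repo literally named
        # "ansible"), fall back to the pre-regex value instead of
        # producing a role named ''.
        return stripped or name
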
corvustobiash: are you still using https://review.openstack.org/550982 and child?16:19
tobiashcorvus: no, that was just to demonstrate an issue in diskimage-builder16:26
tobiashjust updated prod to latest master :)16:45
*** electrofelix has quit IRC16:53
*** harlowja has quit IRC17:07
Shrewscorvus: So, re: my ZfS changes... move svc mgmt from the Fedora section into the respective nodepool and zuul install sections?17:16
*** jpena is now known as jpena|off17:16
Shrewscorvus: and all the directory setup commands for zuul17:16
Shrewswhat about zookeeper setup?17:17
corvusShrews: yeah, what do you think about that?  i'm thinking that puts all of the boilerplate package installation stuff together -- things you don't really need to think about much -- then writing the config can be separate17:17
Shrewscorvus: i like it. not sure where the zk stuff fits though17:18
corvusShrews: maybe do the same for that, and put it first?17:18
corvusShrews: zk install + setup + svg17:18
corvuser "svc"17:19
Shrewscorvus: zk install is repo specific17:19
corvusabbreviation fail17:19
Shrewsso i see that in the environment setup part. and that should take care of its svc file automatically17:20
Shrewsso it's really just modifying the config17:20
corvusShrews: oh, yep.17:20
Shrewswhich, honestly... why isn't the default config good enough?17:20
Shrewsmaybe we should just rip that out17:21
corvusShrews: indeed, good question :)17:21
corvusi haven't checked to see what fedora splats onto disk, it's probably worth a run through17:22
*** _ari_ has quit IRC17:23
corvusi'll check real quick17:24
Shrewsoh good, thx. i was about to go through the process of creating a fedora vm17:24
corvusi've got one up, but i need to remove zk first17:24
clarkbzk doesn't have to be repo specific, in fact on suse there is no package so I just install it from upstream's tarballs, which include a sysv init script that should work with systemd17:25
corvusclarkb: yes, but its *so much easier* to install it from distro packages17:26
Shrewsoh, my stomach is reminding me I missed lunch. bbiab17:28
corvusShrews: the answer is there's no default config file on fedora.  so at the least, we need to do "sudo cp /etc/zookeeper/zoo_sample.cfg /etc/zookeeper/zoo.cfg".17:31
corvusShrews: that's really all that needs to happen, so we should probably replace the existing commands with that.17:31
*** _ari_ has joined #zuul17:32
clarkbcorvus: do you need to make sure the data dir is created, or does the package do that and they just don't use the config by default?17:32
corvusclarkb: it does it17:32
Shrewscorvus: ack. I can put that after the zk install17:32
*** hashar has joined #zuul17:38
tobiashcorvus: I think I have a new nodepool usecase. We'd like to get github enterprise instances from nodepool for our integration testing. This is supplied as a virtual appliance aka cloud image. So far so good, and easy to do with nodepool. But ghe requires at least one additional volume attached to the instance, which is, I think, not yet integrated in nodepool.17:44
*** dtruong_ has joined #zuul17:45
tobiashcorvus: what do you think about an 'additional-volumes' config in the pool config?17:45
clarkbtobiash: correct there is no explicit volume management in nodepool today. It can boot instances from volume though (not sure if that solves your problem)17:45
*** dtruong_ has quit IRC17:45
clarkblike could you boot an instance from volume with enough storage at whatever path ghe expects it at?17:46
tobiashclarkb: nope, I'll have to use that too as the diskimage is larger than my largest flavor17:46
tobiashso I need boot-from-volume and attach another volume17:46
*** dtruong_ has joined #zuul17:46
*** dtruong_ has quit IRC17:46
tobiashgithub requires a separate disk for user data17:47
*** dtruong_ has joined #zuul17:47
*** dtruong_ has quit IRC17:47
tobiashI think it creates an lvm spanning any non-boot-devices17:47
*** dtruong_ has joined #zuul17:48
*** dtruong has quit IRC17:48
*** harlowja has joined #zuul17:52
tobiashfungi, corvus: commented on 55435218:01
fungitobiash: yep, i expect we should try to get at least a third volunteer for better redundancy18:03
fungithough having information on how to securely report suspected vulnerabilities, even if we still need more volunteers to triage them, is probably better than not having those recommendations published anywhere18:04
tobiashfungi: so you'd like to land it with the todo?18:05
tobiashor just remove the todo?18:06
fungiif there's a better way to leave ourselves a reminder that we want at least one more volunteer there, then i'd be fine removing that todo18:07
tobiashit just feels wrong landing a doc after release with a todo in there18:09
corvustobiash: i think supporting additional volumes is a good idea.  i'm sure Shrews, clarkb, and mordred can help with implementation suggestions.  it seems like maybe that should be a pool-label attribute?18:14
openstackgerritDavid Shrewsbury proposed openstack-infra/zuul master: Reorganize "Zuul From Scratch" document  https://review.openstack.org/55698818:14
openstackgerritDavid Shrewsbury proposed openstack-infra/zuul master: Add Gerrit docs to Zuul From Scratch  https://review.openstack.org/55860018:14
openstackgerritDavid Shrewsbury proposed openstack-infra/zuul master: Add static driver doc to Zuul From Scratch  https://review.openstack.org/55880218:14
corvustobiash, fungi: i plan on adding my key; i may not do it right this moment18:14
openstackgerritDavid Shrewsbury proposed openstack-infra/zuul master: Add sample systemd service files.  https://review.openstack.org/55883018:14
openstackgerritDavid Shrewsbury proposed openstack-infra/zuul master: Move zuul jobs config to zuul config section  https://review.openstack.org/55886718:14
tobiashcorvus: yes that was the idea18:14
Shrewscorvus: end result of that stack looks so much nicer18:14
clarkbtobiash: corvus: yes I think it should be specific to the pool as not all pools will have volumes necessarily18:17
tobiashso we're all on the same page18:18
tobiashlet's see when I have time to implement this18:18
clarkbthe big thing is, like instances and floating IPs, these will need to be tracked18:18
clarkbbecause nova isn't going to reliably delete your volume when nodepool is done with it18:19
tobiashyes and quota checked18:19
clarkbwe will either need some way to detect leaks like with floating ips or directly track them as explicit resources like instances18:19
tobiashquota check should even be done today due to boot-from-volume18:19
tobiashclarkb: I think my first try would be to add the volume information to the node in zk and delete it together with the instance18:20
clarkbtobiash: ya like floating IPs18:20
tobiashand maybe add a cleanup thread like the one for leaked instances18:20
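
If this lands, the pool-label config corvus and clarkb are describing might look something like the sketch below; the additional-volumes attribute is hypothetical, nothing like it exists in nodepool yet:

    providers:
      - name: my-cloud
        cloud-images:
          - name: ghe-appliance
        pools:
          - name: main
            labels:
              - name: ghe
                cloud-image: ghe-appliance
                flavor-name: xlarge
                boot-from-volume: true
                volume-size: 200
                # hypothetical: extra blank volumes created with the
                # instance and deleted along with it
                additional-volumes:
                  - size: 100
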
ShrewsI would like to see all of the various nodepool drivers merged before doing the volume support.18:21
clarkbtobiash: I think you can attach metadata to the volume to indicate it is nodepool managed; then if you see one that isn't attached to an instance it can be deleted by leak detection18:21
corvusis it only boot-from-volume that has the option to delete when the instance is deleted?18:21
tobiashcorvus: I think so18:21
tobiashbut maybe mordred knows better18:22
Shrewsiirc, boot-from-volume has a flag to control that (defaults to True)18:22
Shrewsi *think*18:22
clarkbI want to say that is a volume level attribute and can be set for any volume18:22
clarkbso if you attach it to an instance you can say delete this when instance goes away18:22
corvusShrews: i'm probably going to leave a -2 on the kubernetes/containers drivers for a bit, until we get traction on the container spec.  so maybe we can target after the refactor + azure/vmware/ec2 drivers?  basically, after all the outstanding vm drivers, i guess?18:22
Shrewscorvus: sounds good18:23
corvusclarkb, Shrews, tobiash: so if that delete flag exists, can we rely on it and not have to track the volumes?18:23
corvus"exists for any volume" i should say18:24
tobiashif it does then I think yes, as we rely on that already and from what I see it works without problems so far18:24
tobiashbut I haven't seen this option in volume attach personally18:24
tobiashat least the openstack cli doesn't seem to offer this18:26
Shrewsit's a nova api maybe?18:27
clarkbcorvus: you can rely on it for 99% of the deletes18:28
clarkband the other 1% will leak because openstack18:28
clarkbthen you clean them up like we clean up floating ips18:28
clarkbShrews: ya I think its nova side /me looks at apis18:28
Shrewsblock_device_mapping18:29
Shrewsblock_device_mapping_v2, to be specific about what shade uses18:29
tobiashah, that sounds right18:29
tobiashoverlooked that in the cli18:30
Shrewswith a delete_on_termination option in the mapping18:30
tobiashso the additional block device can also be created together with the instance18:30
clarkbok, container spec comments posted; looks like Shrews found the api bits first18:31
tobiashand shade seems to support this as well18:32
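
A rough sketch of that shade call, patterned on the test linked above. Whether create_server passes block_device_mapping_v2 straight through is exactly the kind of thing to verify against a real cloud, as discussed below:

    import shade

    cloud = shade.openstack_cloud(cloud='my-cloud')
    # Boot from volume plus one extra blank data volume; both carry
    # delete_on_termination so nova removes them with the instance.
    server = cloud.create_server(
        name='ghe-test',
        image=cloud.get_image('ghe-appliance'),
        flavor=cloud.get_flavor('xlarge'),
        boot_from_volume=True,
        volume_size=200,
        terminate_volume=True,
        block_device_mapping_v2=[{
            'boot_index': '-1',  # not a boot device
            'source_type': 'blank',
            'destination_type': 'volume',
            'volume_size': 100,
            'delete_on_termination': True,
        }],
        wait=True)
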
rcarrillocruzhey folks18:32
rcarrillocruzi've noticed that in zuul18:32
rcarrillocruzon long running shell tasks18:32
rcarrillocruzthe console stream logs18:32
rcarrillocruz'waiting on logger'18:32
clarkbtobiash: Shrews v2 may not support delete_on_termination though?18:32
rcarrillocruzis that normal?18:32
rcarrillocruzany way to prevent that?18:32
clarkbhttps://docs.openstack.org/nova/latest/user/block-device-mapping.html at least there it isn't documented as supporting it18:32
Shrewsclarkb: from past experience developing to those docs, do not believe the docs18:33
Shrews:)18:33
clarkbha18:34
Shrewsthey were so frustrating when developing shade. had to just go to the source code most of the time18:34
clarkbshade definitely seems to add it in for v2 so I'm assuming your reading of the source figured that out18:35
Shrewsi think mordred actually coded that bit18:35
tobiashshould we rather trust shade?18:35
tobiashhttp://git.openstack.org/cgit/openstack-infra/shade/tree/shade/tests/unit/test_create_server.py#n74218:35
clarkbtobiash: we should trust it for now but confirm it once we can reasonably test it against a real cloud probably18:36
Shrewscorvus: does this match what you were imagining? http://logs.openstack.org/67/558867/2/check/build-sphinx-docs/90556d1/html/admin/zuul-from-scratch.html18:41
corvusShrews: yeah!18:43
clarkbfwiw I said maybe not worry about getting all the bug fixes in this week (I've got a handful myself queued up) but I'm going to review the changes tobiash called out specifically18:44
corvusclarkb, tobiash: yeah, i see those changes as "good to have" but i don't think we should block on them18:44
corvusi think we're about ready -- need to check about the postgres release note18:44
clarkbfor me I think as long as we have a good cadence, with openstack acting as canary, we can safely release really often18:45
clarkbCI will cover a good chunk of stuff and then production openstack zuul as sanity checker18:45
tobiashcorvus, clarkb: regarding the database stuff, 560263 is essential to make the table_prefix work18:46
tobiashshame on me that I found this today in my staging branch :(18:46
clarkbtobiash: oh maybe I start there then18:46
tobiashwithout that I doubt that anyone can run successfully with table_prefix and more than one instance in one db18:47
clarkb+218:48
clarkbcorvus: ^ fyi18:48
Shrewsrcarrillocruz: i have never seen that, but I don't think that's what we'd expect18:49
tobiashrcarrillocruz: did you start the zuul_stream module in your base job?18:50
clarkbrcarrillocruz: Shrews I want to say what we have seen is a race between the streamer starting in the job and the dashboard likning to it18:50
clarkbseems like tobiash and pabelanger and I debugged it but I was at a conference or something and not sure a patch ever resulted out of that18:50
tobiashclarkb: I just remember you saying 'thing x needs to be moved to y'18:51
tobiashclarkb: however I think thats unrelated to rcarrillocruz's problem18:51
clarkbtobiash: ya sounds like different problem18:51
tobiashrcarrillocruz: do you use the role http://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/start-zuul-console in your base job?18:53
tobiashif not that might be the problem18:53
tobiashanother possible problem could be that your task does things for a long time without starting to print anything18:54
tobiashwhich could make zuul think there is a problem18:54
tobiashso you might want to add a 'echo foo' at the start to check if that's the case18:55
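
For reference, the usual shape of that fix is a pre-run playbook on the base job using the zuul-jobs role linked above; paths here are illustrative:

    # playbooks/base/pre.yaml -- referenced by the base job's pre-run
    - hosts: all
      roles:
        # starts the console daemon that zuul's log streaming
        # ('waiting on logger') connects to
        - start-zuul-console
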
clarkbok I've approved the first of tobiash's linked fixes since it was easy. I'm going to make a sandwich for lunch before digging into the others18:57
rcarrillocruzyeah, i think that is used at SF tobiash19:04
dmsimardShould the zuul stream connection time out eventually? https://github.com/openstack-infra/zuul/blob/master/zuul/ansible/callback/zuul_stream.py#L125-L12619:06
dmsimardSaw it try to connect for a good 20 minutes before it gave up19:06
rcarrillocruzfor reference: https://ansible.softwarefactory-project.io/logs/11/11/3e2d953bfb56ebb4a31c24bb37e17ada5a6228b6/check/ansible-test-network-integration-vyos-devel/6e1fb5a/job-output.txt.gz19:06
rcarrillocruzlook at the 'waiting on logger' lines19:06
dmsimardrcarrillocruz: wait, are we asking about the same thing? lol19:07
openstackgerritMerged openstack-infra/zuul master: Switch implicit role not found message to debug  https://review.openstack.org/55686619:14
tobiashclarkb: just verified that block device mapping v2 and delete-on-terminate work19:47
tobiashhttp://paste.openstack.org/show/718982/19:47
tobiashit creates a vm using boot-from-volume and adds an additional volume and both get deleted on instance termination :)19:48
clarkbtobiash: perfect so a cleanup cron like for floating IPs is probably sufficient19:48
tobiashyes19:48
tobiashclarkb: however I don't see any information in the docs on whether we can attach metadata to it19:50
tobiashthat maybe needs to be done after creation19:50
clarkbtobiash: or maybe, if we can use a name template, do it that way19:50
tobiashalso there seems to be no documented way to set a volume name19:52
clarkb:(19:54
corvusclarkb, tobiash: i approved the alembic fix.19:58
corvusclarkb: can you +3 https://review.openstack.org/560079 ?19:58
clarkbcorvus: ya just finishing lunch will look19:58
corvusclarkb, tobiash: then i think we should restart openstack-infra after that lands, make sure it doesn't explode, then tag that.19:58
tobiashcorvus: sounds good19:59
clarkbcorvus: done. And seems reasonable as a plan to me20:01
mrhillsmancan someone point me to the code in zuul that talks to this section - https://imgur.com/vPCbL6R - on github and/or the github doc20:07
mrhillsmani say github doc as i think it could be zuul using github3.py, i just do not know what it is called so not sure what to look for20:07
mrhillsmanstatuses?20:11
mrhillsmanyep, nvm20:12
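
For the record, what mrhillsman found is the commit status API; with github3.py a status post looks roughly like this (org, repo, sha and URLs are placeholders):

    import github3

    gh = github3.login(token='...')
    repo = gh.repository('example-org', 'example-repo')
    # Roughly what zuul's github reporter does: attach a commit status
    # that shows up in the PR's checks section.
    repo.create_status(
        sha='abc123',
        state='success',  # one of pending/success/error/failure
        target_url='https://zuul.example.com/build/1234',
        description='Build succeeded.',
        context='check')
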
openstackgerritMerged openstack-infra/zuul master: Also prefix the alembic_version table  https://review.openstack.org/56026320:15
*** kmalloc_ has joined #zuul20:21
openstackgerritMerged openstack-infra/zuul master: Add postgresql release note  https://review.openstack.org/56007920:22
*** dtruong_ has quit IRC20:24
*** dtruong has joined #zuul20:25
*** kmalloc_ has quit IRC20:25
*** kmalloc_ has joined #zuul20:25
*** kmalloc_ is now known as notmorgan20:26
*** notmorgan has quit IRC20:26
*** notmorgan has joined #zuul20:26
*** notmorgan has quit IRC20:26
*** notmorgan has joined #zuul20:26
*** kmalloc has quit IRC20:27
*** andreaf_ has joined #zuul20:27
*** xhku_ has joined #zuul20:27
*** notmorgan is now known as kmalloc20:27
*** mordred has quit IRC20:28
*** andreaf has quit IRC20:28
*** fbouliane has quit IRC20:28
*** andreaf_ is now known as andreaf20:29
*** mordred has joined #zuul20:30
*** acozine1 has quit IRC21:01
openstackgerritMerged openstack-infra/zuul master: Update pypi metadata  https://review.openstack.org/56009821:26
tobiashmrhillsman: status checking is not done yet21:27
tobiashmrhillsman: you can look at https://review.openstack.org/#/c/53568021:28
tobiashWe've been using that for quite some time now21:28
tobiashIt's wip as it has no tests yet21:29
clarkbtobiash: thinking about the volumes problem a bit more. I wonder if we can just use the volume age or modified field (whatever it is) and the fact that it is not attached as our criteria21:29
mrhillsmanthx tobiash will have a look, have an interesting use case I am trying to ensure I understand before I bring it up21:33
tobiashclarkb: maybe we can also create the volume beforehand and still use delete on terminate21:35
*** hashar has quit IRC21:36
clarkbtobiash: ya that might be another approach and that way we can name it properly or set metadata21:36
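
Continuing the earlier shade sketch, that variant might look like this: create and name the data volume first, then attach it at boot. Whether delete-on-terminate can apply to a pre-created volume is exactly what would need verifying:

    # Pre-create the data volume so its name (and metadata) is under
    # nodepool's control, then attach it when booting.
    vol = cloud.create_volume(size=100, name='nodepool-ghe-data',
                              wait=True)
    server = cloud.create_server(
        name='ghe-test',
        image=cloud.get_image('ghe-appliance'),
        flavor=cloud.get_flavor('xlarge'),
        boot_from_volume=True,
        volume_size=200,
        terminate_volume=True,
        volumes=[vol],  # shade attaches pre-created volumes
        wait=True)
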
corvustobiash, jlk, sigmavirus24: i just spotted this error from github.com: http://paste.openstack.org/show/718995/21:43
jlkcorvus: I think that was already fixed21:50
corvusjlk: huh... let me check if we failed to upgrade the lib21:50
jlkhttps://github.com/sigmavirus24/github3.py/pull/82421:50
jlkit was fixed 9 hours ago, so it missed my release :(21:51
corvusjlk: ah!  yeah, we're running 1.1.021:55
corvusglad it's fixed!21:56
jlkis it a showstopper for zuul? I could spin up yet another release :D21:59
corvusjlk: that's a good question.  based on the pr description, i'd say 'no'.  i wonder what the event was though...22:02
corvusit was for https://github.com/ansible/ansible/pull/35920/commits22:02
corvusand it was some kind of a 'pull_request' event, though that's a broad category, and i don't know the action.22:02
corvus(that pull request *does* have a commit -- ie, it's not a plain issue)22:03
jlkyeah, uh...22:03
jlkYou have access to the github zuul app, right? within that app page you can find the events that were delivered22:03
corvusoh right!22:03
jlkfrom there you should be able to see the full event json22:03
jlkalso, that MAY not be the same bug, but near the same bug22:05
jlkcould have been issue_comment22:06
jlkor issue_label22:07
jlkor it could have been issue_status, but that SHOULD have had a commit associated with it22:08
corvusok, i think i found the event22:08
corvusaction 'labeled'22:08
corvuscan i just post this whole json?  is that safe?22:10
clarkbcorvus: iirc there is a shared secret that zuul confirms to check authenticity of the event22:10
clarkbbut maybe it's hashed, I can't remember. That is what I would be concerned about sharing though22:10
corvusi think it's hashed, and i think it's in a header22:11
corvusi think the only info pertaining to us in the data is the installation id, which i assume isn't terribly sensitive22:11
corvusjlk: oh, huh, actually i think this is inside of a further github api call.  it's in *response* to the event, but it's not the event itself that's the problem22:12
jlkthe secret is in the header22:12
jlkthe body should be safe to post22:12
corvuslooking at the traceback, this is operating on the result of github.pull_request(...).reviews()22:12
jlkI see, so it's trying to dig through all the reviews.22:13
jlkhrm.22:13
jlkI'm needing to run some errands very soon22:17
jlkBUT22:18
jlkI see where we ASSUME that every pull review has a commit_id, and maybe that's not true.22:19
jlkI don't suppose you can tell WHICH review...22:19
corvusjlk: no, it's in the middle of a list comprehension and doesn't log it.  but maybe this is easy to reproduce, assuming no one changes the reviews on that pr in the interim?22:24
jlkyeah it could be easy to test.22:25
jlkI just have to remember how to get back to an interactive session to load that PR :D22:26
jlkSo what we do is we hit the API endpoint for listing the reviews: https://developer.github.com/v3/pulls/reviews/#list-reviews-on-a-pull-request22:26
jlkso /repos/:owner/:repo/pulls/:number/reviews22:26
jlkand we ASSUME that every entry in the list we get will have a 'commit_id' attribute, because we take every entry and build a PullReview object from the data. https://github.com/sigmavirus24/github3.py/blob/develop/github3/pulls.py#L80622:27
jlkhttpie is so useful22:30
jlkhttps://gist.github.com/omgjlk/0b456ba255e79c6cc675a05926b08f3422:31
-openstackstatus- NOTICE: zuul was restarted to update to the latest code; you may need to recheck changes uploaded or approvals added between 21:30 and 21:4522:31
jlkand here is one without a commit_id https://gist.github.com/omgjlk/0b456ba255e79c6cc675a05926b08f34#file-foo-json-L214-L24922:32
jlkSo it looks like there are SOME situations where a COMMENT pull request review does not have a 'commit_id' associated with it. THANKS GITHUB.22:33
jlksame with an approval.22:35
jlkI'm going to ask internally what the scenarios are.22:35
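
Until github3.py grows a fix, a defensive consumer-side sketch; the attribute access mirrors the traceback above, and the PR coordinates are just the example from earlier:

    import github3

    gh = github3.login(token='...')
    pr = gh.pull_request('ansible', 'ansible', 35920)
    # Some reviews (e.g. certain COMMENTED ones) come back without a
    # commit_id, so guard the attribute instead of assuming it.
    commit_ids = [getattr(review, 'commit_id', None)
                  for review in pr.reviews()]
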
openstackgerritMerged openstack-infra/zuul master: Add test for builds/ web route  https://review.openstack.org/56011722:41
corvusjlk: i don't have anything to add, except ++ :)22:46
*** tobasco has quit IRC22:50
*** tobasco has joined #zuul22:58
mrhillsmanhttps://www.irccloud.com/pastebin/3ktRtg5E/23:43
mrhillsmanis this related to the parent: of a job?23:44
mrhillsmantrying to understand who the parent and child are here23:44
mrhillsmanhttps://zuul-ci.org/docs/zuul/admin/drivers/zuul.html23:45
clarkbmrhillsman: zuul creates queues in its pipelines for changes so that they merge serially for example23:46
clarkbmrhillsman: the parent change is the change ahead of the current change in that queue23:46
mrhillsmanok, got it, don't think it will address my use case then :(23:48
mrhillsmanback to reading more, thx23:48
*** JasonCL has quit IRC23:49
clarkbmrhillsman: what are you trying to do?23:51
mrhillsmantyping it up23:53
mrhillsmani guess it might be thought of as nested pipelines23:53
mrhillsmanwhich we have working23:54
mrhillsmanthe issue is reporting back to github23:54
mrhillsmanlet me type an example so it is clear23:54
mrhillsmanhttps://hackmd.io/vdbrD8k8QYazKeGzKir8oQ23:57
mrhillsmani hope that makes sense23:57
clarkbmrhillsman: have each pipeline get triggered on the first /test command, then configure each pipeline to also trigger on its specific trigger command23:58
clarkbthat should be all that is necessary to do that23:58
clarkbI don't know about the reporting side of things23:58
clarkbyou can have multiple triggers in each pipeline and any one of them matching results in jobs being run23:59
*** JasonCL has joined #zuul23:59
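
A sketch of what clarkb describes, using the zuul github driver's comment trigger; the pipeline name and regexes are made up for the example:

    - pipeline:
        name: integration
        manager: independent
        trigger:
          github:
            # fires on the generic command...
            - event: pull_request
              action: comment
              comment: (?i)^\s*/test\s*$
            # ...and on this pipeline's specific command
            - event: pull_request
              action: comment
              comment: (?i)^\s*/test-integration\s*$
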
