Monday, 2016-12-05

jamielennoxjhesketh: do you know if there is work ongoing regarding handling multiple connections?03:31
jamielennoxin v303:31
jamielennoxor different question - is there a reason you have to specify a pipeline.source? couldn't you determine that from pipeline.triggers?03:43
jheskethjamielennox: in terms of ongoing work, multiple connections are currently functional, so ensuring they continue to work is important03:51
jheskethwith v3 there may be some refactoring and bugs etc, but it should be mostly there03:51
jheskeththe difference is a trigger could be unrelated to the source.. for example, you could trigger a build off an email, or a ping on a website etc (although not currently implemented)03:52
jheskethjamielennox: I think the question is, will there be support for multiple sources.. that's a little harder as zuul needs to know where to get the repos from and how to apply changes etc to them03:53
jheskethwe do want to add support for github03:53
jhesketh(and others, but starting there)03:53
jheskethand eventually be able to have zuul connect to both gerrit and github.. for example doing a depends-on between a gerrit change and an upstream github PR03:54
jamielennoxjhesketh: so yea, i think i got into knots a little trying to figure out some of the relationships - it seemed like i was having to specify the source in multiple locations03:54
jamielennoxbut i was aiming to re-enable the test in test_connections03:54
jamielennoxdifferent types of sources is a must for us - does it make sense for a pipeline to have multiple sources?03:55
jheskethjamielennox: I wonder if it makes more sense for a project to have a source03:55
jamielennoxit can have multiple triggers03:55
jheskethrather than a pipeline03:55
jamielennoxwell it can have multiple pipelines so ~= multiple sources?03:56
jheskethjamielennox: but that doesn't account for inter-source CRD's03:56
jamielennoxah, so that was something i came up with when looking at this test03:57
jamielennoxwhen defining your tenant you define sources with config-repos and project-repos03:57
jamielennoxare they only used to read in configuration?03:58
jamielennoxit seems to work just fine if the config-repo refers to sources that were never listed in the tenant03:58
jheskethjamielennox: they are used to read in config, but also to do the merges etc.. ie the source is what provides the head of the repo03:59
jheskethjamielennox: connections can be used in different areas of the config, but they are different to a source04:00
*** bhavik1 has joined #zuul04:19
*** bhavik1 has quit IRC04:24
openstackgerritJamie Lennox proposed openstack-infra/zuul: Re-enable multiple gerrit connection test  https://review.openstack.org/40669904:29
openstackgerritJamie Lennox proposed openstack-infra/zuul: Re-enable multiple gerrit connection test  https://review.openstack.org/40669904:33
*** saneax-_-|AFK is now known as saneax05:26
*** mgagne has quit IRC05:53
*** mgagne has joined #zuul05:55
*** mgagne is now known as Guest5173705:55
*** bhavik1 has joined #zuul06:13
*** bhavik1 has quit IRC06:16
*** adam_g_ has joined #zuul06:47
*** jkt_ has joined #zuul06:49
*** willthames has quit IRC06:52
*** adam_g has quit IRC06:53
*** jkt has quit IRC06:53
*** adam_g_ is now known as adam_g06:53
openstackgerritTristan Cacqueray proposed openstack-infra/nodepool: Keep existing loggers with fileConfig  https://review.openstack.org/40678408:02
*** openstackgerrit has quit IRC08:03
*** hashar has joined #zuul08:21
*** abregman has joined #zuul08:21
*** abregman_ has joined #zuul08:23
*** abregman has quit IRC08:26
*** jamielennox is now known as jamielennox|away08:40
*** bhavik1 has joined #zuul09:30
*** Cibo_ has joined #zuul09:39
*** Cibo_ has quit IRC09:50
*** jkt_ is now known as jkt09:59
*** abregman_ is now known as abregman10:22
*** bhavik1 has quit IRC10:53
*** hashar has quit IRC11:01
*** hashar has joined #zuul11:01
*** hashar has quit IRC11:21
*** hashar has joined #zuul12:04
*** Cibo has joined #zuul12:48
ajafohi guys, is it possible to use DependentPipelineManager in a check pipeline like this https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L5-L32 or should it always come with options like on-success submit: true?13:57
SpamapSajafo: "The dependent pipeline manager is designed for gating."14:01
SpamapSajafo: the whole point of check is that it's a pipeline for checking things independent of other events.14:02
SpamapSajafo: whereas gate enforces an order so that each commit landing has been tested with its predecessors.14:03
*** openstackgerrit has joined #zuul14:14
openstackgerritMerged openstack-infra/zuul: Correct logic problem with job trees  https://review.openstack.org/40045614:14
openstackgerritMerged openstack-infra/zuul: Add test for variant override  https://review.openstack.org/39987114:15
ajafoSpamapS: thx for the answer, but I'm considering the case of a multi-repository project where sometimes a change in one repository will only work with another change in a second repository, so I would like to test it in the check pipeline. Is there a way to solve that without a dependent pipeline? I can use Depends-On, but as I understand from the documentation it works only with DependentPipelineManager14:26
Shrewsjeblair: i left a rather concerned comment on your updated zuulv3 spec14:36
fungiajafo: if the documentation says Depends-On only works with DependentPipelineManager then it's outdated. support for Depends-On with an IndependentPipelineManager has been implemented for a very long time. see http://docs.openstack.org/infra/zuul/gating.html#independent-pipeline14:39
fungiajafo: "When changes are enqueued into an independent pipeline, all of the related dependencies (both normal git-dependencies that come from parent commits as well as CRD changes) appear in a dependency graph..."14:40
fungiajafo: if you found somewhere else in zuul's documentation which contradicts that, let me know and i'll be happy to patch it14:40
fungihonestly, i don't even recall if there was a time where we supported crd only in dependent pipelines, but if there was it was very brief14:41
SpamapSright I've certainly used Depends-On in check since I learned of the magic of Depends-On :)14:42
fungii'm pretty sure it was implemented for both at the same time14:42
openstackgerritClint 'SpamapS' Byrum proposed openstack-infra/zuul: Re-enable test_rerun_on_abort  https://review.openstack.org/40700014:43
fungiaha, there was a _very_ brief period where that was the case, yes... https://review.openstack.org/144555 did dependent pipeline support and then https://review.openstack.org/144556 added independent pipelines. they merged a little less than three hours apart ;)14:45
fungiodds are we didn't restart our zuul scheduler in between, so both would have taken effect at the same time for us14:46
ajafofungi: thx for the clarification, and sorry, it's my fault, not the documentation's; I was going by the description of cross-project testing, but that's a little different from plain Depends-On, which is described in the paragraph next to it14:48
*** saneax is now known as saneax-_-|AFK15:17
*** Cibo has quit IRC15:48
pabelangero/15:59
pabelangeroops16:02
pabelangerValueError: No JSON object could be decoded16:02
pabelangerfrom dib-image-list again16:02
pabelangerlet me see why16:02
pabelangerstopping nodepool-builder on nb01, logs getting spammed16:05
pabelangerand been broken for a day or 216:05
Shrewspabelanger: latest review I have up hopefully fixes that16:11
pabelangerShrews: Ah, cool.16:11
pabelanger(CONNECTED) /> get /nodepool/image/ubuntu-xenial/builds/0000000003/16:12
pabelangeris empty16:12
pabelangerso, that is the reason for the exception16:12
pabelangerokay, deleted bad key from zookeeper, nodepool-builder started again. Cleanup process running16:20
rcarrillocruzhowdy folks, we have a bank holiday tomorrow and thursday, thus i took half of the week off16:20
rcarrillocruznot sure if i'll be at zuul's meeting today16:20
pabelangerJust blew up again16:21
pabelangerlet me see what happened16:21
pabelangerokay, could have been old data in zookeeper, fedora-24-000000000216:25
pabelangerthat's an old image, which I manually cleaned up last week16:25
clarkbpabelanger: what does manual cleanup involve?16:35
clarkbpabelanger: `nodepool image-delete`?16:35
pabelangerclarkb: for now, rmr /nodepool/image/ubuntu-xenial/builds/0000000003/16:35
pabelangerin zk16:35
pabelangerbut ya, that is not production good16:35
pabelangerour issue is, dib-image-list doesn't say which value is bad ATM16:36
jeblairShrews: replied, thanks16:42
pabelangerjeblair: re: 406412, are you okay with making DIB_CHECKSUM a required field for our diskimages env-vars?16:49
jeblairpabelanger: do we set DIB_CHECKSUM?16:50
pabelangerjeblair: yes, today we do16:51
pabelangerotherwise, we need to update nodepool to use the --checksum flag for disk-image-create16:52
jeblairpabelanger: the only place i see nodepool set that is in the devstack plugin16:52
jeblairpabelanger: so that does not seem to be a default behavior16:53
pabelangerjeblair: right, we manually opt into it today16:53
pabelangerhttp://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/nodepool.yaml#n100316:53
pabelangerbut we could change that16:53
jeblairis there any reason not to do that?16:54
pabelangerI don't think so now16:54
pabelangerwe did it manually first to make sure it works16:54
jeblairpabelanger: then yeah, let's hard code it into nodepool, and the requirement as ianw suggests, and remove the options from project-config16:54
pabelangerWFM16:55
clarkbwait16:56
clarkbwe have had problems with these things in the past and have had to un-hardcode them16:56
jeblairclarkb: what kind of problems?16:57
clarkbproblems of using non-default dib options and then they break us because they're less tested or similar16:57
jeblairbecause i'll just go ahead and say, one way or another -- 'who does checksums' is not something a nodepool user is ever going to have to know.16:57
clarkblet me go find the specific example that hit us recently16:57
jeblairclarkb: okay, that's something we'll all get better at going forward16:58
*** abregman has quit IRC16:58
jeblairwe either trust dib to work properly or not.  i'd like us to trust it to work, help fix things and improve testing when they break16:58
pabelangerDIB_SHOW_IMAGE_USAGE was the issue16:58
pabelangerit was added as an env into nodepool.py over using the env-vars16:58
jeblairi'd rather not have a bunch of levers that nodepool end-users have to pull (and know to pull) to get things to work16:59
clarkbpabelanger: right thanks16:59
pabelangerhttps://review.openstack.org/#/c/357393/16:59
clarkbI think the issue is when we trust it and end up requiring things that arent actually required16:59
jeblairwell, is image checksumming something substantially important to dib that we think it might continue supporting it for some time to come?17:00
jeblair(i hope so -- they've put a lot of work into it lately :)17:00
clarkbI am not sure, is it still super slow?17:01
clarkbor did the implementation get addressed to use fewer reads?17:01
jeblairwell, pabelanger and ianw have recently been benchmarking a bunch of ways to speed it up.17:01
pabelangerbetter now that we do md5sum and sha256sum in parallel17:01
pabelangerand shade is faster, that is for sure17:01
jeblairpabelanger: shade is faster than what?17:01
pabelangerlet me rephrase, image uploads start way faster in shade, since we don't need to md5sum / sha256sum each image on upload17:02
jeblairah, shade is faster with dib checksumming enabled17:03
clarkbpabelanger: is it? shade caches the checksums so it should be just as fast17:03
clarkbI think ^ is my biggest concern here, shade already supports this and caches results, by adding dib support for it we actually regressed because it was slower to calculate the hashes17:04
pabelangerclarkb: I'd have to test again, honestly.17:04
pabelangerbut, it feels faster17:05
jeblairthen why are we doing this?17:05
jeblairclarkb: where are they cached?17:06
pabelangerI was under the impression that shade would calculate the hash for each provider of the same image17:06
clarkbjeblair: in shade memory, so if you restart nodepool it has to recalculate them once again17:06
clarkbpabelanger: if it does that that seems like a shade bug17:06
jeblairclarkb: hrm, we currently have a shade client for each provider, so i think they would be separate.17:06
clarkb(cache for on disk info should be provider/cloud independent)17:07
jeblairclarkb: i agree, but with no persistent cache for shade in the builder, and multiple shade clients, i'm not sure i see a place for shade to easily cache it.17:08
pabelangerWe also get the added bonus of sha256 / md5 files existing at build time, so we can provide validation to images if we ever host them17:09
jeblairit seems that having dib perform the checksums gets us that persistent cache.  and it's now fast, thanks to ianw and pabelanger, so why don't we continue with that?17:09
clarkbyup I think for hosting images it's a good option. But as I said, I'm just wary of requiring things that aren't actually hard requirements (since shade should be doing this)17:11
clarkbas far as fixing it in shade if necessary, it uses that dogpile cache, right? I imagine there is support for such things in dogpile17:11
jeblairyeah, i don't think we should do it gratuitously, but i also don't think this rises to the level of being a user-configurable option.  so i'd like us to pick one, and enforce it in nodepool.17:12
jeblairclarkb: yes, but 'get dogpile cache working for nodepool' would be a considerable distraction at the moment.17:12
clarkboh, it's not in use at all yet? if so then I agree that would be unnecessary effort17:14
clarkb(for some reason I thought we were using dogpile, just not the external cache server with it)17:14
pabelangerI'll do whichever way everybody is happy with :)17:22
clarkbI'm fine with using dib for it if its what we want to do. We likely want to give shade a heads up that they will no longer get transitive testing of this via nodepool though17:23
*** Guest51737 is now known as mgagne17:26
*** mgagne has quit IRC17:27
*** mgagne has joined #zuul17:27
pabelangerokay, I'll work on --checksum patch for now17:31
openstackgerritPaul Belanger proposed openstack-infra/nodepool: Add --checksum support to disk-image-create  https://review.openstack.org/40641117:45
pabelangerclarkb: jeblair: I need to relocate, back in about 20mins. Mind looking over ^ and leaving some comments? jeblair, I know we talked about moving it into DibImageFile(); pointers on how you see that working would be helpful. Guessing some sort of remove() function we'd call?17:48
*** harlowja has joined #zuul17:50
jeblairpabelanger: lgtm.  and exactly -- i'd probably just move all of 240-252 into DibImageFile.remove()17:51
jeblairShrews, greghaynes, mordred: ^ if you could look at that too -- this would switch nodepool to use dib's checksumming rather than shade's (which we've been doing for openstack for a while -- see earlier conversation here for more infos)18:05
*** yolanda has quit IRC18:07
*** hashar has quit IRC18:11
greghaynesjeblair: pabelanger left a comment on https://review.openstack.org/#/c/406411/218:13
pabelangergreghaynes: I would be okay with that18:16
pabelangeralso, for some reason our cleanup worker isn't running properly: http://logs.openstack.org/11/406411/2/check/gate-nodepool-python27-ubuntu-xenial/0a99b56/console.html18:21
pabelangerlooks to be a possible race18:21
pabelangerI'll dig into that shortly18:21
openstackgerritPaul Belanger proposed openstack-infra/nodepool: Sort images and providers in zookeeper  https://review.openstack.org/40712418:21
openstackgerritPaul Belanger proposed openstack-infra/nodepool: Add --checksum support to disk-image-create  https://review.openstack.org/40641118:23
openstackgerritClint 'SpamapS' Byrum proposed openstack-infra/zuul: Re-enable test_client_get_running_jobs  https://review.openstack.org/40713518:56
SpamapSjeblair: ^ that one there.. I'm really not sure I'm doing the right thing. :-P18:56
SpamapSI don't quite understand why those items were left in "TODOv3"... so some guided spelunking of history may be necessary18:57
openstackgerritPaul Belanger proposed openstack-infra/nodepool: Fix race condition in test_image_rotation_invalid_external_name  https://review.openstack.org/40713919:04
pabelangerjeblair: Shrews: ^ should fix up a race condition: http://logs.openstack.org/11/406411/3/check/nodepool-coverage-ubuntu-xenial/ebe55bd/console.html19:05
*** Zara_ has joined #zuul19:23
*** tflink_ has joined #zuul19:27
*** Zara has quit IRC19:28
*** tflink has quit IRC19:28
jeblairSpamapS: that's information that the worker returns to zuul, mostly for the use of the admin via logs.  there was no worker in v2.0, and in v2.5, it's mostly ignored/unimplemented.  url is an interesting one because it's supposed to be the place where a user can see the job results (ie, the jenkins log).  in v2.5 it's the telnet url.  in v3, we don't actually have a solution for it yet (it should probably be the websocket streamer url once we ...19:30
jeblair... have that ready).19:30
jeblairSpamapS: i think we can just put *something* in the url field for now, and then update it later when that's there.19:31
jeblairSpamapS: url is the only one that should really affect zuul operation19:31
jeblairSpamapS: for the rest, we just need to add that information, since we never had before.  or potentially cull any fields that are no longer useful.19:32
*** yolanda has joined #zuul19:32
SpamapSjeblair: ok, thanks that's helpful.19:35
SpamapSjeblair: I'm not entirely sure what the worker name should be now though.19:35
Shrewspabelanger: confirmed the random failure in test_image_rotation19:37
pabelangerShrews: Ya, I'm still hunting that one.19:37
jeblairSpamapS: i think previous values were 'Jenkins' and 'Turbo Hipster'.  i think there's some duplication in there.  and we can probably lose the 'worker_' prefix on some of those.  if you want to propose a cleanup, i think this transition would be a swell time for it.19:40
SpamapSjeblair: Indeed, I'd prefer to de-dupe than to duplicate cruft.19:50
* SpamapS will ponder whilst gym-ing19:50
pabelangerjeblair: clarkb: Do you mind reviewing: https://review.openstack.org/#/c/407139/ fixes a race condition in our testing.19:57
pabelangerclarkb: mordred: also https://review.openstack.org/#/c/406411/ could use your thoughts too, using --checksums now with diskimage-builder19:57
clarkbpabelanger: commented on first one19:58
Shrewspabelanger: geez, easily reproduced that failure at first. now i'm having a devil of a time19:59
Shrewsmust've got lucky the first time19:59
pabelangerShrews: Ya, let me know if you know the secret. Cause I haven't reproduced it locally yet20:00
clarkbpabelanger: is there a reason we can't do 406411 with the TODO done?20:00
clarkbpabelanger: in fact, if I am reading that correctly I think it may already do it20:02
pabelangerclarkb: we could, but not sure when I'll get to that. And since we're leaking checksum files, figured we could land it first then work on refactor20:02
clarkbfrom_image_id is looking at $id.qcow2 $id.raw $id.arbitrary.suffix and returning every file that has the $id prefix I think20:02
clarkbpabelanger: I guess what I am trying to say is I think this is mostly done and is likely even less code to write than the current change20:03
pabelangerright, let me look at 407139 first then will loop back20:03
clarkbpabelanger: DibImageFile already keeps track of the checksum files20:04
clarkbso I think what you need is 'if image.md5 is not None and os.path.exists(image.md5) then delete image.md5'20:04
clarkbexcept in this case it's called 'file' not 'image'20:04
openstackgerritPaul Belanger proposed openstack-infra/nodepool: Fix race condition in test_image_rotation_invalid_external_name  https://review.openstack.org/40713920:06
clarkboh sorry, that's the loaded hash, so it needs just a small amount of updating to store the file path too20:06
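
A rough sketch of the cleanup being discussed here, assuming a simplified stand-in for nodepool's DibImageFile that remembers where its checksum files came from; the attribute and method names are illustrative, not the real implementation:

import os


class DibImageFile(object):
    # Simplified, hypothetical stand-in for nodepool's DibImageFile.
    def __init__(self, image_id, extension=None):
        self.image_id = image_id
        self.extension = extension
        # Paths of the checksum files the hashes were loaded from, if any.
        self.md5_file = None
        self.sha256_file = None

    def to_path(self, images_dir):
        path = os.path.join(images_dir, self.image_id)
        if self.extension:
            path = '%s.%s' % (path, self.extension)
        return path

    def remove(self, images_dir):
        # Delete the image itself plus any checksum files we know about.
        for path in [self.to_path(images_dir), self.md5_file, self.sha256_file]:
            if path is not None and os.path.exists(path):
                os.remove(path)
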
pabelangerokay, looking at 40641120:06
clarkbI am about to post comments in line too which may help20:07
pabelangerk20:07
clarkbdone20:07
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool: Add cleanup worker debug code  https://review.openstack.org/40718420:10
Shrewspabelanger: ^^^^ ran more than 100 attempts. let's just add some debug code to try and catch it in zuul20:10
pabelangerclarkb: is storing the path to the md5 file in DibImageFile going to be an issue? We currently don't store the path for our actual DIB, wondering if we could get out of sync somehow20:14
clarkbpabelanger: you could do md5_to_path instead if you like20:15
pabelangerYa, let me try that first20:16
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool: Add cleanup worker debug code  https://review.openstack.org/40718420:19
*** hashar has joined #zuul20:30
Shrews?20:32
Shrewsnm20:32
openstackgerritPaul Belanger proposed openstack-infra/nodepool: Add --checksum support to disk-image-create  https://review.openstack.org/40641120:53
pabelangerclarkb: okay, I decided to return a dict for to_path^, rather than expose another function call20:54
pabelangerlet me know what you think20:54
pabelangerShrews: here is something interesting: http://logs.openstack.org/39/407139/2/check/nodepool-coverage-ubuntu-xenial/5b10369/console.html#_2016-12-05_20_11_17_49926820:59
pabelangerwe are waiting 23 seconds it looks like between found image and uploading20:59
pabelangercurious what is going on there21:01
Shrewspabelanger: the only real processing b/w those two statements is the checksum stuff21:03
Shrewsfs i/o maybe21:03
pabelangerShrews: it should be zero right now, we haven't enabled checksums yet for unit tests21:04
Shrewspabelanger: it still hits the f/s21:04
pabelangerright, check to see if file exists21:05
Shrewsos.path.isfile()..21:05
Shrewsyeah21:05
pabelangerya21:05
pabelangerour should have SSDs in osic-350021:06
pabelangerwe*21:06
Shrewsyeah, i dunno. that's really the only thing in between those log stmts21:07
pabelangerlet me rebase it on top of https://review.openstack.org/#/c/40641121:09
clarkbpabelanger: I think that breaks the from_path(to_path()) relationship in the object21:09
clarkbpabelanger: but also it's built to be objecty, so keeping that objectyness is probably a good thing21:09
pabelangerclarkb: oh, maybe21:09
clarkbI think it's fine if anytime the md5 property is set you also set the path it came from and have a second property21:10
clarkbshould be very simple21:10
*** jamielennox|away is now known as jamielennox21:13
pabelangerclarkb: can you explain your comment more? Looked at the code again, don't see where we invoke from_path(to_path()) in code.21:14
clarkbpabelanger: its a general property of the object21:14
*** tflink_ is now known as tflink21:16
clarkbpabelanger: you should be able to take the to_path output and use it to get an equivalent object using from_path21:16
clarkband since it's easy to do this in a backward-compatible manner while preserving that complementary behavior (just use a new attribute that updates when the hash updates) I think we should do it that way21:17
clarkb(or alternatively adding a new method)21:17
pabelangerDid we do that before? I don't see where that is in zuulv3 branch now21:18
clarkbit's been in nodepool before21:19
clarkbthe object interface doesn't change21:19
pabelangerin fact, from_images_dir is only used in test_builder.py right now21:19
clarkbignoring where these methods are or aren't called, it's a carefully crafted object21:20
clarkbcurrently it has a few properties that are nice to have. This breaks one of them21:20
clarkbso I am asking that we don't break that and instead solve this problem in another way21:20
pabelangerokay, I'm fine to revert. Just having trouble following along21:21
clarkblet me annotate the change a bit more then21:21
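
To illustrate the round-trip property clarkb is describing, a toy example with assumed names (not the real nodepool class): an object rebuilt via from_path() from its own to_path() output should be equivalent to the original, which is the invariant that breaks if to_path() starts returning a dict instead of a path:

import os


class ImageFile(object):
    def __init__(self, image_id, extension):
        self.image_id = image_id
        self.extension = extension

    def to_path(self, images_dir):
        return os.path.join(images_dir,
                            '%s.%s' % (self.image_id, self.extension))

    @staticmethod
    def from_path(path):
        # Rebuild an equivalent object from a path produced by to_path().
        image_id, extension = os.path.basename(path).rsplit('.', 1)
        return ImageFile(image_id, extension)


original = ImageFile('ubuntu-xenial-0000000003', 'qcow2')
rebuilt = ImageFile.from_path(original.to_path('/opt/nodepool/images'))
assert (rebuilt.image_id, rebuilt.extension) == \
    (original.image_id, original.extension)
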
jheskethMorning21:26
clarkbpabelanger: commented, does that help?21:28
clarkber any() is the wrong function21:31
* clarkb looks up the python builtin that does what we want there21:31
clarkbfilter()21:31
pabelangerclarkb: filter(None, [filename, f.md5_file, f.sha256_file]) looks like the syntax?21:39
openstackgerritJames E. Blair proposed openstack-infra/zuul: Update storyboard links in README  https://review.openstack.org/40721221:39
openstackgerritJames E. Blair proposed openstack-infra/zuul: Add roadmap to README  https://review.openstack.org/40721321:39
clarkbpabelanger: filter([filename, f.md5_file, f.sha256_file])21:40
clarkboh wait no you are right21:40
pabelangeractually, I can pass the function21:41
clarkbpabelanger: the default function is what you want though21:41
clarkbor use the comprehension that it is compatible with in the docs21:41
clarkb[item for item in items if item]21:41
openstackgerritPaul Belanger proposed openstack-infra/nodepool: Add --checksum support to disk-image-create  https://review.openstack.org/40641121:45
pabelangerokay, passed a function to filter21:45
clarkbpabelanger: oh if you want to use it that way (which is fine) you may want to use map21:47
openstackgerritPaul Belanger proposed openstack-infra/nodepool: Add --checksum support to disk-image-create  https://review.openstack.org/40641121:47
clarkbpabelanger: map() is a bit more direct in your intent there I think21:47
pabelangerclarkb: okay, let me see the difference21:47
clarkbmap returns all the results of calling that function not just that evaluate to true21:48
SpamapS10 minutes till meeting21:50
SpamapSjeblair: did you point that script at board 41 yet?21:50
jeblairSpamapS: not yet, that's next on my list21:52
pabelangerclarkb: map it is21:53
openstackgerritPaul Belanger proposed openstack-infra/nodepool: Add --checksum support to disk-image-create  https://review.openstack.org/40641121:53
clarkbpabelanger: it's a small difference; typically you would use map when you just want to run a function over all the things21:53
clarkbwhereas filter is a specialized one that filters the returned result21:53
pabelangerYa, http://stackoverflow.com/questions/18939596/python-difference-between-filterfunction-sequence-and-mapfunction-sequence helped21:53
clarkboh wait there may be a slight bug in that, easy fix though. md5_file and sha256_file can be None if those files aren't there for some reason (like old dib)21:55
clarkbwhich is why I wanted the filter in the first place, we need more caffeine21:55
pabelangerha21:55
clarkbpabelanger: I think you can just update your new function to have an if not None check21:55
clarkbwill comment21:55
pabelangersure21:55
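
A small, made-up illustration of the filter/map distinction discussed above, including the None guard clarkb points out (md5_file or sha256_file may be missing with an older dib); the values here are invented:

filename = '/opt/nodepool/images/ubuntu-xenial-0000000003.qcow2'
md5_file = filename + '.md5'
sha256_file = None  # e.g. an image built before checksums were enabled

candidates = [filename, md5_file, sha256_file]

# filter keeps only the items a function accepts; with None as the function
# it simply drops falsy entries such as the missing checksum file:
to_delete = list(filter(None, candidates))
# the equivalent comprehension:
to_delete = [path for path in candidates if path]

# map, by contrast, returns the result of calling the function on every
# item, whether or not that result is truthy:
lengths = list(map(len, to_delete))
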
jeblairSpamapS: i ran it right now against 39 again; i need to fix a bug with it and 41.  that's in progress then i'll check it in.21:56
clarkbpabelanger: ok commented21:56
jeblairSpamapS: 41 done21:59
jeblairit's zuul meeting time in #openstack-meeting-alt22:00
openstackgerritPaul Belanger proposed openstack-infra/nodepool: Add --checksum support to disk-image-create  https://review.openstack.org/40641122:03
Shrewspabelanger: will start looking at nb02 tomorrow morning22:33
pabelangerShrews: sure, once system-config is setup for nb02.o.o an infra-root will run launch-node.py against rackspace22:35
Shrews*nod*22:35
pabelangerthat is a manual process, but should result in nb02.o.o coming on line22:35
openstackgerritJames E. Blair proposed openstack-infra/zuul: Add update storyboard script  https://review.openstack.org/40722923:05
jeblairSpamapS: ^ there's the script.  i'll add that to crontab on my workstation23:05
*** hashar has quit IRC23:10
clarkbjeblair: Shrews commented on the zuul-nodepool zk protocol spec23:16
jeblairclarkb: ah.  i can take care of most of that quickly -- regarding the quotas thing, i realize i had an implicit assumption -- we would only support one launcher per provider (ie cloud-region).  mostly because of rate limits.23:22
clarkbjeblair: ya I think that's a fine way to solve the problem, but it's worth calling out if that is what we are designing for since it's a constraint23:22
SpamapSjeblair: sweet, it's much appreciated.23:23
jeblairclarkb: i agree that we could support more than one with the options you suggest.  for users without ratelimit issues, that may be a fine way to run.  and even for us, we might just be able to halve our rate limits and run two.23:23
jeblairclarkb: so how about i mention the assumption, and possible future enhancement based on your suggestions (which i think we will be forward-compatible with).23:23
jeblair?23:23
clarkbjeblair: sounds good23:23
clarkb"basically current design assumption has this constraint. Should that become undesireable it can be modified in one of these ways to do the other thing"23:24
clarkband yes it should be forward compatible23:25
clarkbsince one region per launcher trivially satisfies those other quota management situations23:25
*** willthames has joined #zuul23:26
clarkbpabelanger: looks like the latest patchset of that change timed out on a test23:28
clarkbpabelanger: wondering if md5sum and sha256sum aren't present on the test boxes and so result in empty files?23:31
clarkbpabelanger: ok left a few comments on stuff I noticed debugging ^ but haven't figured it out23:42
clarkbgonna try and finish up reviewing AJaeger's Xenial stuff and will swing back around to this23:42
jeblairclarkb: one more issue with multiple launchers working on the same quota pool -- the algo i put in the spec says that if a request for a large set of nodes arrives, the launcher for that provider will fill that request before it continues on to the next one.  in a constrained situation, a second launcher might continue to satisfy small requests while the large one is still outstanding, making it take 2x as long (or worse, maybe even starve it).23:48
jeblairclarkb: so that would require some coordination, or a different algorithm.23:48
jeblairclarkb: fortunately, that's still something we can change in the future if needed.23:49
clarkbya we could potentially force quota zk updates to be greedy but that has the chance of starving the smaller requests23:50
jeblairclarkb: i'm going to state the problems but be slightly more vague about the solutions23:51
jeblairclarkb, Shrews: updated https://review.openstack.org/30550623:53
clarkbjeblair: looks good thanks23:58

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!