jamielennox | jhesketh: do you know if there is work ongoing regarding handling multiple connections? | 03:31 |
jamielennox | in v3 | 03:31 |
jamielennox | or different question - is there a reason you have to specify a pipeline.source? couldn't you determine that from pipeline.triggers? | 03:43 |
jhesketh | jamielennox: in terms of ongoing work multiple connections are currently functional so ensuring they continue to work is important | 03:51 |
jhesketh | with v3 there may be some refactoring and bugs etc, but it should be mostly there | 03:51 |
jhesketh | the difference is a trigger could be unrelated to the source.. for example, you could trigger a build off an email, or a ping on a website etc (although not currently implemented) | 03:52 |
jhesketh | jamielennox: I think the question is, will there be support for multiple sources.. that's a little harder as zuul needs to know where to get the repos from and how to apply changes etc to them | 03:53 |
jhesketh | we do want to add support for github | 03:53 |
jhesketh | (and others, but starting there) | 03:53 |
jhesketh | and eventually be able to have zuul connect to both gerrit and github.. for example doing a depends-on between a gerrit change and an upstream github PR | 03:54 |
jamielennox | jhesketh: so yea, i think i got into knots a little trying to figure out some of the relationships - it seemed like i was having to specify the source in multiple locations | 03:54 |
jamielennox | but i was aiming to re-enable the test in test_connections | 03:54 |
jamielennox | different types of sources is a must for us - does it make sense for a pipeline to have multiple sources? | 03:55 |
jhesketh | jamielennox: I wonder if it makes more sense for a project to have a source | 03:55 |
jamielennox | it can have multiple triggers | 03:55 |
jhesketh | rather than a pipeline | 03:55 |
jamielennox | well it can have multiple pipelines so ~= multiple sources? | 03:56 |
jhesketh | jamielennox: but that doesn't account for inter-source CRD's | 03:56 |
jamielennox | ah, so that was something i came up with when looking at this test | 03:57 |
jamielennox | when defining your tenant you define sources with config-repos and project-repos | 03:57 |
jamielennox | are they only used to read in configuration? | 03:58 |
jamielennox | it seems to work just fine if the config-repo refers to sources that were never listed in the tenant | 03:58 |
jhesketh | jamielennox: they are used to read in config, but also to do the merges etc.. ie the source is what provides the head of the repo | 03:59 |
jhesketh | jamielennox: connections can be used in different areas of the config, but they are different to a source | 04:00 |
*** bhavik1 has joined #zuul | 04:19 | |
*** bhavik1 has quit IRC | 04:24 | |
openstackgerrit | Jamie Lennox proposed openstack-infra/zuul: Re-enable multiple gerrit connection test https://review.openstack.org/406699 | 04:29 |
openstackgerrit | Jamie Lennox proposed openstack-infra/zuul: Re-enable multiple gerrit connection test https://review.openstack.org/406699 | 04:33 |
*** saneax-_-|AFK is now known as saneax | 05:26 | |
*** mgagne has quit IRC | 05:53 | |
*** mgagne has joined #zuul | 05:55 | |
*** mgagne is now known as Guest51737 | 05:55 | |
*** bhavik1 has joined #zuul | 06:13 | |
*** bhavik1 has quit IRC | 06:16 | |
*** adam_g_ has joined #zuul | 06:47 | |
*** jkt_ has joined #zuul | 06:49 | |
*** willthames has quit IRC | 06:52 | |
*** adam_g has quit IRC | 06:53 | |
*** jkt has quit IRC | 06:53 | |
*** adam_g_ is now known as adam_g | 06:53 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool: Keep existing loggers with fileConfig https://review.openstack.org/406784 | 08:02 |
*** openstackgerrit has quit IRC | 08:03 | |
*** hashar has joined #zuul | 08:21 | |
*** abregman has joined #zuul | 08:21 | |
*** abregman_ has joined #zuul | 08:23 | |
*** abregman has quit IRC | 08:26 | |
*** jamielennox is now known as jamielennox|away | 08:40 | |
*** bhavik1 has joined #zuul | 09:30 | |
*** Cibo_ has joined #zuul | 09:39 | |
*** Cibo_ has quit IRC | 09:50 | |
*** jkt_ is now known as jkt | 09:59 | |
*** abregman_ is now known as abregman | 10:22 | |
*** bhavik1 has quit IRC | 10:53 | |
*** hashar has quit IRC | 11:01 | |
*** hashar has joined #zuul | 11:01 | |
*** hashar has quit IRC | 11:21 | |
*** hashar has joined #zuul | 12:04 | |
*** Cibo has joined #zuul | 12:48 | |
ajafo | hi guys, is it possible to use DependentPipelineManager in check pipeline like this https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L5-L32 or maybe it always should be with options like on success submit:true ? | 13:57 |
SpamapS | ajafo: "The dependent pipeline manager is designed for gating." | 14:01 |
SpamapS | ajafo: the whole point of check is that it's a pipeline for checking things independent of other events. | 14:02 |
SpamapS | ajafo: whereas gate enforces an order so that each commit landing has been tested with its predecessors. | 14:03 |
*** openstackgerrit has joined #zuul | 14:14 | |
openstackgerrit | Merged openstack-infra/zuul: Correct logic problem with job trees https://review.openstack.org/400456 | 14:14 |
openstackgerrit | Merged openstack-infra/zuul: Add test for variant override https://review.openstack.org/399871 | 14:15 |
ajafo | SpamapS: thx for the answer, but I'm considering the case of a multi-repository project where sometimes a change in one repository will only work with another change in a second repository, so I would like to test it in the check pipeline. Is there a way to solve that without a dependent pipeline? I can use Depends-On, but as I understand from the documentation it works only with DependentPipelineManager | 14:26 |
Shrews | jeblair: i left a rather concerned comment on your updated zuulv3 spec | 14:36 |
fungi | ajafo: if the documentation says Depends-On only works with DependentPipelineManager then it's outdated. support for Depends-On with an IndependentPipelineManager has been implemented for a very long time. see http://docs.openstack.org/infra/zuul/gating.html#independent-pipeline | 14:39 |
fungi | ajafo: "When changes are enqueued into an independent pipeline, all of the related dependencies (both normal git-dependencies that come from parent commits as well as CRD changes) appear in a dependency graph..." | 14:40 |
fungi | ajafo: if you found somewhere else in zuul's documentation which contradicts that, let me know and i'll be happy to patch it | 14:40 |
fungi | honestly, i don't even recall if there was a time where we supported crd only in dependent pipelines, but if there was it was very brief | 14:41 |
SpamapS | right I've certainly used Depends-On in check since I learned of the magic of Depends-On :) | 14:42 |
fungi | i'm pretty sure it was implemented for both at the same time | 14:42 |
openstackgerrit | Clint 'SpamapS' Byrum proposed openstack-infra/zuul: Re-enable test_rerun_on_abort https://review.openstack.org/407000 | 14:43 |
fungi | aha, there was a _very_ brief period where that was the case, yes... https://review.openstack.org/144555 did dependent pipeline support and then https://review.openstack.org/144556 added independent pipelines. they merged a little less than three hours apart ;) | 14:45 |
fungi | odds are we didn't restart our zuul scheduler in between, so both would have taken effect at the same time for us | 14:46 |
ajafo | fungi: thx for the clarification, and sorry, it's my fault not the documentation's. I was going by the cross-project testing description, but that's a little different from plain Depends-On, which is described in the paragraph next to it | 14:48 |
*** saneax is now known as saneax-_-|AFK | 15:17 | |
*** Cibo has quit IRC | 15:48 | |
pabelanger | o/ | 15:59 |
pabelanger | oops | 16:02 |
pabelanger | ValueError: No JSON object could be decoded | 16:02 |
pabelanger | from dib-image-list again | 16:02 |
pabelanger | let me see why | 16:02 |
pabelanger | stopping nodepool-builder on nb01, logs getting spammed | 16:05 |
pabelanger | and been broken for a day or 2 | 16:05 |
Shrews | pabelanger: latest review I have up hopefully fixes that | 16:11 |
pabelanger | Shrews: Ah, cool. | 16:11 |
pabelanger | (CONNECTED) /> get /nodepool/image/ubuntu-xenial/builds/0000000003/ | 16:12 |
pabelanger | is empty | 16:12 |
pabelanger | so, that is the reason for the exception | 16:12 |
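The empty-znode failure pabelanger describes can be reproduced with plain `json`: parsing empty data raises exactly that ValueError. A hypothetical guard (illustrative only, not nodepool's actual code) would look like:

```python
import json

def load_build_data(raw):
    """Hypothetical helper: treat an empty znode as 'no build data'
    instead of letting json.loads('') raise ValueError ("No JSON
    object could be decoded" on Python 2)."""
    if not raw:
        return None
    return json.loads(raw)

# the unguarded parse is what blew up in dib-image-list
try:
    json.loads("")
    raise AssertionError("expected ValueError")
except ValueError:
    pass

assert load_build_data("") is None
assert load_build_data('{"state": "ready"}')["state"] == "ready"
```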
pabelanger | okay, deleted bad key from zookeeper, nodepool-builder started again. Cleanup process running | 16:20 |
rcarrillocruz | hey folks, we have bank holiday tomorrow and thursday, thus i took half of the week off | 16:20 |
rcarrillocruz | not sure if i'll be at zuul's meeting today | 16:20 |
pabelanger | Just blew up again | 16:21 |
pabelanger | let me see what happened | 16:21 |
pabelanger | okay, could have been old data in zookeeper, fedora-24-0000000002 | 16:25 |
pabelanger | that's an old image, which I manually cleaned up last week | 16:25 |
clarkb | pabelanger: what does manual cleanup involve? | 16:35 |
clarkb | pabelanger: `nodepool image-delete`? | 16:35 |
pabelanger | clarkb: for now, rmr /nodepool/image/ubuntu-xenial/builds/0000000003/ | 16:35 |
pabelanger | in zk | 16:35 |
pabelanger | but ya, that is not production good | 16:35 |
pabelanger | our issue is, dib-image-list doesn't say which value is bad ATM | 16:36 |
jeblair | Shrews: replied, thanks | 16:42 |
pabelanger | jeblair: re: 406412, are you okay with making DIB_CHECKSUM a required field for our diskimages env-vars? | 16:49 |
jeblair | pabelanger: do we set DIB_CHECKSUM? | 16:50 |
pabelanger | jeblair: yes, today we do | 16:51 |
pabelanger | otherwise, we need to update nodepool to use the --checksum flag for disk-image-create | 16:52 |
jeblair | pabelanger: the only place i see nodepool set that is in the devstack plugin | 16:52 |
jeblair | pabelanger: so that does not seem to be a default behavior | 16:53 |
pabelanger | jeblair: right, we manually opt into it today | 16:53 |
pabelanger | http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/nodepool.yaml#n1003 | 16:53 |
pabelanger | but we could change that | 16:53 |
jeblair | is there any reason not to do that? | 16:54 |
pabelanger | I don't think so now | 16:54 |
pabelanger | we did it manually first to make sure it works | 16:54 |
jeblair | pabelanger: then yeah, let's hard code it into nodepool, and the requirement as ianw suggests, and remove the options from project-config | 16:54 |
pabelanger | WFM | 16:55 |
clarkb | wait | 16:56 |
clarkb | we have had problems with these things in the past and have had to un-hard-set them | 16:56 |
jeblair | clarkb: what kind of problems? | 16:57 |
clarkb | problems of using non-default dib options and then they break us because they're less tested, or similar | 16:57 |
jeblair | because i'll just go ahead and say, one way or another -- 'who does checksums' is not something a nodepool user is ever going to have to know. | 16:57 |
clarkb | let me go find the specific example that hit us recently | 16:57 |
jeblair | clarkb: okay, that's something we'll all get better at going forward | 16:58 |
*** abregman has quit IRC | 16:58 | |
jeblair | we either trust dib to work properly or not. i'd like us to trust it to work, help fix things and improve testing when they break | 16:58 |
pabelanger | DIB_SHOW_IMAGE_USAGE was the issue | 16:58 |
pabelanger | it was added as an env into nodepool.py over using the env-vars | 16:58 |
jeblair | i'd rather not have a bunch of levers that nodepool end-users have to pull (and know to pull) to get things to work | 16:59 |
clarkb | pabelanger: right thanks | 16:59 |
pabelanger | https://review.openstack.org/#/c/357393/ | 16:59 |
clarkb | I think the issue is when we trust it and end up requiring things that aren't actually required | 16:59 |
jeblair | well, is image checksumming something substantially important to dib that we think it might continue supporting it for some time to come? | 17:00 |
jeblair | (i hope so -- they've put a lot of work into it lately :) | 17:00 |
clarkb | I am not sure, is it still super slow? | 17:01 |
clarkb | or did the implementation get addressed to use fewer reads? | 17:01 |
jeblair | well, pabelanger and ianw have recently been benchmarking a bunch of ways to speed it up. | 17:01 |
pabelanger | better now that we do md5sum and sha256sum in parallel | 17:01 |
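Computing both digests concurrently, as pabelanger describes, can be sketched like this (a standalone illustration; nodepool's real builder code differs):

```python
import hashlib
import threading

def _digest(path, algo, results):
    # stream the file so large images don't need to fit in memory
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    results[algo] = h.hexdigest()

def checksum_image(path):
    """Run the md5 and sha256 digests concurrently; hashlib releases
    the GIL while hashing large buffers, so the threads overlap."""
    results = {}
    threads = [threading.Thread(target=_digest, args=(path, algo, results))
               for algo in ("md5", "sha256")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```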
pabelanger | and shade is faster, that is for sure | 17:01 |
jeblair | pabelanger: shade is faster than what? | 17:01 |
pabelanger | let me rephrase, image uploads start way faster in shade, since we don't need to md5sum / sha256sum each image on upload | 17:02 |
jeblair | ah, shade is faster with dib checksumming enabled | 17:03 |
clarkb | pabelanger: is it? shade caches the checksums so it should be just as fast | 17:03 |
clarkb | I think ^ is my biggest concern here, shade already supports this and caches results, by adding dib support for it we actually regressed because it was slower to calculate the hashes | 17:04 |
pabelanger | clarkb: I'd have to test again, honestly. | 17:04 |
pabelanger | but, it feels faster | 17:05 |
jeblair | then why are we doing this? | 17:05 |
jeblair | clarkb: where are they cached? | 17:06 |
pabelanger | I was under the impression that shade would calculate the hash for each provider of the same image | 17:06 |
clarkb | jeblair: in shade memory, so if you restart nodepool it has to recalculate them again once | 17:06 |
clarkb | pabelanger: if it does that that seems like a shade bug | 17:06 |
jeblair | clarkb: hrm, we currently have a shade client for each provider, so i think they would be separate. | 17:06 |
clarkb | (cache for on disk info should be provider/cloud independent) | 17:07 |
jeblair | clarkb: i agree, but with no persistent cache for shade in the builder, and multiple shade clients, i'm not sure i see a place for shade to easily cache it. | 17:08 |
pabelanger | We also get the added bonus of sha256 / md5 files existing at build time, so we can provide validation to images if we ever host them | 17:09 |
jeblair | it seems that having dib perform the checksums gets us that persistent cache. and it's now fast, thanks to ianw and pabelanger, so why don't we continue with that? | 17:09 |
clarkb | yup I think for hosting images its a good option. But I as I said just wary of requiring things that arent actually hard required (since shade should be doing this) | 17:11 |
clarkb | as far as fixing it in shade if necessary, it uses that dogpile cache, right? I imagine there is support for such things in dogpile | 17:11 |
jeblair | yeah, i don't think we should do it gratuitously, but i also don't think this rises to the level of being a user-configurable option. so i'd like us to pick one, and enforce it in nodepool. | 17:12 |
jeblair | clarkb: yes, but 'get dogpile cache working for nodepool' would be a considerable distraction at the moment. | 17:12 |
clarkb | oh, it's not in use at all yet? if so then I agree that would be effort not necessary | 17:14 |
clarkb | (for some reason I thought we were using dogpile, just not the external cache server with it) | 17:14 |
pabelanger | I'll do whichever way everybody is happy with :) | 17:22 |
clarkb | I'm fine with using dib for it if its what we want to do. We likely want to give shade a heads up that they will no longer get transitive testing of this via nodepool though | 17:23 |
*** Guest51737 is now known as mgagne | 17:26 | |
*** mgagne has quit IRC | 17:27 | |
*** mgagne has joined #zuul | 17:27 | |
pabelanger | okay, I'll work on --checksum patch for now | 17:31 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool: Add --checksum support to disk-image-create https://review.openstack.org/406411 | 17:45 |
pabelanger | clarkb: jeblair: I need to relocate, back in about 20mins. Mind looking over ^ and leaving some comments? jeblair, I know we talked about moving it into DibImageFile(); pointers on how you see that working would be helpful. Guessing some sort of remove() function we'd call? | 17:48 |
*** harlowja has joined #zuul | 17:50 | |
jeblair | pabelanger: lgtm. and exactly -- i'd probably just move all of 240-252 into DibImageFile.remove() | 17:51 |
jeblair | Shrews, greghaynes, mordred: ^ if you could look at that too -- this would switch nodepool to use dib's checksumming rather than shade's (which we've been doing for openstack for a while -- see earlier conversation here for more infos) | 18:05 |
*** yolanda has quit IRC | 18:07 | |
*** hashar has quit IRC | 18:11 | |
greghaynes | jeblair: pabelanger left a comment on https://review.openstack.org/#/c/406411/2 | 18:13 |
pabelanger | greghaynes: I would be okay with that | 18:16 |
pabelanger | also, for some reason our cleanup worker isn't running properly: http://logs.openstack.org/11/406411/2/check/gate-nodepool-python27-ubuntu-xenial/0a99b56/console.html | 18:21 |
pabelanger | looks to be a possible race | 18:21 |
pabelanger | I'll dig into that shortly | 18:21 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool: Sort images and providers in zookeeper https://review.openstack.org/407124 | 18:21 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool: Add --checksum support to disk-image-create https://review.openstack.org/406411 | 18:23 |
openstackgerrit | Clint 'SpamapS' Byrum proposed openstack-infra/zuul: Re-enable test_client_get_running_jobs https://review.openstack.org/407135 | 18:56 |
SpamapS | jeblair: ^ that one there.. I'm really not sure I'm doing the right thing. :-P | 18:56 |
SpamapS | I don't quite understand why those items were left in "TODOv3"... so some guided spelunking of history may be necessary | 18:57 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool: Fix race condition in test_image_rotation_invalid_external_name https://review.openstack.org/407139 | 19:04 |
pabelanger | jeblair: Shrews: ^ should fix up a race condition: http://logs.openstack.org/11/406411/3/check/nodepool-coverage-ubuntu-xenial/ebe55bd/console.html | 19:05 |
*** Zara_ has joined #zuul | 19:23 | |
*** tflink_ has joined #zuul | 19:27 | |
*** Zara has quit IRC | 19:28 | |
*** tflink has quit IRC | 19:28 | |
jeblair | SpamapS: that's information that the worker returns to zuul, mostly for the use of the admin via logs. there was no worker in v2.0, and in v2.5, it's mostly ignored/unimplemented. url is an interesting one because it's supposed to be the place where a user can see the job results (ie, the jenkins log). in v2.5 it's the telnet url. in v3, we don't actually have a solution for it yet (it should probably be the websocket streamer url once we ... | 19:30 |
jeblair | ... have that ready). | 19:30 |
jeblair | SpamapS: i think we can just put *something* in the url field for now, and then update it later when that's there. | 19:31 |
jeblair | SpamapS: url is the only one that should really affect zuul operation | 19:31 |
jeblair | SpamapS: for the rest, we just need to add that information, since we never had before. or potentially cull any fields that are no longer useful. | 19:32 |
*** yolanda has joined #zuul | 19:32 | |
SpamapS | jeblair: ok, thanks that's helpful. | 19:35 |
SpamapS | jeblair: I'm not entirely sure what the worker name should be now though. | 19:35 |
Shrews | pabelanger: confirmed the random failure in test_image_rotation | 19:37 |
pabelanger | Shrews: Ya, I'm still hunting that one. | 19:37 |
jeblair | SpamapS: i think previous values were 'Jenkins' and 'Turbo Hipster'. i think there's some duplication in there. and we can probably lose the 'worker_' prefix on some of those. if you want to propose a cleanup, i think this transition would be a swell time for it. | 19:40 |
SpamapS | jeblair: Indeed, I'd prefer to de-dupe than to duplicate cruft. | 19:50 |
* SpamapS will ponder whilst gym-ing | 19:50 | |
pabelanger | jeblair: clarkb: Do you mind reviewing: https://review.openstack.org/#/c/407139/ fixes a race condition in our testing. | 19:57 |
pabelanger | clarkb: mordred: also https://review.openstack.org/#/c/406411/ could use your thoughts too, using --checksums now with diskimage-builder | 19:57 |
clarkb | pabelanger: commented on first one | 19:58 |
Shrews | pabelanger: geez, easily reproduced that failure at first. now i'm having a devil of a time | 19:59 |
Shrews | must've got lucky the first time | 19:59 |
pabelanger | Shrews: Ya, let me know if you know the secret. Cause I haven't reproduced it locally yet | 20:00 |
clarkb | pabelanger: is there a reason we can't do 406411 with the TODO done? | 20:00 |
clarkb | pabelanger: in fact, if I am reading that correctly I think it may already do it | 20:02 |
pabelanger | clarkb: we could, but not sure when I'll get to that. And since we're leaking checksum files, figured we could land it first then work on refactor | 20:02 |
clarkb | from_image_id is looking at $id.qcow2 $id.raw $id.arbitrary.suffix and returning every file that has the $id prefix I think | 20:02 |
clarkb | pabelanger: I guess what I am trying to say is I think this is mostly done and is likely even less code to write than the current change | 20:03 |
pabelanger | right, let me look at 407139 first then will loop back | 20:03 |
clarkb | pabelanger: DibImageFile already keeps track of the checksum files | 20:04 |
clarkb | so I think waht you need is 'if image.md5 is not None and os.exists(image.md5) then delete image.md5 | 20:04 |
clarkb | except in this case its called 'file' not 'image' | 20:04 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool: Fix race condition in test_image_rotation_invalid_external_name https://review.openstack.org/407139 | 20:06 |
clarkb | oh sorry thats the loaded hash so needs just a small amount of updating to store the file path too | 20:06 |
pabelanger | okay, looking at 406411 | 20:06 |
clarkb | I am about to post comments in line too which may help | 20:07 |
pabelanger | k | 20:07 |
clarkb | done | 20:07 |
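The guard clarkb outlines above ("if image.md5 is not None and os.exists(image.md5) then delete") amounts to something like this (illustrative names, not nodepool's actual API):

```python
import os

def remove_image_files(image_path, checksum_paths):
    """Hypothetical sketch of the cleanup being discussed: delete the
    image along with any checksum files recorded for it, skipping
    entries that are unset (e.g. builds from an older dib with no
    checksum files) or already gone."""
    for path in [image_path] + list(checksum_paths):
        if path is not None and os.path.exists(path):
            os.remove(path)
```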
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool: Add cleanup worker debug code https://review.openstack.org/407184 | 20:10 |
Shrews | pabelanger: ^^^^ ran more than 100 attempts. let's just add some debug code to try and catch it in zuul | 20:10 |
pabelanger | clarkb: is storing the path to the md5 file in DibImageFile going to be an issue? We currently don't store the path for our actual DIB, wondering if we could get out of sync somehow | 21:14 |
clarkb | pabelanger: you could do md5_to_path instead if you like | 20:15 |
pabelanger | Ya, let me try that first | 20:16 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool: Add cleanup worker debug code https://review.openstack.org/407184 | 20:19 |
*** hashar has joined #zuul | 20:30 | |
Shrews | ? | 20:32 |
Shrews | nm | 20:32 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool: Add --checksum support to disk-image-create https://review.openstack.org/406411 | 20:53 |
pabelanger | clarkb: okay, I decided to return a dict for to_path^, rather than expose another function call | 20:54 |
pabelanger | let me know what you think | 20:54 |
pabelanger | Shrews: here is something interesting: http://logs.openstack.org/39/407139/2/check/nodepool-coverage-ubuntu-xenial/5b10369/console.html#_2016-12-05_20_11_17_499268 | 20:59 |
pabelanger | we are waiting 23 seconds it looks like between found image and uploading | 20:59 |
pabelanger | curious what is going on there | 21:01 |
Shrews | pabelanger: the only real processing b/w those two statements is the checksum stuff | 21:03 |
Shrews | fs i/o maybe | 21:03 |
pabelanger | Shrews: it should be zero right now, we haven't enabled checksums yet for unit tests | 21:04 |
Shrews | pabelanger: it still hits the f/s | 21:04 |
pabelanger | right, check to see if file exists | 21:05 |
Shrews | os.path.isfile().. | 21:05 |
Shrews | yeah | 21:05 |
pabelanger | ya | 21:05 |
pabelanger | our should have SSDs in osic-3500 | 21:06 |
pabelanger | we* | 21:06 |
Shrews | yeah, i dunno. that's really the only thing in between those log stmts | 21:07 |
pabelanger | let me rebase it a top of https://review.openstack.org/#/c/406411 | 21:09 |
clarkb | pabelanger: I think that breaks the from_path(to_path()) relationship in the object | 21:09 |
clarkb | pabelanger: but also it's built to be objecty, so keeping that objectyness is probably a good thing | 21:09 |
pabelanger | clarkb: oh, maybe | 21:09 |
clarkb | I think its fine if anytime the md5 property is set you also set the path that came from and have a second property | 21:10 |
clarkb | should be very simple | 21:10 |
*** jamielennox|away is now known as jamielennox | 21:13 | |
pabelanger | clarkb: can you explain your comment more? Looked at the code again, don't see where we invoke from_path(to_path()) in the code. | 21:14 |
clarkb | pabelanger: its a general property of the object | 21:14 |
*** tflink_ is now known as tflink | 21:16 | |
clarkb | pabelanger: you should be able to take the to_path output and use it to get an equivalent object using from_path | 21:16 |
clarkb | and since its easy to do this in a backward compatible manner while preserving that complementary behavior (just use a new attribute that updates when the hash updates) I think we should do it that way | 21:17 |
clarkb | (or alternatively adding a new method) | 21:17 |
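The round-trip property being defended here can be shown with a minimal stand-in for DibImageFile (the class below is illustrative; the real object carries more state):

```python
import os

class ImageFile(object):
    """Minimal stand-in showing the invariant discussed:
    from_path(to_path(...)) should yield an equivalent object."""
    def __init__(self, image_id, extension):
        self.image_id = image_id
        self.extension = extension

    def to_path(self, images_dir):
        return os.path.join(images_dir,
                            "%s.%s" % (self.image_id, self.extension))

    @staticmethod
    def from_path(path):
        image_id, extension = os.path.basename(path).rsplit(".", 1)
        return ImageFile(image_id, extension)

f = ImageFile("ubuntu-xenial-0000000003", "qcow2")
g = ImageFile.from_path(f.to_path("/opt/images"))
assert (g.image_id, g.extension) == (f.image_id, f.extension)
```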
pabelanger | Did we do that before? I don't see where that is in zuulv3 branch now | 21:18 |
clarkb | its in nodepool before | 21:19 |
clarkb | the object interface doesn't change | 21:19 |
pabelanger | in fact, from_images_dir is only used in test_builder.py right now | 21:19 |
clarkb | ignoring where these methods are or aren't called. It's a carefully crafted object | 21:20 |
clarkb | currently it has a few properties that are nice to have. This breaks one of them | 21:20 |
clarkb | so I am asking that we don't break that and instead solve this problem in another way | 21:20 |
pabelanger | okay, I'm fine to revert. Just having trouble following along | 21:21 |
clarkb | let me annotate the change a bit more then | 21:21 |
jhesketh | Morning | 21:26 |
clarkb | pabelanger: commented, does that help? | 21:28 |
clarkb | er any() is the wrong function | 21:31 |
* clarkb looks up the python builtin that does what we want there | 21:31 | |
clarkb | filter() | 21:31 |
pabelanger | clarkb: filter(None, [filename, f.md5_file, f.sha256_file]) looks like the syntax? | 21:39 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul: Update storyboard links in README https://review.openstack.org/407212 | 21:39 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul: Add roadmap to README https://review.openstack.org/407213 | 21:39 |
clarkb | pabelanger: filter([filename, f.md5_file, f.sha256_file]) | 21:40 |
clarkb | oh wait no you are right | 21:40 |
pabelanger | actually, I can pass the function | 21:41 |
clarkb | pabelanger: the default function is what you want though | 21:41 |
clarkb | or use the comprehension that it is compatible with in the docs | 21:41 |
clarkb | [item for item in items if item] | 21:41 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool: Add --checksum support to disk-image-create https://review.openstack.org/406411 | 21:45 |
pabelanger | okay, passed a function to filter | 21:45 |
clarkb | pabelanger: oh if you want to use it that way (which is fine) you may want to use map | 21:47 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool: Add --checksum support to disk-image-create https://review.openstack.org/406411 | 21:47 |
clarkb | pabelanger: map() is a bit more direct in your intent there I think | 21:47 |
pabelanger | clarkb: okay, let me see the difference | 21:47 |
clarkb | map returns all the results of calling that function, not just the ones that evaluate to true | 21:48 |
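A quick demonstration of the filter/map distinction being discussed, using checksum-style paths where some entries may be None:

```python
paths = ["img.qcow2", None, "img.qcow2.sha256"]

# filter() keeps only truthy values, dropping the None entries
existing = list(filter(None, paths))
assert existing == ["img.qcow2", "img.qcow2.sha256"]

# the equivalent comprehension suggested above
assert [p for p in paths if p] == existing

# map() instead applies a function to every element, Nones included
lengths = list(map(lambda p: len(p) if p else 0, paths))
assert lengths == [9, 0, 16]
```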
SpamapS | 10 minutes till meeting | 21:50 |
SpamapS | jeblair: did you point that script at board 41 yet? | 21:50 |
jeblair | SpamapS: not yet, that's next on my list | 21:52 |
pabelanger | clarkb: map it is | 21:53 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool: Add --checksum support to disk-image-create https://review.openstack.org/406411 | 21:53 |
clarkb | pabelanger: its a small difference, typically you would use map when you just want to run a function over all the things | 21:53 |
clarkb | whereas filter is a specialized on that filters the returned result | 21:53 |
pabelanger | Ya, http://stackoverflow.com/questions/18939596/python-difference-between-filterfunction-sequence-and-mapfunction-sequence helped | 21:53 |
clarkb | oh wait there may be a slight bug in that, easy fix though. md5_file and sha256_file can be None if those files aren't there for some reason (like old dib) | 21:55 |
clarkb | which is why I wanted the filter in the first place, we need more caffeine | 21:55 |
pabelanger | ha | 21:55 |
clarkb | pabelanger: I think you can just update your new function to have an if not None check | 21:55 |
clarkb | will comment | 21:55 |
pabelanger | sure | 21:55 |
jeblair | SpamapS: i ran it right now against 39 again; i need to fix a bug with it and 41. that's in progress then i'll check it in. | 21:56 |
clarkb | pabelanger: ok commented | 21:56 |
jeblair | SpamapS: 41 done | 21:59 |
jeblair | it's zuul meeting time in #openstack-meeting-alt | 22:00 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool: Add --checksum support to disk-image-create https://review.openstack.org/406411 | 22:03 |
Shrews | pabelanger: will start looking at nb02 tomorrow morning | 22:33 |
pabelanger | Shrews: sure, once system-config is setup for nb02.o.o an infra-root will run launch-node.py against rackspace | 22:35 |
Shrews | *nod* | 22:35 |
pabelanger | that is a manual process, but should result in nb02.o.o coming on line | 22:35 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul: Add update storyboard script https://review.openstack.org/407229 | 23:05 |
jeblair | SpamapS: ^ there's the script. i'll add that to crontab on my workstation | 23:05 |
*** hashar has quit IRC | 23:10 | |
clarkb | jeblair: Shrews commented on the zuul-nodepool zk protocol spec | 23:16 |
jeblair | clarkb: ah. i can take care of most of that quickly -- regarding the quotas thing, i realize i had an implicit assumption -- we would only support one launcher per provider (ie cloud-region). mostly because of rate limits. | 23:22 |
clarkb | jeblair: ya I think that's a fine way to solve the problem, but its worth calling out if that is what we are designing for since its a constraint | 23:22 |
SpamapS | jeblair: sweet, it's much appreciated. | 23:23 |
jeblair | clarkb: i agree that we could support more than one with the options you suggest. for users without ratelimit issues, that may be a fine way to run. and even for us, we might just be able to halve our rate limits and run two. | 23:23 |
jeblair | clarkb: so how about i mention the assumption, and possible future enhancement based on your suggestions (which i think we will be forward-compatible with). | 23:23 |
jeblair | ? | 23:23 |
clarkb | jeblair: sounds good | 23:23 |
clarkb | "basically current design assumption has this constraint. Should that become undesireable it can be modified in one of these ways to do the other thing" | 23:24 |
clarkb | and yes it should be forward compatible | 23:25 |
clarkb | since one region per launcher trivially satisfies those other quota managment situations | 23:25 |
*** willthames has joined #zuul | 23:26 | |
clarkb | pabelanger: looks like the latest patchset of that change timed out on a test | 23:28 |
clarkb | pabelanger: wondering if md5sum and sha256sum aren't present on the test boxes and so result in empty files? | 23:31 |
clarkb | pabelanger: ok left a few comments on stuff I noticed debugging ^ but haven't figured it out | 23:42 |
clarkb | gonna try and finish up reviewing AJaeger's Xenial stuff and will swing back around to this | 23:42 |
jeblair | clarkb: one more issue with multiple launchers working on the same quota pool -- the algo I put in the spec says that if a request for a large set of nodes arrives, the launcher for that provider will fill that request before it continues on to the next one. in a constrained situation, a second launcher might continue to satisfy small requests while the large one is still outstanding, making it take 2x as long (or worse, maybe even starve it). | 23:48 |
jeblair | clarkb: so that would require some coordination, or a different algorithm. | 23:48 |
jeblair | clarkb: fortunately, that's still something we can change in the future if needed. | 23:49 |
clarkb | ya we could potentially force quota zk updates to be greedy but that has the chance of starving the smaller requests | 23:50 |
jeblair | clarkb: i'm going to state the problems but be slightly more vague about the solutions | 23:51 |
jeblair | clarkb, Shrews: updated https://review.openstack.org/305506 | 23:53 |
clarkb | jeblair: looks good thanks | 23:58 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!