Friday, 2017-03-24

clarkbthinking about this more I think you are right and we want to see the config drive data00:02
clarkbbecause this must've worked before since we short-circuited on the dhcp path00:02
clarkbbut now we are getting something that isn't dhcp (and is unknown) and breaking glean00:02
clarkbit's possible it's just junk and we'll bring up our interface with dhcp on the next item in the list, but seems like maybe something in nova changed here?00:03
SpamapSwoot.. running with OS_TEST_TIMEOUT=9999 produced useful things00:05
SpamapSRan 211 (+27) tests in 743.023s (-9.924s)00:05
SpamapSFAILED (id=20, failures=9 (+5), skips=29)00:05
* SpamapS EOD's for now00:06
clarkbpabelanger: maybe https://review.openstack.org/#/c/430910/00:06
clarkbwhen did the job start failing? ^ merged yesterday00:06
pabelangerclarkb: ya, yesterday00:06
jeblairclarkb, pabelanger: how can we get a copy of the configdrive data?00:10
clarkbjeblair: I'm working to boot a cirros host on this held instance and can dump it from there00:11
jeblairoh i see that in -infra now :)00:11
clarkbwatch cirros not have the ability to mount the device once I get it up :/00:13
clarkb{"services": [], "networks": [{"network_id": "7ee08c00-7323-4f18-94bb-67e081520e70", "link": "tap14b906ba-8c", "type": "ipv4_dhcp", "id": "network0"}, {"network_id": "7ee08c00-7323-4f18-94bb-67e081520e70", "link": "tap14b906ba-8c", "type": "ipv6_dhcp", "id": "network1"}], "links": [{"ethernet_mac_address": "fa:16:3e:9e:84:5c", "mtu": 1450, "type": "ovs", "id": "tap14b906ba-8c", "vif_id":00:18
clarkb"14b906ba-8c2c-4829-852e-2987d25ceabc"}]}00:18
clarkbthe issue appears to be "ipv6_dhcp"00:18
clarkbglean doesn't handle that00:18
pabelangeris that new?00:19
clarkbalso WHY ARE WE DHCPING IPV6. ok I feel better after getting that out of my system00:19
mordredclarkb: well - at least we know what the problem is00:19
clarkbpabelanger: sort of, I think it's been there but only if explicitly enabled, but now nova just defaults to it enabled00:19
clarkbso my patch to glean will work for now00:19
mordredI mean - sadly it means we're down the stack to a glean patch to fix it ...00:20
mordredoh - I missed that00:20
clarkbbut only for ipv400:20
clarkbmordred: https://review.openstack.org/44936600:20
mordred+A00:20
clarkband I don't feel like untangling ipv6 dhcp tonight because well YOU SHOULDN'T DHCP00:20
clarkbthere I got it out of my system more00:20
pabelangerwell, ipv6 support in glean is limited too. We don't even support centos yet00:21
clarkbI wonder if this means neutron's ipv6 default is dhcp not RAs now00:22
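
(For context, the failure being chased here is glean hitting the unknown "ipv6_dhcp" type in the config drive's network_data.json. A minimal sketch of the kind of defensive handling being discussed, skipping entries whose type the tool does not support instead of erroring out; the names below are illustrative, not glean's actual code, and per the discussion 449366 only covers the ipv4 side for now:

    import json
    import logging

    log = logging.getLogger('glean')

    # types the (hypothetical) interface writer knows how to configure
    SUPPORTED_TYPES = {'ipv4', 'ipv4_dhcp'}

    def parse_network_data(path='/mnt/config/openstack/latest/network_data.json'):
        with open(path) as f:
            data = json.load(f)
        configs = []
        for net in data.get('networks', []):
            if net['type'] not in SUPPORTED_TYPES:
                # e.g. the 'ipv6_dhcp' entry above: log and move on so the
                # remaining ipv4 entries still get configured
                log.warning('Skipping unsupported network type: %s', net['type'])
                continue
            configs.append(net)
        return configs
)
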
openstackgerritJoshua Hesketh proposed openstack-infra/nodepool feature/zuulv3: Merge branch 'master' into feature/zuulv3  https://review.openstack.org/44532500:22
jlkjesusaur: any chance you want to port over https://review.openstack.org/#/c/415750  or should I?00:22
pabelangerclarkb: cool, 449366 worked for fedora / centos00:33
pabelangerwaiting for ubuntu00:33
openstackgerritJoshua Hesketh proposed openstack-infra/nodepool feature/zuulv3: Fix test_leaked_node_not_deleted for v3  https://review.openstack.org/44937500:47
pabelangerclarkb: mordred: cool, once we clean up line breaks, we can see the glean error: http://logs.openstack.org/30/358730/6/check/gate-dsvm-nodepool/eadd068/logs/screen-nodepool.txt.gz00:51
pabelangerwill make debugging things easier00:52
jesusaurjlk: ya, I can take a stab at that if you want to take a break from the stack to work on something else00:53
jlkwell I'm working on some that are after that, which are going to be big pains00:53
jlkall the status/review stuff00:53
jesusaurah, ok00:54
jesusaurin that case, I'll take a look at porting 415750 tomorrow00:54
pabelangermordred: left a comment on 35873001:02
*** jamielennox is now known as jamielennox|away01:39
openstackgerritJesse Keating proposed openstack-infra/zuul feature/zuulv3: Add support for requiring github pr head status  https://review.openstack.org/44939001:47
*** jamielennox|away is now known as jamielennox01:48
*** timrc has quit IRC02:38
*** timrc has joined #zuul02:49
*** bhavik1 has joined #zuul06:20
*** bhavik1 has quit IRC07:38
*** hashar has joined #zuul11:58
mordredpabelanger: woot12:01
openstackgerritMonty Taylor proposed openstack-infra/nodepool master: WIP Fetch server console log if ssh connection fails  https://review.openstack.org/35873012:03
pabelangermordred: also left a question on ^, maybe rethinking my original patch.  I think we want console-log to be part of the provider section, since it deals with the SSH thing, not the label.12:26
pabelangeryay13:10
pabelangerhttp://logs.openstack.org/30/358730/7/check/gate-dsvm-nodepool/841af13/logs/screen-nodepool.txt.gz#_2017-03-24_12_40_00_97213:10
Shrewsoh, so it's a glean bug?13:28
mordredShrews: yup!13:30
mordredwell - actually13:30
mordredwe think it's an openstack bug13:30
mordredbut ... it's also a glean bug13:30
Shrewsbugs on bugs on bugs on bugs13:30
mordredpabelanger: yah - I think that makes sense too13:31
*** hashar is now known as hasharAway14:29
jeblairShrews: https://review.openstack.org/449147 is true?  i thought that was a misapprehension...15:22
Shrewsjeblair: which part of that are you asking is true?15:23
jeblairShrews: the "you must specify all azs in order to have nodes grouped"15:24
Shrewsjeblair: yes, that is true. unless there is an existing node that it can select for the first node. if it has to build a node, there is no guarantee15:25
Shrewsbecause you don't know what zone will be selected for a new node15:25
jeblairShrews: and we may try to launch them all at the same time... right?15:26
Shrewswell, no. the launch is started as soon as we determine we need a new node15:27
jeblairShrews: right, but we don't wait for it to get far enough to find out what az it landed in before we start the launch for the next one15:28
Shrewsright15:28
jeblair(by design because we'd never get anything done :)15:28
Shrewsi'm open to better wording there15:28
jeblairShrews: no, the wording is good, i just object to the meaning :)15:28
jeblairShrews: is there a way to get the list of az's?  from shade or occ maybe?15:29
Shrewsnot sure, off hand15:29
jeblairmordred, clarkb: ^ ?15:30
mordredjeblair: I do not believe so15:30
mordredjeblair: BUT - we could certainly make one15:30
mordredmaybe15:30
mordredlet me do some poking15:30
mordred(at the moment I am unaware of any clouds we have access to that have more than one az, so testing az things is admittedly more difficult)15:31
jeblairif so, we can turn this into "leave blank to have nodepool select an AZ from the full set; list az's to restrict nodepool to a subset".  which is, i think what a user would expect.15:32
SpamapSConsole log also usually has SSH host keys btw. Might be good to fetch that before even sshing when we start caring about things like CD or build attestation15:32
* SpamapS hasn't had coffee so.. grains of salt15:32
mordredjeblair: ooh - neat. there is a list-azs call we can make15:33
mordredSpamapS: well15:33
mordredSpamapS: that's only true if the host runs cloud-init, which our hosts do not15:33
mordredSpamapS: and also rackspace does not support fetching console-log (you always get empty string)15:33
SpamapSYeah let's not do attestation there.15:34
mordredSpamapS: this is, actually, the reason we keep requesting an API call to be added to nova that would fetch the host key over the network that exists in the cloud15:34
mordredSpamapS: but for whatever reason have been unable to get any traction on it being a super important thing to have15:34
SpamapSmordred: I think that's called DNSSEC for designate? ;-)15:35
mordredSpamapS: well - that would be a way - but honestly having a nova api call that gets neutron to ping port 22 and return the host key should be fairly trivial to implement15:35
mordredbut my attempts to write it up always run into me describing a poor sample implementation15:36
mordredSpamapS: https://review.openstack.org/#/c/167202/ - it's been a couple of years since I tried15:37
SpamapSIt would. Break the TOFU hegemony!15:37
mordredjeblair: I'll add a list_azs call to shade so that we can implement that15:38
jeblairmordred: cool, thanks!15:39
mordredin case anyone wants to see the payload it returns:15:39
mordred[Munch({u'zoneState': {u'available': True}, u'hosts': None, u'zoneName': u'ca-ymq-2'}), Munch({u'zoneState': {u'available': False}, u'hosts': None, u'zoneName': u'nova'})]15:39
mordredthat's for vexxhost15:39
mordredthere is one 'available' az and one that is not available15:39
mordredso I think that's pretty good15:39
jeblairah cool15:39
jeblairi reckon we should only try the available ones.  :)15:40
mordredc._compute_client.get('/os-availability-zone') is the call - so it's pretty nice and easy15:40
mordredjeblair: you're assuming that unavailable means unavailable15:41
mordredjeblair: it's possible we only want to try unavailable ones, since maybe "unavailable" means "can boot nodes"15:41
mordredsort of like how router:external does not mean "this network can route packets externally"15:41
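
(The availability-zone listing mordred describes is the compute API's /os-availability-zone resource. A rough sketch of using it from shade, based on the call and payload pasted above; shade had no public wrapper for this yet, so this goes through the private compute client, and the cloud name is an assumption from clouds.yaml:

    import shade

    cloud = shade.openstack_cloud(cloud='vexxhost')

    # the raw call mordred mentions above; per the pasted payload it yields
    # entries like {'zoneName': 'ca-ymq-2', 'zoneState': {'available': True}, 'hosts': None}
    zones = cloud._compute_client.get('/os-availability-zone')

    available = [z['zoneName'] for z in zones if z['zoneState']['available']]
    print(available)  # e.g. ['ca-ymq-2']

Whether nodepool should restrict itself to zones marked available is the open question raised just above.)
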
mordredjeblair, Shrews: working on ipv6-preferred ... it seems like we grab "preferred_ip" and use it for the keyscan - but do not store it in the node we put in zk15:42
mordredam I just reading that wrong?15:42
openstackgerritPaul Belanger proposed openstack-infra/nodepool feature/zuulv3: Remove SSH support from nodepool  https://review.openstack.org/44668315:46
*** hasharAway is now known as hashar15:57
Shrewsmordred: not at a computer now, but I believe that is correct.16:06
mordredk. fixing16:06
mordredjeblair: also - nodepool does a keyscan as part of boot, and zuul also does one when it's making the inventory - should we just put the value of the nodepool keyscan into the zk node record?16:09
SpamapSI thought we already did that16:11
jeblairmordred: yes -- i believe we are in mid-transition on that (moving from zuul to nodepool).  pabelanger has been working on that.16:11
mordredcool16:11
mordredI will not touch that right now then16:11
pabelangeryes!16:11
pabelangermordred: https://review.openstack.org/#/c/446785/16:11
openstackgerritMonty Taylor proposed openstack-infra/nodepool feature/zuulv3: Remove ipv6-preferred and rely on interface_ip  https://review.openstack.org/44970516:18
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Consume interface_ip from nodepool node  https://review.openstack.org/44970616:18
mordredShrews, jeblair: I'm getting a weird test failure on the nodepool change that I might need some advice on16:18
jeblairmordred: which test?16:19
mordredjeblair: a few - but for instance, nodepool.tests.test_commands.TestNodepoolCMD.test_alien_list_fail16:20
mordredjeblair: http://paste.openstack.org/show/604062/16:20
mordredjeblair: I figured I'd push the patch up mid-flight while I look for the break16:21
jeblairmordred: oh, i apparently broke that in 44934216:21
mordredneat!16:21
mordredPHEW16:21
mordredI could not see for the life of me how my change could have broken _that_16:22
mordredjeblair: hrm. I have a philosophical question for you16:23
jeblairmordred: (fixing)16:24
mordredjeblair: the _Actual_ failure i'm producing now is that I need to add interface_ip to Dummy's _create16:24
SpamapSso.. this is interesting16:24
mordredbut then in the test we have for this, I'll need to basically just implement logic in the dummy to set interface_ip to the values we test for in the unit test16:24
mordredjeblair: which will lead us to test that our unit test matches the fake16:25
SpamapSThe Ref/Change/etc. change I'm making produced 9 tests that it says failed16:25
SpamapSbut none of those 9 fail when run alone16:25
* SpamapS trys testr run --analyze-isolation16:25
mordredSpamapS: I just had that in a shade change - it was something assigning to a global16:25
SpamapSmordred: sounds about right16:26
jeblairmordred: yes -- however, since we're not writing a unit test of the provider manager per se, but rather, nodepool, we are able to test that nodepool functions.16:26
mordredjeblair: totally - it's just - the logic as to whether interface_ip contains the v6 or the v4 value is contained in shade - so I _think_ the specific v6 test might need to get stripped down a smidge16:27
jeblairmordred: oh, lemme look at that test16:27
pabelangerSo, zuul question. If a job is in gate, and somebody then does a -2, why doesn't zuul kick the patchset out of the pipeline right away?16:28
*** hashar has quit IRC16:29
jeblairmordred: i agree, on the face, that's not very useful.16:30
mordredjeblair: I think I just want to make it test that if there is a v6 network specified, the v6 address is in interface_ip (and there is a v6 address in public_v6), and if not, v6 is blank and v4 and interface_ip are the same16:30
jeblairpabelanger: bug16:30
pabelangerk16:30
mordredI'm still not 100% convinced that's testing anything - other than exercising that we set interface_ip on the node16:30
jeblairmordred: yeah, the worth of this probably depends on how much other ipv6-interested code there is in nodepool.16:31
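
(To make the interface_ip discussion concrete: shade computes a single "address you should connect to" per server, and the point of 449705/449706 is for nodepool to store that in zk and for zuul to consume it, instead of nodepool carrying its own ipv6-preferred logic. Roughly, and purely as an illustration rather than shade's actual code, the selection mordred describes is:

    def pick_interface_ip(server, private=False):
        """Illustrative only: roughly the selection shade performs when it
        populates server['interface_ip'], per the discussion above."""
        if private and server.get('private_v4'):
            return server['private_v4']
        # if the node has a v6 address, that is what ends up in interface_ip;
        # otherwise it falls back to the v4 address
        return server.get('public_v6') or server.get('public_v4') or server.get('private_v4')

Which is also why re-testing the v6-vs-v4 choice inside nodepool mostly just exercises the fake, as noted above.)
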
openstackgerritJames E. Blair proposed openstack-infra/nodepool feature/zuulv3: Remove api-timeout and provider.image-type  https://review.openstack.org/44934216:32
openstackgerritJames E. Blair proposed openstack-infra/nodepool feature/zuulv3: Remove deprecated networks syntax  https://review.openstack.org/44935416:32
jeblairmordred: ^ updated things for you to rebase on when ready16:32
mordredjeblair: woot!16:32
pabelangerfollow-up question, we currently have zuul enqueue for the client, is there a specific reason we don't have zuul dequeue?16:32
jeblairpabelanger: https://review.openstack.org/9503516:33
pabelangerjeblair: Oh, nice16:33
jeblairwe should fix that up and merge it on its 3 year anniversary.  :)16:34
pabelangerSpamapS: ^ want to do that :)16:34
pabelangerI mean, I don't mined16:34
pabelangermind*16:34
mordredwow16:34
mordreda 5-digit change16:35
pabelangerI could actually use that today, to clean up tripleo change queue16:35
pabelangerthey are at 20hrs16:35
mordredcommitter: jeblair@hp.com16:36
jeblairthe cool thing?  it *does not need a rebase*16:36
mordredlove it16:36
mordredjeblair: that's amazing16:36
pabelangeryar16:36
mordredyah- whoever fixes it - don't rebase it16:36
pabelangergoing to take a stab at it16:36
mordredit'll be cool to have that merge in that state16:36
pabelangermordred: so, git-review will safely do things right16:37
pabelangernot rebase it16:37
mordredyes. we removed rebase-by-default a while ago16:37
pabelangercool16:37
mordredoh. wait16:37
mordredpabelanger: make sure you have rebase = false in the gitreview section of your global .gitconfig16:37
pabelangerk16:37
mordred[gitreview]16:37
mordred        rebase = false16:37
mordredis in my ~/.gitconfig ... I honestly do not know if it's required, but it can't hurt :)16:38
jeblairi'm pretty sure it's not required16:38
mordredcool16:38
* mordred wonders if he's the only human using usepushurl for git-review16:38
jeblairi don't have a [gitreview] section16:38
SpamapSplease do take that change over16:39
SpamapSI got in over my head pretty quickly.16:39
mordredwith usepushurl, my origin remote for nodepool looks like this:16:40
mordred[remote "origin"]16:40
mordred        url = https://git.openstack.org/openstack-infra/nodepool16:40
mordred        fetch = +refs/heads/*:refs/remotes/origin/*16:40
mordred        pushurl = ssh://mordred@review.openstack.org:29418/openstack-infra/nodepool.git16:40
mordredbecause I'm weird like that16:40
mordredjeblair: I'm now getting this:16:41
mordredAttributeError: 'Provider' object has no attribute 'max_servers'16:41
mordredbut only in my test :)16:41
pabelangerHmm, tests don't appear to work for me on that patch16:45
pabelangerlet me see why16:45
jeblairmordred: that sounds like something missed from my change; max_servers moved to pool16:46
jeblairmordred: paste full tb?16:46
mordredhttp://paste.openstack.org/show/604066/16:46
jeblairmordred: oh, maybe it's something that you ported over from the old ipv6 change16:46
jeblairmordred: nope.  apparently you are testing statsd but zuul is not.16:47
mordredoh - neat?16:47
jeblairyes, that's the right word for that.16:47
mordredwell - maybe my venv is just old16:47
pabelangerthere we go, working now16:48
jeblairi'll work on adding a statsd test16:48
openstackgerritClint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Refactor out Changeish  https://review.openstack.org/44891316:49
mordredooh. that'll be fun16:49
mordredpabelanger, clarkb, jeblair: btw - the response on clarkb's mail to the mailing list about the ipv6_dhcp value is fascinating16:51
mordred"Config drive claims ipv6_dhcp, neutron api says slaac" if you wanna go read16:51
mordredthe fun line is "type ipvX_dhcp" really means "Neutron will manage ipvX addresses", not necessarily that it will use the DHCP protocol.16:51
mordredso - along with "router:external" does not mean "routes packets externally" we now have "dhcp does not mean uses dhcp"16:52
jeblairmordred: please address all followup responses to the wall!16:54
jeblairmordred: it looks like that statsd exception may be non-fatal, yeah?  so not blocking your progress?16:57
mordredthat's possible16:59
SpamapSmordred: our APIs are quite Orwellian.17:00
SpamapSOrwell played?17:00
clarkbmordred: fascinating is not the term I would use17:03
openstackgerritMonty Taylor proposed openstack-infra/nodepool feature/zuulv3: Remove ipv6-preferred and rely on interface_ip  https://review.openstack.org/44970517:04
mordredjeblair: woot. works now17:04
mordredjeblair: you were right, the statsd was not blocking me17:04
jeblairoh interesting: nodepool.launch.requestor.NodePool:min-ready.ready:133.000000|ms17:05
jeblairthat has 2 colons; i'm not sure if that's okay with statsd17:06
jeblairthe "spec" does not say17:06
* jeblair reads javascript17:07
jeblairnope17:11
jeblairwe must not have a : in the key17:12
jeblairhttps://github.com/etsy/statsd/blob/master/stats.js#L23417:12
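
(Background on the statsd wire format: each metric is sent as "<key>:<value>|<type>" and the server splits on ':', so a key built from a requestor like "NodePool:min-ready" produces the malformed line jeblair pastes above. A small sketch of the sort of sanitizing the key needs; this is illustrative only, the real fix is whatever lands in 449737:

    def safe_statsd_key_part(part):
        # statsd splits 'key:value|type' on ':', so strip colons (and '|')
        # out of anything caller-controlled before it goes into the key
        return part.replace(':', '_').replace('|', '_')

    requestor = 'NodePool:min-ready'
    key = 'nodepool.launch.requestor.%s.ready' % safe_statsd_key_part(requestor)
    # -> 'nodepool.launch.requestor.NodePool_min-ready.ready'
)
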
clarkbI'm responding to the glean metadata thing now17:12
openstackgerritJames E. Blair proposed openstack-infra/nodepool feature/zuulv3: Exercise statsd in tests and fix  https://review.openstack.org/44973717:18
jeblairmordred: ^17:18
mordredwoot17:18
openstackgerritPaul Belanger proposed openstack-infra/zuul master: Add a dequeue command to zuul client  https://review.openstack.org/9503517:32
pabelangerokay17:32
pabelangerjeblair: ^I hope that is what you were suggesting in your comment.17:33
pabelangerI also not 100% on the _doDequeueEvent() logic.17:33
pabelangerAnd jenkins +117:42
pabelangeryay17:42
pabelangerno rebase either17:42
clarkbmordred: I have added the enable_dhcp thing to my api grump etherpad17:47
mordredclarkb: yay!17:47
Shrewsi have to admit, i'm having a difficult time grokking the nodepool config file change stuff18:26
jeblairShrews: yeah, it's another of those changes where, now that i've written it, i could probably re-write it to make sense as a series.  but had no idea how to start that way.18:34
jeblairShrews: probably the key thing is: i mostly just made providerworkers poolworkers, so we're going to have 1 worker thread per pool instead of one per provider.  but all the poolworkers for a provider still use the same provider manager, so requests are serialized/rate-limited through the task manager appropriately.18:35
jeblairShrews: let me know if i can shed light on anything18:37
Shrewsjeblair: the new relationship between pool-label and the outer label... why even have the outer 'label' anymore? Can't we just specify the min-ready within the pool?18:38
Shrewsoh, i guess different providers can refer to the same label... but now each can have different specs... so i'm even more confused18:40
jeblairShrews: yes, min-ready is more or less why there's a top-level label.18:42
ShrewsSo, if 'trusty' on provider A has 2G ram, and 'trusty' on provider B has 4G ram, it doesn't matter which one we choose for min-ready.18:42
Shrewswhich is weird, but... ok18:42
jeblairShrews: and we've always been able to have different specs for a label in different providers (fundamentally, every provider is different, so that happens even if, say, you specify them the same way).  previously, that configuration was on the *image* uploaded to the provider, which doesn't make much sense as it's not an attribute of the image, it's an attribute of the node we boot from the image.18:43
clarkbwhich got clunky when you wanted to use the same image booted in different ways18:44
mordredalso - on HP Public we booted 30G instances because we had to, to keep up with the 8G instances on RAX18:44
Shrewsok. i understand that part now18:44
jeblairShrews: yes, so as an operator, if you feel that 4G on B is "equivalent" to 2G on A for your purposes, yes, you can do that.  We probably won't.  But keep in mind that ram is really just a proxy for flavor selection, and no 2 flavors are equivalent anyway.  this would probably be more obvious if, instead of specifying flavor by ram, we specified it by actual flavor name.  which, perhaps, is something we should consider doing.  :)18:45
jeblairsorry, i had typed all of that and didn't want it to go to waste.  :)18:45
Shrewsjeblair: yeah, k. found a couple of issues with the doc part of that change18:46
jeblairto clarkb's point -- yeah, this gets us halfway to being able to have 2G xenial images *and* 8G xenial images.  (the other half is replacing max-servers with a real understanding of quotas).18:46
mordredjeblair: actually - yah - maybe we should consider making flavor selectable by ram or by name18:46
mordredjeblair: I can make a change to do that if you like18:46
jeblairmordred: cool.  we do have 'name filter' which is halfway there, but still also requires ram.  should we even keep ram as an option?18:47
mordredjeblair: well - I kind of like our use of ram as a first-class citizen, because sometimes it expresses the thing you want to express in the config better than "please give me a supersonic"18:48
mordredbut otoh - you might just know you want a supersonic and be done with it18:48
mordred(that's a flavor name from dreamhost, fwiw)18:48
jeblairmordred: yeah.  oh.  i see.  my mental model of the real world was... inaccurate.  I was imagining "Medium (8G)" or something.  boy was i wrong.18:49
clarkbalso ram happens to be our most important resource currently18:49
mordredjeblair: it's that thing where you imagine a world intended to make sense18:49
jeblairEOPENSTACK18:49
clarkbits the thing jobs constantly push against so it makes sense as a proxy18:50
mordredfwiw - shade supports jmespath expressions in the name_or_id field for this - so we could just document that and a user could use those to request a flavor by any attribute a flavor has if they wanted to18:50
mordred(I mean, if we expose flavor_name as a possibility, then we get that for free)18:50
jeblairmordred: hrm, we may want to keep this compatible with non-shade though?18:52
mordredgood call18:52
jeblairmordred: for future linch-pin/aws/etc18:52
mordredand honestly, for flavor, putting in a name seems like a fine thing18:53
clarkbthe really tricky thing though is this isn't a single dimension you have to operate in18:53
mordredthen aws people can say "m1.tiny"18:53
mordredand dreamhost people can say "supersonic"18:53
clarkbdisk, cpu, ram, ip addresses, ports, etc are all in play and you have to manage that matrix18:53
mordredand we can still say 8G for our vexxhost nodes18:53
clarkboh and the total instances quota18:53
mordredclarkb: yah - that's why I think name is likely the most common choice for most users18:53
clarkbya it's mostly a "problem" when you start to mix and match within a provider18:54
mordredyup18:54
jeblairclarkb, mordred: yeah, so this may just be a matter of lifting the requirement for min-ram.  most of the rest may already be in place.18:54
clarkbfor example due to how quotas are set up today we don't get more instances if we mixed in 2GB instances for pep8 jobs18:54
mordredbut if the user says "supersonic" - I believe we as the api consumer can say "how many ram/cpu/disks does that have, and also hey cloud, what's my quota for each of those"18:55
mordredor "hey config, how many rams, disks and cpus has my user told me I can use"18:55
clarkbmordred: yup and find the total max you can do for that18:55
mordredyup18:55
jeblairwell, quota needs to be a bit more sophisticated than that, because we'll be mixing18:55
clarkbjeblair: will we though? it provides zero benefit with how quotas currently work18:56
mordredjeblair: yah - but we'll have the building blocks to do the real calculations18:56
Shrewsjeblair: why per-pool AZs?18:56
jeblairShrews: pools are mostly "arbitrary grouping of cloud resources", so it seems not unreasonable that someone might say "we should only use this flavor on this AZ".18:57
jeblairShrews: i can't recall if that happened or not; i'd have to look back at our hpcloud config.18:58
jeblairmordred: ya18:58
mordredI think hpcloud _talked_ about doing that for us18:58
mordredbut never did18:58
clarkbwe did have networks per az though iirc18:58
clarkbwhich ended up effectively pooling resources across AZs18:58
jeblairclarkb: i mean that if we have 2G and 8G labels, no instance quota, but a ram quota, then for nodepool to determine whether a new node would go over quota, it needs to add up all the RAM it's allocated and make sure it's under the max ram.18:59
Shrewsjeblair: do we want to try to group nodes for a request by az AND pool now?18:59
jeblairclarkb: yeah, that i remembered, so i put networks in pools.18:59
Shrewsor is az enough, independent of pool?18:59
jeblairShrews: yes, and that was my intent.  i think that making providerworker into poolworker has accomplished that, unless i missed something.19:00
jeblairShrews: yes == az+pool19:00
Shrewsjeblair: you don't check pool for pre-existing, which prompted my question19:00
jeblairShrews: ah, then i did miss something.  :)19:01
Shrewsjeblair: making a note for you...19:01
jeblairShrews: thx.19:01
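
(A rough sketch of the config shape being described in this exchange: labels move to the top level and carry min-ready, while each provider owns one or more pools with their own max-servers, availability-zones and per-label boot details. The exact key names here are assumptions for illustration, not the final schema; it is shown as YAML parsed from Python so it can be poked at directly:

    import yaml

    config = yaml.safe_load("""
    labels:
      - name: ubuntu-xenial
        min-ready: 2

    providers:
      - name: some-cloud
        pools:
          - name: main
            max-servers: 10
            availability-zones: ['az1', 'az2']
            labels:
              - name: ubuntu-xenial
                diskimage: ubuntu-xenial
                min-ram: 8192
    """)

    # per-pool AZs and per-pool label specs live under the provider, while
    # min-ready hangs off the single top-level label
    pool = config['providers'][0]['pools'][0]
    print(pool['availability-zones'], pool['labels'][0]['min-ram'])
)
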
openstackgerritMonty Taylor proposed openstack-infra/nodepool feature/zuulv3: Remove mention of non-clouds.yaml from docs  https://review.openstack.org/44978119:03
openstackgerritMonty Taylor proposed openstack-infra/nodepool feature/zuulv3: WIP Add ability to select flavor by name or id  https://review.openstack.org/44978419:24
mordredjeblair: rough draft ^^ unfinished - but pushed it up to see what we think19:24
Shrewsjeblair: oops, ignore my comment about launcher_id. You account for that in the thread name19:24
mordredjeblair: also, best I could see there is no way in voluptuous to say "one and only one of these two keys must be there"19:25
openstackgerritK Jonathan Harker proposed openstack-infra/zuul feature/zuulv3: Add github reporter status_url config option  https://review.openstack.org/44979419:38
jesusaurjlk: ^ here's a port of the status_url change19:38
jeblairmordred: here's *a* way to do it; not sure if it's best: http://paste.openstack.org/show/604099/19:41
mordredjeblair: oh neat19:47
mordredjeblair: you're more voluptuous than I clearly19:48
jeblairmordred: s2 and s3 will need an 'extra' flag: http://paste.openstack.org/show/604100/19:49
jeblair(so you don't have to list the whole schema again)19:49
mordredjeblair: so - does Exclusive('ram', 'flavor'): int, mean "there's a field, ram, it's an int, and it's mutually exclusive with flavor"19:50
jeblairmordred: no, 'flavor' is the group which ties it to name19:51
jeblairmordred: so put ram and name in the same group to make them exclusive with each other; add more items to the 'flavor' group as needed.19:52
mordredjeblair: ah - ok19:52
mordredjeblair: that hurts my brain, but in a good way which tells me I learned something19:53
mordrednot in a bad way like when I learned that dhcp doesn't necessarily mean dhcp19:53
jeblairit's like radio buttons in gui programming.  does that help or hurt?19:53
mordredjeblair: well, now I'm seeing dancing elves in front of me - is that good or bad?19:55
jeblairit is very good19:56
mordred\o/19:56
mordredI've succeeded for today!19:56
openstackgerritMonty Taylor proposed openstack-infra/nodepool feature/zuulv3: Encode exclusivity into voluptuous  https://review.openstack.org/44980120:01
mordredjeblair: so something like that ^^?20:01
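
(For reference, the Exclusive marker jeblair describes groups keys by a shared group name, and at most one key from the group may appear. A minimal sketch of the ram-or-flavor-name case, with assumed key names rather than the exact schema in 449801:

    import voluptuous as v

    pool_label = v.Schema({
        v.Required('name'): str,
        # 'flavor' is the exclusion group: min-ram and flavor-name are each
        # optional, but they may not both be present
        v.Exclusive('min-ram', 'flavor'): int,
        v.Exclusive('flavor-name', 'flavor'): str,
        'diskimage': str,
    })

    pool_label({'name': 'trusty', 'min-ram': 8192, 'diskimage': 'ubuntu-trusty'})  # ok
    pool_label({'name': 'trusty', 'flavor-name': 'supersonic'})                    # ok
    # both at once raises voluptuous.MultipleInvalid:
    # pool_label({'name': 'trusty', 'min-ram': 8192, 'flavor-name': 'supersonic'})

Note that, as mordred says above, this only enforces at most one of the two; requiring exactly one still needs something extra.)
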
*** hashar has joined #zuul20:03
*** hashar has quit IRC20:22
*** hashar has joined #zuul20:25
*** yolanda has quit IRC20:54
*** yolanda has joined #zuul20:56
*** yolanda has quit IRC21:04
*** yolanda has joined #zuul21:05
*** yolanda has quit IRC21:17
*** yolanda has joined #zuul21:19
*** yolanda has quit IRC21:26
*** yolanda has joined #zuul21:32
*** yolanda has quit IRC21:39
*** yolanda has joined #zuul21:39
*** pclinuxos-lxde has joined #zuul21:49
*** pclinuxos-lxde has quit IRC21:50
*** hashar has quit IRC22:03
openstackgerritK Jonathan Harker proposed openstack-infra/zuul feature/zuulv3: Add github reporter status_url config option  https://review.openstack.org/44979422:30
*** yolanda has quit IRC23:17
