Monday, 2019-11-04

01:25 <gibi_ptg> ~>
01:32 <gibi_ptg> stephenfin: hi! it would be nice to sync up about the nova project update today after 17:30
05:38 <gibi_cn> alex_xu: talked to stephenfin we will meet up at 14:00 in front of the Marketplace entrance
09:06 <openstackgerrit> Brin Zhang proposed openstack/nova-specs master: Add flavor group  https://review.opendev.org/663563
11:52 <openstackgerrit> Merged openstack/nova stable/train: Avoid error 500 on shelve task_state race  https://review.opendev.org/692628
13:54 <mriedem> dansmith: so back in rocky the aggregate mirroring stuff was added with a todo to remove some placement failures in the api if you didn't have the api configured for placement (to ease upgrades),
13:54 <mriedem> i'm removing that handling now but there is one thing that is bugging me on the remove case,
13:54 <mriedem> https://review.opendev.org/#/c/660852/1/nova/compute/api.py@5615
13:54 <mriedem> which is the sync_aggregates command can't be used to sync and remove hosts from aggregates, only add
13:55 <mriedem> so i'm trying to decide if placement errors in the remove case should just be caught and logged or actually result in the compute API operation failing
13:55 <mriedem> i'm thinking the latter since otherwise it means we're just silently failing and your scheduling results could be screwed up, and it would be confusing to figure that out later
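[editor's note] The two options mriedem weighs here can be sketched as follows. This is a minimal sketch with hypothetical names, not the actual nova code; `PlacementAPIConnectFailure` stands in for the real exception class:

```python
class PlacementAPIConnectFailure(Exception):
    """Stand-in for nova's real placement-connect-failure exception."""


def remove_host_from_aggregate(host, aggregate, placement, strict=True):
    """Mirror an aggregate host removal to placement (hypothetical helper).

    With strict=True the placement failure propagates, so the compute API
    operation fails loudly. With strict=False the error is swallowed and
    scheduling state can silently go stale, which is the problem described
    above: there is no sync_aggregates path that removes hosts later.
    """
    try:
        placement.remove_host(aggregate, host)
    except PlacementAPIConnectFailure:
        if strict:
            raise  # let the compute API operation fail
        return False  # silent mode: placement is now out of sync
    return True
```

The strict behavior is the one mriedem picks below: a loud failure is easier to debug than a scheduler quietly using stale aggregate data.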
14:16 <mriedem> nevermind, going with the latter. it's a bigger diff but it's more correct i think
14:26 <dansmith> mriedem: okay can read that stuff in a sec
14:26 <dansmith> also, zomg my aggregates patch merged
14:26 <dansmith> er, notificatons I mean
14:29 <openstackgerrit> Matt Riedemann proposed openstack/nova master: Remove PlacementAPIConnectFailure handling from AggregateAPI  https://review.opendev.org/660852
14:29 <mriedem> yeah a few things did over the weekend
14:30 <mriedem> it's a miracle
14:30 <cdent> praise be
14:31 <openstackgerrit> Matt Riedemann proposed openstack/nova master: Remove PlacementAPIConnectFailure handling from AggregateAPI  https://review.opendev.org/660852
14:31 <mriedem> cdent: i finally got that cleaned up ^ what i had sitting didn't ... sit well with me
14:31 <mriedem> because it was like, "oh something failed, wah, nice try retrying it sucker"
14:33 <cdent> a good lesson for entitled whippersnappers, I say
14:34 <cdent> learn em up about life
14:34 <mriedem> back in my day we didn't have idempotent apis
14:34 <mriedem> we had to fight to delete things
14:34 <cdent> exactly so
14:35 <cdent> What has idempotency got us? Social media addiction!
14:36 <mriedem> that's a bit of a stretch, even for me
14:41 <openstackgerrit> Matt Riedemann proposed openstack/nova master: Log reason for remove_host action failing  https://review.opendev.org/692833
14:43 <mriedem> dansmith: i know it's a bit big and not very fun to review, but the bottom few patches of the cross-cell-resize series are ready for review and +2ed by gibi if you get some time to sift through one or two of those
14:44 <dansmith> sure, I need to get back to that, sorry
14:44 <mriedem> np
14:46 <mriedem> huh, maybe another case for long_rpc_timeout - for reserve_block_device_name call to compute when attaching a volume https://zuul.opendev.org/t/openstack/build/ef0196fe84804b44ac106d011c8c29ea/log/controller/logs/screen-n-api.txt.gz?severity=4
14:46 <mriedem> we must have some really slow nodes in the gate
14:54 <mriedem> inap and ovh again
14:54 <mriedem> just like in the 'state of the gate' email
14:57 <efried> mriedem: is https://review.opendev.org/#/c/692550/ (long_rpc_timeout) going to manifest in the gate anywhere in a way it can be seen easily? I would think not; we would have to get lucky and see an operation that takes longer than the old timeout.
14:57 <efried> just wondering if we're waiting to see anything in particular before approving ^
14:58 <mriedem> not likely, the timeout is 30 minutes by default
14:58 <mriedem> tempest would timeout way before that
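[editor's note] The idea behind the change under review — keep the short default for ordinary RPC calls and opt known-slow calls into the bigger long_rpc_timeout (which mriedem notes defaults to 30 minutes) — could be sketched like this. Helper name and the set membership are hypothetical illustrations, not nova's actual implementation:

```python
RPC_RESPONSE_TIMEOUT = 60   # typical short client-side RPC timeout (assumed)
LONG_RPC_TIMEOUT = 1800     # long_rpc_timeout default: 30 minutes

# calls observed running long on slow/overloaded gate nodes (from this log)
LONG_RUNNING_CALLS = {'migrate_server', 'reserve_block_device_name'}

def timeout_for(method):
    """Pick the client-side timeout to use for a given RPC method."""
    if method in LONG_RUNNING_CALLS:
        return LONG_RPC_TIMEOUT
    return RPC_RESPONSE_TIMEOUT
```

The point efried makes next is that you can rarely "see" this work in CI: success just means an operation that would previously have timed out now quietly completes.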
14:58 <efried> I don't mean hitting the long one, I mean seeing an operation that takes somewhere *between* the old timeout and the new one (i.e. would have timed out before this change but doesn't now)
15:00 <efried> anyway, I'm getting ready to +2 that one, unless you or dansmith tells me there's a reason not to.
15:00 <mriedem> i noticed it because of a gate failure http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010494.html
15:00 <mriedem> if that's what you're asking
15:01 <dansmith> efried: I can look after I'm done with a review
15:01 <mriedem> api response timed out waiting for POST /allocations to finish which took 3 minutes b/c we had no i/o on the node
15:01 <mriedem> ^ is not realistic though, and i'm not sure why that would happen in the gate
15:01 <efried> right, my point is, in order to see it "working", we would have to find that same situation again, but where it *didn't* timeout, but took longer than the original timeout.
15:01 <mriedem> as cdent said in the ML, what do we do if this happens?
15:02 <mriedem> efried: that is probably not very possible without adding some logging that logs if an operation took over a given threshold,
15:02 <efried> yeah, seems like we've ascertained the I/O thing is just overloading.
15:02 <mriedem> i'm not sure if there is logging in oslo.messaging that goes off if an operation is over the given 60 second heartbeat, but there probably is
15:02 <efried> It's not worth it, I was just asking whether there would be any other way to tell. Doesn't sound like it. But that's not a reason not to merge the thing.
15:03 <efried> The harm would be, if something is wrong in such a way that it's never going to come back, you've just wasted time.
15:03 <efried> but such is the eternal dilemma of timeouts.
15:04 <efried> Re the I/O thing, perhaps it would benefit us to have fewer zuul nodes if it means a lower rate of spurious failure.
15:04 <mriedem> i see this in oslo.messaging but it's debug level which we don't index https://github.com/openstack/oslo.messaging/blob/6bca848f5b272149bb32353a62f9d37108fcbe15/oslo_messaging/_drivers/amqpdriver.py#L533
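[editor's note] The threshold logging mriedem wishes for — a warning when an operation exceeds some duration, rather than a debug-only message that the gate doesn't index — could be approximated with a wrapper like this. Hypothetical sketch; this is not existing nova or oslo.messaging code:

```python
import logging
import time

LOG = logging.getLogger(__name__)


def timed_call(fn, *args, warn_after=60.0, **kwargs):
    """Call fn and log a warning if it ran longer than warn_after seconds."""
    start = time.monotonic()
    try:
        return fn(*args, **kwargs)
    finally:
        elapsed = time.monotonic() - start
        if elapsed > warn_after:
            # warning level so it survives into indexed gate logs
            LOG.warning('%s took %.1fs (threshold %.1fs)',
                        getattr(fn, '__name__', repr(fn)),
                        elapsed, warn_after)
```

Wrapping the suspect calls this way would make "slower than expected but not yet timed out" visible, which is exactly the gap discussed above.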
15:04 <mriedem> what do you mean by fewer zuul nodes?
15:05 <efried> or, if what cdent suggested would work, if making the CI nodes less CPU-powerful would reduce their ability to burden the I/O. Same result, really: overall fewer runs occur in a given amount of time, but if the success rate increases, it's a win.
15:06 <efried> mriedem: I mean, if the I/O is overloaded, try to make it less overloaded by reducing the number of things hitting it at a time.
15:06 <mriedem> reducing cpu is likely going to cause failures in other ways i'd think
15:06 <mriedem> historically cpu/ram has only gotten higher in devstack nodes in the gate over time
15:06 <mriedem> when i started in openstack a full dsvm-tempest job took 45 minutes and could use 4vcpu/gb ram
15:07 <mriedem> now we're basically topped out at 8cpu/ram with 2 api workers per control service
15:07 <dansmith> efried: oh yeah that migrate_server one, mriedem and I discussed last week, so probably good for a +2
15:07 <efried> dansmith: that's done, waiting for your +A
15:07 <mriedem> we could try dropping API_WORKERS=1 in devstack if that would help with load
15:07 <dansmith> yup yup
15:07 <mriedem> but we might hit more timeouts with that, idk
15:08 <efried> mriedem: except that would slow down everywhere
15:08 <efried> It seems like we're hitting this problem on specific providers, yes?
15:08 <dansmith> mriedem: might have to lower tempest parallelism too
15:08 <mriedem> efried: yes, inap and ovh
15:08 <mriedem> dansmith: i thought about that last week as well,
15:08 <efried> So I'm looking for an answer that would affect those providers without impacting others.
15:08 <mriedem> currently we're 4 tests at a time
15:08 <dansmith> mriedem: I just mean if you reduce the api workers
15:09 <mriedem> yeah...so if node_provider in inap/ovh, API_WORKERS=1, tempest concurrency=2, build_timeout * 2
15:09 <mriedem> or just get them to stop throttling disk io so much
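[editor's note] mriedem's per-provider idea — detect the throttled providers and dial back API workers, tempest concurrency, and timeouts — would amount to something like this. All names and values are hypothetical illustrations taken from the discussion; no such mechanism exists in devstack:

```python
# providers the log identifies as throttling disk I/O in the gate
SLOW_PROVIDERS = {'inap', 'ovh'}


def job_tuning(node_provider, base_timeout=1200):
    """Return hypothetical devstack/tempest knobs per nodepool provider."""
    if node_provider in SLOW_PROVIDERS:
        # fewer workers, less parallelism, more patience
        return {'API_WORKERS': 1,
                'tempest_concurrency': 2,
                'build_timeout': base_timeout * 2}
    return {'API_WORKERS': 2,
            'tempest_concurrency': 4,
            'build_timeout': base_timeout}
```

The objection that follows in the channel is essentially that this couples job definitions to individual providers, which is what the job-provider affinity discussion tries to solve more generally.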
15:09 <dansmith> I dunno how large those pools are, but could we just run unit, functional, doc, etc jobs on those providers?
15:09 <dansmith> I think we've done that kind of job-provider affinity before
15:09 <mriedem> that's not a bad idea,
15:10 <mriedem> and yeah i think provider job affinity has been done for baremetal jobs
15:10 <dansmith> yeah
15:10 <dansmith> if they ran *all* those kinds of jobs, the others could focus on the devstack jobs
15:11 <efried> I want to say the infra guys have poo-poohed that idea in the past
15:11 <mriedem> i don't know how easy it would be
15:12 <mriedem> it's likely not easy
15:12 <efried> it was this thread I was thinking of most recently, but I don't see the response http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009595.html
15:12 <mriedem> i'm assuming if we had a kind of node provider label for that kind of thing, then projects like nova could opt into which pool of providers each job runs in
15:12 <dansmith> well, if we have some affinity today, I would think it'd be doable
15:13 <mriedem> because some functional jobs do use devstack in some projects
15:13 <dansmith> ah
15:13 <mriedem> so i'd think this kind of thing has to be up to each project since they know the jobs they run
15:14 <efried> http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009592.html
15:14 <efried> also no response, hm
15:14 <dansmith> ah yeah, that's exactly it
15:19 <mriedem> likely need to tag with infra or ask -infra people directly to comment
15:19 <mriedem> and they are probably all in shanghai this week
15:19 <efried> yeah
15:19 <efried> fwiw it looks like ovn has ~200 nodes
15:20 <dansmith> ovh?
15:20 <efried> isn't that the provider that's been choking?
15:20 <efried> http://grafana.openstack.org/d/BhcSH5Iiz/nodepool-ovh?orgId=1
15:20 <dansmith> you said ovn
15:20 <efried> oh, sorry
15:20 <efried> yes, ovh
15:21 <dansmith> which is a network technology, so just making sure :)
15:21 <mriedem> ez bake ovn
15:21 <efried> taking mriedem back to his childhood
15:21 <mriedem> dude i'm making some muffins in the thing right now
15:22 <efried> Takes *way* longer with these stupid CFLs
15:22 <dansmith> pretty sure mriedem never left childhood
15:22 * mriedem puts his legos away and pretends he never heard that
15:23 <mriedem> i will say, since maya likes legos i do find myself tempted around xmas to splurge on a $150 lego set just so i can help build it
15:23 <dansmith> nice
15:24 <efried> donnyd: can you think of an easy way $nodepool_provider could switch up configuration to reduce the chance of choking I/O?
15:24 <donnyd> On FN?
15:24 <mriedem> "but dad i only like the friends and disney princess sets" "you're getting medieval castle gdi"
15:25 <efried> specifically ovh seems to be the problem at the moment.
15:25 <dansmith> mriedem: lol
15:25 <donnyd> Oh, well I know I avoid that issue by using local storage on FN. Not sure what ovh has on the backend of their instances
15:26 <efried> not asking for action on your part donnyd, just advice. If a patch has 10 zuul jobs an any one of them lands on ovh and chokes on I/O and times out, the whole patch has to be retried, which is a royal PITA. Been happening a really lot over the last week or two.
15:26 <efried> Yeah, I guess they would need to look into where the actual bottleneck is.
15:26 <efried> not sure if there's any way to tell from here via grafana...
15:26 <donnyd> The reality is we should try and label jobs by what they are bound by
15:27 <efried> heh, we were just talking about your email from Sept.
15:27 <donnyd> If there is a cpu bound job, scheduling on FN would be less optimal
15:27 <efried> I couldn't see anywhere infra had responded to that idea, but I thought I remembered them shutting it down hard for some reason.
15:27 <cdent> efried, mriedem : what I meant by less cpu, was less cpu for all nodes, not per node
15:27 <donnyd> But IO bound jobs will go like stink on FN
15:27 <cdent> so that we can run fewr jobs
15:28 <cdent> because we run too many at once
15:28 <donnyd> I shut down FN when I put in my gen set
15:28 <cdent> because oversubscribers are liars
15:28 <efried> cdent: okay, that's what I was suggesting too.
15:28 <donnyd> I oversub FN by a small margin, but not much
15:29 <donnyd> Usually 100% of the memory is utilized
15:29 <efried> so like a white lie?
15:29 <cdent> donnyd in my experience what you're doing seems to be working much better than some of the other providers, so kudos to you
15:29 <donnyd> But CPU can be oversub like 1.125
15:29 <donnyd> cdent: I need faster CPUs for sure
15:30 <donnyd> Mine are so slow compared to others
15:30 <mriedem> looking dstat when this messaging timeout happened https://zuul.opendev.org/t/openstack/build/ef0196fe84804b44ac106d011c8c29ea/log/controller/logs/screen-n-api.txt.gz?severity=4 around the time of the timeout cpu usage is low, io is basically 0 and load is spiked
15:30 <donnyd> The biggest difference for FN is local NVME storage for all instances
15:30 <cdent> question: isn't this something infra will already have a plan and solution for?
15:30 <cdent> this is what cloud mgt/provisioning is all about, presumably?
15:31 <mriedem> i wonder if these providers live migrate the ci vms around frequently?
15:31 <donnyd> If there was a way to benchmark providers and then prefer them for jobs that are bound by something I think we could optimize the CI
15:32 <donnyd> Butttt.... if that provider breaks or goes away the issue becomes jobs failing because they were dependent on that super fast thing provider_x does
15:32 <mriedem> cdent: infra is just tenants so i'm not sure how much control they have
15:32 <mriedem> besides may requesting minimums in a flavor or something
15:32 <mriedem> ?
15:32 <cdent> mriedem: but presumably they can tweak their consumption?
15:32 <mriedem> *maybe
15:32 <donnyd> mriedem: well a simple benchmark could expose what jobs run optimal
15:34 <donnyd> That was my idea and it got quickly struck down because of the issue listed above efried
15:35 <donnyd> I do all kinds of custom things and so does mnaser at vexxhost to support the CI... not sure how much usage it gets though. I think limestone is on board with it as well
15:35 <efried> ah, thanks donnyd, I remember now
15:35 <donnyd> NP
15:36 <donnyd> Hopefully that helps
15:36 <efried> So really $provider needs to be able to run $job. In this case ovh needs to go figure out what the bottleneck is and fix it.
15:36 <donnyd> Bingo
15:36 <efried> even if it's by reducing the number of nodes they provide
15:36 <donnyd> But they probably don't even know its an issue till someone complains
15:36 <efried> not sure where else to complain
15:37 <donnyd> probably infra
15:37 <efried> openstack-discuss@ has a thread
15:37 <donnyd> You can always schedule the job using maybe the numa label made for sean-k-mooney
15:38 <efried> I guess we wait until after the summit then, infra folks are going to be pretty sparse for the next week I imagine.
15:38 <donnyd> That would ensure the job runs on FN vexxhost or limestone if I am not mistaken
15:38 <efried> donnyd: yeah, but you have to put that in the job def itself, right?
15:38 <efried> You can't do it one off to make a particular patch merge
15:38 <donnyd> Yea, you have to assign it a label
15:38 <donnyd> No
15:38 <efried> that's our problem here, getting approved patches through the gate.
15:38 <donnyd> Sry, that is correct efried
15:39 <donnyd> Well patch the job with that label... then it will run anyways wont it
15:39 <efried> heh, we should just change all our devstack-based job defs to run on the fast providers...
15:39 <donnyd> Well I wouldn't call FN fast
15:39 <efried> wait until somebody notices
15:39 <efried> okay, s/fast/reliable/
15:39 <donnyd> Just better at IO bound jobs
15:40 <efried> Yeah, I would rather my patch take 2h and succeed than 1.5h and fail.
15:40 <donnyd> I'm sure ironic and tripleo hate FN because my CPUs are old
15:40 <donnyd> And they are mostly CPU bound
15:40 <donnyd> Give it a swing with the label
15:41 <donnyd> And if it works, you have a bandaid
15:42 <openstackgerrit> Eric Fried proposed openstack/nova master: Add cyborg tempest job.  https://review.opendev.org/670999
16:14 <efried> donnyd: I wasn't serious about the label. That would be pretty publicly greedy of us.
16:19 <dansmith> mriedem: question for you in here while I look at the test: https://review.opendev.org/#/c/635646/48/nova/conductor/tasks/cross_cell_migrate.py
16:19 <dansmith> holy hell, everything in the check/gate are running tests and only minutes old
16:19 <dansmith> we should send the whole community to china more often
16:24 <donnyd> Well not really efried
16:24 <donnyd> If a job has requirements to run and a specific label meets the requirements then I don't see the issue
16:25 <donnyd> But it's your call... just pointing out the options
16:26 <mriedem> dansmith: thanks, replied
16:27 <mriedem> dansmith: is my sarcasm detector picking this up correctly? https://review.opendev.org/#/c/635080/48/nova/tests/unit/compute/test_compute_mgr.py@10545
16:27 <dansmith> mriedem: yes.
16:28 <mriedem> heh, verbose commentage is how i keep from feeling lonely
16:34 <dansmith> mriedem: oops, my "...yes I see" was supposed to go into the fault_clone complaint nit
16:35 <dansmith> went back to note that and picked the wrong comment to edit
16:35 <dansmith> mriedem: one more question
16:40 <openstackgerrit> Merged openstack/nova master: Use long_rpc_timeout in conductor migrate_server RPC API call  https://review.opendev.org/692550
16:41 <mriedem> replied
16:43 <mriedem> agree about the mapping == add a unit test wrinkle for that?
16:43 <mriedem> will do - and the variable name nits
16:45 <dansmith> cool
16:48 <efried> dansmith: I just put a bunch more words in the 'flavor groups' spec. Would you still be -1 if the answer to the backward compat question were "you can ignore flavor groups and keep doing extra specs forever"? IMO there's still too much complexity (most of which still isn't addressed) to make it worthwhile.
16:49 <efried> whoah, that timeout patch merged already?? Didn't we approve it like an hour ago??
16:49 <dansmith> efried: you mean would I still be effectively -2 I assume
16:49 <efried> yeah
16:50 <efried> pretty sure we're in agreement on it, I'm just wondering how aggressively I/we should say "despite feeding you paths to address all the holes, don't bother doing that, because this is going to die anyway"
16:50 <dansmith> efried: and without reading, you mean keeping extra_specs for the hard flavor case and only allowing the groups for compose-ability? I still think it's practically useless without some way to define which things can be composed, and agree it's far too much change to too many fundamental things to be worth it or viable
16:50 <efried> yeah, that's more or less what I said.
16:50 <efried> in the review
16:51 <dansmith> I think I'm still effectively -2 even with keeping extra_specs for those reasons, yeah
16:51 <dansmith> I think that it's legit to -2 something like this on "this isn't the direction we want to go" and if you and I both -2 it for that reason, I think that is a fairly tight case
16:51 <dansmith> it's not like we haven't considered it or reviewed it in detail
16:52 <dansmith> certainly don't want to just flip him the bird, but you wanted to be more upfront with people (as do I) so... that'd be pretty upfront
16:52 <efried> yeah, right now we're both -1.9 I think, but I don't want the author to take our -1s as a message that he should go try to fill it in.
16:52 <dansmith> yeah, fair point
16:52 <dansmith> I tried to call out my effectively -2ness for that reason, but maybe -2 would be better
16:53 <efried> so if you wouldn't mind reading my latest comments (when you get a chance) and if you agree with what I'm saying, go ahead and -2 and I'll follow suit.
16:53 <dansmith> sure
16:53 <efried> thx
16:53 <mriedem> i have run into "I don't want the author to take our -1s as a message that he should go try to fill it in." on a couple of specs for ussuri even though i've said a few times, "i don't think we need to do this, the existing alternatives or sufficient or this doesn't fit with the project IMO" but the specs continue to be updated
16:54 <mriedem> *are sufficient
16:54 <mriedem> i probably haven't -2ed because they are mostly trivial things to do, just plumbing,
16:54 <mriedem> not anywhere near the complexity of this flavor groups tihng
16:54 <mriedem> *thing
16:56 <cdent> Explicit words might be the way to go.
16:57 <dansmith> cdent: my words were explicit, but I hedged on the vote
16:57 <dansmith> because people sometimes accuse me of closing the door to further reviews with a -2 or even a -1
16:58 <cdent> well closing the door is what's wanted here, yeah?
16:58 * cdent is all for early door closes
16:58 <dansmith> well,
16:58 <dansmith> yes, but I want to do that in concert with other reviewers, like efried and I just did
16:59 <dansmith> so which do I need to kill? there is public, private, and shared
16:59 <dansmith> oops, wrong window
16:59 <cdent> all three /me is all for early kills
16:59 <cdent> !
17:02 <mriedem> https://review.opendev.org/#/c/682302/ and https://review.opendev.org/#/c/580336/ are the ones i'm struggling with fwiw
17:02 <mriedem> the latter has been coming up since berlin
17:03 <mriedem> and we have decent alternatives already available, though like i said the change itself would probably be pretty minimal
*** ociuhandu has quit IRC17:04
*** ociuhandu has joined #openstack-nova17:05
*** ociuhandu has quit IRC17:05
dansmithefried: done17:06
efriedthanks dansmith, following17:06
*** ociuhandu has joined #openstack-nova17:06
*** damien_r has quit IRC17:07
dansmithhuh, devstack defaults to configuring us for unversioned notifications apparently?17:09
mriedemyeah, because downstream projects don't all use versioned17:10
mriedemi seem to remember this coming up recently(ish) too when debugging something for watcher17:10
mriedemi think b/c we (nova) changed our default17:11
mriedemyup https://review.opendev.org/#/q/Ied9d50b07c368d5c2be658c744f340a8d1ee41e017:11
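The default being discussed here is nova's `[notifications]/notification_format` option: the linked change moved nova's own default to versioned-only, while devstack configures the legacy format for consumers that haven't moved. A hedged sketch of the relevant nova.conf knob (the exact value devstack sets may differ):

```ini
[notifications]
# 'unversioned' = legacy format only, 'versioned' = new format only,
# 'both' = emit both on the respective topics
notification_format = unversioned
```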
*** ociuhandu has quit IRC17:12
*** markvoelker has joined #openstack-nova17:15
*** mgoddard has quit IRC17:19
*** jawad_axd has joined #openstack-nova17:21
openstackgerritMatt Riedemann proposed openstack/nova master: Follow up to I3e28c0163dc14dacf847c5a69730ba2e29650370  https://review.opendev.org/69285617:21
*** mgoddard has joined #openstack-nova17:22
*** derekh has quit IRC17:23
mriedemheh that long_rpc_timeout change just merge conflicted most of the cross-cell-resize series, yay17:23
dansmithred pill or blue pill...17:25
*** jawad_axd has quit IRC17:25
mriedemoh well, needed to work on adding unit tests to https://review.opendev.org/#/c/637630/47 anyway so might as well rebase17:26
* cdent awaits the power brownout17:27
mriedemthe gate is empty due to the summit so it's actually the best time17:27
*** artom has quit IRC17:28
mriedemthat reminds me,17:31
mriedemefried: https://review.opendev.org/#/q/topic:bp/support-move-ops-with-qos-ports-ussuri+status:open if you have some time - i can probably answer questions17:31
mriedemthe bottom one is mostly only big b/c of tests17:32
*** artom has joined #openstack-nova17:32
efriedmriedem: oh, yeah, lost on my list...17:33
*** gregwork has joined #openstack-nova17:38
*** cdent has quit IRC17:41
*** ociuhandu has joined #openstack-nova17:53
*** ociuhandu has quit IRC17:58
*** jmlowe has quit IRC17:59
*** pcaruana has quit IRC18:08
*** pcaruana has joined #openstack-nova18:08
openstackgerritMerged openstack/nova stable/stein: Avoid error 500 on shelve task_state race  https://review.opendev.org/69263018:10
openstackgerritMerged openstack/nova master: Add finish_snapshot_based_resize_at_dest compute method  https://review.opendev.org/63508018:10
*** ociuhandu has joined #openstack-nova18:12
*** ralonsoh has quit IRC18:19
*** jmlowe has joined #openstack-nova18:22
dansmithteehee18:35
dansmithstarting up 100 fake compute services on one machine... load of over 40 as they all slam the ever-loving crap out of conductor trying to create their db entries18:35
openstackgerritMerged openstack/nova master: Add FinishResizeAtDestTask  https://review.opendev.org/63564618:36
*** amodi has joined #openstack-nova18:42
mriedemdansmith: how big is the host?18:49
dansmithhow big?18:49
dansmithabout yay big18:50
mriedemi do the fake compute thing in a 8vcpu/8gb ram vm sometimes and can't really go over 30 computes18:50
dansmithabout 19" wide18:50
mriedemcpu/ram18:50
dansmithoh, 32G memory, 8 cores if that's what you mean18:50
mriedemyeah18:50
mriedemdid you adjust API_WORKERS or let it do ncpu/2?18:51
dansmithjust defaults18:51
mriedemah. fake computes huh. so all that time trying to figure out which network to delete and you found out you can't create a fake driver vm with networking anyway right?18:52
mriedemi always forget that when i do a devstack with the fake driver18:52
*** tesseract has quit IRC18:53
mriedemi had written up some of that here at one point https://docs.openstack.org/devstack/latest/guides/nova.html#fake-virt-driver18:53
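For reference, the fake-compute setup dansmith and mriedem are comparing is driven by a couple of devstack knobs; a hedged local.conf sketch (values are illustrative, not what either of them ran):

```ini
[[local|localrc]]
# Use nova's fake virt driver (no real guests, no guest networking)
VIRT_DRIVER=fake
# Number of fake nova-compute services to start on this one host
NUMBER_FAKE_NOVA_COMPUTE=100
# mriedem's question above: cap API workers instead of the ncpu/2 default
API_WORKERS=4
```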
dansmithone real compute and then a bunch of extra fakes on the same machine18:53
mriedemah ok18:53
mriedemsmells like someone is playing with aggregates18:53
*** xek has quit IRC18:53
*** ociuhandu has quit IRC19:02
*** markvoelker has quit IRC19:06
*** efried has quit IRC19:06
*** efried has joined #openstack-nova19:07
mriedemwonder if we can nuke this yet, nothing in or out of tree returns True for it that i can find https://opendev.org/openstack/nova/src/branch/master/nova/virt/driver.py#L174019:17
mriedemoh nvm i guess it defaults to True in the ComputeDriver parent class and zvm doesn't override it...19:18
mriedemor xenapi19:18
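The gotcha mriedem just hit is a general inheritance pitfall: grepping subclasses for `return True` misses drivers that simply inherit the parent's default. A minimal sketch (the hook name here is hypothetical, not nova's actual method):

```python
# Hedged illustration: a base-class default means "no subclass returns True"
# is not enough to delete a hook -- non-overriding drivers inherit the True.
class ComputeDriver:
    def capability_hook(self):
        # hypothetical hook name; parent default is True
        return True

class ZVMDriver(ComputeDriver):
    pass  # no override, so it still returns True via the base class
```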
*** mmethot has quit IRC19:29
*** ociuhandu has joined #openstack-nova19:33
*** ociuhandu has quit IRC19:38
*** ociuhandu has joined #openstack-nova19:38
*** mmethot has joined #openstack-nova19:38
*** mmethot has quit IRC19:46
*** ociuhandu has quit IRC19:48
*** ociuhandu has joined #openstack-nova19:51
*** spatel has joined #openstack-nova19:57
*** mmethot has joined #openstack-nova20:00
mriedemartom: https://review.opendev.org/#/c/594139/ - am i misunderstanding your comment?20:07
artommriedem, lemme reload context20:08
artomo_O20:09
artomI am apparently retarded.20:10
mriedemdid your brain fart on how if/elif works?20:10
artomClearly20:10
artomI remember double-checking myself as well20:10
artomThinking "no way Matt did that"20:10
mriedems'ok20:10
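The mix-up artom owns up to above is the classic if/elif property: branches are mutually exclusive, and only the first true condition runs. A minimal sketch:

```python
# Only the first matching branch of an if/elif chain executes; a later
# condition is skipped even when it is also true.
def pick_branch(flag_a, flag_b):
    if flag_a:
        return "a"
    elif flag_b:
        return "b"  # never reached when flag_a is true, even if flag_b is too
    return "neither"
```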
mriedemthat rollback code is all gorpy20:11
*** ociuhandu has quit IRC20:11
artommriedem, fixed20:12
artommriedem, lemme look it over once again, then +120:12
mriedemsure, thanks20:13
artomI was busy shitting all over https://review.opendev.org/#/c/512815/820:13
artomPolitely20:13
openstackgerritMerged openstack/nova stable/rocky: Avoid error 500 on shelve task_state race  https://review.opendev.org/69263120:18
eanderssonbtw anyone got any experience with the keep alive issue and openstack services? I noticed a bug or two reported for nova and failures20:21
eandersson> 'Connection aborted.', BadStatusLine("''",)20:21
*** abaindur has joined #openstack-nova20:22
eanderssonA few of the bug reports indicates that disabling keepalive would resolve some of these.20:22
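The mitigation those bug reports describe amounts to disabling HTTP keep-alive so each request gets a fresh connection, avoiding the race where the server drops an idle persistent connection and the client sees `BadStatusLine`. A hedged stdlib sketch (the endpoint URL is hypothetical):

```python
import urllib.request

# Sending "Connection: close" opts out of keep-alive for this request,
# so a half-closed pooled connection can't be reused underneath us.
req = urllib.request.Request(
    "http://controller:8774/v2.1/",  # hypothetical nova-api endpoint
    headers={"Connection": "close"},
)
```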
efriedmriedem: One question on the qos patches (I'm on the bottom one rn): It looks like some of the code is being hit for both evacuate and rebuild. It wouldn't do any harm for the latter, since it's just populating the rg/rp mappings. But just wanted to confirm that's what I'm seeing?20:26
mriedemartom: oh wow that's fun, smells like starlingx20:26
artommriedem, yeah eh?20:26
mriedemstarlingx did a lot of hacks with libvirt channels20:26
efriedmriedem: specifically these changes? https://review.opendev.org/#/c/688387/8/nova/conductor/manager.py20:26
mriedemartom: good comments on that patch btw20:27
mriedemall valid points20:27
artomI'm useful!20:27
mriedemmakes up for your lack of if/elif knowledge :)20:28
artomHahaha20:28
mriedemefried: hmm, where?20:28
mriedemhttps://review.opendev.org/#/c/688387/8/nova/compute/manager.py only does the port mapping stuff if evacuate20:28
mriedemsame in https://review.opendev.org/#/c/688387/8/nova/conductor/manager.py20:29
mriedemor are you asking, why do we only do that for evacuate?20:29
efriedNo20:29
efriedin one file the bool is called 'evacuate', so I can buy that it's only happening for evacuate. In the other it's called 'recreate' and the comments imply we're hitting the path for both evacuate and rebuild.20:29
mriedemin compute manager that evacuate variable used to be called recreate as well,20:30
efriedso either it's the same and the var name and comments are confusing, or...20:30
mriedemi changed that awhile back,20:30
mriedemit's still recreate in conductor b/c no one has renamed the variable for clarity yet20:30
*** ociuhandu has joined #openstack-nova20:30
efriedokay. So recreate is evacuate.20:30
efriedand... `not recreate` is rebuild, which is a kind of recreate, but whatever.20:31
mriedemright20:31
mriedemhence the confusion20:31
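The naming wart they just untangled can be illustrated in miniature (function and flag names simplified, not nova's actual signatures): the same boolean gates both paths, it's just spelled `recreate` in conductor and `evacuate` in the compute manager.

```python
# Hedged sketch: 'recreate' (conductor spelling) is True only for evacuate;
# False means a plain in-place rebuild on the same host.
def rebuild_instance(recreate):
    if recreate:
        return "evacuate: rebuild on a different host"
    return "rebuild: same host, same instance"
```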
efriedthanks for clarifying.20:31
efriedsomeone should rename that variable.20:31
mriedemhttps://review.opendev.org/#/c/508190/20:31
mriedemthat's me doing it in compute20:31
efriedah, so you didn't actually try to rename it all the way through the stack. That would indeed be hairy.20:33
mriedemright, i wasn't going to touch rpc stuff for that20:33
mriedemand the driver interface is likely nbd but it's at least an email20:34
*** abaindur has quit IRC20:34
mriedemthere is no in-tree driver that implements that anymore anyway, it was around for the old baremetal driver that eventually became ironic20:34
*** ociuhandu has quit IRC20:36
*** luksky has joined #openstack-nova20:37
*** ociuhandu has joined #openstack-nova20:37
openstackgerritEric Fried proposed openstack/nova master: cond: rename 'recreate' var to 'evacuate'  https://review.opendev.org/69290020:39
efriedmriedem: ^20:39
*** jmlowe has quit IRC20:40
*** ociuhandu has quit IRC20:42
mriedemcomments inline20:51
*** eharney has quit IRC20:57
*** jmlowe has joined #openstack-nova21:00
*** tbachman has quit IRC21:02
*** jawad_axd has joined #openstack-nova21:03
openstackgerritEric Fried proposed openstack/nova master: cond: rename 'recreate' var to 'evacuate'  https://review.opendev.org/69290021:03
efrieddone and done21:04
efriedand +A on gibi_ptg's patches.21:06
mriedemsweet21:07
*** jawad_axd has quit IRC21:08
efriedGonna go do some more damage on osc, then come back to vtpm21:09
*** tbachman has joined #openstack-nova21:14
*** artom has quit IRC21:15
*** jawad_axd has joined #openstack-nova21:24
mriedemheave ho21:24
openstackgerritMatt Riedemann proposed openstack/nova master: Follow up to I3e28c0163dc14dacf847c5a69730ba2e29650370  https://review.opendev.org/69285621:25
openstackgerritMatt Riedemann proposed openstack/nova master: Pass exception through TaskBase.rollback  https://review.opendev.org/69268921:25
openstackgerritMatt Riedemann proposed openstack/nova master: Execute CrossCellMigrationTask from MigrationTask  https://review.opendev.org/63566821:25
openstackgerritMatt Riedemann proposed openstack/nova master: Refresh instance in MigrationTask.execute Exception handler  https://review.opendev.org/66901221:25
openstackgerritMatt Riedemann proposed openstack/nova master: Plumb allow_cross_cell_resize into compute API resize()  https://review.opendev.org/63568421:25
openstackgerritMatt Riedemann proposed openstack/nova master: Filter duplicates from compute API get_migrations_sorted()  https://review.opendev.org/63622421:25
openstackgerritMatt Riedemann proposed openstack/nova master: Start functional testing for cross-cell resize  https://review.opendev.org/63625321:25
openstackgerritMatt Riedemann proposed openstack/nova master: Handle target host cross-cell cold migration in conductor  https://review.opendev.org/64259121:25
openstackgerritMatt Riedemann proposed openstack/nova master: Validate image/create during cross-cell resize functional testing  https://review.opendev.org/64259221:25
openstackgerritMatt Riedemann proposed openstack/nova master: Add zones wrinkle to TestMultiCellMigrate  https://review.opendev.org/64345021:25
openstackgerritMatt Riedemann proposed openstack/nova master: Add negative test for cross-cell finish_resize failing  https://review.opendev.org/64345121:25
openstackgerritMatt Riedemann proposed openstack/nova master: Add negative test for prep_snapshot_based_resize_at_source failing  https://review.opendev.org/66901321:25
openstackgerritMatt Riedemann proposed openstack/nova master: Add confirm_snapshot_based_resize_at_source compute method  https://review.opendev.org/63705821:25
openstackgerritMatt Riedemann proposed openstack/nova master: Add ConfirmResizeTask  https://review.opendev.org/63707021:25
openstackgerritMatt Riedemann proposed openstack/nova master: Add confirm_snapshot_based_resize conductor RPC method  https://review.opendev.org/63707521:25
openstackgerritMatt Riedemann proposed openstack/nova master: Confirm cross-cell resize from the API  https://review.opendev.org/63731621:25
openstackgerritMatt Riedemann proposed openstack/nova master: Add revert_snapshot_based_resize_at_dest compute method  https://review.opendev.org/63763021:25
openstackgerritMatt Riedemann proposed openstack/nova master: Deal with cross-cell resize in _remove_deleted_instances_allocations  https://review.opendev.org/63945321:25
*** jawad_axd has quit IRC21:28
*** pcaruana has quit IRC21:30
openstackgerritmelanie witt proposed openstack/nova stable/stein: Add regression test for bug 1824435  https://review.opendev.org/69290621:30
openstackbug 1824435 in OpenStack Compute (nova) stein "fill_virtual_interface_list migration fails on second attempt" [Medium,Triaged] https://launchpad.net/bugs/182443521:30
openstackgerritmelanie witt proposed openstack/nova stable/stein: Remove redundant call to get/create default security group  https://review.opendev.org/69290721:30
*** ociuhandu has joined #openstack-nova21:33
*** ociuhandu has quit IRC21:38
*** jawad_axd has joined #openstack-nova21:45
mriedemmelwitt: i saw this today https://i.imgur.com/beq9YYf.jpg21:48
*** jawad_axd has quit IRC21:49
melwittlol yesss21:50
*** efried has quit IRC21:55
*** mvkr has joined #openstack-nova21:56
*** efried has joined #openstack-nova21:57
*** eharney has joined #openstack-nova21:58
*** TxGirlGeek has quit IRC22:00
*** TxGirlGeek has joined #openstack-nova22:04
mriedemdansmith: another spec which we've basically said no to in the past but the pushback is "we have some customers that want nova to do this" https://review.opendev.org/#/c/672400/22:06
mriedembecause pre-creating a port is hard22:06
mriedemif enough diverse users/vendors came forward asking for this like people did for passing through volume type on boot from volume then maybe it's a different discussion22:09
dansmithwell, I haven't read that other than the commit message, but..22:09
dansmithI'm pretty conflicted on these22:09
dansmithsince we're split, it really seems like an impossible-to-win scenario chasing every attribute of the other service, especially when they version their api so differently from us22:10
mriedem"we're split" meaning we (nova) from neutron?22:10
dansmithbut I also really sympathize with simple things being hard because of how we decided in 2013 to segregate the project22:10
dansmithyes22:10
*** spatel has quit IRC22:11
mriedemi'm pretty sure vnic_type is going to be in any neutron deployment because we rely on that extension pretty heavily https://docs.openstack.org/api-ref/network/v2/index.html#port-binding-extended-attributes22:11
mriedemalright, i won't block it22:12
mriedemi'm fairly certain starlingx had that in their nova fork already22:12
mriedemmaybe sean-k-mooney would know if red hat wants that as well22:13
dansmithI'm not saying we shouldn't block it22:13
dansmithI'm saying we're screwed either way22:13
*** tbachman has quit IRC22:20
*** jawad_axd has joined #openstack-nova22:21
*** jawad_axd has quit IRC22:25
*** ociuhandu has joined #openstack-nova22:30
*** dviroel has quit IRC22:34
*** ociuhandu has quit IRC22:39
*** jawad_axd has joined #openstack-nova22:41
*** luksky has quit IRC22:45
*** jawad_axd has quit IRC22:46
*** mriedem has quit IRC22:49
*** JamesBen_ has joined #openstack-nova22:57
*** JamesBenson has quit IRC23:00
openstackgerritMerged openstack/nova master: Allow evacuating server with port resource request  https://review.opendev.org/68838723:00
openstackgerritMerged openstack/nova master: Enable evacuation with qos ports  https://review.opendev.org/68868823:00
*** JamesBen_ has quit IRC23:01
*** jawad_axd has joined #openstack-nova23:02
*** jawad_axd has quit IRC23:07
*** ociuhandu has joined #openstack-nova23:10
efrieddansmith: help me here, I swear somewhere we had a functional test to validate the redirects in our .htaccess file??23:11
openstackgerritmelanie witt proposed openstack/nova stable/stein: Add regression test for bug 1824435  https://review.opendev.org/69290623:14
openstackbug 1824435 in OpenStack Compute (nova) stein "fill_virtual_interface_list migration fails on second attempt" [Medium,In progress] https://launchpad.net/bugs/1824435 - Assigned to melanie witt (melwitt)23:14
openstackgerritmelanie witt proposed openstack/nova stable/stein: Remove redundant call to get/create default security group  https://review.opendev.org/69290723:14
openstackgerritmelanie witt proposed openstack/nova stable/stein: Add integration testing for heal_allocations  https://review.opendev.org/69292323:14
efriedfound it23:14
*** ociuhandu has quit IRC23:18
*** jawad_axd has joined #openstack-nova23:23
*** jawad_axd has quit IRC23:28
*** ociuhandu has joined #openstack-nova23:31
*** ociuhandu has quit IRC23:40
openstackgerritMerged openstack/nova stable/queens: Avoid error 500 on shelve task_state race  https://review.opendev.org/69263223:43
openstackgerritMerged openstack/nova stable/stein: Imported Translations from Zanata  https://review.opendev.org/69270123:43
*** jawad_axd has joined #openstack-nova23:44
*** macz has quit IRC23:46
*** macz has joined #openstack-nova23:47
*** jawad_axd has quit IRC23:48
*** Liang__ has joined #openstack-nova23:56

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!