Monday, 2018-11-26

*** aspiers[m] has quit IRC00:00
*** nicolasbock has quit IRC00:00
*** slaweq has joined #openstack-nova00:13
*** slaweq has quit IRC00:24
*** hshiina has joined #openstack-nova00:58
*** tetsuro has joined #openstack-nova00:59
*** tetsuro has quit IRC01:06
*** erlon has quit IRC01:10
*** lbragstad has joined #openstack-nova01:14
*** slaweq has joined #openstack-nova01:14
*** slaweq has quit IRC01:24
*** bhagyashris has joined #openstack-nova01:28
*** rcernin has quit IRC01:36
*** eandersson has quit IRC01:45
*** Dinesh_Bhor has joined #openstack-nova02:00
*** lbragstad has quit IRC02:08
*** slaweq has joined #openstack-nova02:11
openstackgerritZhenyu Zheng proposed openstack/nova master: Bump compute service to indicate attach/detach root volume is supported  https://review.openstack.org/61475002:20
*** slaweq has quit IRC02:24
*** mrsoul has quit IRC02:27
*** tetsuro has joined #openstack-nova02:34
*** itlinux has quit IRC02:36
*** annp has joined #openstack-nova02:39
*** mhen has quit IRC02:45
*** mhen has joined #openstack-nova02:48
*** hongbin has joined #openstack-nova02:56
*** slaweq has joined #openstack-nova03:12
*** lbragstad has joined #openstack-nova03:14
*** slaweq has quit IRC03:24
*** edmondsw has quit IRC03:34
*** Dinesh_Bhor has quit IRC03:34
*** edleafe has quit IRC03:34
*** udesale has joined #openstack-nova03:45
*** Kevin_Zheng has quit IRC03:47
openstackgerritZhenyu Zheng proposed openstack/nova master: WIP per instance serial  https://review.openstack.org/61995303:50
*** sridharg has joined #openstack-nova03:54
*** tetsuro has quit IRC04:03
openstackgerritTakashi NATSUME proposed openstack/nova master: Add a bug tag for nova doc  https://review.openstack.org/61943404:11
*** slaweq has joined #openstack-nova04:11
*** slaweq has quit IRC04:24
*** lbragstad has quit IRC04:32
*** hongbin has quit IRC04:34
*** tetsuro has joined #openstack-nova04:37
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (3)  https://review.openstack.org/57410404:43
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (4)  https://review.openstack.org/57410604:43
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (5)  https://review.openstack.org/57411004:44
*** ivve has joined #openstack-nova04:44
*** bhagyashris has quit IRC04:54
*** Dinesh_Bhor has joined #openstack-nova05:09
*** janki has joined #openstack-nova05:12
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (6)  https://review.openstack.org/57411305:15
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (7)  https://review.openstack.org/57497405:15
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (8)  https://review.openstack.org/57531105:16
*** slaweq has joined #openstack-nova05:16
*** bhagyashris has joined #openstack-nova05:16
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in libvirt/test_driver.py (7)  https://review.openstack.org/57199205:16
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in libvirt/test_driver.py (8)  https://review.openstack.org/57199305:16
*** slaweq has quit IRC05:24
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (9)  https://review.openstack.org/57558105:32
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (10)  https://review.openstack.org/57601705:32
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (11)  https://review.openstack.org/57601805:33
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (12)  https://review.openstack.org/57601905:34
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (13)  https://review.openstack.org/57602005:34
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (14)  https://review.openstack.org/57602705:34
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (15)  https://review.openstack.org/57603105:34
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (16)  https://review.openstack.org/57629905:35
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (17)  https://review.openstack.org/57634405:35
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (18)  https://review.openstack.org/57667305:35
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (19)  https://review.openstack.org/57667605:36
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (20)  https://review.openstack.org/57668905:36
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (21)  https://review.openstack.org/57670905:36
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (22)  https://review.openstack.org/57671205:37
openstackgerritTakashi NATSUME proposed openstack/nova master: Use links to placement docs in nova docs  https://review.openstack.org/61405605:39
openstackgerritTakashi NATSUME proposed openstack/nova master: api-ref: Add descriptions for vol-backed snapshots  https://review.openstack.org/61508405:40
openstackgerritTakashi NATSUME proposed openstack/nova master: Fix best_match() deprecation warning  https://review.openstack.org/61120405:40
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in virt/test_block_device.py  https://review.openstack.org/56615305:41
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove Placement API reference  https://review.openstack.org/61443705:41
openstackgerritTakashi NATSUME proposed openstack/nova master: Use oslo_db.sqlalchemy.test_fixtures  https://review.openstack.org/60935205:41
openstackgerritTakashi NATSUME proposed openstack/nova stable/rocky: Remove unnecessary redirect  https://review.openstack.org/60740005:42
*** ratailor has joined #openstack-nova05:59
*** slaweq has joined #openstack-nova06:16
*** whoami-rajat has joined #openstack-nova06:21
*** slaweq has quit IRC06:24
*** diga has joined #openstack-nova06:42
*** frippe75 has joined #openstack-nova06:50
*** alexchadin has joined #openstack-nova06:58
*** Luzi has joined #openstack-nova07:00
frippe75Trying to deploy Rocky on CentOS 7.5.1804 without success from a nova perspective. Everything looks good but I cannot spawn instances (NoValidHost). It looks like the nova installation is not complete.07:03
frippe75Main issue I find is : There are no compute resource providers in the Placement service nor are there compute nodes in the database. Anyone care to take a look? https://pastebin.com/raw/wPfaY4YP07:04
frippe75I deployed using packstack but have not found a resolution via channel #rdo07:04
*** adrianc has joined #openstack-nova07:06
openstackgerritZhenyu Zheng proposed openstack/nova-specs master: Amend the detach-boot-volume design  https://review.openstack.org/61916107:12
*** Dinesh_Bhor has quit IRC07:14
*** adrianc has quit IRC07:17
*** adrianc has joined #openstack-nova07:17
openstackgerritZhenyu Zheng proposed openstack/nova master: Per-instance serial number  https://review.openstack.org/61995307:21
*** Dinesh_Bhor has joined #openstack-nova07:28
*** Kevin_Zheng has joined #openstack-nova07:31
*** bhagyashris_ has joined #openstack-nova07:34
*** bhagyashris has quit IRC07:34
*** lpetrut has joined #openstack-nova07:34
*** takashin has quit IRC07:36
*** takashin has joined #openstack-nova07:37
bhagyashris_artom: Hi,07:45
*** moshele has joined #openstack-nova07:47
*** slaweq has joined #openstack-nova07:49
frippe75during nova-compute launch  get_available_resource(self, nodename)  should get called and  populate the resources into nova... But seems not to...07:52
*** jangutter has joined #openstack-nova07:53
*** ociuhandu has joined #openstack-nova07:56
*** Dinesh_Bhor has quit IRC07:58
*** ociuhandu has quit IRC07:58
*** takashin has left #openstack-nova08:00
*** ccamacho has quit IRC08:01
*** ccamacho has joined #openstack-nova08:01
frippe75I see a log entry where it "gets created" ...   "Compute node record created for openstack.frippe.com:openstack.frippe.com with uuid: 6fd2553b-26bb-466e-bde3-57c753dfa50a"08:04
frippe75But after that I get this all the time in the nova-compute.log   "No compute node record for host openstack.frippe.com: ComputeHostNotFound_Remote:"08:04
frippe75Why does it say    hostname:hostname   when it get created??08:05
*** trident has quit IRC08:10
*** trident has joined #openstack-nova08:11
*** ralonsoh has joined #openstack-nova08:25
*** helenfm has joined #openstack-nova08:29
*** xek has joined #openstack-nova08:31
*** frippe75 has quit IRC08:38
*** Dinesh_Bhor has joined #openstack-nova08:39
*** tssurya has joined #openstack-nova09:07
openstackgerritZhenyu Zheng proposed openstack/nova master: Bump compute service to indicate attach/detach root volume is supported  https://review.openstack.org/61475009:09
*** xek_ has joined #openstack-nova09:14
*** xek has quit IRC09:16
*** fyx_ has joined #openstack-nova09:17
*** Dinesh_Bhor has quit IRC09:17
*** TheJulia_ has joined #openstack-nova09:17
*** spsurya_ has joined #openstack-nova09:17
*** k_mouza has joined #openstack-nova09:17
*** Dinesh_Bhor has joined #openstack-nova09:18
*** tinwood_ has joined #openstack-nova09:20
*** Miouge- has joined #openstack-nova09:23
*** bauzas_ has joined #openstack-nova09:23
*** mgagne_ has joined #openstack-nova09:23
*** gary_perkins_ has joined #openstack-nova09:24
*** udesale has quit IRC09:25
*** spsurya has quit IRC09:25
*** zigo has quit IRC09:25
*** tinwood has quit IRC09:25
*** gary_perkins has quit IRC09:25
*** bauzas has quit IRC09:25
*** jroll has quit IRC09:25
*** dr_gogeta86 has quit IRC09:25
*** Miouge has quit IRC09:25
*** mgagne has quit IRC09:25
*** nicholas has quit IRC09:25
*** fyx has quit IRC09:25
*** TheJulia has quit IRC09:25
*** bauzas_ is now known as bauzas09:25
*** gary_perkins_ is now known as gary_perkins09:25
*** spsurya_ is now known as spsurya09:25
*** TheJulia_ is now known as TheJulia09:25
*** fyx_ is now known as fyx09:25
*** udesale has joined #openstack-nova09:26
*** xek_ is now known as xek09:28
*** jroll has joined #openstack-nova09:32
*** jaosorior has joined #openstack-nova09:38
*** derekh has joined #openstack-nova09:38
*** k_mouza has quit IRC09:57
*** k_mouza has joined #openstack-nova09:57
*** moshele has quit IRC10:01
*** bhagyashris_ has quit IRC10:01
*** Dinesh_Bhor has quit IRC10:02
*** moshele has joined #openstack-nova10:05
*** cdent has joined #openstack-nova10:13
*** xek has quit IRC10:34
*** xek has joined #openstack-nova10:34
*** alexchadin has quit IRC10:43
*** erlon has joined #openstack-nova10:43
*** Dinesh_Bhor has joined #openstack-nova10:44
*** sapd1_ has joined #openstack-nova10:45
*** zigo has joined #openstack-nova10:51
openstackgerritMichael Still proposed openstack/nova master: Remove utils.execute() calls from xenapi.  https://review.openstack.org/61970010:53
openstackgerritMichael Still proposed openstack/nova master: Remove utils.execute() from libvirt remotefs calls.  https://review.openstack.org/61970110:53
openstackgerritMichael Still proposed openstack/nova master: Remove utils.execute() from quobyte libvirt storage driver.  https://review.openstack.org/61970210:53
openstackgerritMichael Still proposed openstack/nova master: Move nova.libvirt.utils away from using nova.utils.execute().  https://review.openstack.org/61970310:53
openstackgerritMichael Still proposed openstack/nova master: Imagebackend should call processutils.execute directly.  https://review.openstack.org/61970410:53
openstackgerritMichael Still proposed openstack/nova master: Remove final users of utils.execute() in libvirt.  https://review.openstack.org/61970510:53
openstackgerritMichael Still proposed openstack/nova master: Remove the final user of utils.execute() from virt.images  https://review.openstack.org/62000710:53
openstackgerritMichael Still proposed openstack/nova master: Remove utils.execute() from the hyperv driver.  https://review.openstack.org/62000810:53
openstackgerritMichael Still proposed openstack/nova master: Remove utils.execute() from virt.disk.api.  https://review.openstack.org/62000910:53
openstackgerritMichael Still proposed openstack/nova master: Move a generic bridge helper to a linux_net privsep file.  https://review.openstack.org/62001010:53
*** diga has quit IRC10:56
*** adrianc has quit IRC10:56
*** udesale has quit IRC10:58
*** adrianc has joined #openstack-nova11:07
*** sapd1_ has quit IRC11:18
*** k_mouza has quit IRC11:31
*** dpawlik has joined #openstack-nova11:33
*** davidsha_ has joined #openstack-nova11:35
*** tbachman has quit IRC11:42
*** tinwood_ is now known as tinwood11:44
*** k_mouza has joined #openstack-nova11:44
*** Dinesh_Bhor has quit IRC11:45
*** sambetts_ is now known as sambetts11:55
sean-k-mooneyo/11:58
cdento/12:01
sean-k-mooneycdent: have a good weekend12:01
*** lpetrut has quit IRC12:03
cdentsean-k-mooney: I made a pact with myself to not do work over the weekend, but that ended up being easy because I seem to have a cold. So I sat in front of the netflix watching junk tv and feeling like junk.12:05
*** lpetrut has joined #openstack-nova12:08
sean-k-mooneywell the cold suck but the fact you didnt work on your weekend is good. i spent a significat part of the weekend just listening to audio books and trying not to do anything technical12:08
sean-k-mooneyuntil lik 10 PM sunday when i decided to deploy kubernetes on an old laptop but i almost made it lol12:09
*** ratailor has quit IRC12:10
cdentI think if I hadn't had the cold I probably wouldn't have made it12:11
tobias-urdinoh boy, after reading api-ref for nova when searching for some help, seeing all the red boxes for the proxy apis deprecation almost gave me tears, nova will be so clean in the future12:34
cdentrelatively speaking :)12:39
*** N3l1x has joined #openstack-nova12:46
*** tetsuro has quit IRC12:57
*** dtantsur|afk is now known as dtantsur|mtg12:59
*** jaypipes has quit IRC13:13
*** k_mouza_ has joined #openstack-nova13:19
*** k_mouza has quit IRC13:23
*** tbachman has joined #openstack-nova13:23
*** tbachman_ has joined #openstack-nova13:25
*** takashin has joined #openstack-nova13:26
*** tbachman has quit IRC13:28
*** tbachman_ is now known as tbachman13:28
sean-k-mooneytobias-urdin: nova will only be clean if we actully remove the proxy apips form the codebase13:30
sean-k-mooneytobias-urdin: to do that would require us to increase teh minium microversion we support13:31
sean-k-mooneythat is not something we have done since we introduced microverions as far as i am aware13:31
cdentsean-k-mooney: there was talk at summit about doing such a thing13:32
cdentbut there are _many_ prereqs to make it possible.13:32
sean-k-mooneycdent: it would be nice if we could but ya i assumed we would have a lot of work to do first13:32
sean-k-mooneycdent: was there any progress on a common way of reporting errors at the summig13:33
sean-k-mooney*summit13:33
cdentnot really, no13:34
sean-k-mooneyoh well i knew that was a long shot :)13:34
* cdent needs to make lunch/breakfast before sched meeting, brb13:34
*** gouthamr has quit IRC13:34
*** gouthamr has joined #openstack-nova13:40
*** tetsuro has joined #openstack-nova13:40
tobias-urdinsean-k-mooney: true, i'm just happy about all the effort on the structure and cleanup of nova :)13:40
*** dave-mccowan has joined #openstack-nova13:46
*** dave-mccowan has quit IRC13:50
*** dave-mccowan has joined #openstack-nova13:52
*** efried has joined #openstack-nova13:53
*** rtjure has joined #openstack-nova13:53
*** sapd1_ has joined #openstack-nova13:53
*** mriedem has joined #openstack-nova13:54
efriedn-sch meeting in 5 minutes in #openstack-meeting-alt13:55
*** edleafe has joined #openstack-nova13:57
*** rtjure has quit IRC13:58
*** awaugama has joined #openstack-nova13:59
*** _hemna has joined #openstack-nova13:59
mriedemyikun: fyi https://blueprints.launchpad.net/nova/+spec/initial-allocation-ratios is back in a runway slot14:04
mriedemhttps://etherpad.openstack.org/p/nova-runways-stein14:05
mriedemdansmith: we should update the topic at some point for the current runways14:05
*** mmethot has joined #openstack-nova14:06
*** lbragstad has joined #openstack-nova14:07
*** sapd1_ has quit IRC14:12
*** k_mouza has joined #openstack-nova14:15
*** cfriesen has joined #openstack-nova14:15
*** k_mouza_ has quit IRC14:19
*** efried has quit IRC14:19
*** efried has joined #openstack-nova14:20
*** k_mouza has quit IRC14:21
*** k_mouza has joined #openstack-nova14:22
*** _alastor_ has joined #openstack-nova14:23
*** jroll has quit IRC14:24
*** jroll has joined #openstack-nova14:25
*** zul has joined #openstack-nova14:25
*** tbachman has quit IRC14:26
dansmithmriedem: ack14:27
*** tbachman has joined #openstack-nova14:27
*** ChanServ sets mode: +o dansmith14:27
*** dansmith changes topic to "Current runways: io-semaphore-for-concurrent-disk-ops / reshape-provider-tree / initial-allocation-ratios -- This channel is for Nova development. For support of Nova deployments, please use #openstack."14:28
*** ChanServ sets mode: -o dansmith14:29
alex_xumriedem: do you know resize works with numa topology change in the new flavor? I can't found we parse the numa topo from new flavor. Try to figure out it maybe a bug or something we don't support well yet.14:29
*** eharney has joined #openstack-nova14:29
mriedemalex_xu: i think it's supported...we'll do the move_claim during resize which gets the new_flavor off the migration record in the resource tracker14:31
mriedemand the request spec uses the new flavor (i think) when it runs through the scheduler (numa topo filter) during scheduling for the resize14:31
alex_xumriedem: yea...I saw the move claim code also.14:32
alex_xumriedem: but the request spec wont extract new numa topo from the new flavor. I guess we missed something14:32
*** bnemec has joined #openstack-nova14:33
mriedemcfriesen might know off the top of his head faster than me14:34
mriedembut i thought cold migration worked14:34
mriedemalex_xu: you're saying we don't update https://github.com/openstack/nova/blob/master/nova/objects/request_spec.py#L55 for the new flavor right?14:35
*** k_mouza has quit IRC14:36
alex_xumriedem: yes14:36
alex_xumriedem: we generate new numa_topology in move claim. but we still use old numa_topology in the scheduler14:36
*** k_mouza has joined #openstack-nova14:37
mriedemnow i'm having a hard time finding where we set the new_flavor on the request spec before calling the scheduler14:38
alex_xuhah, probably because we don't have that :)14:39
mriedemheh, no i think that happens b/c i just wrote a functional regression test that relies on it14:40
mriedemhttps://review.openstack.org/#/c/619123/14:40
alex_xuoh, yea, I miss read that. but still not parse the numa stuff from flavor14:41
*** takamatsu has joined #openstack-nova14:43
alex_xumriedem: it's late for me, I will dig into more tomorrow. thanks for the info, good to know that isn't something we don't support, then it is probably a bug...14:43
mriedemalex_xu: https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L316 is where we set the new flavor on reqspec14:43
mriedemalex_xu: sure, i'll ping you if i figure something out :)14:44
mriedemgood night14:44
mriedemstephenfin: are you aware of cold migration/resize support for numa topology changes ^ ?14:48
mriedemi thought that was all baked in long ago14:48
*** udesale has joined #openstack-nova14:48
kashyapmriedem: IIRC, he's out for a few more days.14:49
mriedemok i'll wait for cfriesen then14:49
*** mvkr has quit IRC14:50
*** tbachman has quit IRC14:50
*** tbachman has joined #openstack-nova14:51
*** tetsuro has quit IRC14:52
mriedemdansmith: do you think https://review.openstack.org/#/c/607735/ is worth sending to rocky?14:54
mriedemit's a pretty latent issue, so not really sure it's worth it14:55
*** whoami-rajat has quit IRC14:55
dansmithmriedem: it is, but it's a pretty trivial thing to push back and we know it's a problem for people14:55
dansmithwe also know we can't push it back any farther,14:55
dansmithso it's not like it can go back to kilo or anything14:55
mriedemsure14:56
mriedemok14:56
*** tbachman has quit IRC15:00
*** takashin has left #openstack-nova15:01
cfriesenmriedem: alex_xu: pretty sure cold migration *was* working before all the placement stuff, but I seem to remember seeing at least one bug saying it's currently broken.15:04
cfriesenwe're on pike at the moment and it seems to be working, but we do have a few patches in that area for other features.15:05
sean-k-mooneycold migration in what context?15:06
sean-k-mooneymriedem: stephenfin is on PTO until next week15:06
sean-k-mooneymriedem: he left his znc bouncer running15:06
mriedemcfriesen: as alex pointed out,15:07
mriedemi don't see the request spec numa topologies field updated before it goes through the scheduler15:07
sean-k-mooneymriedem: oh alex_xu was asking about numa toplogy changes for resize/cold migration it "should" work upstream15:08
*** dpawlik has quit IRC15:08
mriedemhttps://github.com/openstack/nova/blob/master/nova/scheduler/filters/numa_topology_filter.py#L7415:08
*** sridharg has quit IRC15:08
mriedemi don't see RequestSpec.numa_topology updated from the new flavor *before* we hit the numa filter15:08
mriedemso i don't see how the scheduling is working15:09
sean-k-mooneyhum so you think its using the old flavor perhaps15:09
sean-k-mooneyi can try testing this in  a hour or so15:09
jmloweHas anybody had trouble with the placement client in nova compute hanging?15:10
sean-k-mooneyi jsut need to get my dev enviornemt running after the weekend15:10
mriedemjmlowe: never heard of that15:10
jmloweSpecifically nova.compute.resource_tracker._update_available_resource was holding a lock for about 2 min15:10
*** dpawlik has joined #openstack-nova15:10
jmloweSometimes it would do it and sometimes not15:11
jmloweno errors anywhere15:11
cdentjmlowe: how many instances on that compute node?15:12
jmloweIt did wreak havoc with instance launches15:12
cdentare you sure it was talking to placement where things were stuck, or just somewhere in _udpate*?15:12
cdentalos15:12
cdent^walso15:13
cfriesenmriedem: sorry, had to answer a call from my boss.  let me take a quick look15:13
* cdent passed jmlowe some bourbon15:13
jmloweI got it down to 2 min to run _update_inventory in nova/scheduler/client/report.py15:14
mriedemjmlowe: how many instances on that compute?15:15
jmloweit seems to have made the http put call correctly, but then just waits for tcp timeout15:15
mriedemlibvirt or vcenter or ironic?15:15
jmloweit's all of my 280 computes, so 0 - 2415:15
jmlowelibvirt15:15
mriedemhmm, well there is a lock in the resource tracker when that is called, which will make things in nova-compute slow to a crawl,15:16
mriedembut why the placement response would be so slow idk,15:16
mriedemhave you traced the request via request ID in the placement api logs?15:16
mriedemalso, which release?15:16
jmloweyes, nothing in placement takes more than a second or so15:16
jmlowequeens15:16
jmloweseems like the there's a bug in some underlying client library that is occasionally not returning from a http call without a timeout15:18
mriedemhmm, nova is using keystoneauth1 to send the requests to placement15:18
mriedemyou might be able to enable some debug logging there15:18
cdentjmlowe: I think you should make jeremy find and fix this15:19
* fungi wonders what he broke now15:20
cdentnot you fungi15:20
*** lpetrut has quit IRC15:20
jmloweI did, then went on a 6 week road trip15:20
fungiyay! for once something's not my fault15:20
jmlowethen he went15:20
cdentThe jeremy of which I speak is an old friend (on the order of 30 years)15:21
cdentTypical of him.15:21
cfriesenmriedem: alex_xu: I think it's this code that updates the flavor on a resize: https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L304-L31615:21
mriedemcfriesen: but that doesn't update the RequestSpec.numa_topology field15:22
mriedemthe else block15:22
mriedemor RequestSpec.pci_requests for that matter15:22
mriedemso if resize with a new numa topology works today, it's getting lucky b/c the scheduler picks a host that fits the original topology, and then the RT.move_claim on the compute fits the new requested numa topology15:23
*** mvkr has joined #openstack-nova15:23
mriedemas far as i can tell anyway15:23
sean-k-mooneymriedem: well we trow away all the host pinning info and recalulate it on the compute node anyway so it is likely working becaue it gets lucky or we retry15:24
cdentjmlowe: do you have load balancers or proxies between n-cpu placement?15:24
jmloweI do15:25
cdentconnections perhaps not closing?15:25
mriedemcdent: fwiw, that ENABLED_PYTHON3_PACKAGES variable in devstack looks like it should be killed now15:25
mriedem"# Special case some services that have experimental15:25
mriedem                # support for python3 in progress, but don't claim support15:25
mriedem                # in their classifier"15:25
cdentmriedem: it does come into play15:25
mriedemyeah i see where it's used15:25
mriedembut i'm just thinking it shouldn't exist anymore15:25
jmloweprobably, I'd expect the http client to close the connection once it received a 200 responxe15:25
mriedemgiven the py3 first goal15:25
mriedeme.g. neutron isn't in that list15:26
mriedemnor keystone15:26
*** munimeha1 has joined #openstack-nova15:26
sean-k-mooneymriedem: well it either shouldn not exists or default to ture right15:26
cdentmriedem: yes, I agree that it appears to be an anachronism that shouldn't be used at all, but that it sometimes does get used15:26
*** moshele has quit IRC15:26
cdentit is currently what makes my fix work15:26
mriedemi shall put out the dhellmann signal15:26
* cdent wonders what shape that takes15:26
sean-k-mooneymriedem: there are some project that dont rung on python3 i belive15:26
*** Luzi has quit IRC15:26
jmloweTesting now, but it seems I can mitigate all of this with the timeout setting in the placement section of nova.conf15:26
mriedemcdent: a peach?15:27
mriedemjmlowe: do you have to set connection timeouts for other nova interactions, like with glance?15:28
jmloweI've never needed them before15:28
*** tbachman has joined #openstack-nova15:29
mriedemweird. i think in queens we were going through ksa for glance interactions as well15:29
mriedemso it should all be the same15:29
mriedemfrom a client perspective i mean,15:29
mriedemjust different api servers15:29
jmloweI have been seeing the occasional ssl error in neutron clients, so could even be as simple as some sort of bug in haproxy15:31
cfriesenmriedem: sean-k-mooney: yeah, I think you might be right, and most of the time it gets fixed up by the claim.  ick.15:31
sean-k-mooneycfriesen: mriedem we need to do a cleanup/audit of the migraion codepaths in general. artoms live migration spec will help but we likely have some cleanup to do for resize/cold migrate too.15:33
cdentjmlowe: what's hosting your placement service? mod_wsgi, uwsgi, something else?15:34
sean-k-mooneycfriesen: mriedem i know in pratcice cold migrate and resize "usually" work and results in the numa toplogy being recaluated but it may be just getting lucky or relying on retries. when we model numa in placemnt this will cange and we will have to correct the behavior for that15:35
sean-k-mooneycfriesen: by the way i dont know if you saw my ML post regarding the pci numa affitiy policies15:36
cdentefried: thanks for the +w on the external placement in nova change. I assume we want to let its child ( https://review.openstack.org/#/c/618215/ ) sit until more things have settled15:36
cfriesensean-k-mooney: glanced at it.  haven't had a chance to dig in to it, they've got my focused on other stuff at the moment.15:36
sean-k-mooneycfriesen: i did not test all the ploices but at least the prefer policy does not work as intended so we will need to fix that15:37
efriedcdent: Been avoiding that one until "WIP" goes away.15:37
sean-k-mooney+ add the ablity for it to work with neutron sriov ports15:37
efriedcdent: Basically taking any excuse to defer reviews while I try to get my feet back under me.15:38
cdentefried: i've left it wip in expectation of it needing to hold. How do I mark something "i'd like reviews but this isn't done"? But yeah, understand the need to defer.15:38
cdent(sometimes its the concept not the implementation that needs the review)15:38
efriedcdent: If you'd like me to -2 it, I could do that. Or you could -W it. Not sure either would help get it more attention, though.15:39
mriedemcdent: should probably send a separate email to the ML about https://review.openstack.org/#/c/617941/ so nova people not paying much attention to placement know what's going on with tests now15:39
efriedwhat are we actually waiting on?15:39
*** dpawlik has quit IRC15:39
cdentmriedem: aye aye15:39
mriedemb/c when i rebase and my tests start failing i'm going to wonder wtf15:40
* cdent nods15:40
sean-k-mooneycdent: mriedem efried is https://review.openstack.org/#/c/599208/ still considerd a requiremetn before the placement extraction is complete15:41
cdentyes15:41
sean-k-mooneyis there a list of what is out standing beyond that or is that the final feature15:42
mriedemhttps://etherpad.openstack.org/p/BER-placement-extract15:42
sean-k-mooneymriedem: cool thanks15:42
mriedembauzas: did you get a functional test written that does a reshape and then schedules to the child provider inventory resource?15:43
mriedemcdent: why were these tests removed? https://review.openstack.org/#/c/617941/21/nova/tests/unit/cmd/test_status.py15:44
mnaserhttps://review.openstack.org/#/c/619349/ simple backport if anyone has a second (i'll babysit the rest)15:44
mriedemlyarwood: ^ just consider my backport a proxy +2 there15:46
cdentmriedem: because the tests uses the rp_objects directly15:46
cdentthe follow on patch may even remove the status command since it can no longer work15:46
cdentthere's a fixme added in cmd/status.py15:47
openstackgerritBalazs Gibizer proposed openstack/nova master: Send RP uuid in the port binding  https://review.openstack.org/56945915:47
openstackgerritBalazs Gibizer proposed openstack/nova master: Test boot with more ports with bandwidth request  https://review.openstack.org/57331715:47
mriedemcdent: ok but the rp objects aren't removed in that patch, so it seemed out of place15:47
mriedemwe could count rps using the placement api right?15:47
mriedemor using the fixture15:47
cdentI think that one can still work, but not the inventory-related one15:48
cdentI went through so many iterations on that series of changes, I may have lost my place15:48
cdentI assumed then (and still do) that we'll need to do some "dynamic tidying"15:49
mriedemon that size of change i can see why something would get lost15:49
mriedemit just stuck out to me looking at it fresh15:49
* cdent nods15:51
cdentmy original plan was to not remove anything from nova in the first change, and just change the tests, but it proved to confusing why debugging.15:52
mriedemanyway, i think the command could count compute resource providers by providers with VCPU inventory as it does today15:52
mriedemand i know you have a spec for filtering that way as well15:52
cdentI abandoned that spec because people weren't sure that was a sufficient use case, which confused me15:52
mriedemheh, well we have a use case right here15:52
mriedemnova-status upgrade check already hits the placement REST API to check the version in _check_placement15:53
mriedemanywhere15:53
mriedem*anyway15:53
mriedemwe might want to undo https://review.openstack.org/#/c/617941/21/nova/tests/unit/cmd/test_status.py in a follow up15:54
mriedemunless we remove that upgrade check, but that's another discussion15:54
cdentwell, we'll have to at least change the upgrade check, in which case the test will have to change so it does rather go together15:54
*** dpawlik has joined #openstack-nova15:55
mriedemsure, the test has to exist to change it though15:55
mriedemunless you're agreeing with me15:55
*** janki has quit IRC15:55
cdentI'm mostly agreeing with you, except for the part about it being another discussion15:55
cdentif we're going to remove the command, then job done15:55
mriedemit depends on what nova-compute does on startup - if it can't create a resource provider b/c placement isn't running or nova-compute isn't configured to talk to it, then we can probably remove the check15:56
mriedemotherwise it's useful as a base install verification that you've got computes reporting in and resource providers for those computes15:56
mriedemFFU makes me nervous about when we can remove these things...15:57
mriedemb/c anyone can FFU from any release to another presumably15:57
mriedemhaving said that, i'm not sure these would work for FFU'ers anyway b/c the placement api would likely be down15:58
mriedemdansmith: is that correct? we can expect placement-api to be down during an FFU?15:58
*** dpawlik has quit IRC15:59
sean-k-mooneycdent: placement does not randomise allocation candiates by defualt correct. they are just retruned in db order?16:00
mriedemhttps://docs.openstack.org/nova/latest/configuration/config.html#placement.randomize_allocation_candidates16:00
*** tbachman has quit IRC16:01
sean-k-mooneymriedem: thanks ya its off by default. i was wondering if that could be related to  this ml list post http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000209.html16:02
sean-k-mooneythat said i woudl have expected the default weigher to kick in and spread the instances16:03
cdentmriedem: yeah, I'm wondering if coupling nova-status to placement status is a good/safe idea? If they are supposed to be independently upgraded (for some value of "independent" maybe placement-status should do some kind of check? Except that we expect placement to upgrade first (usually). /me throws hands16:04
cdentsean-k-mooney: that config item was left as not random by default so as to encoureage/allow pack, what would the weigher be doing?16:05
cdentchanging the config is certainly something they could at least try16:05
sean-k-mooneycdent: the weighers used to spread by default16:05
cdentmy impression was that most of the system was biased towards packing to save $$16:05
sean-k-mooneycdent: i think they still do16:05
cdentprobably still worth trying the config setting just to see?16:06
mriedemcdent: i left a note to self / open question in the check code so i can maybe come back to it some other time when it comes up, or is a more pressing decision that needs to be made16:06
sean-k-mooneyya i was also gong to ask them to provide the nova.conf the schduler is useing not the compute node one16:07
cdentgood point sean-k-mooney16:07
cdentmriedem: ✔16:07
dansmithmriedem: not just expect, but require16:07
mriedemok yeah then in that case, nova-status upgrade check will not work during FFU today16:09
mriedemit will fail the check placement check since the API won't be up16:10
dansmithI thought nova-status was to be db-only anyway? I guess not once placement is a different db, eh?16:10
*** udesale has quit IRC16:10
mriedemthere is a check for the minimum placement version16:11
mriedemotherwise it looks in the nova_api db yeah16:11
dansmithso just have to make the api-bound check graceful I suppose16:11
mriedemwas discussing if we should change that to hit the placement API now, or just remove the placement-related checks16:11
mriedemthe upgrade checkers things were written before FFU was really a consideration i think16:12
mriedemso i haven't put a ton of thought into it16:12
dansmithwell, I thought the plan was for that thing to be db-only anyway, so it wouldn't matter, but the scope has definitely gotten larger as we find more things for it to do16:12
mriedeme.g. if we start saying that nova requires cinder >= 3.44 to remove our api compat code, presumably we'd want an upgrade check for that16:12
mriedemyeah16:12
dansmithwell, if the api is up and you're not doing an ffu, it'd be nice to get that check, you just need to be super graceful I think16:13
dansmithmaybe a new category, peer to warning, error, for "manual check" or "unknown" ?16:13
dansmithlike "I would check this for you, but I can't so be sure you check it"16:13
mriedemi was thinking it'd just be a warning16:13
mriedemwarning is already "this might be a problem, but i'm not sure"16:14
dansmithdo the deployment things hork on warning?16:14
dansmithif not, then cool16:14
mriedemdo the deployment things run the upgrade checkers... heh16:14
mriedemosa runs it16:14
mriedemi don't think tripleo does16:14
dansmithowalsh_: ?16:14
mriedembtw, i ask this at least once every 2 weeks in here :)16:14
dansmithyou said you didn't know :)16:15
mriedemrhetorical16:15
mriedemi know tripleo doesn't run it16:15
dansmithokay16:15
mriedemhttp://codesearch.openstack.org/?q=nova-status&i=nope&files=&repos=16:15
mriedemgrenade, kolla-ansible and osa16:15
mriedemso i was going to say,16:16
mriedemunless/until someone comes along saying, "hey this doesn't work for me during FFU" i don't have much motivation to care about making it graceful in those cases16:16
dansmith:/16:17
dansmithdoes it stack trace now?16:17
mriedemif placement is down?16:17
mriedemit will return a failure16:17
dansmithyeah16:17
*** mschuppert has joined #openstack-nova16:17
dansmithoh you mean it just reports error instead of your proposed warning?16:17
mriedemhttps://github.com/openstack/nova/blob/master/nova/cmd/status.py#L23016:17
mriedemyes16:17
mriedemthere is the guantlet of ksa exceptions,16:18
mriedemso without testing it in devstack i'm not exactly sure16:18
dansmithokay16:18
*** itlinux has joined #openstack-nova16:18
dansmithwell, if it just reports error that's reasonable enough I think16:18
mriedemthat likely means any tooling running it during an FFU should ignore the results, which makes me wonder why even run it during FFU16:19
* cdent strokes chin16:19
mriedembut yeah, i just don't have the energy to figure out what our official process/stance is for upgrade checkers during FFU :)16:19
mriedemespecially that now it's a community wide goal16:20
mriedemand FFU is still very nebulous to me16:20
dansmithwell,16:20
dansmiththe checking of things that have to be done before moving on is pretty critical to FFU16:20
mriedemmy most basic understanding is take all control plane services down, and roll through each release running data migrations and schema migrations16:21
dansmithI think everyone right now is just doing it very manually, including the deployment projects16:21
dansmithlyarwood might be a good person to ask about this16:21
dansmithlyarwood: mschuppert: the question is why tripleo doesn't run nova-status during any upgrade, including ffu, even if just to collect/log the status16:22
dansmithand/or I guess: if/do tripleo people use it whilst trying to get a particular N->M transition working, and then just not run it programmatically for everyone, assuming they have the steps perfected?16:23
lyarwooddansmith / mriedem ; no reason, I did push an example up for the upgrades team a while ago and asked them to take it forward but I assume they just didn't follow up16:23
lyarwoodthis came up again at PTG, didn't we create tripleo bugs to track this during S?16:24
*** hshiina has quit IRC16:24
* lyarwood looks16:24
*** hshiina has joined #openstack-nova16:24
lyarwoodhttps://bugs.launchpad.net/tripleo/+bug/177706016:24
openstackLaunchpad bug 1777060 in tripleo "nova-status should be used during deployment and upgrades" [High,New] - Assigned to Lee Yarwood (lyarwood)16:24
*** annp has quit IRC16:25
dansmithcool16:25
*** dave-mccowan has quit IRC16:27
mriedemuntil we actually have any kind of FFU ci testing it's also hard for me to care a ton about stuff like this16:28
mriedemi mean, i don't want to lose sleep over it16:28
mriedemwhen i have so many other things i can lose sleep over16:28
*** dave-mccowan has joined #openstack-nova16:28
dansmithwe could easily just run it during grenade before we bring things back up and log the output right?16:29
mriedemwe do run nova-status upgrade check during grenade16:30
dansmithbut not when *everything* is down right?16:30
dansmithonly during nova-upgrade?16:30
mriedemhttp://git.openstack.org/cgit/openstack-dev/grenade/tree/projects/60_nova/upgrade.sh#n8816:30
dansmithright,16:31
mriedemwe specifically bring placement up before running the check16:31
*** gyee has joined #openstack-nova16:31
dansmithright16:31
dansmithand other projects before us would be up (i.e. keystone)16:31
mriedemyeah i mean i could run it before starting placement16:31
mriedemand see it fail16:31
dansmithactually verify preupgrade might run with nothing16:31
dansmithsorry verify_noapi preupgrade16:32
dansmithhttps://review.openstack.org/62010416:33
dansmithmriedem: anyway, don't lose sleep over it, I'll check in on that later to see how it goes16:34
mriedemack thanks16:34
*** tbachman has joined #openstack-nova16:38
*** tbachman has quit IRC16:39
*** tbachman has joined #openstack-nova16:40
openstackgerritAdam Spiers proposed openstack/nova-specs master: Add spec for libvirt driver launching AMD SEV-encrypted instances  https://review.openstack.org/60977916:45
*** jangutter has quit IRC16:53
mriedemi guess we never documented anywhere officially that we only support n-1 computes..16:59
mriedemeven though it comes up every so often16:59
sean-k-mooneycompute older then n-1 may work in some cases however we just dont test them17:00
mriedemyes i know that.17:01
mriedemwhat i'm asking is, didn't we ever document this because the last time it came up, i thought we said someone would document it.17:01
mriedemwhich is i guess why it didn't get done.17:02
mriedemhttps://docs.openstack.org/nova/rocky/contributor/project-scope.html?highlight=compatibility#upgrade-expectations is about as close as it gets17:03
mriedemfuck i hate that banner17:03
*** whoami-rajat has joined #openstack-nova17:03
sean-k-mooneywe dont state i t explictily in https://docs.openstack.org/nova/rocky/user/upgrade.html#rolling-upgrade-process but we do mention n to n+1 a few times17:04
cdentoh yeah that banner doth suck17:04
*** jmlowe has quit IRC17:04
sean-k-mooneye.g. in relation to db changes17:04
cdent"someone" has a lot of work on their place17:04
mriedemhttps://docs.openstack.org/nova/rocky/contributor/process.html?highlight=compatibility#smooth-upgrades17:04
sean-k-mooneymriedem: ok so we do say we only support "only support upgrades between N and N+1 major versions, to reduce technical debt relating to upgrades"17:05
mriedemyes, that's good enough for me17:06
mriedemthe question in -dev and the ML is if that also applies to inter-service compat17:06
mriedeme.g. nova and cinder17:06
mriedemand i don't think it should17:06
mriedemb/c we have versioned REST APIs17:06
sean-k-mooneymriedem: right if the rest apis are versioned coorectly it should not17:07
mriedemwhich means you shouldn't have to take down your entire cloud to upgrade nova17:07
mriedemi.e. you can leave cinder n-2 and upgrade nova and it should work17:07
mriedemwe don't test it, but it should work17:07
mriedemunless otherwise noted as we've dropped some compat17:08
sean-k-mooneyyes you should be able to do a service wise rolling upgrade17:08
sean-k-mooneyand you should be able to skip upgrade some services if you dont need too17:08
sean-k-mooneyi generally parsed the version n contolplane with n-1 agents  compatiablity to only applcy within a singel service17:09
*** mvkr has quit IRC17:10
openstackgerritMatt Riedemann proposed openstack/nova stable/queens: [stable-only] Add report_ironic_standard_resource_class_inventory option  https://review.openstack.org/62011117:11
mriedemsmcginnis: cdent: regarding that question about nova requiring cinder >= rocky, i'll likely drop the API compat we have for cinder < rocky (really queens b/c nova-api checks for cinder 3.44 which was added in queens), with a release note and potentially an upgrade check to look at the service catalog and make sure cinder >= 3.44 is available17:13
* mriedem adds it to the todo list17:14
smcginnismriedem: Queens should be a good point. I would think from there we can probably clean up a lot of code.17:14
mriedemstill need my patches to migrate old bdm attachments, as discussed in berlin,17:15
mriedemor do someone online when the attachments are used, but i haven't put brain power into that17:16
mriedemdefinitely need https://review.openstack.org/#/c/541420/ for bfv though17:16
*** helenfm has quit IRC17:16
*** imacdonn has quit IRC17:18
*** imacdonn has joined #openstack-nova17:19
*** KeithMnemonic has joined #openstack-nova17:30
KeithMnemonicmriedem is there a chance to get some cores to review your patch https://review.openstack.org/#/c/614872/1 ?17:31
openstackgerritMatt Riedemann proposed openstack/nova stable/pike: [stable-only] Add report_ironic_standard_resource_class_inventory option  https://review.openstack.org/62011317:31
mriedemKeithMnemonic: queens needs to go first https://review.openstack.org/#/c/614868/17:32
mriedembut yeah dansmith lyarwood https://review.openstack.org/#/q/I98a2785c07f7af02ad83650c72d9e1868290ece417:32
mriedemeasy backports17:32
KeithMnemonicthanks!17:33
mriedemyw17:34
mriedemthanks for the reminder17:34
sean-k-mooneyanyone know a better tool to search irc logs then googles site search17:35
cdentedleafe made a thing, but I don't know if he made it live17:35
*** imacdonn has quit IRC17:36
edleafesean-k-mooney: It's still rough, but you can try https://ircsearch.leafe.com17:36
*** imacdonn has joined #openstack-nova17:36
*** sambetts is now known as sambetts|afk17:36
sean-k-mooneyedleafe: does that use a local copy of the logs or does it search easedrop.openstack.org17:37
edleafesean-k-mooney: it uses its own elasticsearch database17:38
openstackgerritAdrian Chiris proposed openstack/nova master: add get_pci_request_from_vif to request.py  https://review.openstack.org/60916617:39
openstackgerritAdrian Chiris proposed openstack/nova master: Allow per-port modification of vnic_type and profile  https://review.openstack.org/60736517:39
openstackgerritAdrian Chiris proposed openstack/nova master: Add get_instance_pci_request_from_vif  https://review.openstack.org/61992917:39
openstackgerritAdrian Chiris proposed openstack/nova master: SR-IOV Live migration indirect port support  https://review.openstack.org/62011517:39
mriedemuse_cow_images=true and force_raw_images=true (defaults) is always confusing17:44
sean-k-mooneyedleafe: thanks i found the message from bauzas i was looking for but looks like he did not past the placmetend output publicaly https://ircsearch.leafe.com/timeline-middle/%23openstack-nova/2018-10-03T17:18:5617:46
adriancsean-k-mooney: Hi, added you to the above commits. ive also commented on the related spec: https://review.openstack.org/#/c/605116/17:46
sean-k-mooneyadrianc: hi i was looking at the previous version earlier today. im planning to spend tomorrow testing what you have pushed so far17:47
sean-k-mooneyadrianc: is it in a functional state17:47
*** jackding has quit IRC17:48
edleafesean-k-mooney: glad it was useful for you. I wrote it because I was annoyed that I couldn't find info from a conversation17:48
*** imacdonn has quit IRC17:48
sean-k-mooneyedleafe: ya i spent 20 mins looking of it with googles site: feature and did not find it17:49
adriancsean-k-mooney: ive tested with SRIOV MacVtap, however you it is required that the neutron mech driver to support multiple port bindings17:49
sean-k-mooneyedleafe: i found it using your seacher in 90 seocnds or so17:49
sean-k-mooneyadrianc: without the neutron change it should migration but the portstatus will be down correct17:50
adriancsean-k-mooney: i have a POC patch for neutron sriovnicswitch if you are interested17:50
edleafesean-k-mooney: the fulltext search in elasticsearch is awesome17:50
sean-k-mooneyadrianc: sure if you have it pushed i can pull it down and test with that also17:50
adriancsean-k-mooney: without it the port will be down and no MAC will be allocated to the VF, ill push it as POC code for neutron17:52
sean-k-mooneyadrianc: i normally dont have meeting on tuesday so it the day i set aside to test complicated stuff end to end so i am happy to pull in all the changes you have and try an replicate it locally17:52
sean-k-mooneyadrianc: you should still have a mac at least in the libvirt xml17:52
*** derekh has quit IRC17:52
openstackgerritMatt Riedemann proposed openstack/nova stable/rocky: Use long_rpc_timeout in select_destinations RPC call  https://review.openstack.org/62012117:53
sean-k-mooneyadrianc: without the neutron change the only stuff that wont work is the operation preferment by the sriovnic agent17:53
sean-k-mooneythe vf mac is set by nova in the libvirt xml17:53
sean-k-mooneythat is based on the neutron port and should not be effect by the ports bindings or the port status17:54
adriancsean-k-mooney: yes, you are right, the issue is IIRC, the MAC on the source is not zeroed17:54
*** imacdonn has joined #openstack-nova17:55
adriancdid it a while back, but i remember it was needed :)17:55
sean-k-mooneyadrianc: ah when the vf is unbound it keeps the vm mac17:55
*** moshele has joined #openstack-nova17:55
adriancya17:55
sean-k-mooneyadrianc: that is assuming the vf is rebound to the kenel dirivce and does not stay bound to vfio-pci17:55
sean-k-mooneysorry never mind17:56
sean-k-mooneywith macvtap its not bound to vfio-pci17:56
*** k_mouza_ has joined #openstack-nova17:56
sean-k-mooneyok well regarding you question on the spec https://review.openstack.org/#/c/605116/6/specs/stein/approved/libvirt-neutron-sriov-livemigration.rst@11117:57
sean-k-mooneyi was leaning towords option 217:57
sean-k-mooneyusing a new pci request with an new uuid17:57
sean-k-mooneybut i was hoping to avoid data model changes17:58
*** k_mouza has quit IRC17:58
sean-k-mooneyill keep both option in mind when looking at your code.17:59
adriancyou will need to keep that request_id somewhere17:59
dansmithmriedem: looks like artom answered questions and pushed up a tweak to this and you were +0.9 before.. can you circle back? https://review.openstack.org/#/c/599587/18:00
sean-k-mooneyadrianc: i was wondering could we use a uuid5 that is derived from the host+vif port id18:00
mriedemdansmith: yeah, was thinking about it in my mental queue earlier18:00
dansmithmriedem: ack, thanks18:00
dansmithbauzas: I assume you're going to be the +W on that?18:01
adriancsean-k-mooney: imposing a certain logic on the uuid creation doesnt sound like something that will fly, is there a precedence in nova ?18:02
bauzasdansmith: mriedem: sorry folks, was on some internal issue18:03
sean-k-mooneyadrianc: neutron are using uuid5's for generating the placemnet uuids for bandwith awere scheduling.18:03
bauzasdansmith: and yeah, i was about looking at https://review.openstack.org/#/c/599587/18:03
sean-k-mooneyadrianc: i dont know of a precedent in nova for doing the same18:04
bauzasmriedem: for the functional test, I didn't have time yet18:04
* bauzas looks at OSP1318:04
sean-k-mooneyadrianc: but even if we did not generate the deterministaclly  we could put the pci request id in the migration data we pass back18:04
openstackgerritMatt Riedemann proposed openstack/nova master: Default zero disk flavor to RULE_ADMIN_API in Stein  https://review.openstack.org/60391018:07
adriancsean-k-mooney: so we extend the LiveMigrateData object, using the same request makes sense as in a way its the same request claimed on a different host. downside is that you have a point in time where you will have a PCI device allocated on the source host and a PCI device claimed on the destination host for the same request ID18:08
sean-k-mooneyadrianc: this may be a good usecase for a uuid5 however. https://docs.python.org/2/library/uuid.html#uuid.uuid5 the namespace uuid would be the neutron port uuid and the name would be the hostname. the adress space of UUIDs should be sufficent such that a colision is very unlikely.18:09
sean-k-mooneyadrianc: using the same request id would also work as long as we can clean up18:09
sean-k-mooneyadrianc: if we could avoid relying on the  periodic task that would be better18:10
sean-k-mooneyadrianc: could we remove the source claim in post migrate18:10
artomdansmith, oh hey, thanks for pushing that :)18:11
*** davidsha_ has quit IRC18:13
adriancsean-k-mooney: lemme check, p.s https://review.openstack.org/#/c/620123/118:14
*** slaweq_ has joined #openstack-nova18:15
mriedemartom: replies on the previous PS fwiw18:17
mriedemi think you're understimating the claim issue18:17
mriedemthe claim happens on the dest before live migration starts18:17
mriedemand returns to conductor18:17
mriedembut it's the source that will need to orchestrate what happens with the claim after a successful or failed live migration18:17
mriedemmaybe it's as simple as rt.drop_move_claim like you said18:18
sean-k-mooneymriedem: artom for what its worth we will have to do the exact same claims dance for the sriov migration as well. the main difference being one is in the pci code and the other is in the numa code18:18
mriedemthe claims dance is one of my most hated dances18:19
mriedemright up there with line dancing18:19
artomThere are dances you like?18:19
sean-k-mooneyand river dacne if you an irish person18:19
mriedemdude18:19
mriedemTHE HUMPTY DANCE18:19
artomI... Have I been missing out?18:19
mriedemdansmith: that reminds me, was it you that didn't get my humpty dance reference while we were strolling through the death camp?18:19
mriedemhttps://www.youtube.com/watch?v=PBsjggc5jHM18:20
adriancsean-k-mooney: to which source claim are you referring ?18:20
dansmithmriedem: I knew you were talking about this song, I just didn't get how it related to, uh, mass murder18:21
mriedemi don't remember18:21
artommriedem, you can pass the host to drop_move_claim18:22
adriancsean-k-mooney: if you mean removing the free_instance_allocations() in _post_live_migration then yes, but it will be another place we rely on the periodic resource_tracker job18:22
artomSo even if we call it from the source, we could drop the claim on the... wait, if we call it on the source, we just drop the resources on the source, since the instance is on the dest18:23
adriancartom: Hi, in regards to numa aware live migration, the plan is to converge for stein right ? as the SRIOV live migration will not mean much without it18:24
sean-k-mooneyadrianc: in option 1 we would have 2 vfs calimed with the same pci request uuid so if we do that i was wondering if we can avaoid relying on the periodic heal and proactivly release the vf on the source when we then migration completes18:24
sean-k-mooneyadrianc: well the sriov migration should be doable without the numa one18:24
mriedemartom: see18:25
*** ircuser-1 has joined #openstack-nova18:25
adriancsean-k-mooney: in the PS i am freeing the instance allocation on the source node.18:25
sean-k-mooneyadrianc: i ok ill read what you are currently doing and then ill respond to the question on the spec or updated it to match what you have implmeneted18:26
adriancsean-k-mooney: unless you request dedicated CPUs right ? (previous comment)18:27
sean-k-mooneyadrianc: yes but wwe shoudl treat these as seperate specs and seperate work items but makes sure they both work togeter in the end18:28
artommriedem, wait, so the cell conductor isn't involved at all? It's just superconductor and the source and dest?18:28
adriancsean-k-mooney: i agree, they do not depend, i was just wondering if its planned for stein as well :)18:28
sean-k-mooneyadrianc: in the simple case fo a floating instace with neutron sriov interface there is no numa affinity or numa topology for the guest18:28
*** jmlowe has joined #openstack-nova18:29
sean-k-mooneyadrianc: the numa aware migration is proably more impactful to land in stien then sriov but hopefully both can land18:29
*** k_mouza has joined #openstack-nova18:31
mriedemartom: yes18:31
mriedemsuperconductor orchestrates everything to find the correct dest host, then kicks things off with an rpc cast to the source18:31
mriedemand then source/dest computes just rpc back and forth18:31
mriedemthere is no reschedule or anything within the cell conductor for live migration18:32
artomOK, I need to eat, but I think I'm starting to understand the problem you're explaining, mriedem. Namely: we can't keep a claim context going, so... I guess we'll need to shove it in the migration context, like with cold migration?18:32
mriedemi guess?18:32
mriedemthe instance.migration_context is still a bit of a mystery to me18:32
mriedembut i also need to eat18:32
artomYou and everyone else18:32
mriedemdansmith: comments on the numa live migration spec which maybe you can answer,18:32
mriedemre: move claims18:32
artomI think if Nikola came back today, he'd still know more than all of us combined18:32
mriedemon that very hairy part of the code? i agree.18:33
mriedemthere are also TODOs in there from him about the move claim stuff for reize18:33
mriedem*resize18:33
mriedemhttps://github.com/openstack/nova/blob/594c653dc1a312d0364ad24c703e1a9b228133e1/nova/compute/manager.py#L398818:34
mriedemanyway, turkey leftovers18:34
*** k_mouza_ has quit IRC18:35
*** k_mouza has quit IRC18:35
sean-k-mooneymriedem: when you are back maybe you could weigh in on https://review.openstack.org/#/c/605116/6/specs/stein/approved/libvirt-neutron-sriov-livemigration.rst@111 also.18:36
*** moshele has quit IRC18:40
*** ralonsoh has quit IRC18:41
*** adrianc has quit IRC18:46
*** tssurya has quit IRC18:51
*** slaweq_ has quit IRC18:51
*** eandersson has joined #openstack-nova18:54
*** dave-mccowan has quit IRC19:00
*** dave-mccowan has joined #openstack-nova19:01
*** brault has quit IRC19:01
*** brault has joined #openstack-nova19:04
*** mvkr has joined #openstack-nova19:18
*** itlinux has quit IRC19:41
*** itlinux has joined #openstack-nova20:05
*** erlon has quit IRC20:06
*** whoami-rajat has quit IRC20:06
*** dave-mccowan has quit IRC20:21
*** moshele has joined #openstack-nova20:35
*** slaweq_ has joined #openstack-nova20:37
*** slaweq_ has quit IRC20:37
*** dave-mccowan has joined #openstack-nova20:39
*** jmlowe has quit IRC20:39
*** moshele has quit IRC20:41
*** ivve has quit IRC20:56
openstackgerritMatt Riedemann proposed openstack/nova master: Remove ironic/pike note from *_allocation_ratio help  https://review.openstack.org/62015420:59
*** moshele has joined #openstack-nova21:01
*** jmlowe has joined #openstack-nova21:12
*** moshele has quit IRC21:15
mnaserso while answering an ML post about all_tenants and friends, i found this TODO since 2015 -- https://github.com/openstack/nova/commit/be41910ac6be28060d9007778fb33766077de59b21:15
mnaserdo we just drop that part of the code at that point? given it's been uncommented for years now21:15
*** moshele has joined #openstack-nova21:19
artommriedem, does the superconductor really rescheduler if the live migrations fails? I'm looking but I can't find anything in _execute, and in the conductor manager if there's a failure in _live_migrate it just sets an error.21:19
artomNot sure it's super relevant to the spec, but for my own personal edification21:20
mriedemmnaser: you mean drop it at this point?21:21
mnasermriedem: i think so? i mean it's just dead code for 5 years, do we want to muck around with microversion bumps and blah21:22
mriedemmnaser: tbc, you're saying just ditch the commented out code since no one cares enough to change it with a new microversion21:23
mnaseryep21:23
mriedemshrug21:24
mriedemi don't see anyone caring enough to change it21:24
mriedemthe fact you have to supply the all_tenants parameter to filter on project_id does always confuse me21:24
mnaserwell21:25
mriedembut at least it's documented in the api-ref21:25
mnaserthe clients workaround it now..21:25
mnaserpretty sure you dont need to do that anymore with the cli21:25
mriedemhttps://docs.openstack.org/python-novaclient/latest/cli/nova.html#nova-list21:25
mriedemnova list --tenant will just implicitly add --all-tenants21:25
mriedemif that's what you mean by workaround21:25
mnaseryeah21:26
mnaserhttps://github.com/openstack/python-openstackclient/blob/master/openstackclient/compute/v2/server.py#L1147-L115321:26
mnasersame for osc21:26
mriedemif you want to push a patch to remove the cruft, fine by me21:27
mriedemi might even +2 that21:27
mriedemanything to make that method smaller b/c god is it long21:27
mriedembnemec: have you ever heard of requests for something like a PostitiveIntOpt or PositiveFloatOpt in oslo.config? we have some options which can be set to 0.0 as the min, but really shouldn't be <= 0.21:35
mriedembut we can't really describe that with just min21:35
dansmithchoices=range(1,1000) ? :P21:37
bnemecmriedem: So an opt where min is a < comparison instead of a <=?21:37
mriedemsomething like that21:38
mriedemfor context https://review.openstack.org/#/c/602804/9/nova/conf/compute.py21:38
mriedeminitial_cpu_allocation_ratio should never be 0.021:38
dansmithwe really just want a validation function parameter, right?21:38
mriedemyeah21:38
dansmithwe wanted that for something else recently21:38
openstackgerritMohammed Naser proposed openstack/nova master: Drop cruft code for all_tenants behaviour  https://review.openstack.org/62016521:38
dansmithvalidator=lambda str: ...21:38
*** xek has quit IRC21:39
bnemecA validator callback seems like something we could do.21:41
bnemecAlternatively, in this case min=0.000001 is probably also sane.21:41
*** awaugama has quit IRC21:44
bnemecYou could also create a custom type that did the validation in the constructor.21:46
bnemecSubclass Float and put whatever logic you need in there: https://github.com/openstack/oslo.config/blob/master/oslo_config/types.py#L40921:46
mriedemI wasn't sure how kosher subclassing oslo.config opt types was21:47
bnemecThen create the opt as Opt(type=MyCustomType, ...).21:47
bnemecThey're part of the public API so I'd say they're fair game.21:47
mriedemok yeah that's probably cleanest21:49
bnemecThey danger might be creating a completely new class as a type, which could theoretically be done, but if we ever added to the type API you might get broken.21:49
bnemecAs long as you inherit from an existing type you should be okay though.21:49
bnemecThey all descend from a single ABC.21:49
mriedemthe adam of config opts?21:50
bnemecIndeed.21:51
bnemecAlso, I think I was wrong. You want to do validation in __call__.21:51
bnemecThat's where we're doing it in the existing types: https://github.com/openstack/oslo.config/blob/master/oslo_config/types.py#L83021:51
mriedemah yeah https://github.com/openstack/oslo.config/blob/master/oslo_config/types.py#L30521:52
bnemecYeah, better example. :-)21:52
*** wolverineav has joined #openstack-nova21:53
*** moshele has quit IRC21:55
*** eharney has quit IRC21:56
mriedemwelp https://review.openstack.org/#/c/613126/ kind of blows up https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/initial-allocation-ratios.html#manually-set-placement-allocation-ratios-are-overwritten22:03
mriedemhow am i not surprised22:03
mriedemefried: you'll have some context on ^22:05
mriedemi'm not exactly sure what we should do about it, outside of passing an initial flag to update_provider_tree or something gross like that...although maybe upt can check the provider tree to see if it already has inventory with allocation_ratio set, and if so, don't provide a value unless CONF.*_allocation_ratio is not None22:06
openstackgerritArtom Lifshitz proposed openstack/nova master: Give drop_move_claim() correct docstring  https://review.openstack.org/62017022:09
artommriedem, ^^ related to the numa live migration spec. I *think* I'm right, and it might clarify the confusion around where we're removing usages.22:09
mriedemartom: totally forgot you pinged me earlier, sec22:10
mriedemartom: superconductor does not reschedule if live migration fails, no22:10
mriedemit reschedules if the pre-checks on the dest/source fail22:10
artomRight, sorry, I wasn't being precise enough. Though I can't find it rescheduling *anywhere*22:11
mriedemso here conductor asks the scheduler for a host https://github.com/openstack/nova/blob/master/nova/conductor/tasks/live_migrate.py#L37022:11
mriedemin this try block it does the pre-checks for dest/source https://github.com/openstack/nova/blob/master/nova/conductor/tasks/live_migrate.py#L38522:12
mriedemif that fails, blacklist the host https://github.com/openstack/nova/blob/master/nova/conductor/tasks/live_migrate.py#L39122:12
artom*facepalm*22:12
artomGot it, sorry.22:12
mriedemset host=None and hit the while again22:12
mriedemhttps://github.com/openstack/nova/blob/master/nova/conductor/tasks/live_migrate.py#L36622:12
mriedemnp22:12
artomMy brain skipped over _find_destination() as an atomic thing22:12
artomThanks for taking the time to walk me through this :)22:13
artomNow that comment patch, if you have time ;) The patch itself is trivial, but requires thinking to make sure I got it right.22:14
openstackgerritMerged openstack/nova master: Add debug logs when doubling-up allocations during scheduling  https://review.openstack.org/61701622:14
openstackgerritMerged openstack/nova stable/rocky: Default embedded instance.flavor.is_public attribute  https://review.openstack.org/61934922:14
*** sambetts|afk has quit IRC22:14
openstackgerritArtom Lifshitz proposed openstack/nova master: Give drop_move_claim() correct docstring  https://review.openstack.org/62017022:16
*** sambetts_ has joined #openstack-nova22:19
*** dave-mccowan has quit IRC22:27
*** aspiers has quit IRC22:27
*** itlinux has quit IRC22:28
mriedemartom: ok some random comments inline22:31
mriedemi guess drop_move_claim called from the source on confirm_resize always confuses me,22:31
mriedembecause the claim is made on the dest during prep_resize22:31
efriedmriedem: Sorry, I'm back now. We're trying to figure out how to know whether allocation ratios were defaulted by placement or conf or nova?22:31
mriedemefried: yeah - need to determine when initial_*_allocation_ratio config should be used (first time the compute was created) and report that to placement, or if we should set allocation ratios because CONF.*_allocation_ratio is some non-default value, or if we should leave it alone because the admin set some allocation_ratio via the placement API directly22:32
mriedemi think we can know the initial case from upt if the provider tree doesn't have those inventory resource class keys in it22:33
mriedemotherwise only ever overwrite if CONF.*_allocation_ratio is not None22:33
efried"if the provider tree doesn't have those inventory resource class keys" <== I don't think this can happen within upt22:34
mriedemon initial startup, wont upt just have a root provider with no inventory on it?22:34
mriedemthe driver reports the inventory initially22:34
efriedI don't think so, because of the (legacy) code that's bootstrapping the CPU/MEMORY_MB/DISK_GB inventories before we get to upt22:34
mriedemi'm not sure what you're referring to22:35
efriedI could be wrong, it's possible we've removed that at this point.22:35
efriedbut there was a time when code *outside* of virt driver space would do the initial populate of placement22:35
mriedemby calling driver.get_inventory() right?22:35
mriedemthat no longer happens if the driver implements upt22:36
efriedget_available_resource22:36
mriedemyeah, get_available_resource was before get_inventory22:36
efriedwhich *does* still get called I believe.22:36
mriedemi think it's upt -> get_inventory -> get_available_resource22:36
efriedgar doesn't get called in the rt update flow, but it still gets called somewhere else, lemme find it.22:36
mriedemhttps://github.com/openstack/nova/blob/c1de096098344c733565c244163fc3ebf8c35e68/nova/compute/resource_tracker.py#L71122:37
*** aspiers has joined #openstack-nova22:37
mriedemwe'd create the provider root initially here https://github.com/openstack/nova/blob/c1de096098344c733565c244163fc3ebf8c35e68/nova/compute/resource_tracker.py#L92122:37
mriedemthen flush inventory from upt here https://github.com/openstack/nova/blob/c1de096098344c733565c244163fc3ebf8c35e68/nova/compute/resource_tracker.py#L94422:38
mriedemelse this https://github.com/openstack/nova/blob/c1de096098344c733565c244163fc3ebf8c35e68/nova/compute/resource_tracker.py#L95122:38
mriedemfailing those, this https://github.com/openstack/nova/blob/c1de096098344c733565c244163fc3ebf8c35e68/nova/compute/resource_tracker.py#L96122:38
mriedemwhich would use the values from gar22:38
mriedemhttps://github.com/openstack/nova/blob/c1de096098344c733565c244163fc3ebf8c35e68/nova/scheduler/client/report.py#L144222:38
mriedemmight be interesting to drop that final path,22:39
mriedemlooks like maybe only the hyperv driver hasn't implemented an alternative22:39
mriedemoh and zvm22:40
mriedemforgot that was in tree...22:40
efriedyeah, I remember looking at this when I was working on https://review.openstack.org/#/c/615705/ to see if I could just real quick implement upt for all the drivers.22:40
efriedand realizing that was going to be more work than I was ready to undertake at the time.22:41
efriedbut unwinding the gar stuff, that's going to require some synapses I haven't yet explored.22:41
mriedemheh, good thing you noted that because i was just thinking about removing that old code path to see what would break22:42
mriedemi haven't seen hyper-v run on the latest version of that, but zvm failed22:42
efriedany case, I think the point is that we can't rely on upt being in the code path for initial population of the root provider inventories.22:42
efriedyet22:42
mriedemno it doesn't need to be though22:43
efriedif we want upt to be able to decide whether to use initial_*_allocation_ratio it does.22:43
mriedemas of https://review.openstack.org/#/c/613126/, if a driver implements upt, it sets the allocation_ratio on the inventory it reports,22:43
mriedemotherwise that normalize method in the RT does22:43
mriedem_normalize_inventory_from_cn_obj22:43
efriedright, it sets the allocation ratio based on the non-initial_* conf values.22:43
mriedemi think upt can determine if initi allocation ratios can be used though22:43
efriedhow?22:43
mriedemif the inventory for a given class is not in the tree, it's initial22:44
*** cdent has quit IRC22:44
efriedWait, did you just prove (to yourself, at least) that if upt is implemented, it *does* get first crack at the root provider inventory?22:45
mriedemyes22:45
mriedemi believe so anyway22:45
efriedshould be relatively easy to prove with a func test?22:46
mriedemi think the one i wrote here will do it https://review.openstack.org/#/c/613126/4/nova/tests/functional/compute/test_resource_tracker.py22:46
mriedemalong with the fake driver todo being resolved https://review.openstack.org/#/c/613126/4/nova/virt/fake.py22:47
mriedembut yeah https://review.openstack.org/#/c/602804/ really needs to run through the expected / support scenarios in a functional test,22:47
mriedem1. initial create22:47
mriedem2. overwrite in placement API22:47
mriedem3. overwrite via config22:47
mriedemand make sure #2 doesn't get f'ed up when the periodic runs22:48
efriedso we're talking about changing https://review.openstack.org/#/c/613126/4/nova/virt/libvirt/driver.py and its brethren to have logic like:22:49
efriedinv = ptree.data(root)22:49
efriedif CPU not in inv: ratio = CONF.initial_cpu_allocation_ratio or 16.022:49
efriedelse: ratio = CONF.cpu_allocation_ratio or 16.022:49
efriedand similar for mem/disk?22:49
efriedoh, except f'ed up when periodic runs22:50
mriedemthe "or 16.0" gets removed22:50
mriedemCONF.initial_cpu_allocation_ratio defaults to 16.022:50
efriedthe second one?22:50
mriedemboth22:50
*** wolverineav has quit IRC22:51
mriedemthe non-initial else becomes only set the ratio if CONF.cpu_allocation_ratio is not None22:51
efriedright22:51
efriedso22:51
mriedemin other words, don't change the allocation ratio if it was set externally22:51
mriedemand config hasn't changed22:51
*** wolverineav has joined #openstack-nova22:51
efriedif CPU not in inv: ratio = CONF.initial_cpu_allocation_ratio # which defaults to 16.022:51
efriedelif CONF.cpu_allocation_ratio: ratio = CONF.cpu_allocation_ratio22:51
efried# else no-op, leave it tf alone22:51
mriedemexactly22:52
efriedI guess technically that last bit would have to be22:52
efriedelse: ratio = data[CPU][ratio]22:52
efriedso it doesn't get omitted and wind up with the placement default :(22:52
mriedemyup22:52
efriedokay, I can dig it. Assuming we can prove the upt-gets-initial-look thing. Let me take a look at that test you highlighted...22:53
efriedstill looks slightly holey.22:55
efriedLet me go list all the permutations...22:55
mriedemin yikun's patch?22:55
mriedemi'm dumping notes in there22:55
efriedI'll just pastebin 'em for now22:55
efriedmm, if both values are set we want to end up with the conf one, so my above algo won't quite work22:58
mriedemwhy not?22:58
efriedyou would end up with the CONF.initial22:58
efrieduntil the next time update runs.22:59
efriedworks if you reverse the conditions I think.22:59
efriedif CONF.cpu_allocation_ratio: ratio = CONF.cpu_allocation_ratio23:00
efriedelif CPU not in inv: ratio = CONF.initial_cpu_allocation_ratio  # which defaults to 16.023:00
efriedelse: ratio = data[CPU][allocation_ratio]23:00
mriedemyou should only get initial config if CPU not in inv though23:00
mriedemok i think either would be ok23:00
openstackgerritMichael Still proposed openstack/nova master: Move bridge creation to privsep.  https://review.openstack.org/62018023:00
efriedbut in all cases if CONF.cpu_allocation_ratio is set you want *that* value - i.e. ignore initial_*23:01
mriedemwell well well, look who it is23:01
efriedbrb23:01
mriedemefried: true yeah23:01
mriedemok i got your point now23:01
*** rcernin has joined #openstack-nova23:01
openstackgerritMerged openstack/nova stable/queens: Fix NoneType error in _notify_volume_usage_detach  https://review.openstack.org/61486823:04
efriedmriedem: I think we might strive to embed that logic in a base class helper somehow.23:06
efriedmriedem: to replace (and subsume) https://review.openstack.org/#/c/613126/4/nova/virt/driver.py@86023:07
mriedemyeah it would be nice to not duplicate it all over the place23:08
*** slaweq has quit IRC23:10
*** slaweq has joined #openstack-nova23:11
mriedemalright with that i'm done23:12
mriedemefried: thanks for the sound board23:12
efriedfo sho23:12
openstackgerritMatt Riedemann proposed openstack/nova master: Drop cruft code for all_tenants behaviour  https://review.openstack.org/62016523:12
*** wolverineav has quit IRC23:13
*** mriedem is now known as mriedem_away23:13
*** wolverineav has joined #openstack-nova23:14
*** wolverineav has quit IRC23:21
*** wolverineav has joined #openstack-nova23:21
*** slaweq has quit IRC23:24
*** itlinux has joined #openstack-nova23:28
*** munimeha1 has quit IRC23:30
*** dave-mccowan has joined #openstack-nova23:36
openstackgerritEric Fried proposed openstack/nova master: Add missing ws seperator between words  https://review.openstack.org/61849123:42
*** dave-mccowan has quit IRC23:44
*** wolverineav has quit IRC23:50
*** wolverineav has joined #openstack-nova23:52
*** gyee has quit IRC23:52

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!