Tuesday, 2020-02-04

*** ociuhandu has joined #openstack-nova00:00
*** ociuhandu has quit IRC00:05
*** efried has quit IRC00:18
*** tosky has quit IRC00:22
*** luksky has quit IRC00:22
*** efried has joined #openstack-nova00:27
*** spatel has joined #openstack-nova00:43
*** spatel has quit IRC00:48
*** slaweq has joined #openstack-nova01:11
*** slaweq has quit IRC01:16
*** mdbooth has quit IRC01:16
*** mdbooth has joined #openstack-nova01:18
*** yedongcan has joined #openstack-nova01:41
*** yedongcan has quit IRC01:52
*** yedongcan has joined #openstack-nova01:53
*** gyee has quit IRC02:10
*** hongbin has joined #openstack-nova02:29
*** Liang__ has joined #openstack-nova02:46
*** slaweq has joined #openstack-nova03:11
*** slaweq has quit IRC03:15
*** mkrai has joined #openstack-nova03:33
*** mkrai has quit IRC03:36
*** mkrai_ has joined #openstack-nova03:36
*** mkrai_ has quit IRC03:37
<openstackgerrit> Guo Jingyu proposed openstack/nova-specs master: Proposal for a safer noVNC console with password authentication  https://review.opendev.org/623120  03:51
*** vishalmanchanda has joined #openstack-nova03:52
*** mkrai has joined #openstack-nova04:00
*** udesale has joined #openstack-nova04:03
*** hongbin has quit IRC04:33
*** yedongcan has quit IRC04:37
*** yedongcan has joined #openstack-nova04:40
*** larsks has quit IRC05:05
*** cmurphy has quit IRC05:05
*** cmurphy has joined #openstack-nova05:07
*** slaweq has joined #openstack-nova05:11
*** mkrai has quit IRC05:11
*** larsks has joined #openstack-nova05:12
*** slaweq has quit IRC05:16
*** links has joined #openstack-nova05:18
*** Liang__ has quit IRC05:29
*** evrardjp has quit IRC05:33
*** evrardjp has joined #openstack-nova05:34
*** udesale_ has joined #openstack-nova05:45
*** udesale has quit IRC05:48
*** ratailor has joined #openstack-nova05:55
*** mkrai has joined #openstack-nova06:08
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: ksa auth conf and client for Cyborg access  https://review.opendev.org/631242  06:24
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: Add Cyborg device profile groups to request spec.  https://review.opendev.org/631243  06:24
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: Define Cyborg ARQ binding notification event.  https://review.opendev.org/692707  06:24
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: Create and bind Cyborg ARQs.  https://review.opendev.org/631244  06:24
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: Pass accelerator requests to each virt driver from compute manager.  https://review.opendev.org/698581  06:24
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: Compose accelerator PCI devices into domain XML in libvirt driver.  https://review.opendev.org/631245  06:24
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: Delete ARQs for an instance when the instance is deleted.  https://review.opendev.org/673735  06:24
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: Enable hard/soft reboot with accelerators.  https://review.opendev.org/697940  06:24
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: Enable start/stop of instances with accelerators.  https://review.opendev.org/699553  06:24
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: Enable and use COMPUTE_ACCELERATORS trait.  https://review.opendev.org/699554  06:24
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: Bump compute rpcapi version and reduce Cyborg calls.  https://review.opendev.org/704227  06:24
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: Add cyborg tempest job.  https://review.opendev.org/670999  06:24
*** udesale_ has quit IRC06:45
*** udesale has joined #openstack-nova06:53
*** slaweq has joined #openstack-nova07:11
*** lpetrut has joined #openstack-nova07:15
*** slaweq has quit IRC07:16
*** dpawlik has joined #openstack-nova07:26
*** mkrai has quit IRC07:26
*** mugsie has quit IRC07:37
*** kiseok7 has joined #openstack-nova07:39
*** mugsie has joined #openstack-nova07:41
*** evrardjp has quit IRC07:46
*** evrardjp has joined #openstack-nova07:50
*** dtantsur|afk is now known as dtantsur07:56
*** vesper11 has quit IRC08:06
*** vesper11 has joined #openstack-nova08:07
*** slaweq has joined #openstack-nova08:12
*** maciejjozefczyk has joined #openstack-nova08:14
*** tkajinam has quit IRC08:17
*** luksky has joined #openstack-nova08:21
*** mkrai has joined #openstack-nova08:22
*** tesseract has joined #openstack-nova08:27
*** priteau has joined #openstack-nova08:31
*** rpittau|afk is now known as rpittau08:33
*** HagunKim has joined #openstack-nova08:34
*** tosky has joined #openstack-nova08:40
*** ociuhandu has joined #openstack-nova08:45
*** ralonsoh has joined #openstack-nova08:48
*** martinkennelly has joined #openstack-nova08:57
*** tetsuro has joined #openstack-nova09:04
*** hoonetorg has quit IRC09:06
*** shilpasd has joined #openstack-nova09:07
<gibi> efried: reasoning for the group_policy none default: https://review.opendev.org/#/c/657796/12//COMMIT_MSG unfortunately the Train PTG etherpad is dead :/  09:07
<gibi> efried: I remember that it wasn't a 100% coin toss to select none  09:08
<gibi> bah, https://etherpad.openstack.org/p/nova-ptg-train-5 is also dead  09:10
<gibi> and there is no -6  09:11
<gibi> there was a dump of the etherpad on the ML http://www.codeha.us/openstack-discuss/msg05171.html but not much detail there  09:12
<gibi> "#agree: 'none' seems to be a sensible policy"  09:12
*** ociuhandu has quit IRC09:14
<gibi> isolate is the stricter value, so that feels like a better default  09:14
<gibi> but I'm not trying to re-create the reasoning alone  09:14
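For context on what's being debated: with granular request groups in placement, group_policy controls whether numbered groups may be satisfied by the same resource provider. A rough sketch of such an allocation-candidates query (resource amounts illustrative, not taken from the review):

    GET /allocation_candidates
        ?resources1=VCPU:2,MEMORY_MB:2048
        &resources2=VCPU:2,MEMORY_MB:2048
        &group_policy=none

With group_policy=none the two groups may land on one provider; group_policy=isolate forces each numbered group onto a distinct provider, which is why gibi calls it the stricter value above.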
*** hoonetorg has joined #openstack-nova09:18
*** xek has joined #openstack-nova09:19
*** ociuhandu has joined #openstack-nova09:24
*** ociuhandu has quit IRC09:29
*** ociuhandu has joined #openstack-nova09:32
*** ociuhandu has quit IRC09:36
*** priteau has quit IRC09:44
*** derekh has joined #openstack-nova09:44
*** Liang__ has joined #openstack-nova09:59
*** davidsha has joined #openstack-nova10:05
*** jaosorior has quit IRC10:08
*** mkrai has quit IRC10:14
*** ociuhandu has joined #openstack-nova10:21
<huaqiang> stephenfin: do you have an updated comment for the nova spec https://review.opendev.org/#/c/668656/ ?  10:27
<bauzas> gibi: FWIW, I'm writing a new rev for https://review.opendev.org/#/c/552924/  10:34
<gibi> bauzas: ack  10:34
<bauzas> we could discuss the group_policy with this one  10:34
<gibi> do we still have an open question about group_policy?  10:36
*** mkrai has joined #openstack-nova10:44
<openstackgerrit> Lee Yarwood proposed openstack/nova master: virt: Provide block_device_info during rescue  https://review.opendev.org/700811  10:45
<openstackgerrit> Lee Yarwood proposed openstack/nova master: libvirt: Add support for stable device rescue  https://review.opendev.org/700812  10:45
<openstackgerrit> Lee Yarwood proposed openstack/nova master: compute: Report COMPUTE_RESCUE_BFV and check during rescue  https://review.opendev.org/701429  10:45
<openstackgerrit> Lee Yarwood proposed openstack/nova master: api: Introduce microversion 2.82 allowing boot from volume rescue  https://review.opendev.org/701430  10:45
<openstackgerrit> Lee Yarwood proposed openstack/nova master: compute: Extract _get_bdm_image_metadata into nova.utils  https://review.opendev.org/705212  10:45
<openstackgerrit> Lee Yarwood proposed openstack/nova master: WIP libvirt: Support boot from volume instance rescue  https://review.opendev.org/701431  10:45
*** mkrai has quit IRC10:52
*** dtantsur is now known as dtantsur|afk10:54
*** udesale has quit IRC11:04
*** ociuhandu has quit IRC11:06
*** dpawlik has quit IRC11:06
*** ociuhandu has joined #openstack-nova11:07
*** dpawlik has joined #openstack-nova11:07
*** yedongcan has quit IRC11:10
*** ociuhandu has quit IRC11:11
*** ociuhandu has joined #openstack-nova11:12
*** yedongcan has joined #openstack-nova11:13
*** (netsplit: a server split at 11:14 briefly disconnected most of the channel; everyone rejoined over the following ten minutes)
*** andreykurilin has joined #openstack-nova11:26
*** bbowen has quit IRC11:36
*** tbachman has quit IRC11:38
*** yedongcan has quit IRC11:48
*** yedongcan has joined #openstack-nova11:49
*** tetsuro has quit IRC11:56
*** tesseract has quit IRC11:56
*** ioni has joined #openstack-nova12:10
*** tesseract has joined #openstack-nova12:26
*** vishalmanchanda has quit IRC12:29
*** tesseract has quit IRC12:29
*** tesseract has joined #openstack-nova12:31
*** tesseract has quit IRC12:37
*** Luzi has joined #openstack-nova12:37
*** tesseract has joined #openstack-nova12:38
*** tesseract has quit IRC12:40
*** tesseract has joined #openstack-nova12:40
*** ratailor has quit IRC12:45
*** damien_r has quit IRC12:46
<openstackgerrit> Stephen Finucane proposed openstack/nova master: tox: Integrate mypy  https://review.opendev.org/676208  12:46
<openstackgerrit> Stephen Finucane proposed openstack/nova master: mypy: Add type annotations to 'nova.pci'  https://review.opendev.org/676209  12:46
<openstackgerrit> Stephen Finucane proposed openstack/nova master: trivial: Remove unused 'cache_utils' APIs  https://review.opendev.org/705652  12:46
<openstackgerrit> Stephen Finucane proposed openstack/nova master: trivial: Address TODO  https://review.opendev.org/705653  12:46
<openstackgerrit> Stephen Finucane proposed openstack/nova master: trivial: Bump minimum version of websockify  https://review.opendev.org/705654  12:46
<openstackgerrit> Stephen Finucane proposed openstack/nova master: trivial: Merge unnecessary 'NovaProxyRequestHandlerBase' separation  https://review.opendev.org/705655  12:46
<openstackgerrit> Stephen Finucane proposed openstack/nova master: trivial: Remove 'run_once' helper  https://review.opendev.org/705656  12:46
<openstackgerrit> Stephen Finucane proposed openstack/nova master: mypy: Add nova.cmd, nova.conf, nova.console  https://review.opendev.org/705657  12:46
<openstackgerrit> Stephen Finucane proposed openstack/nova master: WIP: mypy: Add type annotations to top-level modules  https://review.opendev.org/705658  12:46
*** ociuhandu has quit IRC12:48
*** tesseract has quit IRC12:50
<openstackgerrit> Stephen Finucane proposed openstack/nova master: nova-net: Remove unused parameters  https://review.opendev.org/703974  12:53
*** bbowen has joined #openstack-nova12:56
<openstackgerrit> Stephen Finucane proposed openstack/nova master: hardware: Add TODO to remove '(un)pin_cpu_with_siblings'  https://review.opendev.org/705666  12:58
<openstackgerrit> Stephen Finucane proposed openstack/nova master: Address release note nits for cpu-resources series  https://review.opendev.org/705667  12:58
<openstackgerrit> Stephen Finucane proposed openstack/nova master: doc: Address some trivial nits with port QoS doc  https://review.opendev.org/705668  12:58
<openstackgerrit> Merged openstack/nova master: Enable live migration with qos ports  https://review.opendev.org/699066  12:58
<openstackgerrit> Merged openstack/nova master: Avoid fetching metadata when no subnets found  https://review.opendev.org/679247  12:58
*** N3l1x has joined #openstack-nova12:58
<openstackgerrit> Chen Hanxiao proposed openstack/nova master: libvirt: don't call sync_guest_time if qga is not enabled  https://review.opendev.org/524836  13:02
*** tesseract has joined #openstack-nova13:03
*** tesseract has quit IRC13:07
*** damien_r has joined #openstack-nova13:10
*** artom has quit IRC13:11
*** tbachman has joined #openstack-nova13:12
*** tesseract has joined #openstack-nova13:15
*** luksky has quit IRC13:19
*** ociuhandu has joined #openstack-nova13:20
*** mdbooth has quit IRC13:20
*** mdbooth has joined #openstack-nova13:21
*** ociuhandu has quit IRC13:27
*** jmlowe has joined #openstack-nova13:27
*** jmlowe has quit IRC13:29
<huaqiang> stephenfin: do you have an updated comment for the mixed instance spec https://review.opendev.org/#/c/668656/ ?  13:30
*** tkajinam has joined #openstack-nova13:34
*** nweinber has joined #openstack-nova13:35
*** maciejjozefczyk has quit IRC13:36
*** b3nt_pin has joined #openstack-nova13:37
*** mkrai has joined #openstack-nova13:38
<kashyap> sfinucan: or any rST-aware human: on headings automatically getting anchors -- is that a Sphinx thing, or rST proper?  13:39
*** davidsha has quit IRC13:39
*** pcaruana has quit IRC13:43
*** yedongcan has left #openstack-nova13:43
*** maciejjozefczyk has joined #openstack-nova13:46
*** jmlowe has joined #openstack-nova13:49
*** sandonov has joined #openstack-nova13:52
*** damien_r has quit IRC13:57
*** damien_r has joined #openstack-nova13:58
*** shilpasd has quit IRC13:59
*** ociuhandu has joined #openstack-nova14:01
*** ociuhandu has quit IRC14:06
*** mkrai has quit IRC14:07
<sean-k-mooney> kashyap: it depends on your theme, I think  14:09
<kashyap> the theme is Sphinx in this case (as alluded to above)  14:10
<sean-k-mooney> kashyap: but headings in lists don't get css ids allocated  14:10
<sean-k-mooney> so if you look at the flavor docs, for example: because we are using a list, none of the flavor extra specs have clickable anchors  14:11
<sean-k-mooney> the title in the list will still be bolded, but the element that renders the list does not add the css id  14:12
<sean-k-mooney> kashyap: I am assuming you are asking why Overview has an anchor but Flavor ID does not, right? https://docs.openstack.org/nova/latest/user/flavors.html#overview  14:13
* kashyap clicks  14:13
<kashyap> sean-k-mooney: no, not really; the example is:  14:14
*** sapd1_x has joined #openstack-nova14:17
* kashyap getting a paste-bin  14:17
*** spatel has joined #openstack-nova14:19
<kashyap> sean-k-mooney: compare and contrast: http://paste.openstack.org/show/789109/  14:19
*** eharney has quit IRC14:19
<kashyap> the bottom example is the round-about way; the first is the straightforward way.  Anyway  14:20
<sean-k-mooney> oh, you are talking about the automatic references  14:20
<sean-k-mooney> so that you can refer to it without needing to do .. _`first-section`:  14:20
<kashyap> yep  14:21
<sean-k-mooney> ya, that only works in the current doc/page  14:21
<sean-k-mooney> if you need to reference something in a different rst file you have to do it manually  14:21
<sean-k-mooney> `First Section`__ will not work across rst files  14:22
*** udesale has joined #openstack-nova14:22
*** mriedem has joined #openstack-nova14:23
*** tbachman has quit IRC14:23
*** jmlowe has quit IRC14:23
*** spatel has quit IRC14:24
<kashyap> right, just within the file :-)  All sorted  14:25
<sean-k-mooney> yep, we only use the _`first-section`: way when we need to  14:26
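To make the distinction concrete, here is a minimal reST sketch (section and label names are illustrative): Sphinx generates an anchor for every section title, and within one file an implicit reference by title is enough, while cross-file links need an explicit label plus :ref::

    First Section
    =============

    Same file only: see `First Section`_.

    .. _second-section:

    Second Section
    ==============

    From any file: see :ref:`second-section`.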
<kashyap> sean-k-mooney: unrelated, do you know how to look up which version of Debian this package, edk2 (0~20190606.20d2e5a1-2), comes from?  14:26
*** jmlowe has joined #openstack-nova14:26
*** ociuhandu has joined #openstack-nova14:26
<bauzas> efried: sean-k-mooney: I've fully read the etherpad and the proposals  14:26
<bauzas> <3 for you  14:26
<bauzas> now processing those  14:26
<kashyap> rephrasing: "... what version of Debian has this package edk2-(0~20190606.20d2e5a1-2)"  14:26
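For the record, one way to answer that kind of question (an assumption on tooling: rmadison, from Debian's devscripts package, queries the archive for every suite carrying a given source package):

    $ rmadison edk2
    # prints one row per version/suite, letting you match
    # 0~20190606.20d2e5a1-2 to the Debian release that ships it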
*** dtantsur|afk is now known as dtantsur14:27
<sean-k-mooney> bauzas: cool, did it seem familiar? I have proposed it in the past, but with same_subtree and a few other enhancements to placement it is now much easier to use  14:28
<bauzas> sean-k-mooney: same_subtree should be good, yes  14:28
<bauzas> sean-k-mooney: efried: just a question about upgrades  14:28
<bauzas> I provided a config option for listing the resource types  14:29
<bauzas> I understand you wouldn't want to use it  14:29
<bauzas> but then that means that when we upgrade, we would have to transform the inventories and allocations directly  14:30
<bauzas> I'm cool with this  14:30
<bauzas> but are you all okay?  14:30
<sean-k-mooney> I was ok with the config option  14:31
<sean-k-mooney> we will still need at least a config option to say report NUMA or not; whether it needs to be a list is debatable  14:31
<ralonsoh> sean-k-mooney, stephenfin https://bugs.launchpad.net/nova/+bug/1861876  14:32
<openstack> Launchpad bug 1861876 in OpenStack Compute (nova) "[Neutron API] Neutron Floating IP not always have 'port_details'" [Undecided,New]  14:32
<sean-k-mooney> I was ok with the list  14:32
*** tkajinam has quit IRC14:32
<sean-k-mooney> ralonsoh: looking  14:33
<ralonsoh> thanks  14:33
<sean-k-mooney> oh, the nova-net removal  14:33
<ralonsoh> yeah!  14:33
<ralonsoh> just a couple of things  14:33
* stephenfin tries to figure out why we weren't doing that before  14:34
<openstackgerrit> Mykola Yakovliev proposed openstack/nova master: Fix boot_roles in InstanceSystemMetadata  https://review.opendev.org/698040  14:34
<sean-k-mooney> ralonsoh: is this the issue? https://review.opendev.org/#/c/697153/16/nova/api/openstack/compute/floating_ips.py@39  14:34
<ralonsoh> sean-k-mooney, exactly  14:34
<sean-k-mooney> we should be using floating_ip.get('port_details')  14:35
<sean-k-mooney> to not cause a KeyError if it is not set  14:35
<ralonsoh> sean-k-mooney, and the way the "pool" key is retrieved  14:35
<ralonsoh> the network id is stored in "floating_network_id"  14:35
<ralonsoh> https://github.com/openstack/neutron/blob/master/neutron/db/l3_db.py#L1030-L1037  14:35
<sean-k-mooney> ah, I see  14:35
<sean-k-mooney> I'm glad we are so consistent  14:36
<ralonsoh> hahahahaha  14:36
<ralonsoh> sorry for that  14:36
<sean-k-mooney> no, it's fine, so that should be an easy fix  14:36
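In code terms, the fix being sketched here is just defensive dictionary access (the helper name is illustrative, not the eventual nova patch; 'port_details' and 'floating_network_id' are the neutron fields named above):

    def summarize_fip(fip):
        """Build nova's view of a neutron floating IP without assuming
        optional, extension-provided fields are present."""
        # 'port_details' only exists when neutron's fip-port-details
        # extension is enabled, so use .get() instead of indexing
        details = fip.get('port_details') or {}
        return {
            'instance_id': details.get('device_id'),
            # the network id lives under 'floating_network_id'
            'pool': fip.get('floating_network_id'),
            'ip': fip.get('floating_ip_address'),
        }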
<efried> bauzas, sean-k-mooney: I really don't want the list. I think a boolean toggle is the right thing. As we move forward, for example moving VGPUs from under the root to under the NUMA RPs, things will still work correctly.  14:36
<sean-k-mooney> efried: ya, I'm totally fine with the boolean toggle  14:37
<bauzas> efried: we don't really need a boolean for triggering it, if so  14:37
<efried> We'll "gain" support for new flavors that express affinity of the VGPUs, but old flavors that don't express such affinity will still work.  14:37
<efried> bauzas: we do; one of the important distinctions here is that we're segregating the data center into numa-aware and not.  14:37
<sean-k-mooney> I think by default we should not do the reshape on upgrade, but when the config option is set we will do the reshape, and it should not be possible to undo while instances are on the host  14:38
<bauzas> unless we wanna somehow prepare ops to switch when they want  14:38
<bauzas> cool enough, let's then make it a boolean  14:38
<sean-k-mooney> bauzas: we need the config option because we said we would partition the cloud into numa hosts and non-numa hosts  14:38
<efried> I agree we should reshape when the config option is set and the service is restarted. We knew the restriction of "reshape only on upgrade" was artificial/temporary when we made it.  14:38
<sean-k-mooney> so that on the non-numa hosts you could continue to run floating instances that use more resources than fit in 1 numa node  14:39
<sean-k-mooney> without having to make them multi-numa instances  14:39
<stephenfin> ralonsoh: ah, so previously we were making two requests to neutron via neutronclient: one to retrieve all floating IPs and one to retrieve all ports  14:39
<bauzas> that looks seamless indeed  14:39
<bauzas> but my only concern is that this conf opt is only for Ussuri  14:39
<sean-k-mooney> e.g. the "I want to use openstack to run one giant vm per host to run another orchestrator on top" use case we all hate  14:40
*** jaosorior has joined #openstack-nova14:40
<bauzas> and then we would turn it on either way in Victoria  14:40
<sean-k-mooney> bauzas: it won't be  14:40
<dansmith> efried: sean-k-mooney: bauzas: but they will have to reshape eventually, right?  14:40
<stephenfin> ralonsoh: and then we simply matched 'port_id' for each entry in the former to the corresponding port from the latter. I tried to make that cleverer by using 'port_details', but you're saying that's an extension that I can't rely on  14:40
<sean-k-mooney> it will need to be kept as long as we support non-numa instances  14:40
<bauzas> dansmith is expressing my concern  14:40
<stephenfin> ralonsoh: I can fix that now if you haven't already, but how can I check if that extension is available?  14:40
<efried> dansmith: I'm not sure we ever need to force them to make a host NUMA-aware if they don't want to.  14:40
<ralonsoh> stephenfin, exactly, this is an extension and this key is not mandatory  14:40
<bauzas> efried: if so, we somehow need to make the path clear that we *will* remove filter things in Victoria either way  14:41
<sean-k-mooney> dansmith: they will have to reshape only if we decide that all instances will have a numa topology of 1 guest numa node by default  14:41
<efried> iow the segregated-on-NUMA-ness is a permanent characteristic of the data center  14:41
<dansmith> if they can stay off forever, then fine, but I imagine that leaves us with a pretty big variable for a long time  14:41
<bauzas> also,  14:42
<bauzas> what I'm not okay with is keeping the old filter processing for NUMA placement for a while  14:42
<dansmith> anyway, my point was that 95% of people will not pay attention and flip that flag until they have to, so reshape-on-flag only buys you N cycles until you turn it on permanently  14:42
<ralonsoh> stephenfin, you can use the neutron-client  14:42
<sean-k-mooney> dansmith: ya, so I have been advocating that we should make all instances numa instances for a long time. but it breaks the "run one giant vm" use case that some people care about  14:42
<ralonsoh> stephenfin, you can retrieve the extension list  14:43
<ralonsoh> but in this case, do you need it? just check if this key is in the FIP dict  14:43
<efried> Hm, dansmith, won't "people who care about NUMA" (like "edge") flip it right away so they can take advantage? Otherwise NUMA-ness won't work at all.  14:43
<dansmith> sean-k-mooney: you mean just in how they specify the one big VM, not that we need to be able to provide numa-ignorant vms, right?  14:43
<sean-k-mooney> dansmith: if we decided that if you want to do that you have to specify multiple numa nodes in the flavor, then we could get rid of the flag  14:43
<bauzas> efried: that's my point  14:44
*** Luzi has quit IRC14:44
*** luksky has joined #openstack-nova14:44
<sean-k-mooney> dansmith: ya, they would just add hw:numa_nodes=2 or whatever to the giant flavor  14:44
<efried> Like, I thought we were making this dividing line here such that, if you ask for a NUMA topo, and you don't have any computes with that flag switched on, you just won't land.  14:44
<stephenfin> ralonsoh: I need to figure out what instance the floating IP is associated with so I can return 'instance_id' in the API response  14:44
<bauzas> efried: if we want them to flip the conf opt, we somehow need to stop supporting the existing behaviour in the filter  14:44
<efried> bauzas: but we don't *want* them to flip the conf opt... unless they *need* NUMA on that host.  14:44
<stephenfin> ralonsoh: which appears to be stored in the 'device_id' field of the floating IP's port  14:44
<dansmith> efried: if it's required for numa at all, then don't those people have to switch at upgrade anyway?  14:45
<efried> dansmith: only if they want NUMA-aware VMs on that host.  14:45
*** davidsha_ has joined #openstack-nova14:45
<bauzas> efried: correct  14:45
<bauzas> efried: people who don't care a bit about NUMA wouldn't get reshapes  14:45
<sean-k-mooney> dansmith: as it stands we should not be mixing numa vms and non-numa vms on the same host  14:45
<dansmith> efried: right, so people that have numa guests right now, they do the upgrade and if they don't flip it, they're unable to boot new instances?  14:45
<bauzas> but people who care should opt in now so that in Victoria we could remove the filter bits responsible for placing instances  14:46
<sean-k-mooney> that is because of how we track and affinitize the numa vms' memory  14:46
<dansmith> sean-k-mooney: I understand  14:46
*** mkrai has joined #openstack-nova14:46
<dansmith> efried: right, so anyone else will not flip that flag until they have to  14:46
<efried> why is that a problem?  14:46
<sean-k-mooney> dansmith: yes, we could maybe check if the host currently has numa instances and, if so, enable it?  14:47
<dansmith> efried: as I said before, if we're going to support numa-aware and numa-ignorant forever, then maybe it's not  14:47
<dansmith> sean-k-mooney: seems a little scary  14:47
<sean-k-mooney> ya, I agree  14:47
<efried> Right, that's what I thought we were going to do. If FooHost doesn't ever care about NUMA instances, it can just leave that flag off forever, and never have a flavor with hw:numa* in it, and go on happily forever.  14:47
<ralonsoh> stephenfin, yes, but I'm reviewing the code and if the extension is not enabled, this information won't be there, in the FIP  14:47
<sean-k-mooney> I don't really like basing the behavior off what the current vms are using  14:47
<efried> gibi, bauzas: To close on group_policy: sean-k-mooney and I talked about it a bit yesterday and decided that, in order to support both NUMA and bandwidth, we need to retain the group_policy=none default and do the anti-affinity in the NTF  14:48
<efried> UNTIL we can design & implement proper granular anti-affinity syntax in placement, which IMO is too ambitious for Ussuri.  14:48
<efried> sean-k-mooney: +1  14:48
<sean-k-mooney> efried: keep in mind that almost all hosts are numa hosts  14:48
<dansmith> sean-k-mooney: in reality, ALL of them are  14:48
<efried> meaning almost all hosts are *capable* of NUMA.  14:48
<efried> Not that all hosts are hosting NUMA-aware instances.  14:48
<sean-k-mooney> dansmith: yes  14:48
<dansmith> not capable, they are numa, and if not configured, are losing performance to that fact  14:48
<sean-k-mooney> efried: correct  14:49
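The toggle never gets a concrete name in this exchange, so here is a purely hypothetical sketch of what such an option could look like with oslo.config (the option name, group, default and help text are all assumptions, not the eventual implementation):

    from oslo_config import cfg

    numa_opts = [
        cfg.BoolOpt(
            'report_numa_in_placement',  # hypothetical name
            default=False,
            help='When true, reshape this host\'s placement inventory so '
                 'processor and memory resources are reported under '
                 'per-NUMA-node resource providers rather than the root '
                 'provider. Per the discussion above, not meant to be '
                 'turned off again while instances remain on the host.'),
    ]

    cfg.CONF.register_opts(numa_opts, group='compute')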
*** vishalmanchanda has joined #openstack-nova14:49
<efried> right, which some VMs don't care about.  14:49
<stephenfin> ralonsoh: WDYM? As I understood it, we won't have the 'port_details' field but the 'device_id' field will still be there in the ports API response, right?  14:49
<sean-k-mooney> efried: no, I think dansmith meant at the host level, not the vm level  14:49
<dansmith> efried: no, they all care about it, just some operators don't care to take on the burden of configuring it because we make it so difficult  14:49
<ralonsoh> stephenfin, no no  14:49
<dansmith> nobody is opting out of numa for any reason other than the cost benefit isn't there for configuring it  14:50
<ralonsoh> stephenfin, if you don't have 'port_details', you won't have this info  14:50
<stephenfin> ohhhh, they're tied together  14:50
<stephenfin> ?  14:50
<ralonsoh> stephenfin, if you really need the host but you don't have 'port_details'  14:50
<efried> dansmith: oh? Not because opting out lets them squeeze more VMs onto their host?  14:50
<ralonsoh> then you have the "port_id" always  14:50
<sean-k-mooney> efried: well, it depends  14:50
<ralonsoh> stephenfin, and then, you can call the Neutron API to retrieve the port info (and the host)  14:50
<sean-k-mooney> in general, no  14:51
<efried> yes, it depends, that's what I'm saying.  14:51
<sean-k-mooney> but if you use pinning or hugepages, that disables oversubscription of cpus or memory  14:51
<dansmith> efried: only because the complexity of what we provide makes that increasingly difficult as density increases  14:51
<sean-k-mooney> if you just use numa, oversubscription is allowed  14:51
<efried> right, and that's not a fitting problem we're going to solve any time soon.  14:51
<bauzas> I feel we need to define a clear upgrade path  14:52
<stephenfin> ralonsoh: I'm confused. You seem to be saying the same thing as me :) Let me try again  14:52
<bauzas> 1/ for NUMA-aware instances  14:52
<bauzas> 2/ for non-NUMA-aware instances  14:52
<efried> 1/ flip the switch  14:52
<efried> 2/ don't  14:52
<efried> If we've already decided we're going to segregate, it really is that simple.  14:53
<sean-k-mooney> so the boolean config option allows us to punt on the final decision on this for a cycle or two.  14:53
<dansmith> is there some reason that the switch can't be flipped for everyone?  14:53
<sean-k-mooney> e.g. default to on  14:53
<sean-k-mooney> and you opt out?  14:53
*** links has quit IRC14:53
<dansmith> having compute nodes behave like one thing or another isn't something I'd like to see us doing long-term  14:53
<bauzas> we could also have an implicit switch  14:53
<bauzas> like, if you ask for mempages, then basically you ask for NUMA things  14:54
<bauzas> (even if, and I hate to say it, that's unrelated)  14:54
<bauzas> or cpu_pin  14:54
<bauzas> or whatever  14:54
<efried> dansmith: the reason we didn't want to do that is because, if your priority is "land my VM", that becomes difficult/impossible as your cloud reaches saturation.  14:54
<sean-k-mooney> bauzas: those are properties of the guest  14:54
<dansmith> efried: that is specifically called out in our project scope as not a problem nova solves  14:54
<sean-k-mooney> bauzas: not of the host  14:54
<stephenfin> ralonsoh: I can do 'GET /floatingips' (or whatever the API is). Each item in the response may contain a 'port_details' field, but only if this extension is enabled. Correct?  14:55
<dansmith> efried: i.e. fitting the last VM into available memory  14:55
<bauzas> sean-k-mooney: for cpu pinset, it's for the host  14:55
<bauzas> but I agree with large pages  14:55
<bauzas> dammit, I'm torn  14:55
<ralonsoh> stephenfin, exactly, the key 'port_details' will be there only if the extension is enabled  14:55
<sean-k-mooney> ya, so we could enable it by default if you have defined cpu_dedicated_set  14:55
<efried> bauzas: that's backwards though. If you ask the scheduler for large pages, you're asking to land on a host that knows how to do that. You can't use that question to decide to make a host large-page-aware.  14:56
<sean-k-mooney> if cpu_dedicated_set is defined then you are reporting PCPU, and the only instances that can consume PCPUs have a numa topology  14:56
*** pcaruana has joined #openstack-nova14:56
<sean-k-mooney> so that is one case where yes, we can implicitly enable the numa reporting  14:56
<efried> dansmith: So how did we get to a point where we decided it was important to segregate the cloud? Hate to drag you into a second simultaneous discussion stephenfin, but weren't you part of that?  14:57
<stephenfin> ralonsoh: Okay. So if I do 'GET /ports/{port_id}', will that response always contain a 'device_id' field? If not, is this because it's provided by the same extension?  14:57
<sean-k-mooney> having hugepages allocated on the host is not really a good enough reason in my book, and we should not assume that all numa hosts use cpu pinning  14:57
<bauzas> sean-k-mooney: right, that's my point  14:57
*** artom has joined #openstack-nova14:57
<bauzas> you could only care about large pages or just standard NUMA sharding  14:58
<dansmith> efried: segregate what? numa and non-numa instances?  14:58
<sean-k-mooney> bauzas: well, you could be using the hugepages on the host for a dpdk vswitch, for example  14:58
<efried> yes  14:58
<sean-k-mooney> it might not be for the vms  14:58
<bauzas> like, "I want 2 vCPUs on the same NUMA node" doesn't absolutely require CPU pinning  14:58
<stephenfin> efried: Because we can't make everything have NUMA  14:58
<ralonsoh> stephenfin, exactly https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_common.py#L231  14:58
<efried> stephenfin: right, so dansmith wants to know why not  14:58
<sean-k-mooney> bauzas: right, that just needs hw:numa_nodes=1  14:58
<ralonsoh> stephenfin, the device_id will be there  14:59
<sean-k-mooney> bauzas: I use hugepages on my home system but not pinning  14:59
<stephenfin> efried: 2 sockets w/ a 32-core CPU in each socket (no HyperThreading). Go boot a 33-core instance  14:59
<bauzas> either way, I think we need to move forward  14:59
<sean-k-mooney> bauzas: because I want cpu oversubscription but not memory oversubscription  14:59
<bauzas> I'll write the new revision with a bool flag and mention the alternative of an automatic all-NUMA world in the spec  15:00
<stephenfin> You can't, because the instance will no longer split across the NUMA nodes, and we don't let an instance oversubscribe against itself  15:00
<bauzas> people will chime in and we'll see  15:00
* efried <== choir stephenfin  15:00
<efried> dansmith: --^  15:00
<dansmith> stephenfin: efried: because what nova currently provides is "no numa means 1 numa", yeah?  15:00
<sean-k-mooney> efried: yes, we are aware that is the giant vm case.  15:00
<stephenfin> ralonsoh: It will *always* be there?  15:00
<stephenfin> dansmith: No, no NUMA means no NUMA  15:00
<ralonsoh> stephenfin, yes  15:00
<stephenfin> currently  15:00
*** tbachman has joined #openstack-nova15:01
<bauzas> dansmith: nova currently provides "I can spread my VM across many NUMA nodes if I don't care"  15:01
<dansmith> stephenfin: how is no numa and 1 numa node different to the guest?  15:01
<ralonsoh> stephenfin, in the port this key, "device_id", is mandatory  15:01
<sean-k-mooney> dansmith: to the guest it is not  15:01
<dansmith> bauzas: right, we lie and say it's one node when it's not, you mean?  15:01
<dansmith> sean-k-mooney: right, exactly  15:01
<bauzas> dansmith: while that would be broken if you turn on the new modeling  15:01
<sean-k-mooney> dansmith: but if you do hw:numa_nodes=1 we require that the guest virtual numa node be mapped to at most 1 host numa node  15:01
<stephenfin> It's not, for the guest, but we insist on 1 guest NUMA node being mapped to one host NUMA node  15:02
<sean-k-mooney> and that is where it breaks  15:02
<dansmith> so my point is, for the giant vm case right now, we either require you to ignore or be insulated from the numa-ness that *does* exist, or configure it in a detailed fashion  15:02
<bauzas> dansmith: it's a breaking change, tbc  15:02
<stephenfin> dansmith: correct  15:02
<bauzas> that's the proposal  15:02
<dansmith> that's all I'm saying.. nobody is opting *into* this behavior, they're choosing it because it's less bad than having to hand-configure flavors  15:02
<dansmith> ...for their giant vms  15:03
<stephenfin> don't use anything that would configure NUMA (hw:numa_nodes, hw:cpu_policy=dedicated, hw:mem_page_size=foo) or hand-tune for your chosen host topology  15:03
<stephenfin> dansmith: Yup, agreed  15:03
<dansmith> efried: ^  15:03
<openstackgerrit> Kashyap Chamarthy proposed openstack/nova-specs master: Re-propose "Secure Boot support for KVM & QEMU guests" for Ussuri  https://review.opendev.org/693844  15:03
<bauzas> the ideal would be us being able to serve an instance request *not* asking for NUMA with a NUMA-aware inventory  15:03
<bauzas> because if so, we would just make the switch for *all* hosts  15:04
<dansmith> right  15:04
<dansmith> and  15:04
<dansmith> we'd not be pretending with the topology  15:04
<stephenfin> ralonsoh: Thanks! So I'm going to rework this to check if the port details extension is enabled. If it is, use 'device_id' from that. If it is not, make a second request to '/ports/{port_id}' and extract 'device_id' from that  15:04
<stephenfin> ralonsoh: Does that sound reasonable?  15:04
<efried> Am I understanding right that you can't boot a huge VM today either?  15:05
<stephenfin> ralonsoh: And assuming you're not planning to fix this yourself. If you've started, please continue :)  15:05
<bauzas> so, basically, the flag is just an expression of us being unable to serve such a query :(  15:05
<bauzas> a technical limitation of our own  15:05
<sean-k-mooney> efried: you can, but only if you use no numa features  15:05
<ralonsoh> stephenfin, no, I didn't start doing it hehehe  15:05
<ralonsoh> stephenfin, but yes, this should be the procedure  15:05
<sean-k-mooney> or if you use multiple guest numa nodes  15:05
<stephenfin> efried: You can, but only if you don't configure anything with NUMA  15:05
<sean-k-mooney> got to join a call  15:05
<stephenfin> darn, ninja'd by sean-k-mooney  15:05
<efried> Feels like that came all the way back in a circle.  15:05
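Sketched with python-neutronclient, the procedure just agreed on might look like this (the helper is illustrative, not the eventual nova patch; 'fip-port-details' is the alias of the extension under discussion):

    def instance_id_for_fip(client, fip):
        """Resolve the server a floating IP is attached to, tolerating
        neutron deployments without the fip-port-details extension."""
        aliases = {e['alias'] for e in client.list_extensions()['extensions']}
        if 'fip-port-details' in aliases:
            # fast path: the floating IP body embeds the port details
            return (fip.get('port_details') or {}).get('device_id')
        if fip.get('port_id'):
            # slow path: one extra GET /ports/{port_id} round trip
            return client.show_port(fip['port_id'])['port'].get('device_id')
        return None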
*** dtantsur is now known as dtantsur|afk15:06
<stephenfin> efried: the original question was why can't we always report NUMA to placement, yeah?  15:06
<efried> IOW, to boot a huge instance today, you can artificially give it a multi-numa topo, or you can say nothing about NUMA.  15:07
<efried> And the proposal we started the morning with would allow you to do the same.  15:07
<efried> But now we're considering removing that first possibility by forcing all hosts to be NUMA-aware.  15:07
<efried> sorry, removing the *second* possibility  15:07
<stephenfin> I don't think it's possible to force all hosts to be NUMA-aware  15:08
<bauzas> that's the crux of the problem  15:09
<efried> stephenfin: I thought we had this all sussed out. I thought we were going to segregate the cloud into NUMA-aware (placement resources split along NUMA RPs) and non-NUMA-aware (what it looks like today, with all proc/mem on the root RP) hosts  15:09
<efried> And you could only boot flavors with hw:numa*-isms into the former; and you could only boot flavors *without* numa*-isms into the latter.  15:09
<efried> But dansmith has been arguing against that.  15:09
<bauzas> are we able nowadays to express a placement query for a large VM with a NUMA-aware host?  15:09
<bauzas> given the new proposal you made in the etherpad  15:10
<stephenfin> Agree on the first point  15:10
<dansmith> efried: dude, can you let up a bit? I think multiple people are saying it would be nice to not have this restriction, no?  15:10
*** sapd1_x has quit IRC15:10
<stephenfin> Don't recall agreeing to the latter  15:10
<bauzas> like, "I want 8 VCPUs": can it be satisfied with 4 CPUs on each NUMA node?  15:10
<dansmith> efried: I'm not demanding anything, I'm just saying I don't think that expecting to need to segregate the fleet forever is the best long-term plan  15:10
<efried> It would be nice, but (and again I may be misremembering the discussions) I thought we decided to compromise because that would be too hard to do.  15:11
<bauzas> like, could we assume a specific query attribute to placement unless others are expressed?  15:11
<efried> bauzas: no, that doesn't work.  15:11
<bauzas> efried: I'm just challenging this idea  15:11
<efried> bauzas: we've been down that road before -- that was the thing where you would have to ask for individual MB of memory in a zillion granular groups with group_policy=none, remember?  15:12
<bauzas> I see  15:12
<efried> We also proposed can_split to help with that, but abandoned the idea for reasons.  15:13
<bauzas> yeah, I was considering can_split  15:13
<efried> one reason was that it was going to be really hard to make the syntax work properly  15:13
<efried> But the other reason was that we had decided on the above architecture and designed same_subtree etc. to accommodate it.  15:13
<bauzas> like, until we somehow have a placement construct that allows us to 'spread a query across multiple RPs', the option is mandatory :(  15:14
<stephenfin> dansmith: Yeah, no reason this opt-out-of-NUMA behavior couldn't be phased out over multiple releases  15:14
<dansmith> stephenfin: that's specifically what I'm saying  15:14
<efried> stephenfin: so in order to fit my large VM, I would have to specify multiple NUMA nodes?  15:14
<efried> ...in that future release?  15:15
<stephenfin> efried: Yup  15:15
<stephenfin> Or turn off NUMA on your host  15:15
<stephenfin> It's usually tucked away in the BIOS  15:15
<dansmith> from the user's perspective,  15:15
<efried> okay, I didn't know that was even an option. So the driver would report effectively a single NUMA node in that case  15:16
<stephenfin> correct  15:16
<efried> well shit  15:16
<stephenfin> fwiw, this is the same decision we made with the thread policies  15:16
<dansmith> if we had a hw:numa_nodes_min= thing, then they could express in the flavor whether they *need* two numa nodes, or are willing to *tolerate* multiple nodes, the current thing being the upper limit if both specified, right?  15:16
<bauzas> I need to come to a conclusion because of parenting taxi duties  15:16
<dansmith> the fact that we've backed ourselves into a corner with placement and expressivity notwithstanding  15:17
<efried> dansmith: So yeah, I was going to address that. The problem is that we have no way to translate that into placement... yeah.  15:17
<bauzas> A/ we are about to propose a flag for allowing NUMA architecture  15:17
<stephenfin> dansmith: the biggest issue with that is that we'd have to make multiple requests to placement at the moment  15:17
<bauzas> B/ we're not intending to remove this flag in the foreseeable future  15:17
<dansmith> efried: and I'm saying that sucks for the users, and why they're opting into the dumb behavior (whether through nova or bios): because _we_ can't figure out how to organize our own data  15:18
<bauzas> C/ we explicitly ask our operators to turn this flag on to allow them to boot NUMA-aware guests on such hosts  15:18
<dansmith> stephenfin: yep, understand  15:18
<stephenfin> i.e. give me a host that can fit all N instance cores in 1 NUMA node, else give me one that can fit them in 2 nodes, ...  15:18
<efried> stephenfin: but if in 2 nodes, we can split evenly or asymmetrically...  15:18
<bauzas> D/ I state in the alternatives section that this whole plan sucks because we miss placement expressivity  15:18
<bauzas> WFY folks?  15:18
* bauzas needs to go AFK  15:19
<dansmith> efried: right, and we'd want some "minimum split is 70/30" type expression too.. I get that it's not something we can do today, and it will be harder to land that across what are now two separate projects  15:19
<dansmith> I'm just saying, the user sees this as a rock and a hard place, with no real-life justification for it  15:19
* bauzas now drops  15:20
<bauzas> ttyl folks, and I will scroll back  15:20
<dansmith> I guess if you don't care about a specific topology, you want the same percentage of memory on each node as cpus  15:20
<sean-k-mooney> ok, back  15:20
<gibi> efried: thanks for the summary about group_policy, it works for me  15:20
<stephenfin> efried: yeah, correct, otherwise you force people to use those awful 'hw:numa_cpus.{N}=<cpumap>' extra specs  15:20
<stephenfin> dansmith: sure, though it's a bit of a weird one, implementation-wise  15:21
<stephenfin> because placement will give us e.g. a 70/30 split on cores, but we wouldn't really be reflecting this in the pinning of the instance to the host  15:22
*** eharney has joined #openstack-nova15:22
<stephenfin> so we'll have to be careful not to do strict NUMA memory affinity in that case  15:22
<dansmith> yeah  15:22
<dansmith> presumably there's a middle ground between fully constrained and unconstrained,  15:23
<dansmith> where your vcpus are constrained to only run on cores that are on the numa node they represent, right?  15:23
<stephenfin> Correct. That's what happens when you turn on a NUMA topology without pinning at the moment  15:23
<dansmith> you balance between cores looser than pinning, but... right, okay  15:23
<stephenfin> i.e. use 'hw:numa_nodes' or 'hw:mem_page_size'  15:24
<dansmith> that seems fine to me then  15:24
<stephenfin> We "pin" to the whole range of enabled cores from N NUMA nodes  15:24
<stephenfin> rn there's no way to say give me N $resource from adjacent/child providers, right? That's what the 'can_split' thing was supposed to do?  15:24
<dansmith> if placement were able to cough up a topology for 1-2 numa nodes (i.e. I don't care) and then I get vcpus loosely pinned to cores on the right numa node according to the memory split...  15:25
<stephenfin> yeah, ideal  15:25
<dansmith> aye  15:25
<stephenfin> I guess to retain the current behavior, you'd cough up a topology for *all* NUMA nodes on the host  15:26
<stephenfin> but we can't do that, since it would break e.g. a 1-core instance on a 2-node host  15:26
<dansmith> that's why it has to be a range, I think  15:26
<stephenfin> yup  15:26
<sean-k-mooney> you know we can totally allocate memory from multiple host numa nodes and expose it as one guest numa node with qemu, right  15:27
<sean-k-mooney> same with cores  15:27
<dansmith> sure, but that's not helpful  15:27
<efried> isn't that what we do today for a don't-care-about-NUMA guest?  15:27
<efried> and is that the exact flexibility we're talking about getting rid of?  15:28
<dansmith> nobody is asking to be lied to :)  15:28
<sean-k-mooney> dansmith: well, I'm just saying we can always use the resources that correspond to the placement allocation and expose a different virtual numa topology, if we were willing to not require the 1:1 mapping unless you said you cared about numa  15:28
<dansmith> efried: no, that's not flexibility  15:28
<dansmith> efried: nobody is asking for "show me one numa node even though that's not the truth"  15:29
<sean-k-mooney> efried: lie to it, yes  15:29
<mriedem> ignorance is bliss  15:29
<efried> If I ask for a sausage, I'm going to be fine if the sausage is beef or pork.  15:29
<efried> If I ask for a kosher sausage, I'm going to be upset if it's pork.  15:29
<dansmith> sean-k-mooney: gotcha  15:29
<efried> It's not about being lied to. It's about not caring.  15:29
<dansmith> sigh  15:30
<efried> I'm not convinced that everyone cares.  15:30
<dansmith> if we lie to the guest, then the guest *os* is *going* to make bad decisions that don't represent what is actually being offered  15:30
<dansmith> nobody wants that,  15:30
<dansmith> they're opting into that over the more painful "care about this in extreme detail"  15:30
<sean-k-mooney> efried: if you add hw:numa_nodes=1 to a random flavor we normally expect about a 20-30% performance improvement  15:31
<efried> then isn't it the responsibility of the libvirt driver (not the scheduler) to take a generic simple request and make a real numa topo out of it?  15:31
<sean-k-mooney> even though it is still floating over cores and using 4k small pages  15:31
<sean-k-mooney> just because the memory and cpus all come from a single numa node  15:32
<efried> iow if my host is configured monolithically in placement and my VM requests simply VCPU=N,MEMORY_MB=M, we'll place the VM even if (and without knowing) the resources have to be spread across NUMA nodes.  15:33
<efried> Now it's the job of the driver (via the overloaded NTF, presumably?) to carve those N VCPUs and M MEMORY_MBs out of whatever NUMA nodes they're available in, and create the appropriate topo for the guest, no matter how many nodes that happens to be?  15:33
<dansmith> no? the virt driver doesn't have visibility into enough of the (nova) system to make those kinds of decisions, I don't think  15:34
<sean-k-mooney> if we model numa in placement then the rps the allocations come from force the driver to allocate the resources from specific host numa nodes  15:34
<efried> yes, but then we *must* frame the request accordingly.  15:34
<sean-k-mooney> yes  15:35
<efried> that's the whole problem we're trying to avoid  15:35
<sean-k-mooney> the only way to avoid that is to not report numa in placement  15:35
<efried> because, once again, the VM didn't care about the specifics of the NUMA topology. By making it a real one, of whatever shape, we're still conferring the perf advantages to the VM. But we would do that at the host, having decided there are enough resources in total.  15:35
<dansmith> efried: the request is the important part here because we're talking about multiple computers.. the scheduler is looking for something that fits best amongst the options, not "well, we're on this host, how do we best cram this into the hole we have"  15:35
<efried> right, I'm saying from the perspective of the scheduler, any number of fits can be considered "best".  15:36
<sean-k-mooney> right, we use weighers to determine what best is  15:36
<sean-k-mooney> we have discussed the idea of having a weigher based on the allocation candidates in the past  15:36
<dansmith> which is why the scheduler doesn't pick actual resources, it picks hosts, and why, before placement, we got that wrong a *lot*  15:36
<dansmith> which means we reschedule, which is super expensive  15:36
<sean-k-mooney> but that still does not change the fact that the placement query is the important thing to get right  15:37
<dansmith> sean-k-mooney: agreed  15:37
<sean-k-mooney> in the non-numa case, if we had a weigher and we had 1 allocation candidate with 1 numa node and another with 2, we could choose the single numa node  15:37
<sean-k-mooney> but I don't know how to allow that today with the /ac API  15:38
<sean-k-mooney> that is kind of what can_split was meant to solve, but that is not a thing currently  15:38
<sean-k-mooney> in the non-numa case we would just lump everything in the unnumbered group and say you can split the vcpus and ram  15:39
<sean-k-mooney> then weigh by the least number of numa nodes  15:39
<sean-k-mooney> but I don't see that happening anytime soon  15:40
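For reference, sean-k-mooney's suggestion corresponds to a one-line flavor change (flavor name illustrative):

    $ openstack flavor set m1.large --property hw:numa_nodes=1

This only constrains the guest's cpus and memory to come from a single host NUMA node; it implies no pinning and no hugepages.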
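A rough sketch of the kind of granular query the NUMA-in-placement modelling implies (suffixes and amounts are illustrative; same_subtree arrived in placement microversion 1.36, and the exact syntax is in the placement API reference):

    GET /allocation_candidates
        ?resources_NUMA0=VCPU:4,MEMORY_MB:4096
        &resources_NUMA1=VCPU:4,MEMORY_MB:4096
        &same_subtree=_NUMA0,_NUMA1
        &group_policy=isolate

That asks for two guest NUMA nodes, each satisfied by a distinct NUMA RP under the same compute subtree; there is no way to say "give me VCPU:8 split however you like", which is the gap can_split was meant to fill.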
*** jmlowe has quit IRC15:43
*** jaosorior has quit IRC15:43
<efried> Agreed.  15:45
<efried> So, barring the ideal,  15:45
<efried> we agreed on this 80/20 approach  15:45
<sean-k-mooney> with the partitioning of the cloud  15:45
<efried> yes  15:45
<sean-k-mooney> ya, so you know, way way way back before numa and pinning were merged there was a counter-proposal to make them host-wide config options  15:46
<sean-k-mooney> we are slowly getting back to that  15:46
<efried> so that we can have one simple placement query for non-NUMA  15:47
<efried> and one complex strict placement query for NUMA  15:47
<efried> and almost never have to reschedule in either case.  15:47
<sean-k-mooney> yes, although I think at some point we will want to have 1 code path  15:47
<efried> That's the 20  15:47
<sean-k-mooney> yes, but it could also be a refinement of scope  15:48
<efried> a refinement of scope for Ussuri?  15:48
<efried> or adding restrictions in future releases?  15:48
<sean-k-mooney> no. if we say all vms are numa vms in the future, we reduce functionality, as we did with cpu pinning  15:48
<sean-k-mooney> but make this all simpler  15:48
<sean-k-mooney> efried: to model PCPUs in placement we reduced the functionality of the thread policies  15:49
<sean-k-mooney> we could in a future release consider the same here  15:49
*** jmlowe has joined #openstack-nova15:49
<stephenfin> ralonsoh: One more question regarding this comment -> "The same patch also introduced an error when retrieving the network ID. The network ID is stored in a key named 'floating_network_id'"  15:49
<sean-k-mooney> every time we try to do that we end up making no progress at all  15:50
<sean-k-mooney> so I'm saying defer to V+ and take the incremental improvement in U  15:50
<ralonsoh> stephenfin, you don't have the network name, only the network ID, in this key 'floating_network_id'  15:50
<stephenfin> ralonsoh: Ah, so I don't think that's an issue, since I'm building that field myself https://github.com/openstack/nova/blob/master/nova/network/neutron.py#L2593-L2600  15:51
<stephenfin> ralonsoh: My question is, would that make a good extension for neutron (adding 'network_details' to complement 'port_details')?  15:51
<sean-k-mooney> stephenfin: you should probably swap to using .get() for all those field lookups, by the way  15:52
<ralonsoh> stephenfin, ahhhhh ok. BTW, do you really need to make this call? This is expensive and the ID, IMO, is good enough  15:52
<stephenfin> tbf, we only need that information for this one (deprecated) API, but maybe it would be useful for others  15:52
<ralonsoh> stephenfin, unless you specifically need the name  15:52
*** jmlowe has quit IRC15:52
<ralonsoh> this could be an optimization, avoiding this "show_network" call  15:53
<stephenfin> ralonsoh: Unfortunately, yes. It's a deprecated API but we expect to return network names first  15:53
<ralonsoh> stephenfin, okidoki  15:53
<stephenfin> I could probably make it optional for this deprecated API. Let me investigate that  15:53
<stephenfin> ralonsoh: Wait, I lied. I've another question :) The 'alias' field will always be present in the 'GET /v2.0/extensions' response, right? https://docs.openstack.org/api-ref/network/v2/?expanded=list-extensions-detail#id5  15:57
<ralonsoh> stephenfin, let me check  15:57
<stephenfin> We're caching the extension name and I've no idea why, because the alias seems a lot better to use as a reference constant https://github.com/openstack/nova/blob/master/nova/network/neutron.py#L1254-L1255  15:58
*** ociuhandu has quit IRC15:58
*** ociuhandu has joined #openstack-nova15:59
*** udesale has quit IRC16:00
*** tbachman has quit IRC16:02
*** mkrai has quit IRC16:04
*** ociuhandu has quit IRC16:04
<ralonsoh> stephenfin, sorry, it took me a while; this is not a regular DB call  16:05
<ralonsoh> stephenfin, the call returns a list of these  16:05
<ralonsoh> <class 'dict'>: {'name': 'Address scope', 'alias': 'address-scope', 'description': 'Address scopes extension.', 'updated': '2015-07-26T10:00:00-00:00', 'links': []}  16:06
<ralonsoh> (this is one element of the list)  16:06
*** eharney has quit IRC16:06
<stephenfin> okay, so alias will always be there  16:06
<stephenfin> I'll rework that to use aliases so  16:06
<stephenfin> much clearer, IMO  16:06
<ralonsoh> yes, and this is something static  16:06
<ralonsoh> sure, you can consider the alias as a constant  16:06
<ralonsoh> and it's in neutron-lib  16:06
<stephenfin> Bug fix done too. Just fixing up tests  16:06
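A sketch of the alias-based caching being settled on (assumes a python-neutronclient Client instance; the cache variable and helper are illustrative):

    # the extension list is effectively static for a deployment, so cache
    # the aliases once rather than the human-readable names
    _EXTENSION_ALIAS_CACHE = None

    def extension_enabled(client, alias):
        global _EXTENSION_ALIAS_CACHE
        if _EXTENSION_ALIAS_CACHE is None:
            _EXTENSION_ALIAS_CACHE = {
                ext['alias']
                for ext in client.list_extensions()['extensions']}
        return alias in _EXTENSION_ALIAS_CACHE

    # e.g. extension_enabled(client, 'fip-port-details')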
*** lpetrut has quit IRC16:16
*** gyee has joined #openstack-nova16:17
*** Sundar has joined #openstack-nova16:17
*** eharney has joined #openstack-nova16:19
huaqiangstephenfin: for the mixed vCPU instance spec https://review.opendev.org/#/c/668656/, do you have more comments?16:20
huaqiangI know there is no consensus now16:20
huaqiangshould I raise a discussion in Thursday's team meeting?16:21
huaqiangwhich way is better for you?16:21
*** mlavalle has joined #openstack-nova16:22
kashyapstephenfin: That rST thing on line-38 didn't work out, I'm afraid; going to revert to using explicit references (https://review.opendev.org/#/c/693844/5/specs/ussuri/approved/allow-secure-boot-for-qemu-kvm-guests.rst)16:22
*** ociuhandu has joined #openstack-nova16:26
*** slaweq_ has joined #openstack-nova16:30
*** maciejjozefczyk has quit IRC16:32
*** slaweq has quit IRC16:33
*** jamesdenton has joined #openstack-nova16:39
huaqiangwhich way is better for you?16:41
*** ociuhandu has quit IRC16:44
*** ociuhandu has joined #openstack-nova16:45
*** ociuhandu has quit IRC16:46
*** ociuhandu has joined #openstack-nova16:46
stephenfinhuaqiang: Sorry, been working on a bug. I'll try to look at that before the meeting, but it would be good to bring it up anyway16:52
openstackgerritStephen Finucane proposed openstack/nova master: Handle neutron without the fip-port-details extension  https://review.opendev.org/70576016:52
stephenfinralonsoh: ^16:52
ralonsohstephenfin, reviewing it16:52
stephenfinefried, dansmith, bauzas: Apparently this is breaking neutron's gate so we should get this in once it's green16:52
bauzasmmmm ok16:52
stephenfinI ran/fixed a subset of tests that I thought might break16:52
efriedsounds like a review for gibi16:53
stephenfinoh, how'd I forget gibi16:53
gibiack16:54
gibibut in 5 minutes it is beertime here16:54
stephenfinI bet you can get this done in 4 though16:54
efriedheh16:54
stephenfin;)16:54
openstackgerritSasha Andonov proposed openstack/nova master: rbd_utils: increase _destroy_volume timeout  https://review.opendev.org/70576416:55
gibiworking on it ...16:55
gibidone.16:58
gibiin 316:58
gibi:D16:58
*** lbragstad_ is now known as lbragstad16:59
* gibi runs away16:59
*** tesseract has quit IRC17:04
*** luksky has quit IRC17:07
openstackgerritKashyap Chamarthy proposed openstack/nova-specs master: Re-propose "Secure Boot support for KVM & QEMU guests" for Ussuri  https://review.opendev.org/69384417:14
*** nweinber_ has joined #openstack-nova17:17
*** ociuhandu has quit IRC17:20
*** ociuhandu has joined #openstack-nova17:20
*** tbachman has joined #openstack-nova17:22
*** Sundar has quit IRC17:24
*** sandonov has quit IRC17:24
*** ociuhandu_ has joined #openstack-nova17:25
*** rpittau is now known as rpittau|afk17:27
*** ociuhandu has quit IRC17:29
*** ociuhandu_ has quit IRC17:30
*** evrardjp has quit IRC17:33
*** evrardjp has joined #openstack-nova17:34
gmannstephenfin: cmurphy melwitt alex_xu this is ready to re-review now.  - https://review.opendev.org/#/c/701624/17:35
cmurphythanks gmann17:35
*** ociuhandu has joined #openstack-nova17:35
*** davidsha_ has quit IRC17:35
openstackgerritStephen Finucane proposed openstack/nova master: Handle neutron without the fip-port-details extension  https://review.opendev.org/70576017:38
openstackgerritStephen Finucane proposed openstack/nova master: Avoid calling neutron for N networks  https://review.opendev.org/70578417:38
stephenfinralonsoh: ^17:38
ralonsohstephenfin, I'm on it now17:38
*** ociuhandu has quit IRC17:40
*** luksky has joined #openstack-nova17:50
*** martinkennelly has quit IRC17:52
*** igordc has joined #openstack-nova17:53
*** derekh has quit IRC18:00
*** tosky has quit IRC18:01
*** damien_r has quit IRC18:03
openstackgerritVladyslav Drok proposed openstack/nova master: Minor improvements to cell commands  https://review.opendev.org/69805318:09
*** ralonsoh has quit IRC18:15
openstackgerritStephen Finucane proposed openstack/nova master: Use extension aliases, not names  https://review.opendev.org/70579218:22
*** tbachman_ has joined #openstack-nova18:29
*** tbachman has quit IRC18:30
*** tbachman_ is now known as tbachman18:30
gmannmelwitt: this is ready from gate side to change your vote from +1 -> +2 -  https://review.opendev.org/#/c/705135/418:34
gmannyou can see the tests of other-projects-context working as expected in https://review.opendev.org/#/c/705126/718:35
openstackgerritGhanshyam Mann proposed openstack/nova master: Introduce scope_types in os-attach-interfaces  https://review.opendev.org/70579918:49
*** gyee has quit IRC19:00
*** nweinber has quit IRC19:00
*** Anticimex has quit IRC19:00
*** dtantsur|afk has quit IRC19:00
*** gyee has joined #openstack-nova19:02
*** nweinber has joined #openstack-nova19:02
*** Anticimex has joined #openstack-nova19:02
*** dtantsur|afk has joined #openstack-nova19:02
*** dtantsur|afk has quit IRC19:03
*** nweinber has quit IRC19:13
*** dtantsur has joined #openstack-nova19:15
*** vishalmanchanda has quit IRC19:53
*** jmlowe has joined #openstack-nova20:00
*** jmlowe has quit IRC20:05
openstackgerritMykola Yakovliev proposed openstack/nova master: Fix boot_roles in InstanceSystemMetadata  https://review.opendev.org/69804020:24
*** bbowen has quit IRC20:33
*** eharney has quit IRC20:40
*** jamesdenton has quit IRC20:48
*** TxGirlGeek has joined #openstack-nova21:01
*** umbSublime has joined #openstack-nova21:18
*** bbowen has joined #openstack-nova21:27
*** sean-k-mooney has quit IRC21:31
openstackgerritMichael Bayer proposed openstack/nova master: remove DISTINCT ON SQL instruction that does nothing on MySQL  https://review.opendev.org/70585021:38
*** luksky has quit IRC21:41
*** xek has quit IRC21:42
*** eharney has joined #openstack-nova21:45
*** dpawlik has quit IRC22:00
*** spatel has joined #openstack-nova22:02
*** eharney has quit IRC22:04
*** sean-k-mooney has joined #openstack-nova22:05
*** spatel has quit IRC22:06
gmanndoes anyone know a way/hack to skip the deps installation while creating a tox env22:07
gmannthis could be helpful for me to fix <py3.6 - https://github.com/tox-dev/tox/issues/41022:07
gmannstephenfin: ^^22:07
*** sean-k-mooney has quit IRC22:10
*** eharney has joined #openstack-nova22:10
*** sean-k-mooney has joined #openstack-nova22:10
*** sean-k-mooney has quit IRC22:23
*** sean-k-mooney has joined #openstack-nova22:23
*** damien_r has joined #openstack-nova22:28
*** nweinber_ has quit IRC22:28
*** damien_r has quit IRC22:32
openstackgerritMerged openstack/nova master: nova-net: Remove use of legacy 'Network' object  https://review.opendev.org/69715422:42
efriedgmann: Can't you just hack out the relevant lines of the tox.ini section?22:48
efriedor is this something you want to be able to do in a merged patch?22:48
gmannefried: there's the skip_install config value we can set, but I wanted to skip deps at runtime, while tox creates the env22:48
gmannon stable branches, where I want to skip the master upper-constraints dep that is hard-coded in tempest's tox.ini on the master branch22:49
gmannanyway, I did it via an env var to use the stable constraints - https://review.opendev.org/#/c/705089/4/lib/tempest@70322:49
sean-k-mooneythere is a hack where you can just touch, I think, the egg-info file locally or something22:49
sean-k-mooneyi know stephenfin had a way to work around it in the past22:50
efriedYeah, I'm still confused as to whether you're talking about needing to do it locally or in "prod".22:50
gmannokay, that would be good to do instead of a global env var, at least for my local runs on stable22:50
gmannefried: locally, as well as in stable branch job testing22:51
sean-k-mooneyI'm not sure if that will work in this case or not; I just know he used to do something like that to skip installing deps22:52
gmannat runtime? and not via the skip_install config value in tox.ini?22:53
gmannby runtime I mean while running the tox env, not in tox.ini22:54
sean-k-mooneyyes i think he used to do touch <file> && tox -e ...22:55
gmannokay. if that happens before installing deps, then that is what I am looking for.22:57
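For reference, a sketch of the env-var approach gmann landed on; the variable name and default URL follow the common OpenStack pattern but are assumptions here, the real change being in the review linked above:

    [testenv]
    # Illustrative only: let an environment variable override the
    # constraints file, so stable runs can point at a stable branch
    # instead of the master upper constraints hard-coded as the default.
    deps =
        -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
        -r{toxinidir}/requirements.txt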
*** tosky has joined #openstack-nova22:57
*** mriedem has quit IRC22:59
*** slaweq_ has quit IRC23:03
efrieddansmith: Sorry, I got started, but ran out of time. I'll try to get to that patch (cyborg request groups) early tomorrow.23:06
efriedI got to the point where I was starting to remember asking Sundar to make this look more like how we feed bandwidth requests into the RequestSpec, and this doesn't look like it's doing that (at least the way I expected). But I'll dig in more tomorrow.23:07
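Roughly, the bandwidth pattern efried refers to appends one RequestGroup per requester onto the RequestSpec's requested_resources list, which the scheduler translates into numbered placement request groups. A sketch from memory, with the import path, field names, and requester-id scheme all to be treated as assumptions:

    # Illustrative only: mirror the bandwidth approach for Cyborg by adding
    # one request group per device-profile group onto the RequestSpec.
    from nova.objects.request_spec import RequestGroup  # path assumed

    def add_accelerator_groups(request_spec, profile_groups):
        for idx, group in enumerate(profile_groups):
            request_spec.requested_resources.append(
                RequestGroup(
                    requester_id='device_profile_%d' % idx,  # hypothetical
                    resources=group.get('resources', {}),
                    required_traits=group.get('traits', set()),
                ))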
*** eharney has quit IRC23:07
*** nweinber_ has joined #openstack-nova23:07
*** tkajinam has joined #openstack-nova23:09
*** slaweq_ has joined #openstack-nova23:11
dansmithefried: okay, well, glad I waited then but..yeah, would be good if we can get that feedback in the queue so he has some time to address it23:13
*** slaweq_ has quit IRC23:16
*** nweinber_ has quit IRC23:24
openstackgerritMichael Bayer proposed openstack/nova master: remove DISTINCT ON SQL instruction that does nothing on MySQL  https://review.opendev.org/70585023:30
openstackgerritSundar Nadathur proposed openstack/nova master: Enable hard/soft reboot with accelerators.  https://review.opendev.org/69794023:40
openstackgerritSundar Nadathur proposed openstack/nova master: Enable start/stop of instances with accelerators.  https://review.opendev.org/69955323:40
openstackgerritSundar Nadathur proposed openstack/nova master: Enable and use COMPUTE_ACCELERATORS trait.  https://review.opendev.org/69955423:40
openstackgerritSundar Nadathur proposed openstack/nova master: Bump compute rpcapi version and reduce Cyborg calls.  https://review.opendev.org/70422723:40
openstackgerritSundar Nadathur proposed openstack/nova master: Add cyborg tempest job.  https://review.opendev.org/67099923:40
*** tosky has quit IRC23:55
*** igordc has quit IRC23:55
sean-k-mooneyI'm still fleshing out the script and I'll continue working on it tomorrow, but I am still seeing ARQ binding issues23:56
sean-k-mooneythis is what I have so far http://paste.openstack.org/show/789138/23:56
sean-k-mooneythat was the error reported to the user http://paste.openstack.org/show/789139/23:57
