Monday, 2018-12-10

*** rodolof has quit IRC00:07
*** dave-mccowan has quit IRC00:23
*** sapd1_ has joined #openstack-nova00:36
*** sapd1_ has quit IRC00:44
*** brinzhang has joined #openstack-nova00:53
*** dave-mccowan has joined #openstack-nova00:57
*** Nel1x has quit IRC01:02
*** sapd1_ has joined #openstack-nova01:07
*** sapd1_ has quit IRC01:19
*** markvoelker has quit IRC01:21
*** markvoelker has joined #openstack-nova01:22
*** markvoelker has quit IRC01:26
*** tetsuro has joined #openstack-nova01:27
*** hongbin has quit IRC01:45
*** tiendc has joined #openstack-nova01:46
*** sapd1_ has joined #openstack-nova01:50
*** thanhnb has joined #openstack-nova01:54
*** fanzhang has joined #openstack-nova01:54
*** wolverineav has joined #openstack-nova02:02
*** fanzhang has quit IRC02:04
*** lei-zh has joined #openstack-nova02:07
*** mrsoul has quit IRC02:11
*** mschuppert has quit IRC02:11
*** lbragstad has quit IRC02:13
*** Dinesh_Bhor has joined #openstack-nova02:15
*** lbragstad has joined #openstack-nova02:16
*** Dinesh_Bhor has quit IRC02:17
openstackgerrit: Takashi NATSUME proposed openstack/nova master: Add descriptions of numbered resource classes and traits  https://review.openstack.org/621494  02:20
*** bhagyashris has joined #openstack-nova02:21
*** itlinux has quit IRC02:29
*** mhen has quit IRC02:32
*** mhen has joined #openstack-nova02:34
*** Dinesh_Bhor has joined #openstack-nova02:35
*** fanzhang has joined #openstack-nova02:54
*** wxy-xiyuan has joined #openstack-nova02:54
*** alex_xu has joined #openstack-nova02:55
*** wolverineav has quit IRC02:55
*** wolverineav has joined #openstack-nova02:56
*** psachin has joined #openstack-nova02:59
openstackgerrit: Zhenyu Zheng proposed openstack/nova master: WIP detach root volume API changes  https://review.openstack.org/623981  03:05
*** hongbin has joined #openstack-nova03:05
*** Kevin_Zheng has joined #openstack-nova03:09
*** wolverineav has quit IRC03:13
*** lbragstad has quit IRC03:38
*** udesale has joined #openstack-nova03:40
*** wolverineav has joined #openstack-nova03:44
*** Dinesh_Bhor has quit IRC03:46
*** Dinesh_Bhor has joined #openstack-nova03:55
*** hongbin has quit IRC04:00
*** tetsuro has quit IRC04:03
*** wolverineav has quit IRC04:19
*** wolverineav has joined #openstack-nova04:19
*** zzzeek has quit IRC04:41
*** zzzeek has joined #openstack-nova04:41
*** dave-mccowan has quit IRC04:53
*** bhagyashris has quit IRC05:00
*** pooja_jadhav has joined #openstack-nova05:00
*** sapd1_ has quit IRC05:05
*** sapd1_ has joined #openstack-nova05:18
openstackgerrit: Ghanshyam Mann proposed openstack/nova stable/rocky: DNM: For testing multinode and tempest-slow job  https://review.openstack.org/623990  05:21
*** sapd1_ has quit IRC05:23
*** khomesh has joined #openstack-nova05:24
*** wolverineav has quit IRC05:26
*** sridharg has joined #openstack-nova05:27
*** pooja_jadhav has quit IRC05:31
openstackgerrit: Ghanshyam Mann proposed openstack/nova stable/queens: DNM: For testing multinode and tempest-slow job  https://review.openstack.org/623992  05:32
*** wolverineav has joined #openstack-nova05:43
*** wolverineav has quit IRC05:46
*** ratailor has joined #openstack-nova05:56
*** brinzhang_ has joined #openstack-nova06:01
*** sapd1_ has joined #openstack-nova06:03
openstackgerrit: Merged openstack/nova master: Transform scheduler.select_destinations notification  https://review.openstack.org/508506  06:05
brinzhang: #join #oo  06:10
openstackgerrit: Zhenyu Zheng proposed openstack/nova master: Bump compute service to indicate attach/detach root volume is supported  https://review.openstack.org/614750  06:10
openstackgerrit: Zhenyu Zheng proposed openstack/nova master: WIP detach root volume API changes  https://review.openstack.org/623981  06:12
*** brinzhang has quit IRC06:13
*** wolverineav has joined #openstack-nova06:24
*** _alastor_ has joined #openstack-nova06:29
*** takashin has left #openstack-nova06:44
openstackgerrit: Takashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (3)  https://review.openstack.org/574104  06:47
*** sapd1_ has quit IRC06:48
*** sapd__ has joined #openstack-nova06:48
*** moshele has joined #openstack-nova06:53
*** sapd__ has quit IRC06:55
*** Luzi has joined #openstack-nova06:58
*** wolverineav has quit IRC07:00
belmoreira: leakypipes: thanks. I will give you feedback asap  07:04
*** belmoreira has quit IRC07:04
*** alexchadin has joined #openstack-nova07:16
*** belmoreira has joined #openstack-nova07:16
*** belmoreira has quit IRC07:18
*** belmoreira has joined #openstack-nova07:18
*** dpawlik has joined #openstack-nova07:22
*** rcernin has quit IRC07:23
openstackgerrit: Rui Zang proposed openstack/nova-specs master: support virtual persistent memory  https://review.openstack.org/601596  07:23
*** udesale has quit IRC07:24
openstackgerrit: Rui Zang proposed openstack/nova-specs master: Virtual persistent memory libvirt driver implementation  https://review.openstack.org/622893  07:25
*** moshele has quit IRC07:26
*** moshele has joined #openstack-nova07:32
*** ondrejme has joined #openstack-nova07:40
*** slaweq has joined #openstack-nova07:47
*** rodolof has joined #openstack-nova07:49
*** ccamacho has joined #openstack-nova07:55
*** Dinesh_Bhor has quit IRC07:57
*** _alastor_ has quit IRC08:00
*** sahid has joined #openstack-nova08:03
*** trident has quit IRC08:10
*** evrardjp_ is now known as evrardjp08:11
*** tetsuro has joined #openstack-nova08:11
*** helenafm has joined #openstack-nova08:11
*** trident has joined #openstack-nova08:13
*** wolverineav has joined #openstack-nova08:16
*** xek has joined #openstack-nova08:18
*** wolverineav has quit IRC08:20
*** imacdonn has quit IRC08:22
*** bhagyashris has joined #openstack-nova08:22
*** imacdonn has joined #openstack-nova08:22
*** skatsaounis has joined #openstack-nova08:25
*** jonher_ has joined #openstack-nova08:30
*** jonher has quit IRC08:31
*** jonher_ is now known as jonher08:31
*** adrianc has quit IRC08:31
*** adrianc has joined #openstack-nova08:43
openstackgerrit: Zhenyu Zheng proposed openstack/nova master: Bump compute service to indicate attach/detach root volume is supported  https://review.openstack.org/614750  08:45
openstackgerrit: Zhenyu Zheng proposed openstack/nova master: WIP detach root volume API changes  https://review.openstack.org/623981  08:46
*** udesale has joined #openstack-nova08:49
*** bhagyashris has quit IRC08:51
*** ralonsoh has joined #openstack-nova08:53
kaisers: kaisers  08:57
*** bhagyashris has joined #openstack-nova08:59
*** Dinesh_Bhor has joined #openstack-nova09:00
openstackgerrit: Silvan Kaiser proposed openstack/nova master: Exec systemd-run without --user flag in Quobyte driver  https://review.openstack.org/554195  09:00
*** Dinesh_Bhor has quit IRC09:09
openstackgerrit: Balazs Gibizer proposed openstack/nova master: Final release note for versioned notification transformation  https://review.openstack.org/624022  09:14
*** k_mouza has joined #openstack-nova09:15
*** tssurya has joined #openstack-nova09:16
*** leakypipes has quit IRC09:19
openstackgerrit: Lee Yarwood proposed openstack/nova master: compute: Reject migration requests when source is down  https://review.openstack.org/623489  09:28
*** derekh has joined #openstack-nova09:31
*** bhagyashris has quit IRC09:38
openstackgerrit: Balazs Gibizer proposed openstack/nova master: Transfer port.resource_request to the scheduler  https://review.openstack.org/567268  09:44
openstackgerrit: Balazs Gibizer proposed openstack/nova master: Extend RequestGroup object for mapping  https://review.openstack.org/619527  09:44
openstackgerrit: Balazs Gibizer proposed openstack/nova master: Calculate RequestGroup resource provider mapping  https://review.openstack.org/616239  09:44
openstackgerrit: Balazs Gibizer proposed openstack/nova master: Fill the RequestGroup mapping during schedule  https://review.openstack.org/619528  09:44
openstackgerrit: Balazs Gibizer proposed openstack/nova master: Pass resource provider mapping to neutronv2 api  https://review.openstack.org/616240  09:44
openstackgerrit: Balazs Gibizer proposed openstack/nova master: Recalculate request group - RP mapping during re-schedule  https://review.openstack.org/619529  09:44
openstackgerrit: Balazs Gibizer proposed openstack/nova master: Send RP uuid in the port binding  https://review.openstack.org/569459  09:44
openstackgerrit: Balazs Gibizer proposed openstack/nova master: Test boot with more ports with bandwidth request  https://review.openstack.org/573317  09:44
openstackgerrit: Balazs Gibizer proposed openstack/nova master: Reject interface attach with QoS aware port  https://review.openstack.org/570078  09:44
openstackgerrit: Balazs Gibizer proposed openstack/nova master: Reject networks with QoS policy  https://review.openstack.org/570079  09:44
openstackgerrit: Balazs Gibizer proposed openstack/nova master: Remove port allocation during detach  https://review.openstack.org/622421  09:44
openstackgerrit: Balazs Gibizer proposed openstack/nova master: Ensure that allocated PF matches the used PF  https://review.openstack.org/623543  09:44
*** tetsuro has quit IRC09:44
*** k_mouza has quit IRC09:45
*** k_mouza has joined #openstack-nova09:46
*** Dinesh_Bhor has joined #openstack-nova09:46
*** Dinesh_Bhor has quit IRC09:47
*** sapd1_ has joined #openstack-nova09:54
*** lei-zh has quit IRC10:00
openstackgerrit: Merged openstack/nova master: Add ratio online data migration when load compute node  https://review.openstack.org/613499  10:05
*** sapd1_ has quit IRC10:06
openstackgerrit: Merged openstack/nova master: Add compute_node ratio online data migration script  https://review.openstack.org/609995  10:11
openstackgerrit: Merged openstack/nova master: Note the aggregate allocation ratio restriction in scheduler docs  https://review.openstack.org/620713  10:11
*** izza_ has joined #openstack-nova10:17
izza_: hi  10:17
izza_: has anyone here already deployed a volume with an attached (Windows) image in OpenStack TripleO?  10:18
izza_: need help pls  10:18
*** udesale has quit IRC10:18
*** betherly has joined #openstack-nova10:22
*** jarodwl has quit IRC10:25
*** priteau has joined #openstack-nova10:28
openstackgerrit: Merged openstack/nova stable/rocky: Ignore MoxStubout deprecation warnings  https://review.openstack.org/623545  10:30
*** dpawlik has quit IRC10:39
*** cdent has joined #openstack-nova10:41
*** dpawlik has joined #openstack-nova10:45
*** sapd1_ has joined #openstack-nova10:52
*** sapd1_ has quit IRC10:58
*** k_mouza has quit IRC11:09
*** thanhnb has quit IRC11:10
*** udesale has joined #openstack-nova11:11
*** erlon has joined #openstack-nova11:18
*** tiendc has quit IRC11:19
*** izza_ has quit IRC11:21
openstackgerrit: Chris Dent proposed openstack/nova master: Add python 3.7 unit and functional tox jobs  https://review.openstack.org/624055  11:24
*** dtantsur|afk is now known as dtantsur11:25
openstackgerrit: Chris Dent proposed openstack/nova master: Use external placement in functional tests  https://review.openstack.org/617941  11:29
openstackgerrit: Chris Dent proposed openstack/nova master: Delete the placement code  https://review.openstack.org/618215  11:29
*** k_mouza has joined #openstack-nova11:30
*** sapd1_ has joined #openstack-nova11:39
*** sapd1_ has quit IRC11:43
*** tbachman has quit IRC11:46
cdent: thanks for continuing to +2 the placement functional stuff gibi, we'll get it merged one of these days  11:46
gibi: cdent: this morning when I did a git pull I got happy because the pull brought in the deletion of placement-related files. But then I realized it was just the api sample removal, not your patch (yet)  11:48
cdent: :)  11:48
*** brinzhang_ has quit IRC12:20
*** dpawlik has quit IRC12:23
openstackgerrit: Chris Dent proposed openstack/nova master: Add python 3.7 unit and functional tox jobs  https://review.openstack.org/624055  12:23
*** dpawlik has joined #openstack-nova12:23
cdent: dansmith: looks like the multi_cell query job is not happy with python 3.7 ^  12:24
openstackgerrit: Yikun Jiang proposed openstack/nova master: Add live migration timeout action  https://review.openstack.org/619143  12:24
openstackgerrit: Yikun Jiang proposed openstack/nova master: Remove live_migration_progress_timeout config  https://review.openstack.org/619142  12:24
*** ratailor has quit IRC12:40
*** tssurya has quit IRC12:41
*** pchavva has joined #openstack-nova12:56
*** jistr is now known as jistr|medchk12:57
*** k_mouza has quit IRC13:02
*** sahid has quit IRC13:05
*** tbachman has joined #openstack-nova13:05
*** psachin has quit IRC13:05
*** tbachman has quit IRC13:10
*** dave-mccowan has joined #openstack-nova13:12
*** tbachman has joined #openstack-nova13:12
*** dpawlik has quit IRC13:17
openstackgerrit: Balazs Gibizer proposed openstack/nova master: Ensure that allocated PF matches the used PF  https://review.openstack.org/623543  13:20
openstackgerrit: Balazs Gibizer proposed openstack/nova master: Refactor PortResourceRequestBasedSchedulingTestBase  https://review.openstack.org/624080  13:20
*** k_mouza has joined #openstack-nova13:20
*** dpawlik has joined #openstack-nova13:20
*** priteau has quit IRC13:23
*** k_mouza has quit IRC13:26
*** k_mouza has joined #openstack-nova13:27
*** ohorecny2 has joined #openstack-nova13:28
*** k_mouza_ has joined #openstack-nova13:29
*** dpawlik has quit IRC13:30
*** k_mouza has quit IRC13:32
*** sapd1_ has joined #openstack-nova13:32
*** dpawlik has joined #openstack-nova13:32
*** sahid has joined #openstack-nova13:35
*** davidsha_ has joined #openstack-nova13:35
*** sapd1_ has quit IRC13:36
*** jmlowe has quit IRC13:45
*** takashin has joined #openstack-nova13:45
*** owalsh_ has joined #openstack-nova13:47
*** owalsh has quit IRC13:50
*** priteau has joined #openstack-nova13:50
*** owalsh has joined #openstack-nova13:50
*** owalsh_ has quit IRC13:52
*** jistr|medchk is now known as jistr13:56
*** s10 has joined #openstack-nova13:56
*** tetsuro has joined #openstack-nova13:57
*** mriedem has joined #openstack-nova14:00
openstackgerrit: Lee Yarwood proposed openstack/nova master: libvirt: Add workaround to cleanup instance dir during evac with rbd  https://review.openstack.org/618478  14:02
lyarwood: melwitt: ^ when you're around, would you mind taking another look at that? I honestly can't see a way for cleanup to be called erroneously on the source host during an evacuation failure.  14:17
*** lbragstad has joined #openstack-nova14:17
*** openstackstatus has joined #openstack-nova14:18
*** ChanServ sets mode: +v openstackstatus14:18
*** takashin has quit IRC14:20
*** jding1_ has quit IRC14:22
*** k_mouza has joined #openstack-nova14:22
*** jackding has joined #openstack-nova14:22
*** jmlowe has joined #openstack-nova14:24
*** jmlowe has quit IRC14:24
*** k_mouza_ has quit IRC14:25
*** jmlowe has joined #openstack-nova14:25
*** aspiers has quit IRC14:26
*** tetsuro has quit IRC14:27
*** s10 has quit IRC14:28
*** takashin has joined #openstack-nova14:28
*** jmlowe has quit IRC14:30
*** takashin has left #openstack-nova14:32
*** awaugama has joined #openstack-nova14:34
*** jmlowe has joined #openstack-nova14:35
*** mlavalle has joined #openstack-nova14:37
*** alexchadin has quit IRC14:42
sean-k-mooney: bauzas: melwitt: could one of ye approve https://review.openstack.org/#/c/618239/ as release liaison/ptl when ye get a chance.  14:42
sean-k-mooney: the main change is fixing the os-vif side of https://bugs.launchpad.net/neutron/+bug/1734320 + https://bugs.launchpad.net/os-vif/+bug/1801072  14:43
openstack: Launchpad bug 1734320 in os-vif "Eavesdropping private traffic" [High,In progress] - Assigned to sean mooney (sean-k-mooney)  14:43
openstack: Launchpad bug 1801072 in os-vif "vif_plug_ovs.linux_net.delete_net_dev is called outside the privsep context" [Critical,Fix released] - Assigned to sean mooney (sean-k-mooney)  14:43
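[editor's note] The second bug above is about vif_plug_ovs.linux_net.delete_net_dev being called outside the privsep context. A toy stand-in for that failure mode (all names here are mine; real oslo.privsep forks a separate privileged daemon rather than flipping a flag):

```python
import functools

PRIVILEGED = False  # flipped only while running "inside" the privileged context


def entrypoint(func):
    """Toy version of a privsep entrypoint: run func in the privileged context."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        global PRIVILEGED
        PRIVILEGED, prev = True, PRIVILEGED
        try:
            return func(*args, **kwargs)
        finally:
            PRIVILEGED = prev  # drop privileges again afterwards
    return wrapper


def delete_net_dev(dev):
    # the bug class: calling this helper directly runs unprivileged and fails
    if not PRIVILEGED:
        raise PermissionError("must run under the privsep context")
    return f"deleted {dev}"


@entrypoint
def delete_net_dev_privileged(dev):
    # the fix: only reach the helper through a privsep-decorated entrypoint
    return delete_net_dev(dev)
```

Calling delete_net_dev directly raises, while delete_net_dev_privileged succeeds, which is the shape of the fix described in the bug.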
*** sapd1_ has joined #openstack-nova14:51
*** aspiers has joined #openstack-nova14:53
*** KeithMnemonic has joined #openstack-nova14:54
KeithMnemonic: Is it possible to please get some reviews on this patch? https://review.openstack.org/#/c/573066/  14:55
*** mvkr has quit IRC14:55
sean-k-mooney: KeithMnemonic: you realise that libvirt/qemu has a limit on how many volumes you can attach to an instance too  14:58
*** tbachman has quit IRC14:58
KeithMnemonic: yes, there is a pci limit i thought  14:58
KeithMnemonic: is that 26?  14:58
sean-k-mooney: i think it's 20, not 26  14:58
KeithMnemonic: it is more than 20 i am pretty sure, problem is this bug is still open https://bugs.launchpad.net/nova/+bug/1770527 so we have a customer asking for it  14:59
openstack: Launchpad bug 1770527 in OpenStack Compute (nova) "openstack server add volume fails over 26vols" [Wishlist,In progress] - Assigned to Tsuyoshi Nagata (yukari-papa)  14:59
sean-k-mooney: the 26 limit on device names is obviously just because there are 26 ascii/english letters  14:59
sean-k-mooney: KeithMnemonic: it may depend on the qemu/libvirt version  15:00
KeithMnemonic: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/virtualization/sect-virtualization-virtualization_limitations-kvm_limitations  15:00
KeithMnemonic: Hence, of the 32 available PCI devices for a guest, 4 are not removable. This means there are 28 PCI slots available for additional devices per guest. Every para-virtualized network or block device uses one slot. Each guest can use up to 28 additional devices made up of any combination of para-virtualized network, para-virtualized disk devices, or other PCI devices using VT-d.  15:01
*** cfriesen has joined #openstack-nova15:02
sean-k-mooney: does the 4 include the vnc/spice resources?  15:03
sean-k-mooney: you can use virtio-scsi too, which may allow multiple volumes per pci device  15:04
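[editor's note] On the 26-name limit: single-letter device names run out at vdz, after which naming continues with two letters (vdaa, vdab, ...). A sketch of that bijective base-26 naming scheme (function name is mine, not Nova's exact helper):

```python
def disk_dev_name(index, prefix="vd"):
    """Bijective base-26 device suffix: 0 -> vda, 25 -> vdz, 26 -> vdaa, ..."""
    letters = ""
    index += 1  # shift to 1-based for bijective numeration (no "zero" letter)
    while index > 0:
        index, rem = divmod(index - 1, 26)
        letters = chr(ord("a") + rem) + letters
    return prefix + letters
```

The 27th volume (index 26) gets "vdaa", so the letter supply itself is not the hard cap; the PCI-slot budget discussed above is.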
*** cfriesen has quit IRC15:08
*** priteau has quit IRC15:10
*** KeithMnemonic has quit IRC15:10
*** Luzi has quit IRC15:18
mriedem: lyarwood: surprise surprise, multiattach swap volume across 2 hosts is broken  15:18
lyarwood: mriedem: wasn't mdbooth looking at blocking all attempts to swap/migrate multiattach volumes?  15:21
lyarwood: he isn't around today btw  15:21
mriedem: he was, for at least the cases of the multiattach volume having >1 read/write attachment  15:22
mriedem: https://review.openstack.org/#/c/572790/  15:22
mriedem: but this isn't that issue,  15:22
*** moshele has quit IRC15:22
mriedem: the swap volume code in compute updates the bdm record for the "old" volume and changes the volume_id to the "new" volume  15:22
openstackgerrit: Merged openstack/nova master: Add docs for (initial) allocation ratio configuration  https://review.openstack.org/622588  15:22
mriedem: essentially orphaning the old volume, so we don't clean up properly  15:22
mriedem: this https://github.com/openstack/nova/blob/ae3064b7a820ea02f7fc8a1aa4a41f35a06534f1/nova/compute/manager.py#L5798-L5806  15:23
mriedem: the bdm that gets updated is the old bdm (source volume), but save_volume_id is the new volume  15:24
*** tbachman has joined #openstack-nova15:24
mriedem: we're also clearly wrongly updating the old bdm.connection_info with the new vol connection_info,  15:24
mriedem: for new volume attach flows that doesn't matter, as we don't use the bdm.connection_info, but it's still wrong  15:24
*** mvkr has joined #openstack-nova15:25
mriedem: looks like that code assumes we did a cinder-induced retype/migration, "# correct volume_id returned by Cinder."  15:25
mriedem: but that's not the case here  15:25
mriedem: if only we had that volume_id unique constraint in the bdms table :)  15:27
mriedem: the bdm update would blow up hard  15:27
lyarwood: if only. so, Matt also had a plan to block the direct use of this API FWIW at some point recently  15:28
lyarwood: so only allow it to be used via the volume migration API  15:28
*** wwriverrat has joined #openstack-nova15:29
lyarwood: but this still sounds valid, do we have a launchpad bug for this?  15:29
mriedem: not yet, i'm debugging the failures in the tempest-slow job here https://review.openstack.org/#/c/606981/ - i'll be dumping notes in a launchpad bug  15:29
* lyarwood is up to his eyeballs in puppet, TripleO yaml files and Brexit chaos but can look later today  15:30
mriedem: i'm not sure how the compute api would block swap volume unless we checked the volume status to see if it's either 'retyping' or 'migrating' and failed otherwise  15:30
mriedem: and whether we'd restrict that based on a policy check  15:31
mriedem: because swap volume is admin-only today already  15:31
lyarwood: yeah, not sure what he had in mind tbh  15:32
*** rodolof has quit IRC15:35
*** rodolof has joined #openstack-nova15:35
mriedem: https://bugs.launchpad.net/nova/+bug/1807723  15:43
openstack: Launchpad bug 1807723 in OpenStack Compute (nova) "swap multiattach volume intermittently fails when servers are on different hosts" [Medium,Confirmed]  15:43
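[editor's note] A toy model of the BDM update mriedem describes above (plain dicts standing in for Nova's BlockDeviceMapping rows; the helper name is mine), showing how the old volume ends up orphaned:

```python
# One BDM row, currently pointing at the old volume.
bdms = [{"id": 1, "volume_id": "old-vol", "connection_info": "conn-old"}]


def swap_volume_update(bdms, old_volume_id, new_volume_id, new_connection_info):
    """Mimic the compute-side swap: rewrite the OLD volume's BDM in place."""
    for bdm in bdms:
        if bdm["volume_id"] == old_volume_id:
            bdm["volume_id"] = new_volume_id
            bdm["connection_info"] = new_connection_info
    return bdms


swap_volume_update(bdms, "old-vol", "new-vol", "conn-new")

# After the update no BDM references "old-vol" any more, so the old
# volume's attachment can never be found and cleaned up.
orphaned = not any(b["volume_id"] == "old-vol" for b in bdms)
```

With a unique constraint on volume_id (as mriedem wishes for), a second row for "new-vol" would make this in-place rewrite blow up instead of silently orphaning the old record.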
*** itlinux has joined #openstack-nova15:44
kashyap: lyarwood: What about Bregret?  15:46
*** jmlowe has quit IRC15:46
*** mmethot has quit IRC15:46
kashyap: lyarwood: I mean, what are you "dealing" with? Aren't you just supposed to "suffer the consequences", or do you have the power to "do something"? :D  15:46
*** itlinux has quit IRC15:46
kashyap: lyarwood: Oh ... disregard me; just "caught up" with the Tories.  15:47
stephenfin: jangutter: Reviewed https://review.openstack.org/#/c/607610/. Looks good to me, for the most part, the few updates suggested by others aside  15:47
stephenfin: jangutter: If you rework today, let me know and I'll swing by again  15:48
jangutter: stephenfin: thanks, respinning!  15:48
jangutter: sean-k-mooney: any objection to choosing "Option 1" in the spec now?  16:00
sean-k-mooney: jangutter: i have not looked in a while, but didn't i say that in a previous version?  16:00
sean-k-mooney: ah, it's still at the same version, so sure  16:01
jangutter: sean-k-mooney: yeah, doing a respin now, just wanted to make doubly sure. I mean, I sold a _lot_ of tickets to "Sean vs Jay's punch-the-ginger contest".  16:02
*** maciejjozefczyk has quit IRC16:03
sean-k-mooney: jangutter: everyone knows you don't punch ginger people in case it's contagious :P  16:03
jangutter: sean-k-mooney: that's why boxing gloves were developed to be so thick.  16:04
sean-k-mooney: if you want to leave it till later to decide we can, but you already have patches for option 1  16:05
sean-k-mooney: my main concern was serialisation size  16:05
sean-k-mooney: we can address that at a later date in other ways  16:06
*** macza has joined #openstack-nova16:08
openstackgerrit: Jack Ding proposed openstack/nova-specs master: Select cpu model from a list of cpu models  https://review.openstack.org/620959  16:11
*** lpetrut has joined #openstack-nova16:12
openstackgerrit: Ben Nemec proposed openstack/nova master: Migrate upgrade checks to oslo.upgradecheck  https://review.openstack.org/603499  16:14
*** mmethot has joined #openstack-nova16:17
*** tbachman has quit IRC16:17
*** munimeha1 has joined #openstack-nova16:21
*** mmethot has quit IRC16:24
*** jmlowe has joined #openstack-nova16:24
*** mmethot has joined #openstack-nova16:26
*** jmlowe has quit IRC16:34
*** openstackgerrit has quit IRC16:35
*** openstackgerrit has joined #openstack-nova16:40
openstackgerrit: Jan Gutter proposed openstack/nova-specs master: Spec to implement os-vif generic datapath offloads  https://review.openstack.org/607610  16:40
*** psachin has joined #openstack-nova16:42
*** gyee has joined #openstack-nova16:47
*** k_mouza_ has joined #openstack-nova16:50
*** tbachman has joined #openstack-nova16:51
*** helenafm has quit IRC16:52
*** ccamacho has quit IRC16:52
*** k_mouza has quit IRC16:53
*** k_mouza_ has quit IRC16:55
*** itlinux has joined #openstack-nova16:56
melwitt: o/  16:58
*** udesale has quit IRC16:58
sean-k-mooney: melwitt: o/  16:59
*** spatel has joined #openstack-nova17:00
sean-k-mooney: johnthetubaguy: o/ care to take another look at https://review.openstack.org/#/c/591607/11  17:02
*** rodolof has quit IRC17:02
*** rodolof has joined #openstack-nova17:02
*** igordc has joined #openstack-nova17:04
*** ohorecny2 has quit IRC17:07
*** rodolof has quit IRC17:14
*** munimeha1 has quit IRC17:14
*** rodolof has joined #openstack-nova17:15
*** _alastor_ has joined #openstack-nova17:17
*** khomesh has quit IRC17:18
melwitt: mriedem: I was thinking we should cancel the Dec 20 nova meeting because efried_cya_jan is out and I think dansmith is out that day too. and I was considering taking the day off as well  17:20
*** sahid has quit IRC17:21
*** dtantsur is now known as dtantsur|afk17:21
mriedem: shrug  17:21
mriedem: i'll be around  17:21
mriedem: i can run it if needed  17:21
dansmith: I won't be  17:22
*** s10 has joined #openstack-nova17:23
melwitt: mriedem: ok, if you are able to run it then that works too. I was thinking there might be a lack of quorum  17:25
s10: Please review backports for https://review.openstack.org/#/q/topic:bug/1806064  17:26
sean-k-mooney: lyarwood: ^ that's probably for you  17:28
*** derekh has quit IRC17:29
*** rodolof has quit IRC17:32
*** rodolof has joined #openstack-nova17:33
*** moshele has joined #openstack-nova17:34
jangutter: mriedem: would you be able to take a look at https://review.openstack.org/#/c/607610/ in your copious free time...? Jay's already +2'ed the previous revision.  17:43
mriedem: uhh  17:44
sean-k-mooney: by the way, was there a proposal to do another spec review day tomorrow or was i imagining that?  17:48
melwitt: sean-k-mooney: I suggested it and we discussed it in the nova meeting and decided to give it a miss. I replied on the ML accordingly  17:49
sean-k-mooney: melwitt: ah ok  17:49
*** amodi has joined #openstack-nova17:51
*** moshele has quit IRC17:53
openstackgerrit: Surya Seetharaman proposed openstack/python-novaclient master: API microversion 2.68: Handles Down Cells  https://review.openstack.org/579563  17:53
mriedem: lbragstad: have you seen this? http://logs.openstack.org/81/606981/4/check/tempest-slow/fafde23/controller/logs/screen-n-api.txt.gz#_Dec_08_01_45_42_745709  17:53
* lbragstad waits for logs to load  17:55
mriedem: seems like something in an api-paste middleware  17:55
mriedem: b/c it's logged before we call the actual controller method to handle that request  17:55
mriedem: oh, i bet i know what it is  17:57
lbragstad: it's related to https://review.openstack.org/#/c/619260/3/oslo_policy/policy.py  17:57
mriedem: nova's RequestContext calls check_is_admin during init  17:57
mriedem: which is for every request  17:58
lbragstad: does that call enforce()?  17:58
mriedem: calls return _ENFORCER.authorize('context_is_admin', target, credentials)  17:59
mriedem: which calls enforce  17:59
lbragstad: ah - it does  17:59
lbragstad: right  18:00
lbragstad: makes sense  18:00
lbragstad: oh - weird, target is being overloaded as credentials?  18:00
lbragstad: i suppose, is it too early in the pipeline for nova to know what the actual target is?  18:00
mriedem: yeah  18:01
lbragstad: huh  18:01
lbragstad: interesting  18:01
mriedem: remember this? https://review.openstack.org/#/c/564349/  18:01
lbragstad: oh - yeah... kinda  18:03
*** ralonsoh has quit IRC18:04
lbragstad: looks like that would still address the issue  18:04
mriedem: this is where we get _DeprecatedPolicyValues https://github.com/openstack/oslo.context/blob/0daf01065d1d51694e06aaecb3dcf4dcc78710fe/oslo_context/context.py#L318  18:05
mriedem: passing that in as the target  18:05
lbragstad: right  18:05
lbragstad: you should be able to pass context objects as credentials  18:05
lbragstad: but we don't really link anything about target to context  18:05
lbragstad: at least in oslo.policy  18:06
mriedem: seems we should just pass target={}  18:06
mriedem: since we don't know  18:06
lbragstad: (we wanted to use context objects as a replacement for a dictionary called credentials since it makes things easier for services using oslo.policy - instead of just assuming service developers know what to put in credentials or how the internals of oslo.policy parse it)  18:07
lbragstad: and it relays a bunch of useful information about the user making the request - which is what credentials was trying to achieve anyway  18:07
lbragstad: but yeah - if you don't know anything about the target, an empty dictionary would seem safer than overloading target as a set of context attributes  18:08
*** sridharg has quit IRC18:10
lbragstad: for example - in keystone we protect APIs like this https://github.com/openstack/keystone/blob/master/keystone/api/users.py#L137  18:12
*** jmlowe has joined #openstack-nova18:12
lbragstad: we look at the context object associated with the request and generate "credentials" https://github.com/openstack/keystone/blob/master/keystone/common/rbac_enforcer/enforcer.py#L380  18:12
lbragstad: and then pass that to oslo.policy https://github.com/openstack/keystone/blob/master/keystone/common/rbac_enforcer/enforcer.py#L81-L82  18:12
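[editor's note] A toy approximation of the target-vs-credentials split discussed above (this is not oslo.policy's actual rule engine; the function and rule names are illustrative only). It shows why, when the real target is unknown early in the pipeline, an empty dict is safer than overloading target with context attributes:

```python
def authorize(rule, target, credentials, rules):
    """Evaluate a 'key:value' rule against credentials, substituting
    %(attr)s placeholders from target -- a rough caricature of the
    oslo.policy enforcement flow."""
    key, _, fmt = rules[rule].partition(":")
    expected = fmt % target  # target only supplies substitution values
    return credentials.get(key) == expected


rules = {"context_is_admin": "role:admin"}
creds = {"role": "admin", "user_id": "u1", "project_id": "p1"}

# Early in the request pipeline nova does not yet know the real target,
# so pass an empty dict rather than a copy of the context attributes:
is_admin = authorize("context_is_admin", {}, creds, rules)
```

The credentials dict carries who is making the request; the target describes what the request acts on, and here there simply isn't one yet.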
*** moshele has joined #openstack-nova18:14
openstackgerrit: Dan Smith proposed openstack/nova master: Make compute rpcapi version calculation check all cells  https://review.openstack.org/623284  18:18
openstackgerrit: Dan Smith proposed openstack/nova master: Make service.get_minimum_version_all_cells() cache the results  https://review.openstack.org/623283  18:18
openstackgerrit: Georg Hoesch proposed openstack/nova master: refactor get_console_output() for console logfiles  https://review.openstack.org/575735  18:19
*** wolverineav has joined #openstack-nova18:20
*** psachin has quit IRC18:23
*** sapd1_ has quit IRC18:24
*** wolverineav has quit IRC18:24
*** dtrainor has joined #openstack-nova18:36
*** davidsha_ has quit IRC18:36
dtrainor: Howdy. I'm trying to troubleshoot a failed deployment that ultimately results in "Message: No valid host was found. There are not enough hosts available., Code: 500". Digging deeper, it looks like RamFilter is returning 0 hosts because the hosts appear to report a value of 0 MB RAM http://paste.openstack.org/show/736921/  18:38
etp: Do max_placement_results and host aggregates/AggregateInstanceExtraSpecsFilter work together, or am I looking at a configuration error? Say a cloud has 100 hypervisors, max_placement_results = 10, two host aggregates of 50 hypervisors each, and flavor extra specs matching the latter aggregate with hypervisors 51-100  18:38
dtrainor: the nodes get properly introspected. the introspection data is good. there may be a bug in the Ravello hypervisor that this is being deployed on, which has been known to return 0MB if the host BIOS returned a negative number (read something about an overcommit ratio sometimes causing this)  18:38
*** wolverineav has joined #openstack-nova18:41
*** tssurya has joined #openstack-nova18:41
dtrainor: I'm using all default values for this downstream queens deployment. The flavors show a minimum requirement of 4096MB; that's the only place I can find this number occurring.  18:42
*** cfriesen has joined #openstack-nova18:43
*** wolverineav has quit IRC18:44
*** wolverineav has joined #openstack-nova18:47
dansmith: etp: 10 is small enough that you're likely to not get back any hosts in the aggregate you're looking for, yeah  18:50
dansmith: etp: unless you're enabling a request filter in nova that helps to ensure that you do  18:50
dansmith: which is currently az or tenant filtering  18:50
*** sapd1_ has joined #openstack-nova18:51
*** tssurya has quit IRC18:53
TheJulia: Hopefully someone can help jog my memory... A long time ago I remember there was a discussion of how nova represents a baremetal node being torn down. I believe the state then and now is that the instance is generally "ACTIVE" until the node is actually deleted, and I think the motivation behind that was resource-accounting-wise. Does anyone remember this discussion (it would have been 2+ years ago I think)?  18:53
*** sapd1_ has quit IRC18:55
dansmith: TheJulia: the instance goes away immediately, AFAIK, as that's the way delete is supposed to work, but nova tries to make the node not schedule-able until the cleaning is done  18:57
*** sapd1_ has joined #openstack-nova18:58
mriedem: dtrainor: you shouldn't need the RamFilter in queens  18:58
TheJulia: I've got a QE person reporting that they are seeing it sit in "ACTIVE" until cleaning is over  18:58
dansmith: the strategy we proposed for doing this betterly was to set the reserved amount of resource equal to the total for a node, so that it appeared to not have any room until after its cleaning  18:58
mriedem: dtrainor: the placement service does the filtering on MEMORY_MB  18:58
TheJulia: and I thought there was some reason for that, just struggling to decompress those memories  18:58
dansmith: TheJulia: okay, I could be wrong, but I don't think that's what is supposed to happen  18:58
etp: dansmith: tnx, we have only one az atm and the aggregate should be for all tenants, so I think the request filter doesn't help here, I'll revert the config back to default  18:59
mriedem: dtrainor: you're scheduling to baremetal nodes there?  18:59
mriedem: dtrainor: if those are baremetal nodes, you should read up on https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html  19:00
mriedem: dtrainor: i guess the Core/Ram/Disk filters weren't deprecated until stein, but they should have been since pike i think https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#ramfilter  19:01
dansmithTheJulia:             self.ironicclient.call("node.set_provision_state", node.uuid,19:01
dansmith                                   "deleted")19:01
dansmithTheJulia: does that take a long time?19:02
TheJuliadansmith: that can take hours upon hours depending on the hardware and the operator configuration19:02
dansmithTheJulia: yeah, looks like that has a poll loop after it19:02
TheJuliadansmith: to be precise, that should return immediately19:02
TheJuliabut in the background it runs for a long time19:02
dtrainorsorry, i'm back.  mriedem, I'm familiar with that doc, yep.  I'm familiar with how it's scheduled and can even make this deployment work on different platforms, e.g. on top of different hardware.  that's what leaves me puzzled.19:02
*** wolverineav has quit IRC19:03
*** wolverineav has joined #openstack-nova19:03
dansmithTheJulia: yeah, so I guess it does have to go through a full cleaning before the instance appears to actually go away19:03
dtrainorVCPU, MEMORY_MB, and DISK_GB are all set to 0 to disable scheduling based on properties for a bare metal flavor.  i am using bare metal nodes with the intent of using the default flavor properties such as these19:03
dansmithTheJulia: but the instance shouldn't be just ACTIVE, it should be ACTIVE with a task-state of "deleting" at least19:04
TheJuliamriedem: I'm fairly sure somewhere there was a statement saying that it would no longer work and was deprecated, but that was a long time ago since we kept compatibility for a while19:04
TheJuliadansmith: that is what it reports now afaik19:04
mriedemdtrainor: ok, well i can tell you the IronicHostManager, baremetal filters and core/ram/disk filters have all been deprecated and can cause issues when trying to schedule to baremetal nodes19:06
mriedemso you shouldn't be using those19:06
mriedemand should be scheduling based on custom resource classes19:06
dtrainorright, that was my understanding19:07
dtrainorall the nodes are created by default with a resource_class of 'baremetal', which my understanding is, this should match a flavor based on a flavor's properties of e.g. resources:CUSTOM_BAREMETAL='1'19:08
dtrainoragain, these all being defaults19:08
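The default setup dtrainor describes matches the flavor extra specs prescribed by the ironic install docs linked above: zero out the standard resource classes and request one unit of the node's custom class. A sketch using a plain dict:

```python
# Extra specs from the ironic "configure flavors" docs discussed above:
# the standard classes are zeroed so placement ignores them, and one
# unit of the node's custom resource class is requested instead.
bm_flavor_extra_specs = {
    "resources:VCPU": "0",
    "resources:MEMORY_MB": "0",
    "resources:DISK_GB": "0",
    "resources:CUSTOM_BAREMETAL": "1",
}

# Only the custom class should carry a non-zero request:
requested = {k: v for k, v in bm_flavor_extra_specs.items() if v != "0"}
print(requested)  # {'resources:CUSTOM_BAREMETAL': '1'}
```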
*** igordc has quit IRC19:08
*** mvkr has quit IRC19:09
*** wolverineav has quit IRC19:09
*** igordc has joined #openstack-nova19:09
dtrainori have nova-compute and nova-scheduler log files, I don't want to ask for too much but maybe another set of eyes would help me find something that I'm not able to see?19:10
TheJuliadansmith: do you know, off the top of your head, if nova only represents a deletion in progress in the task_status field?19:10
TheJuliadansmith: I mean, in documentation19:10
* TheJulia finds nova's vm state diagram and lists19:11
*** wolverineav has joined #openstack-nova19:11
dansmithTheJulia: from the outside, that's the indication the user has yeah19:16
dansmithTheJulia: normally it happens fairly quick of course19:16
TheJuliayeah19:16
*** N3l1x has joined #openstack-nova19:16
mriedemdtrainor: paste your nova.conf for the scheduler19:18
mriedemdtrainor: in addition, are the cpu/ram/disk values on the flavor overridden to report 0?19:19
mriedem119:19
mriedemsee, "Another set of flavor properties must be used to disable scheduling based on standard properties for a bare metal flavor:" from https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html19:20
dtrainormy scheduler section is pretty boring by default with only one change http://paste.openstack.org/show/736924/ reading...19:24
dtrainoryes, the values are set to 0, as this is a default https://github.com/openstack/tripleo-heat-templates/blob/stable/queens/extraconfig/post_deploy/undercloud_post.sh#L12419:25
cdentmriedem or yikun_ : https://github.com/openstack/nova/blob/master/nova/objects/compute_node.py#L213 has trouble when the compute node is 'deleted' and the context hasn't diddled read_deleted19:27
mriedemwhere are we reading deleting compute_nodes?19:27
mriedem*deleted19:27
dtrainorso by default, a node's resource class is not uppercase, which sounds like it's required per https://docs.openstack.org/ironic/queens/install/configure-nova-flavors.html .  I just changed the resource class of a node from 'baremetal' to 'BAREMETAL', I'll try a deployment with that.19:28
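The uppercasing dtrainor ran into follows the custom resource class naming convention in the docs linked above: a node's `resource_class` maps to a `CUSTOM_*` placement class. A hedged reimplementation of that mapping (nova has its own normalization code; this sketch only mirrors the documented convention):

```python
import re

def normalize_resource_class(name):
    """Sketch of the documented convention: uppercase the node's
    resource_class, turn non-alphanumerics into underscores, and add a
    CUSTOM_ prefix. Not the literal nova implementation."""
    return "CUSTOM_" + re.sub(r"[^A-Z0-9]", "_", name.upper())

print(normalize_resource_class("baremetal"))     # CUSTOM_BAREMETAL
print(normalize_resource_class("gold.compute"))  # CUSTOM_GOLD_COMPUTE
```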
cdentthis is showing up in the status test that I'm moving for placement fixture: line 265 here https://review.openstack.org/#/c/617941/28/nova/tests/functional/test_nova_status.py19:28
cdentcn2.create() is of a node that is deleted (so that it can be "not seen" during the test)19:28
dtrainorI think the thing that bothers me the most is that this deployment succeeds on other sets of hardware, i'm starting to suspect a bug on the platform on which this is deployed but i can't prove it19:28
cdentI can diddle the context, but wanted to be sure that was the right things to do19:28
mriedemcdent: yeah you didn't copy part of that change from yikun19:28
* cdent shakes fist at merges19:29
mriedemcdent: https://review.openstack.org/#/c/613499/16/nova/tests/unit/cmd/test_status.py19:29
cdentthanks19:29
mriedemdtrainor: hmm i'm not sure how you have the RamFilter enabled then19:29
cdentlatency is the everything killer19:30
mriedemdtrainor: especially since we removed that from the default enabled filters list in pike https://review.openstack.org/#/c/491854/19:30
mriedemalthough i see you're using the ironic_host_manager,19:31
mriedemwhich is also deprecated and you shouldn't need19:31
dtrainorlike i said earlier, I'm using a downstream release of queens, so i think that may be adding to the confusion19:31
dtrainorright19:31
dtrainorfeels like queens was a weird transition state to deprecate some scheduling bits in place of some more modern techniques, which may very well lead to me doing something Wrong(TM)19:32
mriedemhttps://github.com/openstack/nova/blob/stable/queens/nova/conf/scheduler.py#L30919:32
mriedemhttps://github.com/openstack/nova/blob/stable/queens/nova/conf/scheduler.py#L34119:32
dtrainorvery good.  thanks for the links.19:32
mriedemthe default on use_baremetal_filters is False so you shouldn't be using those19:32
mriedemanyway, i'd change host_manager=host_manager in nova.conf and see if that helps19:33
mriedemas i said, the deprecations should have happened earlier, but we likely didn't catch this stuff until tripleo failed by using the old stuff19:34
mriedemand we don't deprecate things in stable branches19:34
dtrainorunderstood.19:34
mriedemthere is a mention of this in https://docs.openstack.org/releasenotes/nova/queens.html19:34
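The two fixes mriedem suggests (reverting `host_manager` to the default and dropping the deprecated filters) can be sanity-checked against a candidate nova.conf; the section and option names below follow the queens scheduler config linked above, but treat them as assumptions to verify.

```python
import configparser

# Validate the scheduler config changes discussed above: default host
# manager, and no deprecated Core/Ram/Disk filters in enabled_filters.
conf = configparser.ConfigParser()
conf.read_string("""
[scheduler]
host_manager = host_manager

[filter_scheduler]
enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
""")

filters = conf["filter_scheduler"]["enabled_filters"].split(",")
deprecated = {"RamFilter", "CoreFilter", "DiskFilter"}
assert not deprecated & set(filters), "deprecated filters still enabled"
print("scheduler config looks sane")
```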
openstackgerritChris Dent proposed openstack/nova master: Use external placement in functional tests  https://review.openstack.org/61794119:34
dtrainorinteresting.  again thanks for the links!19:36
dtrainori'll do some homework.19:36
openstackgerritChris Dent proposed openstack/nova master: Use external placement in functional tests  https://review.openstack.org/61794119:38
openstackgerritChris Dent proposed openstack/nova master: Delete the placement code  https://review.openstack.org/61821519:38
openstackgerritMatt Riedemann proposed openstack/nova master: DNM: add debug logging for bug 1807723  https://review.openstack.org/62418119:43
openstackbug 1807723 in OpenStack Compute (nova) "swap multiattach volume intermittently fails when servers are on different hosts" [Medium,Confirmed] https://launchpad.net/bugs/180772319:43
*** wolverineav has quit IRC19:46
*** wolverineav has joined #openstack-nova19:47
*** rodolof has quit IRC19:49
*** wolverineav has quit IRC19:52
*** xek has quit IRC19:52
*** xek has joined #openstack-nova19:52
*** _alastor_ has quit IRC20:00
openstackgerritMatt Riedemann proposed openstack/nova master: Fix target used in nova.policy.check_is_admin  https://review.openstack.org/62418520:04
*** cdent has quit IRC20:04
*** bringha has joined #openstack-nova20:10
*** igordc has quit IRC20:12
openstackgerritMatt Riedemann proposed openstack/nova master: Fix target used in nova.policy.check_is_admin  https://review.openstack.org/62418520:13
*** wolverineav has joined #openstack-nova20:14
mriedemgibi: there is a runway slot open https://etherpad.openstack.org/p/nova-runways-stein so i'll throw https://blueprints.launchpad.net/nova/+spec/bandwidth-resource-provider in there bug you've got two separate entries queued up so I assume you meant to keep those separate wrt runways slots20:15
mriedem*but you've got20:15
*** mvkr has joined #openstack-nova20:16
*** wolverineav has quit IRC20:19
*** wolverineav has joined #openstack-nova20:20
*** tbachman has quit IRC20:25
*** jmlowe has quit IRC20:26
*** lpetrut has quit IRC20:27
*** pchavva has quit IRC20:27
*** bringha has quit IRC20:28
mriedemdansmith: melwitt: i think we can move forward with moving nova-cells-v1 to the experimental queue https://review.openstack.org/#/c/62353820:31
dansmithmriedem: I figured you'd drop the +W when ready, but.. got it20:32
mriedemdon't want to +W my own change20:32
*** jmlowe has joined #openstack-nova20:32
*** jmlowe has quit IRC20:33
sean-k-mooneymriedem: did you mention the intel nfv ci was broken recently20:33
mriedemyes20:34
mriedemwell,20:34
mriedemit just skips20:34
mriedemso not sure why it's litsening on nova changes20:34
mriedem*listening20:34
dansmithmriedem: well, we left enough +2s on there, but ... as you wish20:34
sean-k-mooneyi'm currently setting up a replacement for it at home20:35
sean-k-mooneyi'm going to dedicate one of my dev servers to it. assuming the noise stays low i'll leave it that way for at least the time being. i'm going to see if i can use other resources i have access to also.20:36
sean-k-mooneymy plan is to add the nova core team to the list of people that can leave a comment to have it run and try to have it run on all changes to a subset of files.20:38
sean-k-mooneye.g. the libvirt driver and neutron code20:38
*** erlon has quit IRC20:40
dtrainormriedem, setting host_manager=host_manager didn't seem to make a difference.  The deployment still fails with the same errors.20:40
dtrainorblows my mind.  this exact sample deployment with the exact same config on other hardware works just fine.20:41
dtrainorusing the exact same defaults, no less20:41
sean-k-mooneydtrainor: whats the error you get?20:42
dtrainorinitially "No valid host was found. There are not enough hosts available., Code: 500", digging deeper in, scheduler is filtering all the hosts out http://paste.openstack.org/show/736932/20:45
sean-k-mooneylooks like the ramfilter is filtering out the final hosts20:46
dtrainoryep20:46
sean-k-mooneyis this an ironic deployment or something else20:46
*** moshele has quit IRC20:46
dtrainorinitially it was, but per mriedem's suggestion, i tried a host_manager deployment with the same results.  i don't much understand the difference between an ironic deployment and a host deployment though i suppose20:47
dtrainoralso i want to point out that this is a downstream queens20:47
dtrainori know that factors in to this to a degree20:47
sean-k-mooneyya i was looking at master and saw the entrypoint was gone20:48
sean-k-mooneyin queens you still have https://github.com/openstack/nova/blob/stable/queens/setup.cfg#L84-L8720:48
sean-k-mooneydtrainor: what do you see if you look at the hypervisors api?20:48
sean-k-mooneye.g. do you have ram listed for the nodes20:49
dtrainorthat's what i'm looking at right now, so far no.  i see 0 for disk, vcpus and memory, they are all of type 'ironic'20:50
sean-k-mooneydtrainor: that would be expected in rocky20:50
sean-k-mooneyin rocky we disable the ram filter20:50
dtrainorright, that's what i was reading20:50
dtrainorin lieu of resource matching20:51
sean-k-mooneyi wonder did a backport of the reporting change get landed into osp 13, that could be causing the issue20:51
sean-k-mooneyi think there is an "openstack hypervisor stats" command you can run but if the hypervisor is listed as 0 ram there the scheduler would appear to be doing the right thing20:52
dtrainor'openstack hypervisor stats show' lists all zeroes20:53
sean-k-mooneyand the current hypervisors are ironic nodes or libvirt?20:54
dtrainorthey all have a hypervisor_type of 'ironic'20:54
sean-k-mooneyok well from rocky on that would be expected but for queens not so much20:55
dtrainorgotcha20:56
*** moshele has joined #openstack-nova20:58
*** jmlowe has joined #openstack-nova20:58
*** moshele has quit IRC21:00
*** tbachman has joined #openstack-nova21:00
mriedemthe RamFilter should not be running21:05
mriedemso something is messed up with your configuration21:05
*** rodolof has joined #openstack-nova21:06
sean-k-mooneyin queens?21:06
mriedemcorrect21:06
sean-k-mooneymriedem: the cpu, disk and ram filters should be disabled in rocky definitely but i thought we used them in queens21:06
mriedemno21:06
sean-k-mooneyoh if that change happened in queens then ya21:06
mriedemhaven't needed them since pike21:06
sean-k-mooneyah ok21:07
sean-k-mooneyit should be an easy fix then to just disable them21:07
sean-k-mooneynot sure why tripleo would have enabled them in that case however21:07
sean-k-mooneyoh i think i missed that we removed the caching scheduler recently https://github.com/openstack/nova/blob/master/setup.cfg#L86-L8821:09
sean-k-mooneydid that happen before the summit?21:09
sean-k-mooneyah october 18th excellent https://github.com/openstack/nova/commit/25dadb94db37e0f1c6769bf586ec06c3b5ea3051#diff-380c6a8ebbbce17d55d50ef17d3cf90621:10
mriedemdtrainor: clearly your scheduler is picking up non-default filters because TripleOCapabilitiesFilter is in the list21:10
mriedemdtrainor: so you need to figure out which config file is being used for nova-scheduler and where the enabled_filters option is being set21:10
mriedembecause you need to remove RamFilter from that list21:10
mriedemdtrainor: likely a queens version of this http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/environments/undercloud.yaml#n5721:11
mriedemdtrainor: see http://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/environments/undercloud.yaml?id=04b235652b44701b8703f63aee10fac6fad13ced21:11
dtrainorreading :)21:11
mriedemhttps://bugs.launchpad.net/tripleo/+bug/178791021:11
openstackLaunchpad bug 1787910 in OpenStack Compute (nova) rocky "OVB overcloud deploy fails on nova placement errors" [High,Fix committed] - Assigned to Matt Riedemann (mriedem)21:11
dtrainorthank you both21:11
mriedemi'm pretty sure that's exactly your issue21:11
sean-k-mooneythis is what generates the config but i don't know where the parameters are set https://github.com/openstack/tripleo-heat-templates/blob/stable/queens/puppet/services/nova-scheduler.yaml#L90-L9121:14
dtrainoryeah, that smells about right.  idk how I overlooked that bug, I had been searching about this topic for a while now21:14
mriedemdansmith: in the cross-cell resize spec there are a couple of places we talked about moving the policy check from conductor to api https://review.openstack.org/#/c/616037/1/specs/stein/approved/cross-cell-resize.rst@247 - thinking about that, if we move the policy check to api and it's true (cross-cell resize is ok for a given request), do you think it's legit to change the rpc call from api->conductor into a cast? i'm thinki21:14
mriedemes...21:14
dtrainorthis is great stuff.  thank you.  i need to run an errand, i'll be back at this later on.21:15
dtrainorthanks for the help mriedem, sean-k-mooney.  helps tremendously.21:15
mriedemyw21:15
sean-k-mooneydtrainor: so it looks like upstream queens enables the ram filter https://github.com/openstack/tripleo-heat-templates/blob/stable/queens/environments/undercloud.yaml#L2321:16
mriedemyes21:16
dtrainorgotcha.21:16
sean-k-mooneymriedem: you mentioned it should be dropped after pike right so the change from rocky needs to be backported to queens21:17
mriedemfor tripleo sure21:17
sean-k-mooneydtrainor: so you need to backport https://github.com/openstack/tripleo-heat-templates/commit/49916c09216479a8dd54e55b4c6e86dae8246fa321:17
sean-k-mooneyand ya they reference the bug mriedem found21:18
sean-k-mooneyso dumb question. why is that not causing the tripleo gate to explode?21:19
sean-k-mooneyon stable queens at least21:19
*** sapd1_ has quit IRC21:21
*** sapd1_ has joined #openstack-nova21:24
*** efried_cya_jan has quit IRC21:29
*** s10 has quit IRC21:30
mriedemi believe tripleo found the bug after we removed the ironic driver code to report vcpu/ram/disk inventory for ironic nodes in stein21:35
mriedemso things "worked" before that with the RamFilter21:35
mriedemand we deprecated the core/ram/disk filters in rocky21:35
mriedemoh i guess that was stein as well https://review.openstack.org/#/c/596502/21:36
mriedemshould have been earlier21:36
sean-k-mooneyin that case it would seem to indicate that OSP13 may have backported the stein change but unless that closed a bug i'm not sure why they would have21:36
*** rodolof has quit IRC21:36
*** rodolof has joined #openstack-nova21:37
*** wolverineav has quit IRC21:44
*** wolverineav has joined #openstack-nova21:46
*** takashin has joined #openstack-nova21:53
*** sapd1_ has quit IRC21:58
mriedemcfriesen: i've got something you might like to lose sleep over22:00
cfriesenoh, goody22:00
mriedemso RequestSpec has a limits field22:00
mriedemwhich if we ever update that thing, it's on accident22:00
mriedeme.g. if we resize from flavor1 to flavor2 and it's persisted, it's on accident,22:01
mriedemand if we revert the resize back to flavor1, we definitely don't wipe out the limits22:01
mriedemlooking at something like unshelve, the limits passed down to compute to do the claim (which would only be used for pci/numa claims now) possibly does not reflect the actual flavor being used22:01
openstackgerritJack Ding proposed openstack/nova master: [WIP] Flavor extra spec and image properties validation  https://review.openstack.org/62070622:02
mriedemactually unshelve might be safe here,22:03
*** sapd1_ has joined #openstack-nova22:03
mriedemi was thinking unshelve would use stale limits https://github.com/openstack/nova/blob/08d617084e5aa69ada0898d674022621d130aef3/nova/conductor/manager.py#L81322:03
mriedembut those get overwritten in the filter_properties dict here based on the selected host from the scheduler https://github.com/openstack/nova/blob/08d617084e5aa69ada0898d674022621d130aef3/nova/conductor/manager.py#L84122:04
mriedemgod it's a mess22:04
mriedemtl;dr probably shouldn't ever trust RequestSpec.limits22:04
mriedemand we probably shouldn't even persist it22:04
cfriesenwas just going to ask why it's persisted. :)22:04
mriedembecause limits are totally based on configured filters at any given time you scheduled22:04
mriedemdansmith: ^22:04
mriedemcfriesen: everything was persisted with the request spec and we've just been winding that back case by case ever since22:05
mriedembecause it's a turducken of original request / last flavor used but also as a glorified parameter bag to send requests to the scheduler22:05
*** _alastor_ has joined #openstack-nova22:06
mriedemlimits are just everywhere. the rpc cast to build_and_run_instance contains a (1) limits parameter, a (2) nested limits entry within filter_properties and (3) the RequestSpec.limits22:07
mriedemit's *awesome*22:07
*** sapd1_ has quit IRC22:07
cfriesenwhat the heck...NUMATopologyLimits contains cpu_allocation_ratio, ram_allocation_ratio, and "network_metadata"?  how does that make any sense?22:09
mriedemthe network_metadata is the physnet/numa node stuff stephenfin added in rocky22:10
cfriesenokay, so not crazy22:10
cfriesenso it sounds like either we need to persist the limits properly and update them properly when they change, or else not persist them and create them before calling the scheduler.  that sound right?22:12
mriedem*create them *after* calling the scheduler, but yes22:13
mriedemthose are the options really22:13
mriedemeither never trust them across requests, or make sure they are always updated22:13
mriedemi'd opt to never trust them22:13
mriedemi.e. we can only revert the flavor in the request spec here https://github.com/openstack/nova/blob/08d617084e5aa69ada0898d674022621d130aef3/nova/compute/api.py#L3489 because the instance stashes the old_flavor22:14
mriedemwe don't stash the old limits anywhere22:14
mriedemanyway, i just needed to dump my brain here while i was looking at this limits code to see if i could re-use it22:15
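The pitfall mriedem is describing can be shown in toy form: limits are per-host, per-scheduling-decision values, so a copy persisted when flavor1 was scheduled can silently disagree with the flavor in use after a resize. The dict shapes below are invented for illustration and are not nova's actual RequestSpec object.

```python
# Toy illustration of stale persisted limits, not nova code.
request_spec = {"flavor": "flavor1", "limits": {"memory_mb": 4096}}

def resize(spec, new_flavor, new_limits=None):
    """Resize to a new flavor; limits only change when explicitly
    given, mirroring the accidental-update behaviour discussed above."""
    spec = dict(spec)
    spec["flavor"] = new_flavor
    if new_limits is not None:
        spec["limits"] = new_limits
    return spec

resized = resize(request_spec, "flavor2")
# The flavor moved on but the persisted limits did not:
print(resized["flavor"], resized["limits"])  # flavor2 {'memory_mb': 4096}
```

Hence the "never trust them across requests" option: recompute limits after each scheduler selection instead of reading the persisted copy.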
cfriesenyou've probably jinxed me now, and our test guys will be firing off an email with a limit-related failure22:20
mriedemmisery loves company22:20
mriedemthis is likely also semi broken depending on how you choose to manage allocation rations now, either via config (what this would use) or via placement api https://github.com/openstack/nova/blob/08d617084e5aa69ada0898d674022621d130aef3/nova/scheduler/filters/numa_topology_filter.py#L7122:21
mriedem*ratios22:21
mriedemif i'm managing VCPU/MEMORY_MB allocation ratios via the placement api, ^ will not reflect what is actually being used22:22
mriedemthat might have been true before placement if you were only managing allocation ratios in aggregate22:23
*** awaugama has quit IRC22:23
dansmithmriedem: is there anything in reqspec that wasn't an ill-conceived steaming pile of poo?22:27
mriedemumm22:28
mriedemwell, it is the only place we track that is_bfv thing!22:29
mriedemotherwise it's caused probably more harm than it was worth22:29
mriedemoh yeah and because of that numa filter limits those are per-host,22:30
mriedemso RequestSpec.limits is really per operation22:31
openstackgerritMerged openstack/nova stable/rocky: Add functional regression test for bug 1806064  https://review.openstack.org/62393122:31
openstackbug 1806064 in OpenStack Compute (nova) rocky "Volume remains in attaching/reserved status, if the instance is deleted after TooManyInstances exception in nova-conductor" [Medium,In progress] https://launchpad.net/bugs/1806064 - Assigned to s10 (vlad-esten)22:31
mriedemalex also found out that we never actually update the numa/pci requests in the reqspec when we resize to another flavor that might have different numa/pci reqs https://review.openstack.org/#/c/621077/22:31
mriedemjust lots of fun22:32
*** efried has joined #openstack-nova22:39
*** itlinux has quit IRC22:43
openstackgerritMichael Still proposed openstack/nova master: Remove utils.execute() from quobyte libvirt storage driver.  https://review.openstack.org/61970222:43
openstackgerritMichael Still proposed openstack/nova master: Move nova.libvirt.utils away from using nova.utils.execute().  https://review.openstack.org/61970322:43
openstackgerritMichael Still proposed openstack/nova master: Imagebackend should call processutils.execute directly.  https://review.openstack.org/61970422:43
openstackgerritMichael Still proposed openstack/nova master: Remove final users of utils.execute() in libvirt.  https://review.openstack.org/61970522:43
openstackgerritMichael Still proposed openstack/nova master: Remove the final user of utils.execute() from virt.images  https://review.openstack.org/62000722:43
openstackgerritMichael Still proposed openstack/nova master: Remove utils.execute() from the hyperv driver.  https://review.openstack.org/62000822:43
openstackgerritMichael Still proposed openstack/nova master: Remove utils.execute() from virt.disk.api.  https://review.openstack.org/62000922:43
openstackgerritMichael Still proposed openstack/nova master: Move a generic bridge helper to a linux_net privsep file.  https://review.openstack.org/62001022:43
openstackgerritMichael Still proposed openstack/nova master: Move bridge creation to privsep.  https://review.openstack.org/62018022:43
openstackgerritMichael Still proposed openstack/nova master: Move some linux network helpers to use privsep.  https://review.openstack.org/62139822:43
openstackgerritMichael Still proposed openstack/nova master: Move simple execute call to processutils.  https://review.openstack.org/62152722:43
openstackgerritMichael Still proposed openstack/nova master: Move interface enabling to privsep.  https://review.openstack.org/62152822:43
openstackgerritMichael Still proposed openstack/nova master: Move setting mac addresses for network devices to privsep.  https://review.openstack.org/62152922:43
openstackgerritMichael Still proposed openstack/nova master: Move interface disabling to privsep.  https://review.openstack.org/62215022:43
openstackgerritMichael Still proposed openstack/nova master: Move binding ips to privsep.  https://review.openstack.org/62215122:43
openstackgerritMichael Still proposed openstack/nova master: create_veth_pair is unused, remove it.  https://review.openstack.org/62422622:43
openstackgerritMichael Still proposed openstack/nova master: Create specialist set_macaddr_and_vlan helper.  https://review.openstack.org/62422722:43
openstackgerritMichael Still proposed openstack/nova master: Move create_tap_dev into privsep.  https://review.openstack.org/62422822:43
mriedemso much for ever getting zuul to run tests on your change now22:44
*** wolverineav has quit IRC22:49
*** _alastor_ has quit IRC22:55
*** rcernin has joined #openstack-nova22:59
mriedemdansmith: i figured out how i'm getting 1 server out of the API when listing and there is a copy in both cells: https://github.com/openstack/nova/blob/08d617084e5aa69ada0898d674022621d130aef3/nova/compute/api.py#L268423:03
dansmithmriedem: oh is that the filter for the BR if the instance is already created?23:04
mriedemyeah i think that's what it's for23:04
mriedemlaski added it23:04
dansmithyeah, well, at least it makes sense23:04
*** panda is now known as panda|off23:06
*** wolverineav has joined #openstack-nova23:10
*** rodolof has quit IRC23:10
*** spatel has quit IRC23:17
*** dave-mccowan has quit IRC23:19
*** slaweq has quit IRC23:19
*** igordc has joined #openstack-nova23:25
*** mlavalle has quit IRC23:26
*** igordc has quit IRC23:30
*** N3l1x has quit IRC23:38
*** igordc has joined #openstack-nova23:41
*** itlinux has joined #openstack-nova23:48
openstackgerritMatt Riedemann proposed openstack/nova master: WIP: Cross-cell resize  https://review.openstack.org/60393023:49
*** hshiina has joined #openstack-nova23:56
mriedemjangutter: i'll star that offload spec but it's going to be greek to me23:58
mriedemand i'll have to come back to it, done for the day23:58
*** mmethot has quit IRC23:58
*** mriedem is now known as mriedem_away23:59

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!