Friday, 2018-10-19

*** betherly has joined #openstack-nova00:03
*** betherly has quit IRC00:08
*** takashin has joined #openstack-nova00:12
*** brinzhang has joined #openstack-nova00:15
*** READ10 has joined #openstack-nova00:19
*** betherly has joined #openstack-nova00:23
*** betherly has quit IRC00:28
*** markvoelker has joined #openstack-nova00:58
openstackgerritBrin Zhang proposed openstack/nova master: Add restrictions on updated_at when getting migrations  https://review.openstack.org/60779800:58
openstackgerritBrin Zhang proposed openstack/nova master: Add restrictions on updated_at when getting instance action records  https://review.openstack.org/60780101:05
*** erlon__ has quit IRC01:13
*** tommylikehu has joined #openstack-nova01:17
*** beekneemech has joined #openstack-nova01:17
*** bnemec has quit IRC01:19
*** imacdonn has quit IRC01:22
*** imacdonn has joined #openstack-nova01:22
*** mrsoul has quit IRC01:25
*** markvoelker has quit IRC01:32
*** tetsuro has joined #openstack-nova01:37
*** Dinesh_Bhor has joined #openstack-nova01:44
*** pooja_jadhav has quit IRC01:45
*** mhen has quit IRC01:47
*** dklyle has joined #openstack-nova01:50
*** mhen has joined #openstack-nova01:50
*** david-lyle has joined #openstack-nova01:53
*** pooja_jadhav has joined #openstack-nova01:53
*** dklyle has quit IRC01:56
*** dklyle has joined #openstack-nova01:56
*** david-lyle has quit IRC01:59
*** hongbin has joined #openstack-nova02:03
openstackgerritMerged openstack/python-novaclient master: Recommend against using --force for evacuate/live migration  https://review.openstack.org/61143602:04
*** lei-zh has joined #openstack-nova02:06
*** bnemec has joined #openstack-nova02:13
*** TuanDA has joined #openstack-nova02:14
*** beekneemech has quit IRC02:15
*** beekneemech_ has joined #openstack-nova02:15
*** bnemec has quit IRC02:18
*** dklyle has quit IRC02:20
*** betherly has joined #openstack-nova02:26
*** bnemec has joined #openstack-nova02:26
*** beekneemech_ has quit IRC02:29
*** betherly has quit IRC02:30
*** lei-zh has quit IRC02:31
*** Dinesh_Bhor has quit IRC02:31
*** lei-zh has joined #openstack-nova02:31
*** beekneemech has joined #openstack-nova02:31
*** bnemec has quit IRC02:34
*** Dinesh_Bhor has joined #openstack-nova02:35
*** bnemec has joined #openstack-nova02:48
*** beekneemech has quit IRC02:51
*** betherly has joined #openstack-nova02:56
*** tetsuro has quit IRC02:59
*** betherly has quit IRC03:00
*** tetsuro has joined #openstack-nova03:02
openstackgerritMerged openstack/nova stable/rocky: Ignore uuid if already set in ComputeNode.update_from_virt_driver  https://review.openstack.org/61133703:04
*** READ10 has quit IRC03:12
*** lei-zh has quit IRC03:32
openstackgerritMerged openstack/nova stable/rocky: Use unique consumer_id when doing online data migration  https://review.openstack.org/61131503:34
*** hongbin has quit IRC03:46
*** betherly has joined #openstack-nova03:47
*** betherly has quit IRC03:52
*** tetsuro has quit IRC04:03
*** tetsuro has joined #openstack-nova04:05
*** Dinesh_Bhor has quit IRC04:08
*** beekneemech has joined #openstack-nova04:10
*** bnemec has quit IRC04:13
*** tommylikehu has quit IRC04:16
*** jaosorior has joined #openstack-nova04:24
*** tetsuro has quit IRC04:33
*** bnemec has joined #openstack-nova04:45
*** beekneemech has quit IRC04:48
*** bnemec has quit IRC04:51
*** bnemec has joined #openstack-nova04:51
*** beekneemech has joined #openstack-nova05:03
*** bnemec has quit IRC05:06
*** andreaf has quit IRC05:10
*** andreaf has joined #openstack-nova05:15
*** Dinesh_Bhor has joined #openstack-nova05:17
*** s10 has joined #openstack-nova05:18
openstackgerritTakashi NATSUME proposed openstack/nova master: Fix best_match() deprecation warning  https://review.openstack.org/61120405:35
*** Luzi has joined #openstack-nova05:45
openstackgerritBrin Zhang proposed openstack/nova master: Add restrictions on updated_at when getting migrations  https://review.openstack.org/60779805:46
openstackgerritBrin Zhang proposed openstack/nova master: Add restrictions on updated_at when getting instance action records  https://review.openstack.org/60780105:48
*** TuanDA has quit IRC05:54
*** pcaruana has joined #openstack-nova06:14
*** s10 has quit IRC06:19
*** lei-zh has joined #openstack-nova06:20
*** jding1_ has joined #openstack-nova06:24
*** jackding has quit IRC06:27
*** jding1__ has joined #openstack-nova06:32
*** jaosorior has quit IRC06:32
*** renxiaof has joined #openstack-nova06:34
*** jding1_ has quit IRC06:34
*** jaosorior has joined #openstack-nova06:34
*** Dinesh_Bhor has quit IRC06:36
jaosoriorCould I get a review for this https://review.openstack.org/#/c/609591/ ?06:36
*** jding1_ has joined #openstack-nova06:37
*** jackding has joined #openstack-nova06:40
*** jding1_ has quit IRC06:41
*** jding1__ has quit IRC06:41
*** ccamacho has joined #openstack-nova06:45
*** sahid has joined #openstack-nova06:57
*** sahid has quit IRC06:57
*** sahid has joined #openstack-nova06:57
bauzasGood morning Nova07:09
*** pcaruana has quit IRC07:09
*** stakeda has joined #openstack-nova07:17
*** rcernin has quit IRC07:17
openstackgerritZhenyu Zheng proposed openstack/nova-specs master: Detach and attach boot volumes - Stein  https://review.openstack.org/60062807:17
*** Dinesh_Bhor has joined #openstack-nova07:26
*** pcaruana has joined #openstack-nova07:28
*** ralonsoh has joined #openstack-nova07:29
*** TuanDA has joined #openstack-nova07:29
*** pcaruana is now known as pcaruana|elisa|07:30
*** renxiaof has quit IRC07:31
*** slaweq has quit IRC07:32
*** helenafm has joined #openstack-nova07:32
*** slaweq has joined #openstack-nova07:33
*** tssurya has joined #openstack-nova07:49
*** takashin has left #openstack-nova08:00
*** Dinesh_Bhor has quit IRC08:02
openstackgerritMerged openstack/nova master: Migrate nova v2.0 legacy job to zuulv3  https://review.openstack.org/61040308:19
*** derekh has joined #openstack-nova08:26
*** tetsuro has joined #openstack-nova08:27
*** s10 has joined #openstack-nova08:34
*** erlon has joined #openstack-nova08:39
*** vabada has quit IRC08:42
*** vabada has joined #openstack-nova08:43
*** erlon has quit IRC08:58
*** lei-zh has quit IRC08:58
*** helenafm has quit IRC09:01
*** Dinesh_Bhor has joined #openstack-nova09:06
*** ttsiouts has joined #openstack-nova09:12
*** zigo has joined #openstack-nova09:14
*** s10 has quit IRC09:15
*** helenafm has joined #openstack-nova09:21
*** maciejjozefczyk has quit IRC09:30
*** maciejjozefczyk has joined #openstack-nova09:31
*** erlon has joined #openstack-nova09:47
*** s10 has joined #openstack-nova09:50
*** brinzhang has quit IRC09:53
*** TuanDA has quit IRC10:05
openstackgerritTetsuro Nakamura proposed openstack/nova-specs master: Spec: Support filtering by forbidden aggregate  https://review.openstack.org/60335210:13
*** sambetts|afk is now known as sambetts10:14
*** stakeda has quit IRC10:16
*** tetsuro has quit IRC10:20
*** Dinesh_Bhor has quit IRC10:22
*** helenafm has quit IRC10:23
openstackgerrithuanhongda proposed openstack/nova master: AZ operations: check host has no instances  https://review.openstack.org/61183310:30
*** tetsuro has joined #openstack-nova10:31
*** tbachman has quit IRC10:33
openstackgerrithuanhongda proposed openstack/nova master: AZ operations: check host has no instances  https://review.openstack.org/61183310:33
*** cdent has joined #openstack-nova10:35
*** helenafm has joined #openstack-nova10:36
*** tbachman has joined #openstack-nova10:37
*** dtantsur|afk is now known as dtantsur10:43
*** tbachman has quit IRC10:46
*** phillu has joined #openstack-nova10:51
*** dave-mccowan has joined #openstack-nova10:53
*** tetsuro has quit IRC10:58
*** skatsaounis has quit IRC11:25
*** skatsaounis has joined #openstack-nova11:26
*** phillu has quit IRC11:35
*** phillu has joined #openstack-nova11:36
openstackgerritRadoslav Gerganov proposed openstack/nova master: Preserve compute stats used by the scheduler  https://review.openstack.org/61185211:39
*** phillu has quit IRC11:41
*** panda is now known as panda|lunch11:53
*** jdillaman1 has joined #openstack-nova11:57
*** dillaman has quit IRC11:59
*** tbachman has joined #openstack-nova12:05
*** tbachman has quit IRC12:10
*** tbachman has joined #openstack-nova12:15
*** jiaopengju has quit IRC12:16
*** jiaopengju has joined #openstack-nova12:16
*** mriedem has joined #openstack-nova12:19
*** cdent has quit IRC12:24
*** jangutter has quit IRC12:30
*** dpawlik has quit IRC12:30
*** dpawlik has joined #openstack-nova12:30
*** jangutter has joined #openstack-nova12:32
*** eharney has joined #openstack-nova12:33
*** priteau has joined #openstack-nova12:34
*** dpawlik has quit IRC12:35
openstackgerritMatt Riedemann proposed openstack/nova master: Document each libvirt.sysinfo_serial choice  https://review.openstack.org/61142612:39
mriedemso uh, do we want to do this stable-only ironic inventory workaround thing in rocky? https://review.openstack.org/#/c/609043/12:43
mriedemand queens and pike12:43
mriedemtl;dr once you've migrated all of your ironic instances to resource classes, you don't want to report vcpu/ram/disk inventory anymore on those nodes12:44
*** dansmith is now known as SteelyDan12:45
SteelyDanI'd say so12:48
mriedemi wasn't sure if the option should be deprecated immediately? seems kind of weird, but it will just be gone when you get to stein.12:53
mriedemnot sure how much it matters12:53
SteelyDanme either12:56
mriedembauzas: were you witholding a +W on https://review.openstack.org/#/c/610088/ for some reason?12:56
SteelyDanmriedem: does this ring any bells? http://logs.openstack.org/58/591658/12/check/tempest-full-py3/5224550/controller/logs/screen-n-api.txt.gz?level=TRACE#_Oct_18_20_49_20_08613812:56
mriedemeven though you were +2?12:56
*** stephenfin is now known as finucannot12:57
SteelyDanthere's a ton of unrelated red in that log, so ignore the rest, but the FK error there doesn't seem related to the patch12:57
SteelyDanand there are also rabbit connection failures later in the log12:57
*** slaweq has quit IRC12:57
mriedemhmm, no, also seems weird that we'd get cell0 FK errors for a resize operation....12:58
mriedemwhich shouldn't have anything to do with cell012:58
SteelyDanit's an action,12:58
mriedemunless it's trying to record an action in the api,12:58
SteelyDanbut yeah that clearly looks like we're talking to the wrong db12:58
mriedemand defaulting to the cell0 db connection in nova.conf,12:58
mriedembut when we look up the instance we should target the context to cell112:59
SteelyDanoh I bet I know12:59
mriedemso, it looks like it's probably shitting b/c it's trying to create an action in cell0 for an instance in cell112:59
SteelyDanthis changes it so we're not permanently targeting the context.. so the resize goes on to be untargeted12:59
SteelyDandidn't think of that until you made the connection to cell013:00
SteelyDanthat's also probably why we can't connect to fake:/// or whatever rabbit13:00
mriedemuh huh13:00
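
A rough sketch of the cell targeting being discussed, assuming nova's InstanceMapping, target_cell and InstanceAction interfaces as they exist on master around this time; the helper name is made up for illustration and is not the fix in the patches above:

    # Illustrative only: look up which cell an instance lives in and target
    # the request context at that cell before doing DB work, so records like
    # instance actions land in the cell database instead of cell0.
    from nova import context as nova_context
    from nova import objects

    def record_action_in_instance_cell(ctxt, instance_uuid, action_name):
        # The instance -> cell mapping lives in the API database.
        im = objects.InstanceMapping.get_by_instance_uuid(ctxt, instance_uuid)
        # target_cell yields a context whose DB connection points at that
        # cell; outside the "with" block the original context is unchanged.
        with nova_context.target_cell(ctxt, im.cell_mapping) as cctxt:
            return objects.InstanceAction.action_start(
                cctxt, instance_uuid, action_name)
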
openstackgerritDan Smith proposed openstack/nova master: Return a minimal construct for nova show when a cell is down  https://review.openstack.org/59165813:02
openstackgerritDan Smith proposed openstack/nova master: Return a minimal construct for nova service-list when a cell is down  https://review.openstack.org/58482913:02
*** slaweq has joined #openstack-nova13:07
*** derekh has quit IRC13:11
*** derekh has joined #openstack-nova13:11
*** mchlumsky has joined #openstack-nova13:14
bauzasmriedem: just a miss I guess13:16
bauzasfixed13:16
*** jmlowe has joined #openstack-nova13:17
mriedemdanke13:18
bauzasbitte13:18
* bauzas now has 1800XP for Duolingo in German13:18
*** lbragstad is now known as elbragstad13:23
*** slaweq has quit IRC13:27
openstackgerritDaniel Abad proposed openstack/nova master: Fix ironic client ironic_url deprecation warning  https://review.openstack.org/61187213:30
*** dpawlik has joined #openstack-nova13:37
*** dpawlik has quit IRC13:38
*** dpawlik has joined #openstack-nova13:38
*** awaugama has joined #openstack-nova13:43
*** slaweq has joined #openstack-nova13:49
*** munimeha1 has joined #openstack-nova13:53
*** rpittau has quit IRC13:56
melwitt14:01
*** sidx64 has joined #openstack-nova14:05
*** mriedem has quit IRC14:06
*** Luzi has quit IRC14:07
*** mriedem has joined #openstack-nova14:09
*** s10 has quit IRC14:15
openstackgerritStephen Finucane proposed openstack/osc-placement master: Enforce key-value'ness for 'allocation candidate list --resource'  https://review.openstack.org/61188314:18
openstackgerritStephen Finucane proposed openstack/osc-placement master: tox: Hide deprecation warnings from stdlib  https://review.openstack.org/61188414:18
*** s10 has joined #openstack-nova14:21
*** k_mouza has joined #openstack-nova14:24
*** efried is now known as efried_pto14:26
*** mlavalle has joined #openstack-nova14:28
*** jistr is now known as jistr|call14:29
*** k_mouza has quit IRC14:30
*** k_mouza has joined #openstack-nova14:31
*** jangutter has quit IRC14:33
*** jangutter has joined #openstack-nova14:33
*** slaweq has quit IRC14:40
*** maciejjozefczyk has quit IRC14:45
*** sidx64 has quit IRC14:46
*** jistr|call is now known as jistr14:47
*** spatel has joined #openstack-nova14:48
*** panda|lunch is now known as panda14:49
spatelI got this error when i rebooted one of my instances, any idea what this is? http://paste.openstack.org/show/732501/14:49
*** ttsiouts has quit IRC14:50
*** pcaruana|elisa| has quit IRC14:56
*** pcaruana|elisa| has joined #openstack-nova14:57
*** s10 has quit IRC15:01
*** ttsiouts has joined #openstack-nova15:01
*** helenafm has quit IRC15:05
*** pcaruana|elisa| has quit IRC15:08
*** pcaruana has joined #openstack-nova15:08
*** tssurya has quit IRC15:11
*** ttsiouts has quit IRC15:12
*** k_mouza has quit IRC15:12
*** mriedem is now known as mriedem_afk15:27
cfriesenspatel: looks like corrupt filesystem15:27
cfriesenor at least corrupt something on disk15:28
*** pcaruana has quit IRC15:31
mriedem_afki've triaged https://bugs.launchpad.net/nova/+bug/1798805 but not sure if we would ever do anything about it15:31
openstackLaunchpad bug 1798805 in OpenStack Compute (nova) "Nova scheduler schedules VMs on nodes where nova-compute is down" [Wishlist,Triaged]15:31
finucannotbauzas: If I delete a compute node's RP, it'll be recreated, right?15:33
*** spatel has quit IRC15:33
SteelyDanmriedem_afk: I think that means disable in the "break its knees" sort of sense15:34
*** dpawlik has quit IRC15:34
*** dpawlik has joined #openstack-nova15:34
SteelyDanI'm strongly in favor of keeping disable purely for scheduler reasons, but I think it's saying if the compute is actually down, we shouldn't take action on instances15:34
SteelyDanwhich is probably reasonable15:34
*** dpawlik has quit IRC15:34
*** k_mouza has joined #openstack-nova15:34
SteelyDanor make our super awesome bug-free state sync periodic power the instance on when the compute comes back15:35
*** dpawlik has joined #openstack-nova15:35
imacdonnI kinda sorta wish there was a "don't schedule anything new" status, which is different from "don't try to interact with me at all"15:36
*** dpawlik has quit IRC15:36
SteelyDanimacdonn: that's what disable means15:36
*** dpawlik has joined #openstack-nova15:36
SteelyDanit's the only thing it means15:36
imacdonnthe former, you mean15:36
SteelyDanit means don't schedule anything there15:36
imacdonnright15:37
*** ttsiouts has joined #openstack-nova15:37
SteelyDanyou're saying you wish there was a "kneecap this compute"15:37
SteelyDanAPI?15:37
imacdonnIMO, it should really mean "don't talk to that node at all right now", and there should be some other way to tell the scheduler not to put anything NEW there15:37
finucannotmriedem_afk, awaugama: More investigation needed but I think there's something wrong with those foo_allocation_ratio options. I configured them on the compute node, waited ages aaand...nada. Deleting the RP fixed things15:38
imacdonn(or maybe the opposite, so keep the existing meaning of "disable") ... but I think they are use-cases for each15:38
SteelyDanwe're not changing the existing meaning for disable15:38
SteelyDanwe could add another state, but it just means for every single operation we do another db hit to check the state of the host15:38
finucannotmriedem_afk, awaugama: At least, assuming my understanding of how that's _supposed_ to work is correct. It is Friday so maybe it's not15:39
imacdonn"hard_disabled" ? :)15:39
SteelyDanno.15:39
*** jangutter has quit IRC15:40
*** dpawlik has quit IRC15:40
imacdonnpossible use-case: node is being taken down for hardware maintenance. operator has shut down the instances on it. We don't want to allow the user to start their instances back up until the maintenance is completed15:42
imacdonn(we do this all the time, in a private cloud context)15:42
*** spatel has joined #openstack-nova15:44
spatelcfriesen: I have other VMs running on same compute node they are fine.. very strange issue15:44
SteelyDanimacdonn: yeah I'm sure everyone does.. I get the use case, it's unfortunate to need to check the status on every call, but I get it15:45
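
A small sketch of the distinction being drawn here, loosely modelled on what nova's ComputeFilter already checks: "disabled" is an operator flag that only affects scheduling, while "down" comes from the service-group heartbeat. The helper function is hypothetical, not nova code.

    from nova import servicegroup

    def host_is_schedulable(service):
        """Return True if new instances may be scheduled to this service."""
        if service.disabled:
            # Operator flag: no new builds here, but the compute is still
            # expected to handle actions on its existing instances.
            return False
        # Heartbeat check: a compute that is down cannot receive RPC casts,
        # so anything sent to it just disappears into the void.
        return servicegroup.API().service_is_up(service)
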
*** dpawlik has joined #openstack-nova15:52
openstackgerritStephen Finucane proposed openstack/nova master: api-ref: 'os-hypervisors' doesn't reflect overcommit ratio  https://review.openstack.org/61160415:53
*** ccamacho has quit IRC15:56
*** dpawlik has quit IRC15:56
*** jmlowe has quit IRC16:01
cfriesenimacdonn: so lock the instances?16:07
*** beekneemech has quit IRC16:08
*** bnemec has joined #openstack-nova16:08
cfriesenimacdonn: or stop the nova-compute process?16:09
imacdonncfriesen: I guess, but the owner can unlock it?16:09
cfriesenimacdonn: you could always make lock/unlock admin-only.16:09
imacdonncfriesen: I guess stopping nova-compute may cause the bug that mriedem_afk was reviewing ... although there are some open questions in the bug16:09
*** mgariepy has quit IRC16:10
*** mgariepy has joined #openstack-nova16:15
cfriesenI'm not sure it's really a bug...more like a feature to more gracefully handle cases where things get "stuck" in what was supposed to be a transitional state.16:16
*** tbachman has quit IRC16:21
*** openstackgerrit has quit IRC16:24
*** ttsiouts has quit IRC16:30
*** sambetts is now known as sambetts|afk16:32
*** k_mouza_ has joined #openstack-nova16:39
*** sahid has quit IRC16:42
*** bnemec is now known as beekneemech16:43
*** k_mouza has quit IRC16:43
*** k_mouza_ has quit IRC16:44
*** dtantsur is now known as dtantsur|afk16:44
*** openstackgerrit has joined #openstack-nova16:52
openstackgerritMerged openstack/nova master: Fix deprecated base64.decodestring warning  https://review.openstack.org/61040116:52
openstackgerritStephen Finucane proposed openstack/nova master: Fail to live migration if instance has a NUMA topology  https://review.openstack.org/61108817:00
*** derekh has quit IRC17:01
openstackgerritStephen Finucane proposed openstack/nova master: Fail to live migration if instance has a NUMA topology  https://review.openstack.org/61108817:05
*** mdbooth has quit IRC17:06
*** gyee has joined #openstack-nova17:12
*** k_mouza has joined #openstack-nova17:16
*** cdent has joined #openstack-nova17:20
*** k_mouza has quit IRC17:21
awaugamafinucannot, was at lunch.  so you're able to reproduce that setting the value on the compute node doesn't change anything in placement?17:23
sean-k-mooneyawaugama: which setting is that?17:29
awaugamacpu_allocation_ratio I believe17:30
awaugamayeah17:30
sean-k-mooneyawaugama: there is currently a spec for changing how we address this for stein17:30
awaugamaI'm not aware of one17:31
sean-k-mooneyawaugama: there are 2 but i believe we have settled on one, i'll see if i can find it17:31
awaugamacool17:31
sean-k-mooneyawaugama: this is one of the specs https://review.openstack.org/#/c/552105/17:32
sean-k-mooneyi think that is the most recent one17:33
sean-k-mooneythis is the other one https://review.openstack.org/#/c/544683/ but i'm not sure if it will still be required17:35
awaugamachecking17:35
awaugamayeah that looks like what finucannot was talking about being messed up17:36
sean-k-mooneywell that depends on what you were expecting17:36
sean-k-mooneyawaugama: what was your initial issue17:36
awaugamasetting cpu_allocation_ratio to 1 in nova.conf changed the value on the compute node (even on compute node startup it showed it was changed) but placement still has it set to 1617:37
sean-k-mooneyawaugama: right i believe currently it will only use the value if it is first creating the provider17:37
sean-k-mooneyif the provider exists it will not update it i think...17:38
awaugamathat seems counter-intuitive17:38
sean-k-mooneyawaugama: this will be changed going forward to allow the cpu_allocation_ratio config option to specify the placement value too and cpu_initial_allocation_ratio to specify what it should be for newly created resource providers17:39
sean-k-mooneyawaugama: the current behavior i believe is intended to allow you to manage the allocation ratios via the api instead of the config17:39
awaugamainteresting. finucannot and mriedem_afk ^17:40
awaugamai need to step away for a few minutes, brb17:40
sean-k-mooneyawaugama: finucannot is likely not around anymore.17:41
sean-k-mooneyawaugama: he is usually enjoying his weekend by now :)17:42
awaugamasean-k-mooney, yeah, I figure he'll get the scrollback at some point17:50
sean-k-mooneyawaugama: the initial allocation ratios spec is likely the one that is most relevant to you https://review.openstack.org/#/c/552105/17:57
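
One way to see what placement actually holds for a compute node's allocation ratios, independent of what nova.conf says, is to read the provider's inventories from the placement REST API; the endpoint URL, token and provider UUID below are placeholders for your environment.

    import requests

    PLACEMENT = 'http://controller/placement'   # placeholder endpoint
    TOKEN = '<keystone token>'                  # placeholder token
    RP_UUID = '<compute node resource provider uuid>'

    resp = requests.get(
        '%s/resource_providers/%s/inventories' % (PLACEMENT, RP_UUID),
        headers={'x-auth-token': TOKEN})
    resp.raise_for_status()
    for rclass, inv in resp.json()['inventories'].items():
        # e.g. VCPU may still show allocation_ratio=16.0 even after setting
        # cpu_allocation_ratio=1 on the compute node, which is the issue here.
        print(rclass, 'allocation_ratio=%s' % inv['allocation_ratio'])
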
*** ralonsoh has quit IRC18:03
spatelsean-k-mooney: hey! how are you doing :)18:04
*** jmlowe has joined #openstack-nova18:04
sean-k-mooneyspatel: quite well thank you. how are you :)18:04
spatelafter a long time i am seeing you online, or maybe i was not paying attention18:04
sean-k-mooneyi was on company training most of this week18:05
spatelI am great!! and my openstack is also going great18:05
sean-k-mooneyso ya i was not online much. that is good to hear.18:05
spatelI have added 80 SR-IOV compute nodes and put them in production :)18:06
sean-k-mooneywow that was fast. are you happy with the feature set/performance you have achieved?18:06
spatelYup!! i am not seeing any performance impact for my application so far18:06
*** medberry has joined #openstack-nova18:07
sean-k-mooneyyou decided to go with vnic_type direct with a flavor with hugepages and pinned cpus in the end?18:07
spatelI have created two AZ groups as we spoke about last time, tor-1 and tor-2, and am spreading applications across the two AZs18:07
spatelYes vnic_type=direct / hugepages / cpu pinning / numa=218:08
spatelwhatever setting you suggested last time18:08
sean-k-mooneycool that should be a good starting point for any VNF deployed on openstack18:09
spatelI am happy with those settings :)18:09
sean-k-mooneyand for the tor config did you split the control and dataplane across the tors or keep them on the same tor18:09
spatelThat i didn't test yet because of deployment urgency!18:11
spateli need to test that in LAB first18:11
spatelthat is in my to-do list18:11
sean-k-mooneywell the original config you proposed should work but i was not certain the failure mode was optimal. that said you intended to add a dedicated management switch at a later point so that will likely be the best long-term solution in any case18:12
spatelas soon as i get time i want to run tests on dpdk (SR-IOV is still painful even if it's better)18:12
*** gibi is now known as gibi_off18:12
*** medberry has quit IRC18:14
spatelsean-k-mooney: yes i have a plan in future to isolate mgmt traffic using an extra nic18:14
spatelsean-k-mooney: Quick question, how do you guys monitor instance stats? like CPU/memory from KVM18:16
spatelcurrently i have an snmp agent running on the compute node and also inside the instance but that won't give you 100% results right? i want to monitor hypervisor-level stats18:18
spatelKVM level monitoring in short18:18
sean-k-mooneyum, you have several options18:19
sean-k-mooneyin general i would recommend collectd for host-level performance monitoring18:20
sean-k-mooneyif you have deployed the openstack ceilometer project you can also configure it to monitor the libvirt instances18:20
sean-k-mooneyi believe the prometheus project has good monitoring capabilities for the openstack services but it does not have good monitoring capabilities for the host or the vms as far as i am aware18:21
spatelI didn't configure ceilometer because i wasn't sure i needed that component because it's a private cloud and we don't need billing18:22
sean-k-mooneyspatel: that is a common misconception, ceilometer is for telemetry not billing18:22
sean-k-mooneythat said it's not necessarily the best solution for telemetry either18:23
spateli was worried it would eat my resources18:24
sean-k-mooneyif you decide to use collectd you could look into https://github.com/openstack/collectd-openstack-plugins to publish metrics to ceilometer or gnocchi but collectd can also advertise the stats it monitors over snmp or other protocols18:24
sean-k-mooneyspatel: yes that is a concern with ceilometer. it does not scale well which is why i generally don't recommend it as my first choice18:24
spatelAgreed ++18:25
spateli had the same concern so i didn't bother to install it18:25
sean-k-mooneyya so in general i would check out collectd and see if it meets your needs18:26
spatelI am going to try collectd sure!18:26
sean-k-mooneymy former team enhanced the collectd libvirt plugin to allow reporting stats for vms using the uuid field instead of the id field18:26
sean-k-mooneythe uuid field is set to the nova instance uuid18:27
sean-k-mooneyso it's easy to do a 1:1 mapping between the stats and the workload18:27
spatelhmm! interesting i think i have to look into collectd now18:27
spatelalso i was thinking of pushing collectd data to influx + grafana so i have a good dashboard18:28
sean-k-mooneyyou can also have collectd use its network plugin to send the stats to influxdb and then use grafana to visualise the results18:28
sean-k-mooneyjinx18:28
sean-k-mooney:) that is a good solution18:28
spatelyup!18:28
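
For the hypervisor-level stats being asked about, collectd's virt plugin is one option; underneath it is just the libvirt API, which can also be queried directly. A minimal sketch, assuming libvirt-python is installed on the compute node; nova sets the libvirt domain UUID to the instance UUID, which is what makes the 1:1 mapping mentioned above work.

    import libvirt

    # Read-only connection to the local hypervisor on the compute node.
    conn = libvirt.openReadOnly('qemu:///system')
    for dom in conn.listAllDomains():
        uuid = dom.UUIDString()          # equals the nova instance uuid
        cpu = dom.getCPUStats(True)[0]   # aggregate cpu times, in nanoseconds
        mem = dom.memoryStats()          # kB; keys depend on the balloon driver
        print(uuid,
              'cpu_time=%s' % cpu.get('cpu_time'),
              'rss_kb=%s' % mem.get('rss'))
    conn.close()
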
sean-k-mooneyare you familiar with OPNFV?18:29
spatelnot much18:29
sean-k-mooneythat's ok, it's rather vast in scope like openstack18:29
sean-k-mooneyspatel: i just wanted to highlight the barometer project https://wiki.opnfv.org/display/fastpath/Barometer+Home18:30
spatelreading..18:31
sean-k-mooneyspatel: they have been working on integrating collectd, grafana and other tools for openstack monitoring for telco/NFV usecases18:32
sean-k-mooneyhttps://www.youtube.com/watch?v=-82UShFiBBM18:32
spatelsean-k-mooney: very interesting... also i had a question about SR-IOV VF nic stats, it doesn't tell you how much data is flowing through your VF18:32
sean-k-mooneyspatel: that depends on the nic18:32
spatelI have Qlogic so i doubt it has that feature18:33
sean-k-mooneyso most nics don't have enough hardware counters to gather stats on all the VFs from the host side18:33
sean-k-mooneyspatel: if you are able to run collectd in the guest however it can monitor the kernel stats and give you that info18:34
spatelYes! that is what i am doing currently! guest-based snmp monitoring18:34
sean-k-mooneyspatel: one other tool that you might also find useful in this space is skydive. http://skydive.network/18:34
spatelhmm! looking cool18:35
spatelwe are using observium and Cisco DCNM18:35
spatelhmm! they have openstack support too18:36
sean-k-mooneyskydive was developed in the last 2 years specifically for cloud and containerised environments but it began to mature just after i deployed my last cloud so i have not used it myself18:37
*** tbachman has joined #openstack-nova18:37
spatelCan i use it for Cisco switches and routers?18:38
spatellooks like they use an agent so i doubt it18:38
sean-k-mooneyi believe so. it uses multiple protocols to do its monitoring18:38
sean-k-mooneyif the cisco switch supports sflow i believe it would work18:40
spatelWe have all Cisco nexus switches and they do sflow :) i am going to explore it now18:40
spatelDo you guys use collectd to monitor your compute nodes?18:41
mriedem_afkawaugama: as far as i know if you set the cpu_allocation_ratio in nova.conf on the nova-compute service, it should mirror that to the new OR existing resource provider in placement18:41
*** mriedem_afk is now known as mriedem18:41
mriedemif it's not mirroring those updates i think that's a bug18:41
mriedemSteelyDan: on that bug, tbc, i think they got confused about "scheduling"18:42
awaugamamriedem: that's what I thought would happen.  sean-k-mooney you think it won't update existing ones?18:42
mriedemit's not scheduling anything, they just power on an instance on a stopped compute and the rpc cast is sent to the void18:42
sean-k-mooneyawaugama: im not sure.18:43
mriedemcfriesen: yes the bug is saying they power on the instance, the api changes the task_state, but b/c the compute is down, we never finish the task and it's "stuck"18:43
sean-k-mooneyawaugama: it will do one of two things: either it will overwrite it always with the config value or it will only use the config value on creation of the rp18:43
mriedemfixing that would mean needing to check the compute service status for every instance action...18:43
sean-k-mooneyawaugama: both cases are "wrong" depending on who you ask, hence the spec for initial allocation ratios https://review.openstack.org/#/c/552105/18:44
mriedemon every periodic, we update the resource class inventory allocation ratio based on config that we send to placement https://github.com/openstack/nova/blob/5b815eec4c5fc8c19863aa38b1d1920705b17bfa/nova/compute/resource_tracker.py#L10818:44
mriedemhttps://github.com/openstack/nova/blob/5b815eec4c5fc8c19863aa38b1d1920705b17bfa/nova/compute/resource_tracker.py#L95218:45
mriedemthe question is if prov_tree.update_inventory(nodename, inv_data) has a bug thinking that nothing changed18:45
mriedemb/c it's a cache18:45
mriedemand the reportclient itself has a cache of the provider tree18:45
mriedemhttps://github.com/openstack/nova/blob/5b815eec4c5fc8c19863aa38b1d1920705b17bfa/nova/scheduler/client/report.py#L150118:46
mriedem"The specified ProviderTree is compared against the local cache.  Any                               changes are flushed back to the placement service. "18:46
awaugamamakes sense18:47
mriedemthe bug is probably here https://github.com/openstack/nova/blob/5b815eec4c5fc8c19863aa38b1d1920705b17bfa/nova/scheduler/client/report.py#L1575-L157618:48
mriedemif we have a single compute node resource provider and that doesn't change, both of those sets will be empty18:48
cfriesenmriedem: as I said in the bug, that won't actually fix it, just make the race window smaller.18:48
mriedemand the for loops below won't flush any changes to placement18:48
mriedemcfriesen: sure18:48
mriedemb/c of the service group api18:48
mriedemunless you force down the service18:49
mriedemawaugama: in queens we weren't using that provider tree stuff18:49
mriedemhttps://github.com/openstack/nova/blob/stable/queens/nova/compute/resource_tracker.py#L88318:49
mriedemhttps://github.com/openstack/nova/blob/stable/queens/nova/scheduler/client/report.py#L111218:50
mriedemhttps://github.com/openstack/nova/blob/stable/queens/nova/scheduler/client/report.py#L85018:50
cdentbecause it is friday and I don't feel like filtering, can I just say: god I hate caches, why do we do caches?18:50
mriedemidk18:50
mriedemto avoid calling the placement API 500 times per periodic?18:50
mriedemi just know caches are very tricky18:51
mriedembrittle18:51
sean-k-mooneycdent: so the processor has another way to mess with your view of a sequentially consistent execution of your program18:51
* cdent gives both mriedem and sean-k-mooney an unpleasant kiss18:51
awaugamamriedem: so basically it's a stale value and placement is never refreshing to pick up the new conf setting?18:51
awaugamaor the rp isn't refreshing?18:52
sean-k-mooneycdent: hehe since you are here can i get your input on https://review.openstack.org/#/c/610034/18:52
mriedemhttps://github.com/openstack/nova/blob/stable/queens/nova/compute/provider_tree.py#L12418:52
sean-k-mooneycdent: actually perhaps the placement channel would be better.18:52
mriedemawaugama: i think the scheduler report client in master/rocky is thinking nothing is changing18:53
cdentyeah, join me over there because I think that may be fixed18:53
mriedemi just had doritos, you probably don't want to kiss me18:53
mriedemat least not open mouth18:53
awaugamathis channel gets weird on Fridays18:53
mriedemawaugama: i think on master/rocky the problem is we're not getting here https://github.com/openstack/nova/blob/5b815eec4c5fc8c19863aa38b1d1920705b17bfa/nova/scheduler/client/report.py#L164618:54
mriedemawaugama: it should be pretty easy to recreate this18:54
awaugamamriedem: I think finucannot was able to reproduce on his system18:55
awaugamathink he was just using devstack18:55
mriedemthat or https://github.com/openstack/nova/blob/5b815eec4c5fc8c19863aa38b1d1920705b17bfa/nova/scheduler/client/report.py#L1112 is short circuiting18:55
mriedemat one point i had a debug patch for a bunch of this b/c it's really hard to know wtf is going on without any logs18:56
awaugamamriedem, I had to redeploy my system for another feature test, I can see about reproducing next week18:56
mriedemi'll see if i can dredge that up18:56
awaugamawith logs18:56
mriedemawaugama: https://review.openstack.org/#/c/597560/18:57
awaugamacool, I'll make a note of that patch and see if I can get it applied18:59
mriedemthe stuff in here is probably still useful https://review.openstack.org/#/c/597560/6/nova/scheduler/client/report.py18:59
mriedemthe rest was for debugging a specific thing that is now fixed18:59
mriedemmaybe i'll restore and rev that to clean it up18:59
mriedemon top of https://review.openstack.org/#/c/597553/19:00
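
Not nova's actual report client, just an illustration of the cache pitfall being described: if the "what changed?" comparison is made against a local cache rather than against what the placement server really has, an updated allocation ratio can look like a no-op and never be flushed.

    def flush_inventory_changes(local_cache, new_inventory, flush):
        """Push only the inventory records that differ from the local cache.

        local_cache / new_inventory: dicts of rp_uuid -> inventory dict.
        flush: callable(rp_uuid, inventory) that PUTs to placement.
        """
        changed = {rp: inv for rp, inv in new_inventory.items()
                   if local_cache.get(rp) != inv}
        for rp, inv in changed.items():
            flush(rp, inv)
            local_cache[rp] = inv
        # If the cache already held the new values (or was never refreshed
        # from the server), nothing is flushed and placement stays stale.
        return changed
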
*** erlon has quit IRC19:07
openstackgerritMatt Riedemann proposed openstack/nova master: Log the operation when updating generation in ProviderTree  https://review.openstack.org/59755319:19
openstackgerritMatt Riedemann proposed openstack/nova master: Add debug logs for when provider inventory changes  https://review.openstack.org/59756019:19
mriedemmelwitt: https://bugs.launchpad.net/nova/+bug/179878719:29
openstackLaunchpad bug 1798787 in OpenStack Compute (nova) "Installation help documentation is incorrect - verify & nova-consoleauth" [Medium,Triaged]19:29
mriedemthe install guide tells you to verify nova-consoleauth is running but we don't tell you to install/start it19:30
mriedemb/c that part was removed in rocky19:30
melwittgah19:31
cdentIs there a bug that is associated with provider tree cache problems discussed above?19:32
mriedemcdent: not that i'm aware of19:32
mriedemi think awaugama hit it in QE19:32
awaugamayeah verifying vcpu weighter feature19:33
cdentawaugama: are you making a bug? If so I want to be sure to follow along19:33
awaugamacdent, I will next week.  I need to reinstall my system (did another feature test in the meantime) so I'll need to recollect logs19:33
mriedemis this a tripleo system that takes 3 days?19:34
SteelyDanheh19:34
* SteelyDan thinks 3 days sounds optimistic19:34
cdentgreat, thanks awaugama19:34
awaugamamriedem: I can probably get it repro'd by EOD Tuesday19:34
mriedembtw, speaking of tripleo19:34
awaugamabut yeah19:34
mriedemwho from the red hat nova cabal can add nova-status upgrade check to tripleo?19:35
mriedemowalsh: ^?19:35
SteelyDanor mschuppert19:35
awaugamayeah one of those two is the nova deployment guy19:36
awaugamatheir specialty is based on the need19:36
openstackgerritElod Illes proposed openstack/nova master: Transform scheduler.select_destinations notification  https://review.openstack.org/50850619:38
sean-k-mooneyawaugama: and the fact that everyone else avoids using tripleo to deploy our dev envs if we can19:38
awaugamafair enough19:41
mriedemSteelyDan: oooo guess what just rotated in https://www.youtube.com/watch?v=jJ9Xk-VoGqo19:41
SteelyDannice19:41
SteelyDanI enjoyed when this rotated in for me this morning: https://www.youtube.com/watch?v=KCdKBHdPz3019:42
openstackgerritMatt Riedemann proposed openstack/nova stable/rocky: Add regression test for bug 1797580  https://review.openstack.org/61193819:43
openstackbug 1797580 in OpenStack Compute (nova) "NoValidHost during live migration after cold migrating to a specified host" [High,In progress] https://launchpad.net/bugs/1797580 - Assigned to Matt Riedemann (mriedem)19:43
openstackgerritMatt Riedemann proposed openstack/nova stable/rocky: Don't persist RequestSpec.requested_destination  https://review.openstack.org/61193919:43
mriedemi know i'd never do it without the fez on19:44
openstackgerritDan Smith proposed openstack/nova master: Modify get_by_cell_and_project() to get_not_deleted_by_cell_and_projects()  https://review.openstack.org/60766319:44
openstackgerritDan Smith proposed openstack/nova master: Minimal construct plumbing for nova list when a cell is down  https://review.openstack.org/56778519:44
openstackgerritDan Smith proposed openstack/nova master: Refactor scatter-gather utility to return exception objects  https://review.openstack.org/60793419:44
openstackgerritDan Smith proposed openstack/nova master: Return a minimal construct for nova show when a cell is down  https://review.openstack.org/59165819:44
openstackgerritDan Smith proposed openstack/nova master: Return a minimal construct for nova service-list when a cell is down  https://review.openstack.org/58482919:44
SteelyDanheck no19:44
melwittnever heard of either of those. I only know the major steely dan hits19:44
mriedemkid charlemagne is a major hit19:45
mriedemall the major dudes know that19:45
SteelyDanwell,19:45
SteelyDanshe probably means like reelin' in the years and rikki19:45
melwittmajor dudes??19:45
SteelyDanand dirty work19:45
melwittyeah and like NO STATIC AT ALLLL19:45
mriedemhttps://www.youtube.com/watch?v=kND8TRZap8Y19:45
SteelyDanlike floyd, the major hits are not very representative of the larger work19:45
melwittand hey nineteen19:45
melwittand deacon blues19:46
melwittlol, of course it was a reference to another song I don't know19:46
mriedemwhen the sax solo from FM comes on in the car, i always yell at the girls to quiet down and crank it way up19:47
mriedemobnoxiously so19:47
SteelyDanheh19:47
melwittas one does19:47
sean-k-mooneymriedem: oh i actually have heard the last link you posted before19:47
openstackgerritMerged openstack/nova master: Add regression test for bug 1797580  https://review.openstack.org/61008819:55
openstackbug 1797580 in OpenStack Compute (nova) rocky "NoValidHost during live migration after cold migrating to a specified host" [High,In progress] https://launchpad.net/bugs/1797580 - Assigned to Matt Riedemann (mriedem)19:55
openstackgerritMatt Riedemann proposed openstack/nova stable/queens: Add regression test for bug 1797580  https://review.openstack.org/61194419:56
openstackgerritMatt Riedemann proposed openstack/nova stable/queens: Don't persist RequestSpec.requested_destination  https://review.openstack.org/61194519:56
mriedemSteelyDan: remember this? https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L431019:57
melwittwah wah19:57
*** dave-mccowan has quit IRC19:57
mriedemit's pretty safe to assume that all cold/live migrations are migration-based allocations now right?19:57
mriedemb/c that was added to compute in queens19:58
mriedemi'm wondering if we can start rolling those compat shims back, including https://github.com/openstack/nova/blob/master/nova/conductor/tasks/migrate.py#L10219:58
mriedemand just assume compute is new enough to always send a migration record19:58
mriedemand do that hot swap action19:58
openstackgerritTakashi NATSUME proposed openstack/nova master: Use oslo_db.sqlalchemy.test_fixtures  https://review.openstack.org/60935219:59
mriedemi think that's also safe because of https://github.com/openstack/nova/blob/master/nova/compute/rpcapi.py#L71619:59
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (3)  https://review.openstack.org/57410419:59
mriedemwe're unconditionally sending the migration record19:59
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (4)  https://review.openstack.org/57410619:59
SteelyDanmriedem: yeah19:59
SteelyDanmriedem: I leave those TODOs for others19:59
mriedemtl;dr i would like to rip that code out before landing gibi's https://review.openstack.org/#/c/606050/ which breaks resize to same host allocations if we don't have the source allocations on the migration record19:59
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (5)  https://review.openstack.org/57411020:00
mriedemok cool20:00
mriedemthe other thing with that was caching scheduler which is also queued up for death20:00
* SteelyDan mriedem: bee tee dubs, did you ever look at this? https://review.openstack.org/#/c/611665/20:00
melwittthat would be a funny TODO. "TODO(dan): Other people remove this in Rocky"20:00
mriedemSteelyDan: i have not yet20:00
SteelyDanmelwitt: next time. I promise.20:00
melwittawesome20:00
SteelyDanwhoa, errant /me in there, sorry20:01
*** awaugama is now known as awaugama_away20:02
cfriesenephemeral disks are supposed to last the life of the instance, right?20:03
openstackgerritMerged openstack/nova master: Don't persist RequestSpec.requested_destination  https://review.openstack.org/61009820:03
sean-k-mooneycfriesen: yes and no20:04
sean-k-mooneycfriesen: i don't know if they are meant to be copied if we move the guest or not20:05
sean-k-mooneycfriesen: they are definitely meant to exist for the life of an instance on a single host.20:05
sean-k-mooneycfriesen: what happens on resize is totally undefined. live/cold migrate i think we copy them but i haven't checked, that is just my intuition20:06
sean-k-mooneycfriesen: similarly for shelve/unshelve i'm not sure we persist them. it might be driver specific20:07
cfriesensean-k-mooney: hmm...I think this wording in the compute API has changed since I looked at it: "Ephemeral disks may be written over on server state changes. So should only be used as a scratch space for applications that are aware of its limitations."20:07
sean-k-mooneycfriesen: ya the root disk will be preserved obviously but for additional ephemeral disks i think it's totally up to the driver on what state changes they are preserved and when they are recreated20:09
SteelyDansean-k-mooney: I don't think any of those things are right20:09
SteelyDanI don't think it's well defined at all, but the original assumption was that you couldn't even rely on them across start/stop cycles on your instance20:09
cfriesenon the other hand, nova/doc/source/user/flavors.rst says "Ephemeral disks offer machine local disk storage linked to the lifecycle of a20:10
cfriesen  VM instance. When a VM is terminated, all data on the ephemeral disk is lost."20:10
sean-k-mooneySteelyDan: from an api perspective i would totally agree.20:10
sean-k-mooneycfriesen: yes but terminated is not stopped20:11
cfriesenokay...I had been thinking that they were supposed to be preserved over everything except termination.20:11
cfriesenbut it sounds like that was incorrect20:11
sean-k-mooneySteelyDan: i think the libvirt driver preserves the ephemeral disk in more cases than it's required to20:11
SteelyDanI'm quite sure they're not included in any snapshot, so not for shelve20:11
SteelyDansean-k-mooney: yes20:11
melwittyou mean ephemeral in the flavor, not ephemeral like a normal local disk of any instance without ephemeral in the flavor20:11
cfriesenmelwitt: yes20:12
melwittgotcha20:12
*** cdent has quit IRC20:12
*** spatel has quit IRC20:12
cfriesenokay.  that simplifies my life, though not the end user's. :)20:12
sean-k-mooneycfriesen: so as far as i can tell this is what determines what disks are migrated https://github.com/openstack/nova/blob/e2a39bb30f716c78af30d61efb3fb7526f9bdf6d/nova/virt/libvirt/driver.py#L7208-L724720:32
cfriesensean-k-mooney: for live migration specifically20:33
sean-k-mooneycfriesen: so for libvirt i think, that will include the ephemeral disk on live migration20:33
sean-k-mooneyyes20:34
sean-k-mooneyfor rebuild/resize/cold migration i think we don't copy them20:34
cfriesenthe case I was looking into was a resize, followed by a resize-revert.20:34
sean-k-mooneyi can take a look. it makes sense that we copy them on live migrate as that does not affect the lifetime of the instance as long as it succeeds20:35
*** owalsh has quit IRC20:36
*** dklyle has joined #openstack-nova20:37
sean-k-mooneycfriesen: so for resize we copy the image here https://github.com/openstack/nova/blob/e2a39bb30f716c78af30d61efb3fb7526f9bdf6d/nova/virt/libvirt/driver.py#L8309-L8343 but i need to check if the ephemeral disks are in self._get_instance_disk_info()20:38
cfriesensean-k-mooney: don't waste time on my account, if the compute API says it can change, that's good enough for me.20:40
sean-k-mooneycfriesen: it looks like it's looping over all the disks too https://github.com/openstack/nova/blob/e2a39bb30f716c78af30d61efb3fb7526f9bdf6d/nova/virt/libvirt/driver.py#L798620:40
sean-k-mooneythe code is very similar to the live migration code, they should probably be refactored together but it is also different enough that i'm not sure it does the same thing exactly20:42
sean-k-mooneycfriesen: ya from an api perspective it can. libvirt appears to keep them for cold and live migrate. i would assume they are not kept for rebuild, shelve and evacuate. i also think libvirt may be expanding the existing ephemeral disk in resize20:46
sean-k-mooneycfriesen: the libvirt driver is specifically checking that we are not resizing the ephemeral disk down https://github.com/openstack/nova/blob/e2a39bb30f716c78af30d61efb3fb7526f9bdf6d/nova/virt/libvirt/driver.py#L8268-L8275 implying that it's not recreating them but may be expanding them on resize20:47
*** dklyle has quit IRC20:50
*** munimeha1 has quit IRC20:51
*** beekneemech has quit IRC20:53
*** boden has joined #openstack-nova20:58
bodenanyone heard about a "oslo_db.exception.DBNonExistentTable" cropping up across projects? Appears to have cropped up around 10/10 http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22oslo_db.exception.DBNonExistentTable%5C%22  and includes neutron, nova and others20:58
*** owalsh has joined #openstack-nova21:10
mriedemboden: those are unit/functional tests,21:15
mriedemand 10 days is as far back as logstash goes21:15
mriedemso 10/10 is probably not really when it started21:15
mriedemthat's just how far back we have logs21:16
mriedemit's showing patches in pike21:16
mriedemand also shows up in successful job runs, so most likely unrelated to anything that's failing21:16
bodenmriedem perhaps that's the case for nova (I haven't dug there), but it doesn't appear to be the case for all others; there are valid failures21:17
sean-k-mooneymriedem: boden is this related to why we have to loop for nova-manage, out of interest21:21
sean-k-mooneye.g. why we did https://review.openstack.org/#/c/608091/21:21
sean-k-mooneyactually looking at the logs, no, this is unrelated21:24
bodenyeah my bad... I assumed nova was failing without digging... we have been getting this error on some other projects randomly; appears to be memory/resource related21:35
*** mriedem has quit IRC21:40
openstackgerritMatt Riedemann proposed openstack/nova master: Drop legacy cold migrate allocation compat code  https://review.openstack.org/61197021:50
openstackgerritMatt Riedemann proposed openstack/nova master: Drop legacy cold migrate allocation compat code  https://review.openstack.org/61197021:52
*** boden has quit IRC21:52
*** sean-k-mooney has quit IRC22:03
*** dklyle has joined #openstack-nova22:03
*** priteau has quit IRC22:12
openstackgerritmelanie witt proposed openstack/nova master: libvirt: set device address tag only if setting disk unit  https://review.openstack.org/61197422:18
openstackgerritMatt Riedemann proposed openstack/nova master: Drop legacy live migrate allocation compat code  https://review.openstack.org/61197522:22
openstackgerritMatt Riedemann proposed openstack/nova master: Drop legacy live migrate allocation compat code  https://review.openstack.org/61197522:23
*** tbachman has quit IRC23:07
*** bnemec has joined #openstack-nova23:25
*** medberry has joined #openstack-nova23:26
*** tbachman has joined #openstack-nova23:44
*** gyee has quit IRC23:50
*** dklyle has quit IRC23:52

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!