Friday, 2018-02-23

*** tbachman has quit IRC00:01
*** tbachman has joined #openstack-nova00:02
*** tbachman has quit IRC00:02
*** itlinux has quit IRC00:03
*** yamamoto has joined #openstack-nova00:03
*** slaweq has joined #openstack-nova00:04
*** gbarros has quit IRC00:06
*** gjayavelu has joined #openstack-nova00:07
*** slaweq has quit IRC00:09
*** yamamoto has quit IRC00:09
*** gbarros has joined #openstack-nova00:10
*** AlexeyAbashkin has joined #openstack-nova00:19
*** stakeda has joined #openstack-nova00:22
*** lbragstad has quit IRC00:23
*** AlexeyAbashkin has quit IRC00:23
*** munimeha has quit IRC00:24
*** slaweq has joined #openstack-nova00:25
*** Zames has joined #openstack-nova00:26
*** slaweq has quit IRC00:29
*** hshiina|afk is now known as hshiina00:30
*** edmondsw has quit IRC00:31
*** vladikr has joined #openstack-nova00:32
*** chyka_ has joined #openstack-nova00:33
openstackgerritTetsuro Nakamura proposed openstack/nova-specs master: Support shared and dedicated VMs in one host  https://review.openstack.org/54380500:35
*** tbachman has joined #openstack-nova00:36
*** chyka has quit IRC00:36
*** mlavalle has quit IRC00:37
*** chyka_ has quit IRC00:37
*** r-daneel has quit IRC00:43
*** vladikr has quit IRC00:44
*** threestrands has joined #openstack-nova00:44
*** Zames has quit IRC00:45
*** slaweq has joined #openstack-nova00:46
*** moshele has joined #openstack-nova00:48
*** slaweq has quit IRC00:50
*** tbachman has quit IRC00:50
*** tiendc has joined #openstack-nova00:54
*** moshele has quit IRC00:55
openstackgerritTakashi NATSUME proposed openstack/nova stable/queens: [placement] Add sending global request ID in put (3)  https://review.openstack.org/54311300:55
*** tbachman has joined #openstack-nova00:56
*** vladikr has joined #openstack-nova00:56
*** takashin has joined #openstack-nova00:56
*** claudiub has joined #openstack-nova00:58
*** amodi has quit IRC01:02
*** phuongnh has joined #openstack-nova01:03
*** tbachman has quit IRC01:05
*** yamamoto has joined #openstack-nova01:05
*** zhaochao has joined #openstack-nova01:05
*** slaweq has joined #openstack-nova01:06
*** XueFeng_ is now known as XueFeng01:07
*** gjayavelu has quit IRC01:08
*** hieulq has joined #openstack-nova01:09
*** yamamoto has quit IRC01:11
*** slaweq has quit IRC01:11
*** wolverineav has quit IRC01:16
*** Tom-Tom has joined #openstack-nova01:24
*** slaweq has joined #openstack-nova01:27
*** Tom-Tom has quit IRC01:29
*** claudiub has quit IRC01:30
*** phuongnh has quit IRC01:31
*** phuongnh has joined #openstack-nova01:32
*** slaweq has quit IRC01:32
*** yamahata has quit IRC01:32
*** moshele has joined #openstack-nova01:35
*** hieulq has quit IRC01:38
*** tbachman has joined #openstack-nova01:40
*** itlinux has joined #openstack-nova01:46
*** gbarros has quit IRC01:47
*** gbarros has joined #openstack-nova01:48
*** slaweq has joined #openstack-nova01:48
*** slaweq has quit IRC01:53
*** Dinesh_Bhor has joined #openstack-nova01:53
*** mriedem has quit IRC02:06
*** phuongnh has quit IRC02:06
*** yamamoto has joined #openstack-nova02:07
*** phuongnh has joined #openstack-nova02:07
*** slaweq has joined #openstack-nova02:09
*** slaweq_ has joined #openstack-nova02:09
*** fragatina has quit IRC02:12
*** yamamoto has quit IRC02:12
*** slaweq_ has quit IRC02:14
*** slaweq has quit IRC02:14
*** edmondsw has joined #openstack-nova02:17
*** Tom-Tom has joined #openstack-nova02:18
openstackgerritTakashi NATSUME proposed openstack/nova stable/queens: [placement] Add sending global request ID in get  https://review.openstack.org/54311602:20
*** edmondsw has quit IRC02:22
*** fragatina has joined #openstack-nova02:22
*** bhujay has joined #openstack-nova02:24
*** acormier has quit IRC02:26
*** fragatina has quit IRC02:26
*** zhenguo has joined #openstack-nova02:27
*** itlinux has quit IRC02:28
*** slaweq has joined #openstack-nova02:30
*** yamahata has joined #openstack-nova02:31
*** acormier has joined #openstack-nova02:32
openstackgerritMerged openstack/nova stable/pike: Drop extra loop which modifies Cinder volume status  https://review.openstack.org/54621802:34
*** slaweq has quit IRC02:35
*** itlinux has joined #openstack-nova02:38
*** hieulq has joined #openstack-nova02:39
*** acormier has quit IRC02:40
*** acormier has joined #openstack-nova02:40
*** xnox has quit IRC02:46
*** slaweq has joined #openstack-nova02:46
*** salv-orl_ has joined #openstack-nova02:46
*** acormier has quit IRC02:48
*** salv-orlando has quit IRC02:49
*** annp has joined #openstack-nova02:49
*** hongbin has joined #openstack-nova02:51
*** slaweq has quit IRC02:51
*** slaweq has joined #openstack-nova02:51
*** slaweq has quit IRC02:55
*** acormier has joined #openstack-nova02:56
*** tinwood has quit IRC03:05
*** tinwood has joined #openstack-nova03:06
*** yamamoto has joined #openstack-nova03:09
*** takashin has left #openstack-nova03:09
*** acormier has quit IRC03:09
*** acormier has joined #openstack-nova03:10
*** slaweq has joined #openstack-nova03:10
*** bhujay has quit IRC03:10
*** Faster-Fanboi_ has quit IRC03:10
*** slaweq_ has joined #openstack-nova03:12
*** Faster-Fanboi has joined #openstack-nova03:12
*** yamamoto has quit IRC03:14
*** slaweq has quit IRC03:15
*** slaweq_ has quit IRC03:16
*** yamamoto has joined #openstack-nova03:27
*** slaweq has joined #openstack-nova03:28
*** itlinux has quit IRC03:30
*** slaweq has quit IRC03:32
mnaseri'm seeing more and more cinder snapshot failures in stable/pike03:36
*** acormier has quit IRC03:43
*** slaweq has joined #openstack-nova03:43
*** slaweq has quit IRC03:48
*** vivsoni__ has joined #openstack-nova03:55
*** gyee has quit IRC03:56
*** vivsoni_ has quit IRC03:58
*** slaweq has joined #openstack-nova03:59
*** zhaochao has quit IRC03:59
*** zhaochao has joined #openstack-nova04:00
*** slaweq has quit IRC04:04
*** edmondsw has joined #openstack-nova04:06
*** janki has joined #openstack-nova04:06
*** edmondsw has quit IRC04:11
*** chergd has quit IRC04:11
*** AlexeyAbashkin has joined #openstack-nova04:18
*** Dinesh_Bhor has quit IRC04:18
*** Dinesh_Bhor has joined #openstack-nova04:19
*** psachin has joined #openstack-nova04:20
*** links has joined #openstack-nova04:21
*** AlexeyAbashkin has quit IRC04:23
*** slaweq has joined #openstack-nova04:23
*** udesale has joined #openstack-nova04:25
*** abhishekk has joined #openstack-nova04:27
*** slaweq has quit IRC04:29
*** slaweq has joined #openstack-nova04:31
*** links has quit IRC04:33
*** Dinesh_Bhor has quit IRC04:34
*** slaweq has quit IRC04:35
*** itlinux has joined #openstack-nova04:36
*** zhenguo has quit IRC04:37
*** sree_ has joined #openstack-nova04:40
*** andreas_s has joined #openstack-nova04:40
*** sree_ is now known as Guest1654104:40
*** Dinesh_Bhor has joined #openstack-nova04:41
*** ratailor has joined #openstack-nova04:42
*** Dinesh_Bhor has quit IRC04:42
alex_xumriedem, dansmith, working on eric patch now04:43
*** andreas_s has quit IRC04:44
*** Faster-Fanboi has quit IRC04:46
*** slaweq has joined #openstack-nova04:47
*** links has joined #openstack-nova04:47
*** Dinesh_Bhor has joined #openstack-nova04:48
*** slaweq has quit IRC04:51
*** hieulq has quit IRC04:52
*** Faster-Fanboi has joined #openstack-nova04:52
*** fragatina has joined #openstack-nova04:55
*** hongbin has quit IRC04:55
*** fragatina has quit IRC04:59
*** slaweq has joined #openstack-nova05:02
*** baoli has quit IRC05:03
*** fragatina has joined #openstack-nova05:03
*** baoli has joined #openstack-nova05:03
*** Dinesh_Bhor has quit IRC05:07
*** slaweq has quit IRC05:07
*** Dinesh_Bhor has joined #openstack-nova05:07
*** Faster-Fanboi has quit IRC05:14
*** liuyulong has joined #openstack-nova05:16
*** slaweq has joined #openstack-nova05:18
*** Faster-Fanboi has joined #openstack-nova05:20
*** chyka has joined #openstack-nova05:21
openstackgerritTetsuro Nakamura proposed openstack/nova-specs master: Support shared/dedicated vCPUs in one instance  https://review.openstack.org/54573405:21
*** slaweq has quit IRC05:23
*** baoli has quit IRC05:25
openstackgerritNakanishi Tomotaka proposed openstack/nova master: Test Compute API in multiple cells  https://review.openstack.org/54727305:26
*** chyka has quit IRC05:26
*** inara has quit IRC05:31
openstackgerritTetsuro Nakamura proposed openstack/nova-specs master: Support shared/dedicated vCPUs in one instance  https://review.openstack.org/54573405:34
*** slaweq has joined #openstack-nova05:34
*** inara has joined #openstack-nova05:34
*** slaweq has quit IRC05:39
*** lajoskatona has joined #openstack-nova05:40
*** slaweq has joined #openstack-nova05:41
*** moshele has quit IRC05:45
*** slaweq has quit IRC05:46
*** mdnadeem has joined #openstack-nova05:49
*** slaweq has joined #openstack-nova05:50
*** abhishekk is now known as abhishekk|afk05:50
*** claudiub has joined #openstack-nova05:50
*** vladikr has quit IRC05:53
*** vladikr has joined #openstack-nova05:53
*** pengdake_ has joined #openstack-nova05:54
*** slaweq has quit IRC05:54
*** edmondsw has joined #openstack-nova05:55
*** udesale_ has joined #openstack-nova05:58
*** vishwana_ has joined #openstack-nova05:59
*** edmondsw has quit IRC05:59
*** udesale has quit IRC06:00
*** vishwanathj has quit IRC06:03
*** slaweq has joined #openstack-nova06:05
*** gokhan has joined #openstack-nova06:10
*** slaweq has quit IRC06:10
*** Spazmotic has joined #openstack-nova06:11
SpazmoticMorning Novaers06:11
*** slaweq has joined #openstack-nova06:11
*** namnh has joined #openstack-nova06:12
*** threestrands has quit IRC06:13
*** namnh has quit IRC06:16
*** tiendc has quit IRC06:16
*** annp has quit IRC06:16
*** phuongnh has quit IRC06:16
*** slaweq has quit IRC06:16
*** phuongnh has joined #openstack-nova06:16
*** namnh has joined #openstack-nova06:16
*** tiendc has joined #openstack-nova06:16
*** annp has joined #openstack-nova06:16
tetsurois cfriesen around?06:16
itlinuxhello nova team.. i have a question: will it be better to set the hypervisor to kvm or just leave it as qemu?06:17
itlinuxthanks for the advice..06:17
*** udesale_ is now known as udesale06:18
*** Zames has joined #openstack-nova06:19
cfriesentetsuro: I'm here.06:19
tetsuroitlinux: KVM has more functionality. Let me quote from the document https://docs.openstack.org/ocata/config-reference/compute/hypervisors.html “QEMU - Quick EMUlator, generally only used for development purposes.”06:20
itlinuxok.06:20
tetsurocfriesen thanks for reply06:20
cfriesentetsuro: and I just added a bunch of comments to the review06:20
itlinuxthanks..06:20
tetsuroYup, I didn’t understand your comment…06:20
tetsuroso what is it from the placement view?06:21
tetsuroalways 9 VCPUs in the inventory, right?06:21
*** slaweq has joined #openstack-nova06:21
cfriesenI'm not an expert on placement, but I think so06:21
cfriesenI'm not sure how it tracks allocated cpus though, whether it factors in cpu_allocation_ratio or not06:23
tetsurookay, if you want (8*1+1*(1/2))=9, who will exclude the host if you claim the third shared vcpu ?06:23
tetsuroI mean (1/2)*2+(1/2)>106:24
tetsurowhere is the logic of  “(1/2)*2+(1/2)>1”?06:25
*** slaweq has quit IRC06:26
cfriesenin our internal patch we allocate "dedicated" pCPUs dynamically, so we would start with 9, allocate 8 dedicated ones (reporting a usage of 8), then allocate a "shared" one (and report a usage of 8.5)06:26
tetsuroMaybe CoreFilter is extended so that it is aware of shared cpus and dedicated cpus?06:26
tetsurooops06:26
cfriesenthen at that point CoreFilter would prevent you from allocating another dedicated pCPU since that would put usage at 9.5 which is larger than 906:27
cfriesenbut you could still allocate another shared pCPU since that would put usage right at 906:28
cfriesenIf we statically determine which pCPUs are shared/dedicated, then I think CoreFilter or the placement equivalent would need to be modified.06:29
tetsuroAh, so you mean your approach doesn’t need to pre-set something like “cpu_dedicate_set” ?06:31
*** phuongnh has quit IRC06:31
itlinuxok thanks tetsuro: my nova.conf says virt_type = kvm  but my openstack hypervisor list shows qemu type.. confused06:32
cfriesenyes.  anything in NUMACell.cpuset that is not in NUMACell.pinned_cpus is available to run "shared" vCPUs06:32
tetsuroYou are taking the dynamic allocation approach that I put into the alternative.06:32
tetsuroOkay.06:32
cfriesenyes, that's why I mentioned it originally06:33
tetsurocleared06:33
tetsuroitlinux: that’s the default behavior https://bugs.launchpad.net/nova/+bug/119536106:34
openstackLaunchpad bug 1195361 in OpenStack Compute (nova) "QEMU hypervisor type returned when libvirt_type = kvm" [Low,In progress] - Assigned to Tetsuro Nakamura (tetsuro0907)06:34
cfriesenthe way we avoid needing to re-affine all the VMs when we allocate a new "dedicated" pCPU is that we put all the vCPU tasks into linux cpusets on the host.  that way we can just reaffine the linux cpuset and all the VMs are affected06:35
cfriesenthis does mean we don't use the libvirt cpusets though, so I'm not sure that can be used as a generic solution for upstream06:35
cfriesenwith statically-allocated divisions between "shared" and "dedicated" pCPUs we might want to report two separate "cpus" and "cpus_used" numbers in the hypervisor details, one for "shared" and one for "dedicated"06:36
*** slaweq has joined #openstack-nova06:37
*** sridharg has joined #openstack-nova06:38
*** Zames has quit IRC06:38
tetsuroitlinux: because KVM only works with the QEMU hypervisor, but agreed this is confusing. We will fix this at least in the Rocky release.06:39
itlinuxthanks tetsuro: I am in OOO06:39
tetsuroyou're welcome06:41
*** slaweq has quit IRC06:42
*** andreas_s has joined #openstack-nova06:42
*** acormier has joined #openstack-nova06:44
tetsurocfriesen: Ah, I see. I think I got you. So you are not re-configuring XML files but re-pinning them at the Linux layer.06:44
*** andreas_s has quit IRC06:45
cfriesenitlinux: the complication arises because both kvm and qemu end up calling the "qemu" binary as the hypervisor, but in the kvm case libvirt passes something like "-machine pc-i440fx-rhel7.3.0,accel=kvm" to tell it to use the hardware acceleration06:47
*** slaweq has joined #openstack-nova06:47
cfriesentetsuro: yes, that's correct06:47
*** acormier has quit IRC06:48
itlinuxcfriesen: how can I verify that?06:48
cfriesenfrom the compute node you can just look at the "qemu" commandline using "ps -ef" or similar06:49
tetsurocfriesen: Okay, I’ll add some words in the alternative and discuss with the cores what they think. BTW, are you coming to the PTG?06:49
itlinuxis that in the /etc/libvirt/qemu has a file I will chk that..06:50
cfriesenyes, I'll be there starting monday afternoon06:50
cfriesenitlinux: no, when you have a guest up and running, then look at the commandline for the running process06:50
itlinuxlooking now06:50
itlinuxhttp://paste.openstack.org/show/682875/06:51
itlinuxlooks correct to me but could you confirm it cfriesen:06:51
itlinuxthanks06:51
*** tpatil has joined #openstack-nova06:52
cfriesentetsuro: my concern about not reporting usage for "dedicated" pCPUs is that it'll make it tricky to debug scheduler failures if we don't have any way to see how "full" the compute nodes are06:52
*** slaweq has quit IRC06:52
cfriesenitlinux: looks good to me06:52
itlinuxthanks06:52
tetsuroI’m coming to Dublin, too. looking forward to see ya. :)06:53
cfriesenlikewise.  I should get to bed, it's 1am06:53
tetsuroOMG06:54
cfriesen:)06:54
tetsuroI got your concern, too.06:54
cfriesencool. should be a fun discussion06:54
cfriesenlater06:54
tetsurohave a good night! I gotta go now, too.06:55
*** cfriesen has quit IRC06:55
*** pcaruana has joined #openstack-nova06:56
*** andreas_s has joined #openstack-nova07:00
*** pcaruana has quit IRC07:02
*** pcaruana has joined #openstack-nova07:02
*** sahid has joined #openstack-nova07:03
*** Tom-Tom has quit IRC07:03
*** Tom-Tom has joined #openstack-nova07:03
*** Tom-Tom has quit IRC07:03
*** Tom-Tom has joined #openstack-nova07:04
*** udesale_ has joined #openstack-nova07:12
*** udesale has quit IRC07:15
*** lajoskatona has quit IRC07:16
*** josecastroleon has joined #openstack-nova07:16
*** fragatina has quit IRC07:18
*** lajoskatona has joined #openstack-nova07:19
*** slaweq has joined #openstack-nova07:23
*** josecastroleon has quit IRC07:25
*** josecastroleon has joined #openstack-nova07:25
*** slaweq has quit IRC07:27
*** Zames has joined #openstack-nova07:28
*** cz2 has quit IRC07:30
*** McNinja has quit IRC07:30
*** vdrok has quit IRC07:30
*** khappone has quit IRC07:30
*** homeski has quit IRC07:30
*** itlinux has quit IRC07:31
*** trozet has quit IRC07:31
*** dgonzalez has quit IRC07:31
*** ebbex has quit IRC07:31
*** Zames has quit IRC07:31
*** slaweq has joined #openstack-nova07:37
*** tpatil has quit IRC07:40
openstackgerritAndreas Jaeger proposed openstack/osc-placement master: DNM: Testing py35  https://review.openstack.org/54731307:40
*** slaweq has quit IRC07:42
*** slaweq has joined #openstack-nova07:42
*** edmondsw has joined #openstack-nova07:43
*** slaweq has quit IRC07:44
*** slaweq has joined #openstack-nova07:44
*** edmondsw has quit IRC07:48
*** jichen has joined #openstack-nova07:48
*** josecastroleon has quit IRC07:51
*** vladikr has quit IRC07:52
*** vladikr has joined #openstack-nova07:52
*** itlinux has joined #openstack-nova07:53
*** trozet has joined #openstack-nova07:53
*** dgonzalez has joined #openstack-nova07:53
*** ebbex has joined #openstack-nova07:53
*** Dinesh_Bhor has quit IRC07:55
*** josecastroleon has joined #openstack-nova07:56
*** cz2 has joined #openstack-nova07:57
*** McNinja has joined #openstack-nova07:57
*** vdrok has joined #openstack-nova07:57
*** khappone has joined #openstack-nova07:57
*** homeski has joined #openstack-nova07:57
*** vivsoni__ has quit IRC08:05
*** claudiub has quit IRC08:07
*** claudiub|2 has joined #openstack-nova08:08
*** vivsoni has joined #openstack-nova08:08
*** claudiub has joined #openstack-nova08:10
*** claudiub|2 has quit IRC08:11
*** cz2 has quit IRC08:11
*** McNinja has quit IRC08:11
*** vdrok has quit IRC08:11
*** khappone has quit IRC08:11
*** homeski has quit IRC08:11
*** itlinux has quit IRC08:11
*** trozet has quit IRC08:11
*** dgonzalez has quit IRC08:11
*** ebbex has quit IRC08:11
*** yangyapeng has joined #openstack-nova08:13
*** amoralej|off is now known as amoralej08:14
*** jichen_ has joined #openstack-nova08:14
*** fusmu has joined #openstack-nova08:14
*** jichen has quit IRC08:15
*** jichen_ is now known as jichen08:15
*** jpena|off is now known as jpena08:16
*** rcernin has quit IRC08:16
*** yangyapeng has quit IRC08:19
*** tesseract has joined #openstack-nova08:19
*** acormier has joined #openstack-nova08:19
*** acormier has quit IRC08:20
*** acormier has joined #openstack-nova08:20
*** dgonzalez has joined #openstack-nova08:22
*** ebbex has joined #openstack-nova08:22
*** trozet has joined #openstack-nova08:22
sahidjaypipes: i saw your interest in some libvirt related specs, can you have a look at these specs too: https://review.openstack.org/#/c/511188/08:23
sahidhttps://review.openstack.org/#/c/539605/08:23
sahidthey address important use cases related to NFV08:24
*** edmondsw has joined #openstack-nova08:28
*** phuongnh has joined #openstack-nova08:30
*** tetsuro has left #openstack-nova08:32
*** homeski has joined #openstack-nova08:33
*** vdrok has joined #openstack-nova08:33
*** McNinja has joined #openstack-nova08:33
*** cz2 has joined #openstack-nova08:33
*** khappone has joined #openstack-nova08:34
*** ttsiouts has quit IRC08:36
*** ttsiouts has joined #openstack-nova08:37
*** danpawlik has joined #openstack-nova08:38
*** tssurya has joined #openstack-nova08:40
*** damien_r has joined #openstack-nova08:44
*** slaweq_ has joined #openstack-nova08:48
*** ccamacho has joined #openstack-nova08:49
*** slaweq_ has quit IRC08:53
*** acormier has quit IRC08:53
*** udesale_ is now known as udesale08:53
*** vks1 has joined #openstack-nova08:55
bauzasgood morning Nova08:57
SpazmoticMorning bauzas08:57
SpazmoticAny cores that could pu this on their radar for this week?  https://review.openstack.org/#/c/538415/  Been sitting for about 3 weeks now.08:57
*** slaweq_ has joined #openstack-nova08:58
SpazmoticI am back from Korea btw,  good to see you all again.08:59
bauzasSpazmotic: most of the folks are preparing to travel for the PTG08:59
bauzasmaybe you could ask for reviewing after the next week ?09:00
SpazmoticYeah I know it, just hoping to get some eyes on it I guess09:00
bauzastoday, I'll look at specs09:00
*** blkart has quit IRC09:00
*** tianhui has quit IRC09:00
*** tianhui has joined #openstack-nova09:01
*** liuyulong has quit IRC09:01
*** blkart has joined #openstack-nova09:03
*** yamahata has quit IRC09:03
SpazmoticBut asking is about all that I can do for now, and I would be remiss if I didn't do it :)09:03
bauzassure, just explaining09:07
SpazmoticYeah I understand of course :D09:08
*** rcernin has joined #openstack-nova09:09
*** slaweq_ has quit IRC09:09
*** slaweq_ has joined #openstack-nova09:10
*** slunkad has quit IRC09:13
*** slunkad has joined #openstack-nova09:14
*** slaweq_ has quit IRC09:14
*** lucas-pto is now known as lucasagomes09:15
*** yangyapeng has joined #openstack-nova09:16
*** moshele has joined #openstack-nova09:18
*** yamamoto_ has joined #openstack-nova09:21
*** yamamoto has quit IRC09:24
*** slunkad has quit IRC09:27
*** slunkad has joined #openstack-nova09:28
*** moshele has quit IRC09:29
*** vivsoni has quit IRC09:31
*** rubasov has joined #openstack-nova09:33
*** liuyulong has joined #openstack-nova09:35
*** slunkad has quit IRC09:38
openstackgerritLajos Katona proposed openstack/nova master: WIP: ServerMovingTests with custom resources  https://review.openstack.org/49739909:39
*** vivsoni has joined #openstack-nova09:39
*** belmoreira has joined #openstack-nova09:42
*** mgoddard_ has joined #openstack-nova09:44
ttsioutsjohnthetubaguy: Good morning, are you around?09:44
*** liuyulong has quit IRC09:47
johnthetubaguyttsiouts: I am09:48
openstackgerritArvind Nadendla proposed openstack/nova-specs master: Support traits in Glance  https://review.openstack.org/54150709:48
* johnthetubaguy unplugs headphones so he can hear when people say hello09:49
*** slunkad has joined #openstack-nova09:50
ttsioutsjohnthetubaguy: Hello! :) we were thinking about the second call to placement09:51
johnthetubaguyttsiouts: what was the plan before, just returning what you had previously fetched from placement?09:51
*** Tom-Tom has quit IRC09:52
ttsioutsjohnthetubaguy: hmmm if the freeing of the resources is successful then we can call placement for a second time.09:52
*** Tom-Tom has joined #openstack-nova09:52
ttsioutswe chose to form the response in the service just to save time09:53
ttsioutsand not having to trigger placement again..09:53
*** derekh has joined #openstack-nova09:53
johnthetubaguyttsiouts: I think that proxy is bad really, you don't know what microversion nova wants to ask for, etc09:53
johnthetubaguyttsiouts: simpler to tell Nova if you were successful or not09:53
ttsioutsjohnthetubaguy: Great. I'll do that.09:54
johnthetubaguyttsiouts: I think it's worth doing a cheeky import of your client in there for now, or a link to the code at least, just so it's possible to follow the breadcrumbs09:55
ttsioutsjohnthetubaguy: yes, seems better09:56
ttsioutsjohnthetubaguy: what do you think is better? an api call or importing the service?09:56
johnthetubaguyttsiouts: cool, getting back to the spec updates (oils delete key)09:56
johnthetubaguyttsiouts: I quite like the API call, copying vendor data09:57
*** Tom-Tom has quit IRC09:57
openstackgerritSurya Seetharaman proposed openstack/nova-specs master: Support disabling a cell  https://review.openstack.org/54668409:57
ttsioutsjohnthetubaguy: Great!! thanks John, I'll follow that up09:57
johnthetubaguyttsiouts: traditionally any cross service communication in OpenStack is via a REST API, so good not to break that rule09:58
*** rubasov has left #openstack-nova09:58
johnthetubaguyttsiouts: being an API on the other end will making caching easier anyways09:58
*** rcernin has quit IRC09:59
ttsioutsjohnthetubaguy: cool! thanks John!10:00
johnthetubaguyno worries10:01
*** masber has quit IRC10:01
bauzasoh man, FF10:04
*** bauzas is now known as bauwser10:04
*** liuyulong has joined #openstack-nova10:06
openstackgerritArvind Nadendla proposed openstack/nova-specs master: Support traits in Glance  https://review.openstack.org/54150710:07
*** pengdake_ has quit IRC10:09
*** Guest16541 has quit IRC10:13
*** ispp has quit IRC10:15
*** vivsoni has quit IRC10:19
*** vivsoni has joined #openstack-nova10:19
*** mdnadeem has quit IRC10:22
*** sree_ has joined #openstack-nova10:24
*** sree_ is now known as Guest8258110:25
*** stakeda has quit IRC10:27
*** udesale_ has joined #openstack-nova10:28
*** Guest82581 has quit IRC10:29
*** udesale has quit IRC10:29
*** bnemec-pto has quit IRC10:32
*** mdnadeem has joined #openstack-nova10:34
*** janki has quit IRC10:35
*** namnh has quit IRC10:44
*** chyka has joined #openstack-nova10:45
openstackgerritHironori Shiina proposed openstack/nova-specs master: Ironic: Instance switchover  https://review.openstack.org/44915510:47
*** chyka has quit IRC10:50
*** tianhui has quit IRC10:58
redondo-mkHi. Is there anyone here who would be willing to give me some info on how nova-compute pulls NUMA memory info? It would save me some time over trying to figure it out from the nova codebase...11:00
*** xnox has joined #openstack-nova11:02
*** xnox has joined #openstack-nova11:02
redondo-mkI'm looking at the info I get from `cat /proc/meminfo | grep Huge` and `grep Huge /sys/devices/system/node/node*/meminfo` and I see the numbers adding up... what I don't get is the following: Total - Free should give you Used, right? But then I see different numbers for used in the compute_nodes db table (`select numa_topology from compute_nodes`)?11:02
openstackgerritHironori Shiina proposed openstack/nova master: ironic: Support resize and cold migration  https://review.openstack.org/50067711:04
*** abhishekk|afk has quit IRC11:05
*** gouthamr has joined #openstack-nova11:08
*** phuongnh has quit IRC11:09
*** pooja_jadhav has quit IRC11:09
*** annp has quit IRC11:11
*** udesale__ has joined #openstack-nova11:15
*** moshele has joined #openstack-nova11:16
*** udesale_ has quit IRC11:18
*** moshele has quit IRC11:19
*** tiendc has quit IRC11:20
*** moshele has joined #openstack-nova11:23
*** janki has joined #openstack-nova11:28
*** mdnadeem_ has joined #openstack-nova11:35
*** mdnadeem has quit IRC11:35
*** udesale__ has quit IRC11:36
*** dtantsur|afk is now known as dtantsur11:40
openstackgerritJohn Garbutt proposed openstack/nova-specs master: Spec on preemptible servers  https://review.openstack.org/43864011:41
*** liuyulong has quit IRC11:41
*** BryanS68 has joined #openstack-nova11:42
*** links has quit IRC11:42
johnthetubaguyttsiouts: got a first go at the updates to the spec done: ^11:43
*** vks1 has quit IRC11:47
*** sahid has quit IRC11:50
*** fusmu has quit IRC11:52
*** fusmu_ has joined #openstack-nova11:52
*** tbachman has quit IRC11:55
*** links has joined #openstack-nova11:55
*** moshele has quit IRC12:01
strigazijohnthetubaguy: thanks for the update12:01
*** mdnadeem_ has quit IRC12:06
*** fragatina has joined #openstack-nova12:09
*** srf has joined #openstack-nova12:15
srfHello, I want to ask: when I install devstack I hit a nova error12:16
SpazmoticFeel free to let us know the error and I'm sure someone has seen it before, or you may also want to try in #openstack for deployment help if you don't get any here, as many people are preparing to travel to Ireland12:17
*** edmondsw has quit IRC12:18
*** pengdake_ has joined #openstack-nova12:18
*** mdnadeem_ has joined #openstack-nova12:18
*** edmondsw has joined #openstack-nova12:18
*** edmondsw has quit IRC12:23
*** lucasagomes is now known as lucas-hungry12:24
srfI want to send it but I can't paste or copy or send the pic12:24
SpazmoticWell devstack is a command line installer, so maybe copy and paste the error from the terminal into a github gist :)12:25
openstackgerritElod Illes proposed openstack/nova master: Functional test: cold migrate to compute down  https://review.openstack.org/49628012:27
*** bhagyashris has quit IRC12:29
srfOkay, I'll try.12:29
*** links has quit IRC12:30
*** sahid has joined #openstack-nova12:34
*** masber has joined #openstack-nova12:38
*** links has joined #openstack-nova12:43
*** ratailor has quit IRC12:46
*** tbachman has joined #openstack-nova12:54
*** jpena is now known as jpena|lunch12:55
jaypipesbauwser: is sahid going to PTG?12:56
*** jaypipes is now known as leakypipes12:56
*** tbachman has quit IRC12:59
*** tbachman has joined #openstack-nova13:00
bauwserleakypipes: no, mostly all our team but him :)13:00
leakypipesbauwser: shame :( there's a bunch of specs I wanted to discuss with him.13:03
*** gouthamr has quit IRC13:04
*** liverpooler has joined #openstack-nova13:04
*** gouthamr has joined #openstack-nova13:04
leakypipesdansmith: still around?13:05
leakypipesdansmith: question for you on that IO thread vs vCPU thread thing...13:05
sean-k-mooneyleakypipes: e.g. qemu emulator treads?13:06
sean-k-mooneythey are the ones that do io right?13:06
*** fusmu_ has quit IRC13:06
leakypipessean-k-mooney: are you making fun of me? :)13:07
sean-k-mooneyhaha no, it genuinely works differently depending on kernel vs dpdk vhost, or kvm vs qemu with the tci backend, so I never keep this straight in my head13:07
leakypipessean-k-mooney: no, AFAIK, the emulator threads are the QEMU control process threads -- they send communication events to the guest I think?13:08
*** fusmu has joined #openstack-nova13:08
leakypipessean-k-mooney: the I/O threads are for doing device read/write operations on behalf of the guest. And the vCPU threads are for the guest's userspace code to run.13:08
leakypipessean-k-mooney: but I had a question about whether libvirt allows each of those three types of threads to be pinned to a specific pCPU or whether only emulator and vCPU threads were possible to pin...13:09
johnthetubaguyyou know I assumed it pinned them all together, but that is a good question13:10
*** edmondsw has joined #openstack-nova13:10
*** srf has quit IRC13:10
leakypipessean-k-mooney: and further to that, can you pin the emulator thread(s) to the same pCPU as the I/O threads...13:10
sahidleakypipes: you can pin i/o threads with emulator threads on the same pCPU, but it can have bad effects and increase I/O latency13:12
leakypipessahid: ok, cool. thanks :)13:12
leakypipessahid: the emulator thread doesn't get a whole lot of control process "traffic" though, right?13:13
leakypipessahid: pretty minimal compared to IO or vCPU threads, yeah?13:13
leakypipessahid: I mean, I understand for RT environments wanting to have guaranteed execution and all that, I'm just curious about the internals of it all, nothing more.13:14
sahidhonestly i don't know exactly; it's just that we know the emulator threads can lock the full CPU, so we want to avoid having vCPUs running sensitive apps13:15
sahidi would say we want the same for i/o threads, so it's probably better to avoid pinning them together13:16
leakypipesgot it.13:16
leakypipesthanks :)13:16
*** srf has joined #openstack-nova13:18
srfI'm trying to install devstack on my computer but it always errors in nova http://paste.openstack.org/show/682946/   can anybody help me with what's wrong?13:21
*** lucas-hungry is now known as lucasagomes13:21
*** Tom-Tom has joined #openstack-nova13:21
*** udesale has joined #openstack-nova13:22
hshiinaleakypipes: hi, i have a question about your note at https://github.com/openstack/nova/blob/master/nova/scheduler/client/report.py#L85513:24
hshiinaInventoryInUse exception is periodically logged with stacktrace in ironic job: http://logs.openstack.org/19/546919/2/check/ironic-tempest-dsvm-ipa-partition-pxe_ipmitool-tinyipa-python3/2737ab0/logs/screen-n-cpu.txt.gz?level=ERROR13:25
hshiinais there anything to fix here?13:25
*** acormier has joined #openstack-nova13:26
*** acormier has quit IRC13:29
*** acormier has joined #openstack-nova13:29
leakypipeshshiina: one sec, reading :)13:33
*** lajoskatona has left #openstack-nova13:34
leakypipeshshiina: no, that looks to be a different scenario than the one described in that comment. lemme dig some more into the log there.13:35
*** jichen has quit IRC13:36
hshiinaleakypipes: sure, thanks.13:37
leakypipeshshiina: if I had to guess, what is happening in your case is the following:13:37
leakypipes1) nova boots an Ironic flavor13:37
leakypipes2) nova-scheduler finds an available Ironic compute node that has CUSTOM_BAREMETAL inventory of 1, used of 0.13:38
leakypipes3) nova-compute calls the Ironic virt driver to launch the instance13:38
leakypipes4) Ironic virt driver launches the instance. All good.13:39
leakypipes5) At some point, the Ironic node goes into an "unavailable" status13:39
leakypipes6) The resource tracker in nova-compute runs update_available_resource()13:39
leakypipes7) the Ironic virt driver returns an inventory total of 0 CUSTOM_BAREMETAL for the unavailable node13:39
leakypipes8) The nova-compute resource tracker dutifully tries to remove the inventory for that node13:40
leakypipes9) the Placement API returns an InventoryInUse exception because there is an instance running on that Ironic node13:40
leakypipeshshiina: so what I would check in your job logs is whether that Ironic node that the instance ended up on was set to unavailable at some point between 11:17:05 and 11:18:1313:41
leakypipeshshiina: from what I can tell, the resource tracker and placement API are doing "the right thing" and preventing inventory from being deleted when there is an active instance on that Ironic node.13:41
hshiinaleakypipes: thank you. i will dig it later.13:43
hshiinaleakypipes: i am interested in. but, i have to leave office shortly.13:43
leakypipesnp13:43
*** pchavva has joined #openstack-nova13:48
*** edleafe is now known as figleaf13:50
*** gouthamr has quit IRC13:50
srfI'm trying to install devstack on my computer but it always errors in nova http://paste.openstack.org/show/682946/   can anybody help me?13:50
*** acormier has quit IRC13:52
Spazmoticsrf:  Quick google revealed this.. did you try to search around a bit?  https://bugs.launchpad.net/devstack/+bug/172626013:54
openstackLaunchpad bug 1726260 in devstack "devstack fails while running ./stack.sh in master/queens in ubuntu 16.04 while starting n-cpu" [Undecided,New]13:54
*** jpena|lunch is now known as jpena13:54
SpazmoticGive that a try :)13:54
*** lyan has joined #openstack-nova13:54
*** vks1 has joined #openstack-nova13:55
*** links has quit IRC13:56
*** hshiina is now known as hshiina|afk13:57
srfSpazmotic : I've tried adding ENABLED_SERVICES=placement-api but it still errors13:58
*** pengdake_ has quit IRC13:59
*** moshele has joined #openstack-nova14:02
*** vks1 has quit IRC14:04
*** psachin has quit IRC14:04
*** hshiina|afk has quit IRC14:04
*** dklyle has quit IRC14:04
SpazmoticNot too sure then, but i'm sure someone may have seen it before.. also again you may wish to ask in #openstack as that's more designed for deployment support14:05
*** imacdonn has quit IRC14:06
*** imacdonn has joined #openstack-nova14:07
*** eharney has joined #openstack-nova14:08
*** vks1 has joined #openstack-nova14:09
*** fragatina has quit IRC14:09
*** Tom-Tom has quit IRC14:11
*** Tom-Tom has joined #openstack-nova14:13
*** baoli has joined #openstack-nova14:14
*** moshele has quit IRC14:15
*** saphi_ has joined #openstack-nova14:18
*** esberglu has joined #openstack-nova14:21
*** lyan has quit IRC14:21
*** moshele has joined #openstack-nova14:23
*** Guest70 has joined #openstack-nova14:24
Guest70hello14:24
*** lyan has joined #openstack-nova14:25
*** Guest70 has left #openstack-nova14:26
*** awaugama has joined #openstack-nova14:28
*** udesale_ has joined #openstack-nova14:28
*** udesale has quit IRC14:29
*** Tom-Tom has quit IRC14:31
*** fusmu has quit IRC14:33
*** fusmu_ has joined #openstack-nova14:33
*** moshele has quit IRC14:35
*** acormier has joined #openstack-nova14:35
*** acormier has quit IRC14:37
*** acormier has joined #openstack-nova14:37
*** lbragstad has joined #openstack-nova14:37
*** yamahata has joined #openstack-nova14:41
*** amoralej is now known as amoralej|lunch14:42
openstackgerritEric Berglund proposed openstack/nova master: WIP: PowerVM Driver: Network interface attach/detach  https://review.openstack.org/54681314:44
openstackgerritTheodoros Tsioutsias proposed openstack/nova master: WIP: Add the reaper entry point  https://review.openstack.org/54745014:45
*** cfriesen has joined #openstack-nova14:51
*** Zames has joined #openstack-nova14:51
*** acormier has quit IRC14:52
bauwserif folks have specs, I'm hungry to eat them14:53
bauwserlike, you're going to the PTG and would like to discuss about your spec, I'm your man14:53
*** gbarros has quit IRC14:53
figleafbauwser: after you eat them, you know what they turn into :)14:54
*** r-daneel has joined #openstack-nova14:54
bauwserif they are made of cheese, I'm +214:54
*** Zames has quit IRC14:55
bauwsereven if https://www.thelocal.fr/20180223/french-cheese-wars-are-the-days-of-the-real-normandy-camembert-numbered14:55
figleafguess it will be like taking a microbrew beer and then mass-producing it14:57
*** tidwellr has joined #openstack-nova14:59
*** mriedem has joined #openstack-nova14:59
*** edmondsw has quit IRC15:00
*** edmondsw has joined #openstack-nova15:00
dansmithleakypipes: I'm really not the right person to answer detailed questions about that15:04
*** edmondsw has quit IRC15:05
*** masber has quit IRC15:05
*** amodi has joined #openstack-nova15:07
*** gbarros has joined #openstack-nova15:08
*** gbarros has quit IRC15:14
*** gbarros has joined #openstack-nova15:15
bauwserdansmith: I don't know your current TZ, but it's Friday here :)15:15
dansmithbauwser: I don't know my current TZ either, but I'm pretty sure it's not friday in my body15:15
bauwserstill on the air ?15:16
dansmithno, you caught me, I know my current TZ15:16
dansmithbut my body doesn't15:16
bauwserheh15:16
*** mdnadeem_ has quit IRC15:17
mriedemwhy do we have https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/db/sqlalchemy/api_models.py#L409 ?15:19
mriedemseems silly to have an entire table to hold one string15:19
mriedemmaybe it goes back to the idea that server groups would have >1 policy15:19
mriedembut the API doesn't allow that15:19
mriedemhttps://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/api/openstack/compute/schemas/server_groups.py#L29-L3915:20
*** srf has quit IRC15:20
*** david-lyle has joined #openstack-nova15:21
dansmithmriedem: yeah it's 1:N I think15:21
dansmithnot that we can do that, I don't think15:21
mriedemthe api doesn't allow it15:22
mriedembut that's the legacy reason i guess,15:22
mriedemthe schema makes you pass a single-item list for the policy15:22
dansmithbut even still, none of the stuff we do to honor the policy considers multiple options right?15:22
mriedemand it's called "policies" but doesn't allow >115:22
mriedemcorrect15:22
mriedemit's just weird15:22
leakypipescfriesen: am I gonna have to get out the beating stick for you? :P15:23
mriedemdansmith: context is https://review.openstack.org/#/c/546925 which is tied to an old blueprint from sgordon15:23
leakypipescfriesen: well, I suppose we could rent a pair of those fake Sumo wrestler costumes and go at it.15:23
dansmithmriedem: ah yes, affrinity15:25
mriedemi'm currently in the process of ripping up the spec15:25
dansmithmriedem: ugh, that max_number_per_host thing is just silly.. that's never going to do something useful, IMHO15:26
*** BryanS68 has quit IRC15:27
mriedemi can see the use,15:27
mriedemit makes the soft policy less soft15:28
mriedemjello-like if you will15:28
dansmithwhat's the point in that?15:28
mriedemthere is some nfv use case in the spec15:28
*** amoralej|lunch is now known as amoralej15:28
dansmithI think from a modeling point of view, it'd be hard to reason about that unless you knew "I'm going to start ten of the exact same thing for throughput and I want those spread across three nodes"15:29
mriedemanti-affinity to a point, so they don't need all VMs in the group on separate hosts, but they can only tolerate up to a limit of VMs on the same host15:29
*** dave-mccowan has joined #openstack-nova15:29
mriedemotherwise today if you have 10 VMs in a soft-anti-affinity group, and only 2 available hosts, you could get like 1 on 1 host and 9 on the other15:30
dansmithright, so a more useful thing would be to say "instances in this group should spread across N hosts before considering doubling up"15:30
dansmiththat's an easier to reason-about model from the outside I think15:30
dansmithand it doesn't depend on the size of the group15:30
mriedemi can't say if that's easier to reason about from the outside or not,15:31
dansmithit focuses on the amount of redundancy you need regardless15:31
mriedemsince i've had this in my head the way it's proposed15:31
mriedemdansmith: mention it in the spec?15:31
mriedemeither way i need to bring it up at the ptg b/c of some questions about how to model the thing in the api and db15:31
dansmithI really hate the idea of adding extra_specs to anything, but especially groups15:32
mriedemi said that on the spec15:32
*** gbarros has quit IRC15:32
*** gbarros has joined #openstack-nova15:33
dansmithI guess their use-case actually specifically requires this as the model15:34
*** BryanS68 has joined #openstack-nova15:36
mriedemis sgordon still a man about town?15:37
mriedemisn't he in the UK/Ireland area?15:37
sgordonmriedem, nah canadia15:37
mriedemdrats15:37
mriedemsgordon: can you take a look at that spec regardless of your physical location?15:37
sgordonmriedem, yeah i just got the email notification and opened it up15:38
mriedemgibi might also have some input here since ericsson added the soft affinity stuff15:38
*** jpena is now known as jpena|brb15:39
sgordonmriedem, i have to admit i havent had anyone clamouring for this in a fair while, but that may just mean folks are working around it using host aggregates and higher level orchestration15:39
*** kenperkins has joined #openstack-nova15:41
*** fusmu_ has quit IRC15:42
*** BryanS68 has quit IRC15:42
*** sahid has quit IRC15:43
bauwsermriedem: dansmith: we have a clear distinction between filters which hard-check possibilities and weighers which soft-check15:43
bauwserhaving some middleground is something more than just for affinity15:44
bauwsertbc, I'm -2 on the spexc15:44
bauwsermriedem: dansmith: but johnthetubaguy had an idea to deprecate filters/weighers and have a different semantic for saying "here I want to hard-stop or fail-safe"15:45
* bauwser will try to find the backlog spec for that before -2ing the change15:45
johnthetubaguybauwser: got a pointer to the new one15:46
*** sahid has joined #openstack-nova15:46
johnthetubaguymight be better than my previous re-write the world plan15:46
dansmithbauwser: this is harder than soft15:46
bauwsersoft is weighting15:46
bauwserhard is filtering15:46
bauwserhere they want to have a soft-but-hard15:46
dansmithbauwser: right, this spec specifies something hard, which is filtering,15:46
bauwseranyway15:47
dansmithbut it weakens the normally hard anti-affinity for some value of allowed overlap15:47
dansmithI really don't like it15:47
bauwserit's unclear and I don't think modifying filters or weighers is a good thing15:47
bauwseryeah exactly that15:47
bauwserinstead of hijacking what we currently have, we should discuss on a better approach15:47
bauwserwhich is what was johnthetubaguy exactly trying to do15:47
bauwserjohnthetubaguy: what is the new proposal, tbh I haven't gone thru all specs yet15:48
mriedemi'm not following you15:49
mriedemyou can do hard anti affinity and fail if you can't place the VMs on separate hosts,15:49
mriedemyou can do soft anti affinity and get more VMs on a host than you'd really like15:49
mriedemthis is saying, i want soft but i can only tolerate up to a limit15:49
*** Spazmotic has quit IRC15:49
mriedemor, soft is ok but to a limit15:49
dansmithmriedem: I see it as a variant of hard not soft15:50
mriedemsure, it's in between15:50
dansmithmriedem: it's the same filter has hard but instead of ==0, it's <=$max15:50
dansmithyou can't do this with a weigher is my point15:50
johnthetubaguydansmith: +115:50
dansmithand that's what soft anti-affinity is15:50
mriedema weigher is not being proposed15:50
dansmithI know15:50
*** dklyle has joined #openstack-nova15:50
bauwsermriedem: soft anti-affinity should never hard-stop15:50
bauwserhard anti-affinity is something we already have15:51
mriedemumm15:51
dansmithmriedem: I'm saying this is proposing changing our hard filter from ==0 to <$max15:51
mriedemso flaccid anti affinity as a new policy?15:51
bauwserwhat the proposer asks is a way to have a mechanism that would describe a 'max" limit15:51
mriedemyes i understand15:51
bauwserdansmith: and I don't like that15:51
bauwserbecause it would confuse a lot of people15:52
dansmithbauwser: neither do I, I think that's a confusing model15:52
dansmithI would rather a policy of min-redundancy, with a value of the minimum15:52
mriedemso your issue is creating a group with the soft policy but then having this attribute which makes it more hard than soft,15:52
bauwsermy point is, instead of trying to hack our current model, rather try to describe a better model15:52
mriedemso it's a conception issue15:52
bauwserexactly15:52
dansmithmriedem: but it wouldn't be that15:52
mriedemso a new policy15:52
dansmithmriedem: it would be hard with a grace limit15:52
bauwsersometimes people want to hard-fail on a filter, but sometimes they prefer to soft-fail15:53
dansmithbauwser: I'm not sure what that has to do here,15:53
dansmithbecause this is still hard fail on this filter, it's just at a nonzero level of duplication15:53
*** Spazmotic has joined #openstack-nova15:53
dansmithif you go over that limit, which is currently zero, you hard fail all the time15:53
mriedemok, so create a server group with the anti-affinity policy (hard) with some limit on overlap15:53
dansmithwith what is proposed here15:53
*** david-lyle has quit IRC15:53
*** Spazmotic is now known as Guest3340515:53
dansmithmriedem: that's what is being proposed here15:54
mriedemno it's not, it's the opposite15:54
mriedemthe spec is talking about the soft policy15:54
mriedemyou're saying soft policy is bad optics,15:54
mriedemso flip it to be hard policy,15:54
mriedemwith the overlap limit15:54
mriedemeither way we get to the same thing15:54
dansmithmriedem: but the spec is wrong in how things work15:54
*** slaweq has quit IRC15:54
leakypipesdid someone say flaccid affinity?15:54
* leakypipes jumps in15:54
mriedemleakypipes: yes, word of the day15:54
dansmithmriedem: they think they're adding a limit to soft, but you can't do that15:54
dansmithyou can increase the limit on hard though15:55
mriedemdansmith: b/c soft is enforced via weigher15:55
mriedemsure,15:55
dansmithmriedem: exactly15:55
mriedemso yeah let's make that point in the spec15:55
*** zhaochao has quit IRC15:55
bauwserI'm just commenting it15:55
*** slaweq has joined #openstack-nova15:55
bauwserdansmith: are you proposing to have a max limit defined on the filter ? I don't like that15:55
* dansmith is writing this on the spec15:56
dansmithbauwser: no, I'm saying that is what they are proposing here15:56
jroll(disclaimer, haven't read the spec) so, use case: I want to be able to say "I want 20 instances spread across at least 5 hosts, but preferably across more" - I think that's what is proposed here (based on this conversation), is that correct?15:56
dansmiththey just don't know it15:56
dansmithjroll: no, that's what I was saying though15:56
dansmiththat's what I would prefer I mean15:56
jrolldansmith: and this is what the spec thinks it is proposing, but it's really not? :)15:56
bauwserjroll: that's exactly what weighers aim to achieve15:56
dansmithjroll: no, that's not what the spec thinks its proposing15:57
bauwserfor that usecase15:57
mriedemjroll: dansmith: well one thing is when you create the group, you don't say how many instances are going to be in it15:57
bauwserie. ponder your list of hosts based on some criterias that are given by weighers15:57
dansmithjroll: it's proposing that you know how big the group will ever be, and set the allowed overlap per host to some number so you get the desired amount of minimum redundancy15:57
mriedemso "I want 20 instances spread across at least 5 hosts" is a bit chicken and egg to me15:57
dansmithwhich is much harder to reason about15:57
*** slaweq_ has joined #openstack-nova15:57
*** slaweq_ has quit IRC15:57
jrolldansmith: ah, ok, will read again15:57
mriedembut i guess if you already know when you create the group how many VMs are going to be in it, then that works15:57
dansmithmriedem: that's what this spec requires, is you to know how many instances will be in there15:58
*** slaweq_ has joined #openstack-nova15:58
dansmithmriedem: but, if you say "I want minimum redundancy of three" then you don't get any overlap until you boot the fourth and now you can start to double up because you've achieved that minimum level15:58
*** slaweq_ has quit IRC15:59
*** slaweq_ has joined #openstack-nova16:00
leakypipesdansmith: I think you saw the use case section begin with "As a NFV user, ..." and immediately threw up in your mouth ;)16:00
*** slaweq_ has quit IRC16:00
bauwserdansmith: it's very different16:00
bauwserdansmith: but I agree, describing for anti-affinity how many groups you want is somehow reasonable16:00
jrollleakypipes: there's vomit everywhere in here from that16:00
leakypipes:)16:00
*** slaweq_ has joined #openstack-nova16:00
dansmithbauwser: how many groups? you lost me there :)16:01
mriedemi think he means group of VMs per host in the group?16:01
bauwserman16:01
bauwsermriedem: bonus point16:01
dansmithoh min-redundancy you mean?16:01
jrolldansmith: I read through it, I agree with what you're suggesting there, thanks16:01
dansmithI think that's how most people will want to define a thing like this16:01
dansmithand jroll does, so.. end of story16:02
* jroll likes the thought of "best effort" affinity that's halfway between hard and soft16:02
bauwsercall it failure zone16:02
* leakypipes thinks this reminds him of something.... OH THAT'S RIGHT... HEAT.16:02
bauwserbut if we call it failure zone, then leakypipes would kill me16:02
bauwserand I'm scared16:02
leakypipesbauwser: indeed I would.16:02
* leakypipes readies giant block of cheese to throw at bauwser 16:03
bauwserwhy people are doing instance groups besides the idea that they expect groups of failure domains16:03
bauwser?16:03
bauwserleakypipes: you don't imagine my mood now... restating https://www.thelocal.fr/20180223/french-cheese-wars-are-the-days-of-the-real-normandy-camembert-numbered16:03
leakypipesbauwser: because... N. F. V.16:03
leakypipeshahaha16:04
* bauwser runs16:04
leakypipesthat's awesome.16:04
leakypipesonly in La France16:04
*** jaosorior has quit IRC16:05
*** slaweq_ has quit IRC16:05
leakypipesmriedem: did you say that https://review.openstack.org/#/c/546925/2/specs/rocky/approved/allow-specifying-limit-for-affrinity-group.rst was somehow related to something sgordon put together?16:08
mriedemsee the bp16:09
mriedemhttps://blueprints.launchpad.net/nova/+spec/complex-soft-anti-affinity-policies16:09
leakypipesoh... 11335516:09
leakypipesok, not 11335516:10
mriedemhttps://review.openstack.org/#/c/546925/16:10
mriedemhttps://review.openstack.org/#/c/224325/ is the old backlog spec16:10
leakypipesmriedem: yeah, was confused since the spec is called something different than the blueprint.16:10
*** mgoddard_ has quit IRC16:11
leakypipesmriedem: ftr, k8s does this very well.16:13
leakypipesmriedem: because, you know, it's an *orchestrator*.16:14
leakypipesmriedem: would be much better, IMHO, to recommend to your group to install k8s on the nova compute hosts in the NFVi and just have at it.16:14
dansmithleakypipes: do we provide heat enough control to do this externally?16:14
jrollk8s does this well because the orchestrator and the scheduler can work together, imo16:15
leakypipesdansmith: initially, yes. but k8s will do it on an ongoing basis -- i.e. if the state of the pods on a node changes, k8s will re-evaluate the tolerances and scheduling conditions and place pods on different nodes accordingly.16:15
leakypipesjroll: actually, they don't.16:16
dansmithright, sure16:16
dansmithI guess I'm not sure how heat would do this with our api16:16
*** yamamoto_ has quit IRC16:18
leakypipesjroll: what actually happens in k8s is the descheduler (an orchestrator piece, not part of the scheduler) will evict pods from nodes that no longer meet scheduling constraints and the scheduler will try to re-place those evicted pods somewhere else (or the same node if conditions happen to be right at that specific moment)16:19
leakypipesjroll: the descheduler was originally called the rescheduler, but there's actually a rescheduler thing in k8s that does a different type of eviction, and that rescheduler piece is being deprecated.16:19
jrollleakypipes: sure, but the scheduler is aware of the orchestrator's requirements, right? or does it randomly choose and the descheduler continues to evict until things are right?16:20
leakypipesjroll: the reason the k8s works well is specifically because there is no coupling between the orchestrator and the resource tracker/scheduler.16:20
leakypipesjroll: scheduler is not aware of descheduler, no.16:21
*** danpawlik has quit IRC16:21
jrollleakypipes: I probably need to do some reading to fully understand16:21
leakypipesjroll: descheduler is basically just a daemon that runs a periodic loop that checks to see if the constraints that were met on initial placement are still met. and if not, simply evicts the pod. that puts the pod into a "ready for scheduling" state and the scheduler, on its next loop, will schedule that pod again.16:22
*** yamamoto has joined #openstack-nova16:22
openstackgerritClaudiu Belu proposed openstack/nova master: hyper-v: autospec classes before they are instantiated  https://review.openstack.org/34221116:23
jrollcurious how heat would accomplish this affinity model, though. does heat know about compute nodes, which instances are on them, the affinity characteristics of said compute nodes (to know if two instances are actually in a separate power domain, for example?)16:23
leakypipesincidentally, this is precisely how I proposed that spot instances be handled in Nova instead of a highly-coupled-to-the-scheduler architecture that the guys at Inria proposed.16:23
*** vks1 has quit IRC16:23
*** jpena|brb is now known as jpena16:23
jrollleakypipes: is the scheduler aware of the constraints, like affinity, that need to be met? (and therefore that the descheduler is trying to meet?)16:23
leakypipesjroll: sure. the constraints are simply part of the pod scheduling policy that is associated with the pod when it's originally created.16:25
bauwserokay, so let's pretend we have "affinity knobs" :p16:25
bauwserleakypipes: I guess you know that some other openstack projects are trying to achieve the descheduler thing in OpenStack right?16:26
bauwserthe reconciliation of resource usage is, IIUC, what the k8s descheduler is trying to achieve16:26
jrollleakypipes: cool, so does nova expose sufficient data (instances, instance groups, affinity policies, compute node information, etc) for an external thing like heat to accomplish things like this spec proposes?16:26
leakypipesjroll: yes.16:26
leakypipesjroll: for the most part :)16:27
jrollheh16:27
leakypipesjroll: but yes.16:27
* jroll is suspect about that, for a non-admin user16:27
* bauwser is about to call it a day16:27
leakypipesjroll: heat is admin user, no?16:27
bauwserleakypipes: heat can have trusted users16:27
bauwserI mean trusts16:27
jrollleakypipes: I've no clue, honestly16:27
bauwserit's user delegation16:28
bauwserbut anyway16:28
leakypipesbauwser: and trusts don't work with federated keystone, so...16:28
bauwserit's 5pm here and I need to pack my stuff :)16:28
leakypipes:)16:28
dansmithleakypipes: so how would heat go about doing this?16:28
bauwserso, see you folks in a pub or somewhere else16:28
bauwserlike in a stadium16:28
dansmithleakypipes: it could boot the first three instances in an anti-affinity group and then after that, the fourth and later go anywhere?16:28
jrollleakypipes: but, this means that an openstack user needs to rely on their cloud provider to provide heat to be able to do this, rather than being able to write a little program to diy it16:28
*** bauwser is now known as bauzas16:28
jrollwhich might be fine but kinda sucks16:29
leakypipesdansmith: are you asking about heat or k8s?16:29
dansmithjroll: does it? I kinda expected people would be able to use heat themselves with just their creds16:29
dansmithleakypipes: heat and nova16:29
leakypipesjroll: "a little program"? :)16:29
jroll:P16:29
jrolldansmith: not if you need admin rights to nova to get compute node information16:29
dansmithjroll: you don't need that, and even if you had it I don't think it would help16:30
leakypipesdansmith: no.. if heat specifies the same instance group for all 5 instances, Nova guarantees they will go on different compute hosts...16:30
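For reference, the guarantee leakypipes is pointing at is what server groups with the hard anti-affinity policy give you today. A minimal python-novaclient sketch, assuming `nova` is an already-authenticated Client and that IMAGE_ID, FLAVOR_ID and NETWORK_ID are placeholders for real IDs:

    # Pre-microversion-2.64 form of the call; newer microversions use policy=/rules=.
    group = nova.server_groups.create(name="ha-group",
                                      policies=["anti-affinity"])

    for i in range(5):
        nova.servers.create(name="vm-%d" % i,
                            image=IMAGE_ID,
                            flavor=FLAVOR_ID,
                            nics=[{"net-id": NETWORK_ID}],
                            # the group hint is what makes the scheduler spread
                            # the members across different compute hosts
                            scheduler_hints={"group": group.id})

With the hard policy, a boot that cannot be spread (fewer eligible hosts than members) fails with NoValidHost rather than landing two members on one host.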
dansmithjroll: users can tell compute nodes apart, just not the actual hostnames16:30
leakypipesdansmith: but you already knew that, so I suspect I am missing your point.16:30
dansmithleakypipes: right, but how would heat implement this "no more than three instances per host" requirement with nova?16:31
jrolldansmith: fair point, I guess I'm thinking ahead to affinity around power/network domains, sorry16:31
dansmithjroll: yeah, but unless you let heat create aggregates, I still don't think being admin is helpful16:31
dansmithjroll: and each tenant creating aggregates via heat would be kinda crazy I think16:31
dansmithmaybe just using them with custom flavors? I dunno16:32
jrolldansmith: do regular users have access to aggregate info?16:32
dansmithI'm legit asking because I don't know much about heat16:32
leakypipesdansmith: how would it implement it in a transactional/atomic way? it couldn't. but it could certainly call a compute API to get the compute node -> instance association for all instances in a group and do the calculation itself, no?16:32
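The read side of what leakypipes suggests is straightforward; a rough sketch, again assuming an authenticated novaclient `nova` handle, a placeholder `group_id`, and credentials that can see OS-EXT-SRV-ATTR:host (by default that is admin-only, which is exactly jroll's concern above):

    from collections import defaultdict

    group = nova.server_groups.get(group_id)

    per_host = defaultdict(list)
    for server_id in group.members:
        server = nova.servers.get(server_id)
        host = getattr(server, "OS-EXT-SRV-ATTR:host", None)
        per_host[host].append(server_id)

    # e.g. flag hosts carrying more than 3 members of the group
    overloaded = {h: ids for h, ids in per_host.items() if len(ids) > 3}

As dansmith points out next, reading the association is the easy part; an external tool still has no way to control where a replacement instance lands.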
dansmithif this is doable outside of nova with heat, then I'm super -2 on the spec, I just don't know that it's possible16:32
jrollI don't either, so I hope you aren't asking me :)16:32
dansmithjroll: no, but that's what I'm saying, I don't know that you'd give your heat admin user ability to do much with them either16:32
dansmithleakypipes: but it can't control placement of those things16:33
*** openstackgerrit has quit IRC16:33
leakypipesdansmith: hmm, true nuf16:34
leakypipesdansmith: yeah, you're right. it would need to be a part of the request spec sent to placement/scheduler :(16:34
jrolldansmith: I would think you'd need enough info to be able to map instances -> aggregates, decide if the spread is sufficient (>3 power domains or whatever), and I was going to say whatever jay is thinking from there :)16:34
dansmiththat's why I'm saying I expect this is a primitive we need to provide so heat can provide the automatic kill-one-spawn-one rebalancing sort of behavior16:34
dansmithjroll: well, you still can't control where a new instance lands enough to effect this policy I think16:35
jrolljust boot and delete instances until it looks right :P16:35
dansmithjroll: monte carlo scheduling? I'm in16:35
jrolldansmith: right, I was hoping jay was going to solve that problem as I typed16:35
*** claudiub|2 has joined #openstack-nova16:35
jroll:P16:35
leakypipesdansmith: there will be a point (already reached?) where this stuff is just too expensive to try and calculate for each scheduling request and makes the interface between scheduler and placement overly cumbersome.16:36
dansmithleakypipes: I don't think this has anything to do with what scheduler asks of placement16:37
dansmithleakypipes: what is described in the spec is a simple change to the current hard affinity filter16:37
dansmithand what I was proposing would be similar but with some different math16:37
dansmiththe biggest change is "oh now we're going to have extra specs on instance groups"16:37
dansmithwhich I think is a very big step we should not take lightly16:38
dansmith(or maybe at all)16:38
*** claudiub has quit IRC16:38
dansmithespecially my point about changing that value on an existing group, because people will expect that if they change that from 3 to 2, existing instances get moved around16:38
dansmithand that ain't hap'nan16:38
*** gbarros has quit IRC16:42
leakypipesdansmith, jroll: if you want some mind-bending reading from k8s on this subject: https://github.com/kubernetes/kubernetes/pull/18265/files16:48
leakypipesdansmith, jroll: note that k8s affinity stuff is still in beta...16:48
*** slaweq_ has joined #openstack-nova16:48
jrolloh my16:48
* jroll puts tab away for later16:48
mriedemdansmith: i think the 'extra spec' term in his spec is a mistake, or just a WIP thing,16:49
mriedemi don't think we should add a random bag of extra specs to instance groups either16:49
mriedemdansmith: i also said updating the max-per-host value on an existing group isn't going to happen16:50
mriedemfor the same reasons we don't allow changing policy on an existing group, or changing membership16:50
leakypipeswhat about an awkward bag of flaccid specs?16:51
mriedemif it's as simple as putting an attribute on a group in a hard affinity policy that says, instead of allowing no more than 1 vm from this group per host, you can allow up to 3 (or whatever), then that seems to fit the bill16:51
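The check mriedem sketches in words is a small generalization of the existing hard anti-affinity math. A hypothetical, self-contained version (not the actual ServerGroupAntiAffinityFilter code, and the attribute name is made up):

    def host_passes(candidate_host, group_member_hosts, max_server_per_host=1):
        """Hard anti-affinity with a per-host cap.

        group_member_hosts lists the host of every existing group member.
        max_server_per_host=1 reproduces today's behaviour; 3 allows up to
        three members before the host is rejected.
        """
        already_here = sum(1 for h in group_member_hosts if h == candidate_host)
        return already_here < max_server_per_host

    # with a cap of 3: a host holding 2 members still passes, one holding 3 does not
    assert host_passes("cmp1", ["cmp1", "cmp1", "cmp2"], max_server_per_host=3)
    assert not host_passes("cmp1", ["cmp1"] * 3, max_server_per_host=3)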
mriedemleakypipes: they have pills for that16:51
leakypipeslol16:51
mriedemlet me get you frank thomas' number16:51
*** sahid has quit IRC16:51
mriedemactually, jimmy johnson sells them too and he already lives in FL16:52
*** udesale__ has joined #openstack-nova16:52
*** udesale_ has quit IRC16:52
mriedemsorry i got distracted, was hastily throwing together a PBC...16:52
cfriesendansmith: we originally had metadata in instance groups, but it got pulled out due to not really having any users16:52
mriedemcfriesen: yeah that was linked into the spec and was something i didn't even know existed16:52
mriedemmetadata in general makes our lives terrible16:53
mriedemlike aggregate meta, and flavor extra specs16:53
mriedemi realize its use though16:53
cfriesendansmith: it'd be possible to prohibit changing the value on an existing group to something that would result in the current spread being invalid....alternately you could just allow that and document that it'll only affect the *next* scheduling decision.16:54
*** gbarros has joined #openstack-nova16:54
mriedemcfriesen: to determine if the new requested value would invalidate things would mean running through the scheduler all over again16:54
mriedemand you could still get it wrong16:54
cfriesenmriedem: there has been that recurring spec for allowing instances to be added/subtracted from a group16:54
mriedemwhich is why we have the late affinity check on the compute hosts16:54
mriedemcfriesen: i remember powervc pushing it back in kilo but that's been abandoned for a long time16:55
cfriesenarguably we've got races all over with instance group affinity, so what's one more. :)16:55
dansmithcfriesen: yeah then the group is in violation of policy with no way of getting it out, unless you do your own shuffling16:55
*** slaweq_ has quit IRC16:56
dansmithmriedem: and yeah I'd rather a real attribute, but I also think we're just going to end up with unlimited attributes for other things like this16:56
dansmithmriedem: so I dunno.. it's a can of worms16:56
*** yamamoto has quit IRC16:57
mriedemidk, if this is the first time something like this has come up in the last what 5 years?16:57
mriedemit would get annoying if, over time, we had a bunch of attributes on groups that only applied to specific policies16:58
cfriesenfor what it's worth, internally we added a "best-effort" flag to the "hard" affinity/antiaffinity policies to allow us to migrate instances off a compute node for maintenance16:58
mriedemcfriesen: so soft affinity?16:58
cfriesenno, you can turn the best-effort flag on and off dynamically16:58
* dansmith goes to dinner16:59
cfriesenso you'd normally run with it strict, but if you need to take down a compute node you can set best-effort, move everything, then turn it back off16:59
*** mgoddard_ has joined #openstack-nova16:59
cfriesenotherwise if you've got hard-affinity you can't migrate any of them16:59
mriedemwhich is why people want a force flag on evacuate, live migrate (and now cold migrate)17:00
leakypipesmriedem: and the late affinity check is (the only?) remaining upcall from a cell to API, no?17:00
mriedemleakypipes: hells no17:00
leakypipesmriedem: it's not an upcall?17:00
mriedemhttps://docs.openstack.org/nova/latest/user/cellsv2-layout.html#operations-requiring-upcalls17:00
mriedemit is an upcall17:00
mriedembut it's not the only one17:00
leakypipesoh, not the only remaining..17:00
mriedemwe've got aggregates too17:00
mriedemso aggregates and affinity are the remaining upcall issues17:01
mriedembut right now there are 2 each17:01
mriedemso 4 upcall issues17:01
*** yamamoto_ has joined #openstack-nova17:01
leakypipesmriedem: then you'll LOVE my aggregate affinity spec! :) now with moar AFFINITY and moar AGGREGATES!17:01
*** damien_r has quit IRC17:01
mriedemleakypipes: i already said 'upcall upcall upcall' on that spec several times :)17:01
leakypipesI know :)17:01
mriedemi think i also hedged with something like, 'but we already do this in a few other places so people already have to rely on it, so maybe another log on the fire doesn't kill us'17:02
cfriesenOn a totally different topic...has anyone ever heard of nova allocating duplicate network interfaces?  (So the user boots while asking for 2 network interfaces, and nova allocates two  ports on each network.)17:02
mriedemyes17:03
mriedemthat's old news17:03
mriedemtempest has a test for it also i think17:03
*** chyka has joined #openstack-nova17:03
mriedembeen around since juno?17:03
cfriesenI mean nova is allocating twice as many as were asked for.17:05
mriedemdouble your pleasure17:05
mriedemidk, going to lunch17:05
*** fragatina has joined #openstack-nova17:06
*** ttsiouts has quit IRC17:06
*** lucasagomes is now known as lucas-afk17:10
*** gyee has joined #openstack-nova17:10
*** yamamoto_ has quit IRC17:12
*** efried has joined #openstack-nova17:13
efriedGreetings from JFK airport17:14
*** jpena is now known as jpena|off17:15
*** dave-mccowan has quit IRC17:15
*** dave-mccowan has joined #openstack-nova17:16
*** dklyle has quit IRC17:17
*** wolverineav has joined #openstack-nova17:17
*** edmondsw has joined #openstack-nova17:18
*** sree_ has joined #openstack-nova17:21
*** sree_ is now known as Guest9364317:22
*** jafeha has quit IRC17:24
*** belmoreira has quit IRC17:25
*** Guest93643 has quit IRC17:25
* mnaser grumbles at grenade17:25
*** sridharg has quit IRC17:26
*** yamamoto has joined #openstack-nova17:27
mnaserok17:27
mnaserim convinced grenade is broken17:27
mnaserfor stable/pike17:27
efriedIsn't that mnaser guy known for being johnny-on-the-spot for grenade fixes?17:28
mnaseronly when i have to :(17:28
mnaserHost 'ubuntu-xenial-rax-dfw-0002683360' is not mapped to any cell17:28
mnaserwe keep getting this in multinode17:28
*** fragatina has quit IRC17:28
efriedIt would seem odd that cell discovery isn't being run.  Like, nothing would ever work.17:29
efriedAnd that's pretty much the only thing I know about cells.17:30
*** slaweq_ has joined #openstack-nova17:30
mnaserefried: indeed seems to be the case.  looks like there was a change to add 'CELLSV2_SETUP=singleconductor' in there, so not sure if that might have affected it17:30
efriedmnaser: You're going to have something of a hard time finding a core today, but I'll +1 your fix :)17:31
mnaserefried: looks like there's only 3 cores for grenade too..17:31
*** hongbin has joined #openstack-nova17:32
*** yamamoto has quit IRC17:32
efriedLooks like qa-release is included by inheritance.17:32
efriedSo seven17:33
mnaserok looks like this runs => nova-manage cell_v2 simple_cell_setup --transport-url rabbit://stackrabbit:secretrabbit@10.209.130.218:5672/17:35
mnaserbut discover_hosts is never called17:35
efriedAnd simple_cell_setup doesn't run discovery itself?17:36
efriedI remember having to fix this around pike timeframe.17:36
mnaserefried: going through the code it looks like it does call _map_cell_and_hosts()17:37
*** amoralej is now known as amoralej|off17:37
*** swamireddy has quit IRC17:40
*** yamamoto has joined #openstack-nova17:42
*** yamamoto has quit IRC17:42
*** mlavalle has joined #openstack-nova17:44
*** danpawlik has joined #openstack-nova17:45
*** yamahata has quit IRC17:47
mnasersigh https://bugs.launchpad.net/grenade/+bug/1708039 looks like it was 'supposed' to be fixed17:47
openstackLaunchpad bug 1708039 in devstack "gate-grenade-dsvm-neutron-multinode-ubuntu-xenial fails with "No host-to-cell mapping found for selected host"" [Medium,Fix released] - Assigned to Sean Dague (sdague)17:47
*** udesale_ has joined #openstack-nova17:47
*** lpetrut has joined #openstack-nova17:48
mnaserit looks like it regressed17:49
mnaserand its all stable/pike hits17:49
*** mlavalle has quit IRC17:49
*** udesale__ has quit IRC17:50
*** danpawlik has quit IRC17:51
*** tidwellr has quit IRC17:53
mnaser"Didn't find service registered by hostname after 60 seconds" .. found it17:54
mnaserit's actually listed but the bash for some reason doesnt find it17:54
efriedleakypipes: You around today?17:55
efriedHoho, it's Friday17:55
*** efried is now known as fried_rice_jfk17:56
andreafmriedem hey I'm setting up a zuul-v3 multinode job, and everything works fine apart from nova that gives me "Host is not mapped to any cell" http://logs.openstack.org/24/545724/9/check/tempest-multinode-full/1bbec81/ara/result/525a60bd-ac22-4fc4-9db7-e61fce8ac1f5/17:56
andreafmriedem: I compared configs in localrc and nova and I don't see anything obvious - do you have any idea about what this could be?17:56
leakypipesfried_rice_jfk: ues17:56
leakypipesyes17:56
fried_rice_jfkleakypipes: It looks like alex_xu may be right.  I wrote a gabbit for it.17:57
mnaserandreaf: im actually looking into this right now17:57
mnaserim seeing this issue in stable/pike17:57
*** fragatina has joined #openstack-nova17:58
andreafmnaser oh ok at least it's not just me :P17:58
*** dtantsur is now known as dtantsur|afk17:58
mnaserandreaf: i'm seeing the devstack start waiting for compute to go up at "2018-02-23 04:24:33.575" (with a 60s timeout) and the compute record get created at "2018-02-23 04:25:44.182"17:59
mnaserwith a 60 second timeout, it means that devstack gives up at 04:25:33, but the compute record actually gets created 11 seconds later17:59
* mnaser investigates moar18:00
fried_rice_jfkleakypipes: Trying to figure out why.  Is the order in which I construct my query supposed to not matter?18:01
*** hongbin has quit IRC18:01
*** derekh has quit IRC18:03
fried_rice_jfkleakypipes: Maybe query.select_from() overwrites any previous .select_from()?18:03
mnaserso it takes 42 seconds to go from "Connecting to libvirt: qemu:///system _get_new_connection /opt/stack/old/nova/nova/virt/libvirt/host.py:366" => "Registering for lifecycle events"18:04
mnaserand those 42 seconds are enough for devstack to timeout waiting for compute18:04
mnaserso somehow "wrapped_conn = self._connect(self._uri, self._read_only)" takes 42 seconds here https://github.com/openstack/nova/blob/stable/pike/nova/virt/libvirt/host.py#L36818:05
*** rm_work has quit IRC18:06
*** dansmith has quit IRC18:07
mnaserlibvirtd starts at "2018-02-23 04:23:59.741" from devstack18:07
*** StevenK has quit IRC18:07
*** homeski has quit IRC18:07
fried_rice_jfkleakypipes: Gah, my apologies; I was misreading my test failure.  It's fine, it's cumulative as we expected.18:07
*** openstackgerrit has joined #openstack-nova18:08
openstackgerritEric Fried proposed openstack/nova master: rp: GET /resource_providers?required=<traits>  https://review.openstack.org/54683718:08
leakypipesfried_rice_jfk: k18:08
fried_rice_jfkleakypipes, alex_xu: Added a gabbit to prove ANDness with resources ^18:08
*** mgoddard_ has quit IRC18:11
*** gjayavelu has joined #openstack-nova18:14
*** salv-orl_ has quit IRC18:18
mnaserwho's the person to bug for libvirt related questions18:18
*** salv-orlando has joined #openstack-nova18:18
mnaseri'm seeing 1116 "device-list-properties" on service start18:19
*** tesseract has quit IRC18:19
*** tidwellr has joined #openstack-nova18:19
*** yamahata has joined #openstack-nova18:20
*** tssurya has quit IRC18:20
figleaffried_rice_jfk: looks good!18:22
fried_rice_jfkThanks figleaf18:22
*** salv-orlando has quit IRC18:22
*** bnemec has joined #openstack-nova18:23
*** r-daneel has quit IRC18:24
openstackgerritEric Berglund proposed openstack/nova master: PowerVM Driver: vSCSI volume driver  https://review.openstack.org/52609418:25
*** fragatina has quit IRC18:27
*** fragatina has joined #openstack-nova18:28
*** bnemec is now known as bnemec-pto18:30
*** lbragstad has quit IRC18:31
*** esberglu has quit IRC18:31
*** pcaruana has quit IRC18:33
*** dansmith has joined #openstack-nova18:33
*** dansmith is now known as Guest1702618:33
*** danpawlik has joined #openstack-nova18:33
*** udesale_ has quit IRC18:36
*** rajinir_ has joined #openstack-nova18:37
*** xyang_ has joined #openstack-nova18:37
*** vdrok_ has joined #openstack-nova18:37
*** hogepodge_ has joined #openstack-nova18:37
*** mwhahaha_ has joined #openstack-nova18:37
*** coreycb_ has joined #openstack-nova18:37
*** khappone_ has joined #openstack-nova18:37
*** gus_ has joined #openstack-nova18:38
*** andreykurilin_ has joined #openstack-nova18:43
*** yamamoto has joined #openstack-nova18:43
*** logan_ has joined #openstack-nova18:43
*** McNinja_ has joined #openstack-nova18:43
*** mgoddard_ has joined #openstack-nova18:44
*** johnthetubaguy_ has joined #openstack-nova18:44
*** khappone has quit IRC18:44
*** vdrok has quit IRC18:44
*** McNinja has quit IRC18:44
*** mwhahaha has quit IRC18:44
*** rajinir has quit IRC18:44
*** mgoddard has quit IRC18:44
*** logan- has quit IRC18:44
*** andreykurilin has quit IRC18:44
*** coreycb has quit IRC18:44
*** hogepodge has quit IRC18:44
*** xyang has quit IRC18:44
*** gus has quit IRC18:44
*** johnthetubaguy has quit IRC18:44
*** vdrok_ is now known as vdrok18:44
*** rajinir_ is now known as rajinir18:44
*** mwhahaha_ is now known as mwhahaha18:44
*** mgoddard_ is now known as mgoddard18:44
*** xyang_ is now known as xyang18:44
*** hogepodge_ is now known as hogepodge18:44
*** coreycb_ is now known as coreycb18:45
*** danpawlik has quit IRC18:47
*** logan_ is now known as logan-18:47
*** wolverineav has quit IRC18:49
*** yamamoto has quit IRC18:49
*** wolverineav has joined #openstack-nova18:49
*** rm_work has joined #openstack-nova18:51
*** rm_work has quit IRC18:51
*** rm_work has joined #openstack-nova18:51
*** wolverineav has quit IRC18:54
* fried_rice_jfk waves. See y'all in Dublin!18:56
*** lbragstad has joined #openstack-nova18:57
*** fried_rice_jfk has quit IRC18:57
*** artom has quit IRC19:03
*** mgoddard_ has joined #openstack-nova19:04
*** mchlumsky has quit IRC19:06
*** mchlumsky has joined #openstack-nova19:07
*** pchavva has quit IRC19:14
mnasero/19:17
*** andreas_s has quit IRC19:18
*** salv-orlando has joined #openstack-nova19:19
mriedemandreaf: mnaser: maybe on a slow test node,19:20
mriedemdevstack times out waiting for the compute host to get mapped to the cell, which won't happen until after the compute node record gets auto-created when the nova-compute service starts up19:20
mnasermriedem: exactly, but libvirt takes 42 seconds to start up and seems to be doing a ton of commands before starting up19:21
mnasersorry, not 4219:21
mnaserit takes 1m22s19:21
mnaseri wonder if we should add wait_for_libvirt :\19:22
mnaserseeing a tooon of "Send command '{"execute":"device-list-properties","arguments":{"typename":"kvm-pci-assign"},"id":"libvirt-19"}' for write with FD -1"19:22
*** itlinux has joined #openstack-nova19:22
mriedemnot sure why libvirt would take a long time to startup in pike and not ocata...or is it the old side of grenade failing, meaning it's ocata?19:22
mnasermriedem: old side is failing19:23
mriedemok so that's ocata19:23
mriedembut not sure why anything would have changed there wrt libvirt19:23
*** salv-orlando has quit IRC19:23
mnaserlooks like uca published libvirt-bin_2.5.0-3ubuntu5.6~cloud2_amd64.deb on 9th of february19:25
mnaserprior to that libvirt-bin_2.5.0-3ubuntu5.6~cloud1_amd64.deb was released 23 jan 201819:26
mnaserand before that.. nothing in 2.x series19:26
mriedemocata and pike are both using the ocata UCA https://github.com/openstack-dev/devstack/blob/stable/ocata/tools/fixup_stuff.sh#L8419:27
mriedemhttps://github.com/openstack-dev/devstack/blob/stable/pike/tools/fixup_stuff.sh#L8719:27
mnasermriedem: interesting, the issue doesnt seem to be there with the master which uses 3.6.019:29
mnasermriedem: i dunno, i feel like bumping that timeout to 120s, but that might mask regressions with nova taking too long to register for whatever reason19:31
mnaseri guess i can write a wait_for_libvirt19:31
mriedemi'd be ok with i think bumping the timeout on pike/ocata19:33
*** mgoddard_ has quit IRC19:33
mriedemwe know we have slower nodes now,19:34
mriedemand if there were some changes to libvirt on stable that makes it take longer to start in the ocata UCA, we can workaround it with a slower timeout; ocata is only around for a few more weeks anyway19:34
*** itlinux has quit IRC19:35
mriedemlooks like some recent cve fixes went into the libvirt packages19:36
mriedemso that's probably the cause19:36
mriedemhttps://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/ocata-staging/+sourcepub/8774271/+listing-archive-extra19:37
mriedemSECURITY UPDATE: Add support for Spectre mitigations19:38
mriedemso there is your slowdown19:38
*** chergd has joined #openstack-nova19:38
mriedemlooks like the pike package has some security fixes but not that same one https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/pike-staging/+sourcepub/8806001/+listing-archive-extra19:39
*** danpawlik has joined #openstack-nova19:40
*** itlinux has joined #openstack-nova19:44
*** yamamoto has joined #openstack-nova19:47
*** danpawlik has quit IRC19:48
*** itlinux has quit IRC19:50
*** burt has joined #openstack-nova19:51
*** moshele has joined #openstack-nova19:52
*** pchavva has joined #openstack-nova19:52
*** yamamoto has quit IRC19:52
*** awaugama has quit IRC19:55
*** sree__ has joined #openstack-nova19:57
openstackgerritLogan V proposed openstack/nova stable/pike: Allow os_interface and os_region_name on Keystone reqs  https://review.openstack.org/54765419:59
*** sree__ has quit IRC20:02
*** figleaf is now known as edleafe20:08
*** lpetrut has quit IRC20:12
*** wolverineav has joined #openstack-nova20:17
*** itlinux has joined #openstack-nova20:17
*** salv-orlando has joined #openstack-nova20:19
*** moshele has quit IRC20:21
*** wolverineav has quit IRC20:22
*** saphi_ has quit IRC20:23
*** salv-orlando has quit IRC20:24
*** weshay is now known as weshay_bbin3020:25
*** david-lyle has joined #openstack-nova20:25
*** weshay_bbin30 is now known as weshay20:28
*** hongbin has joined #openstack-nova20:28
*** READ10 has joined #openstack-nova20:30
*** tssurya has joined #openstack-nova20:31
*** pramodrj07 has joined #openstack-nova20:34
*** itlinux has quit IRC20:36
*** itlinux has joined #openstack-nova20:42
andreafmriedem thanks for the follow up20:43
*** danpawlik has joined #openstack-nova20:43
andreafmriedem I see that same issue on a new multinode job I'm working on, which runs on master20:44
andreafmriedem two runs twice the same issue, but maybe I've been unlucky with slow nodes...20:44
*** sree__ has joined #openstack-nova20:45
*** yamamoto has joined #openstack-nova20:48
*** sree__ has quit IRC20:49
*** slaweq_ has quit IRC20:50
*** slaweq_ has joined #openstack-nova20:51
*** wolverineav has joined #openstack-nova20:52
*** yamamoto has quit IRC20:53
*** slaweq_ has quit IRC20:55
*** wolverineav has quit IRC20:57
mriedemi see a neutron-grenade-multinode job of mnaser's fail on pike, and it's running on rax-dfw nodes which i've always seen to be slows since the spectre patches http://logs.openstack.org/19/546219/3/check/neutron-grenade-multinode/ae7d292/zuul-info/inventory.yaml20:57
mriedem*slow20:57
*** amodi has quit IRC20:58
mnaserim writing a short wait_for_libvirt20:58
mriedemwe could just hack it on pike and ocata for now and set https://github.com/openstack-dev/devstack/blob/stable/pike/lib/nova#L966 to 12020:59
*** r-daneel has joined #openstack-nova21:00
mriedemand site https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/ocata-staging/+sourcepub/8774271/+listing-archive-extra21:00
mnasermriedem: i can do that too, probably easier and quicker21:01
mriedem*cite ?21:01
mriedemmnaser: it would be easier and quicker for now yeah,21:01
mriedemcould get mtreinish to take a gander21:01
mriedemalternatively we could run the pike UCA on stable/pike...21:01
andreafmriedem mnaser there's a patch up to make that value configurable already21:02
mriedemhttps://review.openstack.org/#/c/536798/21:02
*** pramodrj07 has quit IRC21:02
andreafhttps://review.openstack.org/#/c/547431/21:02
andreafmriedem mnaser ^^^21:02
mnasermriedem: grenade uses stable/ocata though21:02
mriedemmnaser: yeah the old side does21:02
mriedemtrue21:03
mnaserandreaf: thats a nice patch, what does the gate use for SERVICE_TIMEOUT ?21:03
mriedemit defaults to 6021:03
mnaserso i guess we use that patch+backport+bump up SERVICE_TIMEOUT in stable/ocata + stable/pike ?21:04
*** weshay is now known as weshay_bbiab21:04
*** tbachman has quit IRC21:04
mriedemi guess...i'm not sure how much people are going to want to bump the default timeout21:04
mriedemglobally i mean21:04
*** wolverineav has joined #openstack-nova21:05
mriedemthat seems worse than just hard-coding the wait_for_compute timeout to 12021:05
mnaseryeah i think i'd only want to bump up wait_for_compute, because otherwise everything will wait 120s which might be hiding regressions21:05
mriedemso i suggest a new stackrc variable that defaults to $SERVICE_TIMEOUT, but that we can bump to be higher in stable/ocata/pike if we want21:07
mriedemnoted that in frickler's patch21:07
*** amodi has joined #openstack-nova21:10
*** pchavva has quit IRC21:11
*** Guest98116 has quit IRC21:12
*** janki has quit IRC21:14
andreafmriedem this is the log line in devstack where there is some sign of the issue going on I think http://logs.openstack.org/24/545724/9/check/tempest-multinode-full/1bbec81/compute1/logs/devstacklog.txt.gz#_2018-02-23_17_22_21_70921:14
andreafmriedem but a few lines later it seems to be fine21:15
mnasermriedem, andreaf, mtreinish: https://review.openstack.org/547431 Create NOVA_COMPUTE_SERVICE_TIMEOUT in is_nova_ready function21:15
mriedemthat's normal21:15
mriedemandreaf: n-cpu barfs that the compute node that represents the local host doesn't exist, and then it creates it21:15
mnaserif that can be approved, i can backport it and then bump it in stable/ocata and stable/pike21:15
mriedemand stops the barfing21:15
andreafmriedem so I don't know why I get http://logs.openstack.org/24/545724/9/check/tempest-multinode-full/1bbec81/job-output.txt.gz#_2018-02-23_17_22_40_367118 on every single VM created in that job21:16
*** itlinux has quit IRC21:16
mriedemis it every one or just the subnode?21:18
mriedemubuntu-xenial-inap-mtl01-0002692907 is the subnode21:18
andreafmriedem actually every single VM on the 907 node21:19
mriedemandreaf: so this is the warning you get on first startup21:19
mriedemhttp://logs.openstack.org/24/545724/9/check/tempest-multinode-full/1bbec81/compute1/logs/screen-n-cpu.txt.gz?level=INFO#_Feb_23_17_22_20_95453721:19
mriedemNo compute node record found for host ubuntu-xenial-inap-mtl01-0002692907. If this is the first time this service is starting on this host, then you can ignore this warning.: ComputeHostNotFound_Remote: Compute host ubuntu-xenial-inap-mtl01-0002692907 could not be found.21:19
mriedemthen the compute node record is created21:20
mriedemhttp://logs.openstack.org/24/545724/9/check/tempest-multinode-full/1bbec81/compute1/logs/screen-n-cpu.txt.gz?level=INFO#_Feb_23_17_22_21_02141221:20
mriedemFeb 23 17:22:21.021412 ubuntu-xenial-inap-mtl01-0002692907 nova-compute[21947]: INFO nova.compute.resource_tracker [None req-5ceeacfd-f9f6-4b19-9044-6fd13c9adadc None None] Compute node record created for ubuntu-xenial-inap-mtl01-0002692907:ubuntu-xenial-inap-mtl01-0002692907 with uuid: 85868727-6173-4345-bd2e-a92bfaf1de8a21:20
openstackgerritMatthew Edmonds proposed openstack/nova master: Fix N358 hacking check  https://review.openstack.org/54767021:20
*** salv-orlando has joined #openstack-nova21:20
mriedemon that subnode, wait_for_compute starts here:21:21
mriedemhttp://logs.openstack.org/24/545724/9/check/tempest-multinode-full/1bbec81/compute1/logs/devstacklog.txt.gz#_2018-02-23_17_22_18_88021:21
mriedem2018-02-23 17:22:18.880 | + lib/nova:is_nova_ready:1003              :   wait_for_compute 6021:21
mriedemand ends here21:21
mriedem2018-02-23 17:22:21.607 | + functions:wait_for_compute:463           :   return 021:21
andreafso it's successful21:21
mriedemso in that case, it looks like the compute node record was created before devstack started waiting21:21
*** salv-orlando has quit IRC21:22
mnasermriedem: round 2 https://review.openstack.org/#/c/547431/21:22
andreafright... so I'm missing what's going wrong...21:23
mnaserits friday and the servers are lazy21:23
mriedemthe host mapping for the n-cpu on the controller node is created here:21:23
mriedemhttp://logs.openstack.org/24/545724/9/check/tempest-multinode-full/1bbec81/controller/logs/devstacklog.txt.gz#_2018-02-23_17_13_29_97821:23
mriedem2018-02-23 17:13:29.978 | Creating host mapping for compute host 'ubuntu-xenial-inap-mtl01-0002692906': adbcb697-55fc-4750-9242-b13aa75cab0721:23
mriedemdiscover_hosts doesn't run on the subnode21:24
mriedemand i don't see discover_hosts running after the subnode compute is created to discover it21:24
*** salv-orlando has joined #openstack-nova21:25
mnaseroh that's why its different21:25
mriedemandreaf: you'd likely need to compare to an existing multinode job and see where discover_hosts runs,21:25
mnasergrenade uses simple_cell_setup which does discover_hosts21:25
mriedembecause discover_hosts *has* to be run every time you startup a new n-cpu21:25
andreafmriedem yeah that's what I was about to do :)21:25
andreafshould it run on the controller?21:26
andreafdiscover_host is a devstack function right?21:26
mnaserandreaf: discover_hosts is a nova-manage cli func21:26
mnasernova-manage cell_v2 discover_hosts21:26
mnaserit runs on the controller after you add new computes21:27
mriedemandreaf: OH I KNOW THE PROBLEM!!!21:27
mriedemyour problem is zuulv321:27
mriedem:)21:27
andreafheh21:27
mnaseri thought that was the solution for everything21:27
mriedemdevstack-gate would run discover_hosts on the primary after the subnode was setup21:27
mriedemthe devstack multinode guide says this too21:27
mriedemi'll look for the d-g code21:27
andreafmriedem oh, right21:28
andreafmriedem I missed that in d-g21:28
mriedemhttps://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L78321:28
mriedemandreaf: ^21:28
mriedemd-g would run discover_hosts after the subnode was setup21:28
mriedemso you'll need an ansible task to run discover_hosts on the primary node once the subnode is setup21:28
mriedemthat should do the trick21:28
andreafmriedem heh the one line I did not port to ansible!21:29
andreafmriedem I guess I kind of assumed cells did not matter for the base job :( but it's cells v221:29
mriedemmnaser: need to fix the commit message21:29
mriedemcells v2 is king21:29
mnaseroops21:30
andreafmriedem cool thanks a lot, much appreciated!!21:30
mriedemyw21:30
andreafmriedem # NOTE(mriedem): We want to remove this if/when nova supports auto-registration of computes with cells, but that's not happening in Ocata.21:31
mnaserok fixed the commit msg, andreaf i'd borrow your +2 back please https://review.openstack.org/#/c/547431/ :)21:31
andreafmriedem I guess it's not happening in pike or queens either :D21:31
mnasers/Ocata/a while/21:31
andreafmnaser +221:31
mnaserthank you21:32
*** NostawRm has quit IRC21:32
*** NostawRm has joined #openstack-nova21:32
mriedemandreaf: right21:33
mriedemandreaf: also we don't run discover_hosts from the subnode because it's not configured to have access to the nova_api db21:34
mriedemon purpose21:34
*** dave-mcc_ has joined #openstack-nova21:34
mriedemwe want to make sure that when people like leakypipes try to add code to nova-compute that hits the API it blows up :P21:34
*** dave-mccowan has quit IRC21:36
*** edmondsw has quit IRC21:40
leakypipesmriedem: ha. ha.21:47
*** mriedem has quit IRC21:49
*** yamamoto has joined #openstack-nova21:50
cfriesenif we tell nova to boot an instance (specifying that we want interfaces on two networks), then the instance fails to start, so we reschedule to another node, would we expect to re-use the ports that were created for the first compute node?21:52
*** yamamoto has quit IRC21:55
*** hongbin has quit IRC21:56
*** hongbin has joined #openstack-nova21:57
cfriesenanswering my own question, based on a comment in ComputeManager._build_networks_for_instance() the answer seems to be "yes"21:58
cfriesenor at least "maybe"21:59
*** danpawlik has quit IRC22:00
*** danpawlik has joined #openstack-nova22:01
*** danpawlik has quit IRC22:02
*** danpawlik has joined #openstack-nova22:02
*** tbachman has joined #openstack-nova22:03
*** slaweq has quit IRC22:09
*** danpawlik has quit IRC22:10
*** StevenK has joined #openstack-nova22:15
*** FL1SK has quit IRC22:17
*** weshay_bbiab is now known as weshay22:18
*** yamamoto has joined #openstack-nova22:18
*** hongbin has quit IRC22:20
*** salv-orlando has quit IRC22:21
*** lyan has quit IRC22:21
*** salv-orlando has joined #openstack-nova22:21
*** salv-orlando has quit IRC22:25
*** tbachman has quit IRC22:25
*** tssurya has quit IRC22:27
*** masber has joined #openstack-nova22:27
*** masber has quit IRC22:30
*** masber has joined #openstack-nova22:30
*** dave-mcc_ has quit IRC22:34
*** salv-orlando has joined #openstack-nova22:43
*** dave-mccowan has joined #openstack-nova22:46
*** FL1SK has joined #openstack-nova22:52
*** amodi has quit IRC22:58
*** yamamoto has quit IRC23:00
*** burt has quit IRC23:05
*** masuberu has joined #openstack-nova23:07
*** masber has quit IRC23:09
*** dave-mccowan has quit IRC23:11
*** mchlumsky has quit IRC23:12
*** yamamoto has joined #openstack-nova23:13
*** yamamoto has quit IRC23:14
*** FL1SK has quit IRC23:20
*** itlinux has joined #openstack-nova23:22
*** swamireddy has joined #openstack-nova23:23
*** r-daneel has quit IRC23:27
*** itlinux has quit IRC23:28
*** slaweq has joined #openstack-nova23:30
*** vladikr has quit IRC23:30
*** tbachman has joined #openstack-nova23:33
*** slaweq has quit IRC23:35
*** tbachman_ has joined #openstack-nova23:36
*** tbachman has quit IRC23:37
*** tbachman_ is now known as tbachman23:37
*** salv-orlando has quit IRC23:39
*** salv-orlando has joined #openstack-nova23:39
*** pengdake_ has joined #openstack-nova23:39
*** salv-orlando has quit IRC23:44
*** pengdake_ has quit IRC23:44
*** tbachman has quit IRC23:55
