Thursday, 2018-03-22

openstackgerritArtom Lifshitz proposed openstack/nova-specs master: NUMA-aware live migration  https://review.openstack.org/55272200:02
*** Swami has quit IRC00:02
*** odyssey4me has quit IRC00:02
*** odyssey4me has joined #openstack-nova00:03
*** felipemonteiro__ has joined #openstack-nova00:07
*** suresh12 has quit IRC00:11
*** danpawlik has joined #openstack-nova00:18
*** tetsuro has joined #openstack-nova00:21
*** danpawlik has quit IRC00:22
*** suresh12 has joined #openstack-nova00:26
*** salv-orlando has joined #openstack-nova00:27
*** mvk has joined #openstack-nova00:27
openstackgerritEric Fried proposed openstack/nova master: SchedulerReportClient.update_from_provider_tree  https://review.openstack.org/53382100:27
openstackgerritEric Fried proposed openstack/nova master: Use update_provider_tree from resource tracker  https://review.openstack.org/52024600:27
openstackgerritEric Fried proposed openstack/nova master: Fix nits in update_provider_tree series  https://review.openstack.org/53126000:27
openstackgerritEric Fried proposed openstack/nova master: Move refresh time from report client to prov tree  https://review.openstack.org/53551700:27
openstackgerritEric Fried proposed openstack/nova master: Make generation optional in ProviderTree  https://review.openstack.org/53932400:27
openstackgerritEric Fried proposed openstack/nova master: WIP: Add nested resources to server moving tests  https://review.openstack.org/52772800:27
efriedblayum00:27
*** salv-orlando has quit IRC00:32
openstackgerritEric Fried proposed openstack/nova-specs master: Mention (no) granular support for image traits  https://review.openstack.org/55430500:32
*** Zames has joined #openstack-nova00:36
*** felipemonteiro__ has quit IRC00:40
*** jichen has joined #openstack-nova00:41
*** hongbin has joined #openstack-nova00:44
*** zhurong has joined #openstack-nova00:45
*** danpawlik has joined #openstack-nova00:52
*** yamamoto has joined #openstack-nova00:52
tetsuroGood morning.00:53
*** phuongnh has joined #openstack-nova00:56
*** danpawlik has quit IRC00:57
*** yamamoto has quit IRC00:57
Spazmotic+1 more blayum? :p00:58
SpazmoticMorning00:58
efriedHi, and goodbye.  The wife's getting pretty antsy.01:00
* efried waves01:00
* Spazmotic waves01:00
*** zhaochao has joined #openstack-nova01:00
*** liverpooler has quit IRC01:00
*** crushil has joined #openstack-nova01:02
*** liverpooler has joined #openstack-nova01:05
*** vladikr has quit IRC01:05
*** vladikr has joined #openstack-nova01:06
openstackgerritMerged openstack/nova master: Add disabled column to cell_mappings table.  https://review.openstack.org/55250501:07
openstackgerritMerged openstack/nova master: libvirt: move get_numa_memnode in designer module  https://review.openstack.org/55485001:07
openstackgerritMerged openstack/nova master: libvirt: move vpu_realtime_scheduler in designer  https://review.openstack.org/55485101:07
*** fragatina has quit IRC01:09
*** hshiina has joined #openstack-nova01:09
sean-k-mooney[m]mriedem: bauzas  can ye take a look at this backport for os-vif when ye get time https://review.openstack.org/#/c/531465/1 it's minor but has been up for a while. just wondering if i should keep it open or abandon it.01:10
*** fragatina has joined #openstack-nova01:11
sean-k-mooney[m]!cmd freenode AWAY until noon gmt01:11
openstacksean-k-mooney[m]: Error: "cmd" is not a valid command.01:11
sean-k-mooney[m]data did not work...01:11
sean-k-mooney[m]*that01:11
*** tiendc has joined #openstack-nova01:11
*** tbachman has quit IRC01:11
*** fragatin_ has joined #openstack-nova01:12
*** fragatina has quit IRC01:15
*** Dinesh_Bhor has joined #openstack-nova01:15
*** fragatin_ has quit IRC01:16
*** tbachman has joined #openstack-nova01:17
*** Zames has quit IRC01:20
*** Guest90893 has quit IRC01:22
openstackgerritzhufl proposed openstack/nova master: Fix api-ref: nova image-meta is deprecated from 2.39  https://review.openstack.org/55481301:22
openstackgerritzhufl proposed openstack/nova master: Fix api-ref: nova image-meta is deprecated from 2.39  https://review.openstack.org/55481301:24
*** hiro-kobayashi has joined #openstack-nova01:24
*** liusheng has quit IRC01:26
*** liusheng has joined #openstack-nova01:27
*** suresh12 has quit IRC01:27
*** salv-orlando has joined #openstack-nova01:27
*** tbachman has quit IRC01:29
*** harlowja has quit IRC01:31
*** danpawlik has joined #openstack-nova01:31
*** salv-orlando has quit IRC01:32
*** Dinesh_Bhor has quit IRC01:35
*** oomichi has quit IRC01:35
*** Dinesh_Bhor has joined #openstack-nova01:36
*** danpawlik has quit IRC01:36
*** gjayavelu has quit IRC01:40
*** wolverineav has quit IRC01:43
*** wolverineav has joined #openstack-nova01:44
*** danpawlik has joined #openstack-nova01:47
*** wolverineav has quit IRC01:48
*** tbachman has joined #openstack-nova01:49
*** gaoyan has joined #openstack-nova01:50
*** danpawlik has quit IRC01:51
*** yamamoto has joined #openstack-nova01:53
*** hoonetorg has quit IRC01:56
*** hoonetorg has joined #openstack-nova01:57
*** yamamoto has quit IRC01:59
openstackgerritzhufl proposed openstack/nova master: Fix api-ref: nova image-meta is deprecated from 2.39  https://review.openstack.org/55481302:02
*** zhurong has quit IRC02:02
*** Tom-Tom has joined #openstack-nova02:02
*** suresh12 has joined #openstack-nova02:06
*** mriedem has quit IRC02:07
*** gjayavelu has joined #openstack-nova02:07
*** gjayavelu has quit IRC02:08
*** danpawlik has joined #openstack-nova02:08
*** wolverineav has joined #openstack-nova02:10
*** suresh12 has quit IRC02:10
*** danpawlik has quit IRC02:13
*** wolverineav has quit IRC02:14
*** wolverineav has joined #openstack-nova02:16
*** owalsh_ has joined #openstack-nova02:16
*** owalsh has quit IRC02:20
*** idlemind has quit IRC02:23
*** idlemind has joined #openstack-nova02:24
*** Spaz-Work has quit IRC02:24
*** Spaz-Work has joined #openstack-nova02:25
*** salv-orlando has joined #openstack-nova02:28
*** yingjun has joined #openstack-nova02:32
*** salv-orlando has quit IRC02:34
*** _ix has joined #openstack-nova02:36
*** yamamoto has joined #openstack-nova02:37
*** edmondsw has joined #openstack-nova02:42
*** danpawlik has joined #openstack-nova02:44
*** yamamoto has quit IRC02:46
*** wolverineav has quit IRC02:46
*** phuongnh has quit IRC02:48
*** phuongnh has joined #openstack-nova02:48
*** andreas_s has joined #openstack-nova02:48
*** danpawlik has quit IRC02:49
*** edmondsw has quit IRC02:49
*** germs has quit IRC02:52
*** germs has joined #openstack-nova02:52
*** germs has quit IRC02:52
*** germs has joined #openstack-nova02:52
*** germs has quit IRC02:52
*** germs has joined #openstack-nova02:53
*** germs has quit IRC02:53
*** germs has joined #openstack-nova02:53
*** andreas_s has quit IRC02:53
*** yamamoto has joined #openstack-nova02:56
*** psachin has joined #openstack-nova02:56
*** zhurong has joined #openstack-nova02:56
*** Tom-Tom has quit IRC02:58
*** Tom-Tom has joined #openstack-nova02:59
*** Tom-Tom has quit IRC03:03
*** Tom-Tom has joined #openstack-nova03:04
*** bkopilov has quit IRC03:06
*** phuongnh has quit IRC03:07
*** phuongnh has joined #openstack-nova03:08
*** sree has joined #openstack-nova03:09
*** dave-mccowan has quit IRC03:11
*** amodi has quit IRC03:12
*** annp has joined #openstack-nova03:14
*** AlexeyAbashkin has joined #openstack-nova03:17
*** danpawlik has joined #openstack-nova03:18
*** AlexeyAbashkin has quit IRC03:21
Spaz-WorkMorning again :P03:22
*** danpawlik has quit IRC03:23
*** _ix has quit IRC03:23
*** phuongnh has quit IRC03:30
*** hongbin has quit IRC03:30
*** salv-orlando has joined #openstack-nova03:30
*** Tom-Tom has quit IRC03:31
*** harlowja has joined #openstack-nova03:32
openstackgerritJie Li proposed openstack/nova-specs master: Support volume-backed server rebuild  https://review.openstack.org/53240703:32
*** Tom-Tom has joined #openstack-nova03:33
*** Tom-Tom_ has joined #openstack-nova03:34
*** salv-orlando has quit IRC03:35
*** Tom-Tom has quit IRC03:38
yikunDoes anyone know why the NUMACell obj added fields (in 1.1 and 1.2) but without an obj_make_compatible change?03:42
yikunhttps://github.com/openstack/nova/blob/master/nova/objects/numa.py#L51-L5303:43
yikunIt seems this one is the only exception among all the Nova objects.03:43
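For context, the compatibility hook yikun is missing follows a standard pattern in nova's versioned objects: fields added in newer versions get stripped from the primitive when serializing for an older consumer. A minimal sketch of what that would look like for NUMACell, assuming the usual field history (pinned_cpus/siblings in 1.1, mempages in 1.2 — treat the names as illustrative):

    # Sketch of the obj_make_compatible pattern most Nova objects use;
    # nova/objects/numa.py currently omits it, which is the question here.
    from oslo_utils import versionutils
    from nova.objects import base

    class NUMACell(base.NovaObject):
        VERSION = '1.2'

        def obj_make_compatible(self, primitive, target_version):
            super(NUMACell, self).obj_make_compatible(
                primitive, target_version)
            target = versionutils.convert_version_to_tuple(target_version)
            if target < (1, 2):
                # field added in 1.2 (illustrative)
                primitive.pop('mempages', None)
            if target < (1, 1):
                # fields added in 1.1 (illustrative)
                primitive.pop('pinned_cpus', None)
                primitive.pop('siblings', None)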
*** hiro-kobayashi has quit IRC03:49
*** danpawlik has joined #openstack-nova03:51
*** harlowja has quit IRC03:54
*** fragatina has joined #openstack-nova03:54
*** tuanla____ has joined #openstack-nova03:55
*** danpawlik has quit IRC03:56
*** amodi has joined #openstack-nova03:59
*** udesale has joined #openstack-nova04:03
*** edmondsw has joined #openstack-nova04:05
*** tidwellr has joined #openstack-nova04:06
*** tidwellr_ has joined #openstack-nova04:08
*** tidwellr has quit IRC04:08
*** edmondsw has quit IRC04:10
openstackgerritTetsuro Nakamura proposed openstack/nova master: trivial: omit condition evaluations  https://review.openstack.org/54524804:14
*** tidwellr_ has quit IRC04:17
*** hshiina has quit IRC04:21
openstackgerritYikun Jiang (Kero) proposed openstack/nova master: WIP: Add host info to instance action events  https://review.openstack.org/55514604:21
*** hshiina has joined #openstack-nova04:21
*** bkopilov has joined #openstack-nova04:23
openstackgerritMerged openstack/nova master: Preserve multiattach flag when refreshing connection_info  https://review.openstack.org/55466704:28
*** salv-orlando has joined #openstack-nova04:31
*** Dinesh_Bhor has quit IRC04:34
*** salv-orlando has quit IRC04:36
*** Dinesh_Bhor has joined #openstack-nova04:36
*** zhurong has quit IRC04:45
*** felipemonteiro__ has joined #openstack-nova04:45
*** crushil has quit IRC04:46
*** germs has quit IRC04:49
*** germs has joined #openstack-nova04:49
*** germs has quit IRC04:49
*** germs has joined #openstack-nova04:49
*** germs has quit IRC04:54
*** amodi has quit IRC04:55
*** abhishekk has joined #openstack-nova04:55
*** hiro-kobayashi has joined #openstack-nova04:56
*** felipemonteiro__ has quit IRC04:57
*** gjayavelu has joined #openstack-nova04:59
*** Dinesh__Bhor has joined #openstack-nova05:03
*** Dinesh_Bhor has quit IRC05:03
*** suresh12 has joined #openstack-nova05:03
*** isssp has joined #openstack-nova05:03
*** tssurya has joined #openstack-nova05:04
*** idlemind has quit IRC05:05
*** burned has quit IRC05:06
*** tianhui_ has quit IRC05:07
*** tssurya has quit IRC05:08
*** lpetrut has joined #openstack-nova05:08
*** isssp has quit IRC05:09
*** isssp has joined #openstack-nova05:12
*** sridharg has joined #openstack-nova05:14
*** ratailor has joined #openstack-nova05:14
*** imacdonn has quit IRC05:15
*** imacdonn has joined #openstack-nova05:15
openstackgerritTetsuro Nakamura proposed openstack/nova master: add check before adding cpus to cpuset_reserved  https://review.openstack.org/53986505:16
*** moshele has joined #openstack-nova05:16
openstackgerritjichenjc proposed openstack/nova master: fix race condition of instance host  https://review.openstack.org/49445805:24
*** gyankum has joined #openstack-nova05:26
*** jaosorior_ is now known as jaosorior05:26
openstackgerritjichenjc proposed openstack/nova master: Move placement test cases from db to placement  https://review.openstack.org/55314905:28
*** salv-orlando has joined #openstack-nova05:29
*** wolverineav has joined #openstack-nova05:30
*** rcernin has quit IRC05:34
*** suresh12 has quit IRC05:35
*** udesale_ has joined #openstack-nova05:36
*** zhurong has joined #openstack-nova05:37
*** udesale__ has joined #openstack-nova05:37
*** tetsuro has left #openstack-nova05:39
*** udesale has quit IRC05:39
*** udesale_ has quit IRC05:41
*** wolverineav has quit IRC05:41
*** wolverineav has joined #openstack-nova05:42
*** Tom-Tom_ has quit IRC05:43
*** Tom-Tom has joined #openstack-nova05:44
*** wolverineav has quit IRC05:46
*** rcernin has joined #openstack-nova05:46
*** Tom-Tom has quit IRC05:48
openstackgerritjichenjc proposed openstack/nova master: Avoid live migrate to same host  https://review.openstack.org/54268905:48
*** links has joined #openstack-nova05:53
*** edmondsw has joined #openstack-nova05:54
*** Tom-Tom has joined #openstack-nova05:58
*** edmondsw has quit IRC05:59
*** Tom-Tom has quit IRC06:00
*** lpetrut has quit IRC06:03
*** Tom-Tom has joined #openstack-nova06:03
*** Tom-Tom_ has joined #openstack-nova06:06
*** Tom-Tom_ has quit IRC06:06
*** rcernin has quit IRC06:06
*** Tom-Tom_ has joined #openstack-nova06:08
*** Tom-Tom has quit IRC06:08
*** rcernin has joined #openstack-nova06:08
openstackgerritjichenjc proposed openstack/nova master: Not instance to ERROR if set_admin_password failed  https://review.openstack.org/55516006:10
openstackgerritOpenStack Proposal Bot proposed openstack/nova master: Imported Translations from Zanata  https://review.openstack.org/54877206:10
*** lpetrut has joined #openstack-nova06:12
*** Dinesh__Bhor has quit IRC06:12
openstackgerritjichenjc proposed openstack/nova master: Trivial fix: move log to earlier  https://review.openstack.org/55516406:14
*** Dinesh__Bhor has joined #openstack-nova06:16
*** AlexeyAbashkin has joined #openstack-nova06:16
*** ratailor has quit IRC06:17
*** ratailor has joined #openstack-nova06:17
*** rgb00 has joined #openstack-nova06:19
*** lucas-afk has quit IRC06:20
*** fragatina has quit IRC06:21
*** AlexeyAbashkin has quit IRC06:21
*** bswrchrd has quit IRC06:22
*** kholkina has joined #openstack-nova06:23
*** lucasagomes has joined #openstack-nova06:23
*** threestrands has quit IRC06:25
openstackgerritMerged openstack/nova stable/queens: Handle not found error on taking snapshot  https://review.openstack.org/55066006:30
*** lpetrut has quit IRC06:33
*** zhurong has quit IRC06:34
*** sidx64 has joined #openstack-nova06:35
*** lajoskatona has joined #openstack-nova06:35
*** zer0c00l has joined #openstack-nova06:42
*** Zames has joined #openstack-nova06:43
*** tianhui has joined #openstack-nova06:44
openstackgerritpippo proposed openstack/nova master: Update links in README  https://review.openstack.org/55516906:45
zer0c00lif my nova endpoint is at https://novaapi.example.com:4443/nova_api, when i curl that endpoint i get something like http://novaapi.example.com/v2/ instead of https://novaapi.example.com/nova_api/v206:45
zer0c00lwhy is that? is it because of some kind of misconfiguration, or is it expected?06:45
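zer0c00l's symptom is the classic one for TLS terminating at a proxy in front of nova-api: the version document builds its self links from what the WSGI layer sees, not from the public URL. A hedged sketch of the two usual nova.conf remedies, with values mirroring zer0c00l's deployment as placeholders:

    [DEFAULT]
    # pin the links in version documents to the public endpoint
    osapi_compute_link_prefix = https://novaapi.example.com:4443/nova_api

    [oslo_middleware]
    # or honour the proxy's X-Forwarded-* headers
    # (the proxy must actually send them)
    enable_proxy_headers_parsing = true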
*** ccamacho has quit IRC06:45
*** Zames has quit IRC06:46
*** udesale_ has joined #openstack-nova06:48
*** udesale__ has quit IRC06:50
*** claudiub has joined #openstack-nova06:57
*** mayur_ind has joined #openstack-nova06:58
mayur_indhi folks07:00
*** Elixer_dota has joined #openstack-nova07:00
mayur_indgetting the following error in nova: http://paste.openstack.org/show/708826/07:01
mayur_indwhat may be the reason for this? is there a way to solve it?07:01
*** sahid has joined #openstack-nova07:02
*** masber has quit IRC07:05
*** masber has joined #openstack-nova07:06
*** sar has joined #openstack-nova07:08
*** sidx64_ has joined #openstack-nova07:13
*** zhurong has joined #openstack-nova07:14
*** sidx64 has quit IRC07:14
*** links has quit IRC07:15
*** sidx64 has joined #openstack-nova07:16
*** sidx64_ has quit IRC07:17
openstackgerritjichenjc proposed openstack/nova master: z/VM Driver: Initial change set of z/VM driver  https://review.openstack.org/52338707:18
openstackgerritjichenjc proposed openstack/nova master: z/VM Driver: Spawn and destroy function of z/VM driver  https://review.openstack.org/52765807:18
openstackgerritjichenjc proposed openstack/nova master: z/VM Driver: add snapshot function  https://review.openstack.org/53424007:18
openstackgerritjichenjc proposed openstack/nova master: z/VM Driver: add power actions  https://review.openstack.org/54334007:18
openstackgerritjichenjc proposed openstack/nova master: z/VM Driver: add get console output  https://review.openstack.org/54334407:18
*** alexchadin has joined #openstack-nova07:19
*** andreas_s has joined #openstack-nova07:21
*** belmoreira has joined #openstack-nova07:22
*** links has joined #openstack-nova07:23
*** andreas_s has quit IRC07:25
*** claudiub has quit IRC07:25
*** isssp has quit IRC07:25
*** jichen has quit IRC07:25
*** jaosorior has quit IRC07:25
*** Hazelesque has quit IRC07:25
*** slaweq has quit IRC07:25
*** sambetts|afk has quit IRC07:25
*** test222__ has quit IRC07:25
*** markmc has quit IRC07:25
*** rajinir has quit IRC07:25
*** Anticimex has quit IRC07:25
*** karimull has quit IRC07:25
*** Kvisle has quit IRC07:25
*** rajinir_ has joined #openstack-nova07:26
*** rajinir_ has quit IRC07:26
*** rajinir_ has joined #openstack-nova07:26
*** rajinir_ is now known as rajinir07:26
*** jichen has joined #openstack-nova07:26
*** Hazelesque has joined #openstack-nova07:26
*** slaweq has joined #openstack-nova07:26
*** jaosorior has joined #openstack-nova07:26
*** Kvisle has joined #openstack-nova07:26
*** Hazelesque has quit IRC07:26
*** Hazelesque has joined #openstack-nova07:26
*** isssp has joined #openstack-nova07:26
openstackgerritMerged openstack/nova master: Add unit tests for EmulatorThreadsTestCase  https://review.openstack.org/53869907:27
*** karimull has joined #openstack-nova07:27
*** andreas_s has joined #openstack-nova07:27
*** markmc has joined #openstack-nova07:27
*** test222__ has joined #openstack-nova07:27
*** sambetts_ has joined #openstack-nova07:29
*** rcernin has quit IRC07:31
*** afaranha has joined #openstack-nova07:32
*** salv-orlando has quit IRC07:33
*** markvoelker has quit IRC07:35
*** ccamacho has joined #openstack-nova07:38
*** ralonsoh has joined #openstack-nova07:39
*** edmondsw has joined #openstack-nova07:42
*** pcaruana has joined #openstack-nova07:42
*** pcaruana has quit IRC07:44
*** pcaruana has joined #openstack-nova07:44
*** pcaruana has quit IRC07:45
*** danpawlik has joined #openstack-nova07:45
*** pcaruana has joined #openstack-nova07:45
*** edmondsw has quit IRC07:46
*** pcaruana has quit IRC07:47
*** pcaruana has joined #openstack-nova07:47
*** pcaruana has quit IRC07:48
*** pcaruana has joined #openstack-nova07:48
*** pcaruana has quit IRC07:50
*** pcaruana has joined #openstack-nova07:50
*** masber has quit IRC07:51
*** pcaruana has quit IRC07:51
*** pcaruana has joined #openstack-nova07:51
*** pcaruana has quit IRC07:53
*** pcaruana has joined #openstack-nova07:53
*** pcaruana has quit IRC07:54
*** pcaruana has joined #openstack-nova07:55
*** AlexeyAbashkin has joined #openstack-nova07:56
*** pcaruana has quit IRC07:56
*** Tom-Tom_ has quit IRC07:56
*** Tom-Tom has joined #openstack-nova07:57
*** belmorei_ has joined #openstack-nova07:57
*** Tom-Tom_ has joined #openstack-nova07:58
*** ispp has joined #openstack-nova07:58
*** asettle has quit IRC07:59
*** belmoreira has quit IRC07:59
*** andymccr has quit IRC07:59
*** Roamer` has quit IRC07:59
*** isssp has quit IRC08:00
*** Roamer` has joined #openstack-nova08:01
*** Tom-Tom has quit IRC08:02
*** belmorei_ has quit IRC08:02
*** belmore__ has joined #openstack-nova08:02
*** lpetrut has joined #openstack-nova08:03
*** hoangcx has quit IRC08:03
*** hoangcx has joined #openstack-nova08:04
*** lpetrut_ has joined #openstack-nova08:04
*** lpetrut has quit IRC08:04
*** andymccr has joined #openstack-nova08:05
*** asettle has joined #openstack-nova08:06
*** pcaruana has joined #openstack-nova08:06
*** asettle is now known as Guest6696908:06
*** FL1SK has quit IRC08:07
*** pcaruana has quit IRC08:07
*** pcaruana has joined #openstack-nova08:08
*** pcaruana has quit IRC08:09
*** pcaruana has joined #openstack-nova08:09
*** pcaruana has quit IRC08:10
*** pcaruana has joined #openstack-nova08:11
*** tesseract has joined #openstack-nova08:11
*** masber has joined #openstack-nova08:11
*** pcaruana has quit IRC08:12
*** pcaruana has joined #openstack-nova08:13
*** pcaruana has quit IRC08:15
*** pcaruana has joined #openstack-nova08:15
*** pcaruana has quit IRC08:16
*** AlexeyAbashkin has quit IRC08:17
*** pcaruana has joined #openstack-nova08:18
*** masber has quit IRC08:18
*** AlexeyAbashkin has joined #openstack-nova08:18
*** pcaruana has quit IRC08:20
*** pcaruana has joined #openstack-nova08:20
*** sidx64 has quit IRC08:20
*** pcaruana has quit IRC08:21
*** pcaruana has joined #openstack-nova08:21
*** pcaruana has quit IRC08:22
*** ragiman has joined #openstack-nova08:23
*** tuanla____ has quit IRC08:25
*** tuanla____ has joined #openstack-nova08:26
*** alexchadin has quit IRC08:27
*** pcaruana has joined #openstack-nova08:29
*** sidx64 has joined #openstack-nova08:30
*** pcaruana has quit IRC08:30
*** tianhui_ has joined #openstack-nova08:31
*** tianhui has quit IRC08:32
*** sidx64 has quit IRC08:33
*** salv-orlando has joined #openstack-nova08:34
*** markvoelker has joined #openstack-nova08:34
*** hoangcx has quit IRC08:35
*** hoangcx has joined #openstack-nova08:36
*** pcaruana has joined #openstack-nova08:36
*** pcaruana has quit IRC08:37
*** pcaruana has joined #openstack-nova08:38
*** pcaruana has quit IRC08:39
*** pcaruana has joined #openstack-nova08:40
*** salv-orlando has quit IRC08:40
*** pcaruana has quit IRC08:40
*** pcaruana has joined #openstack-nova08:41
*** pcaruana has quit IRC08:41
*** damien_r has joined #openstack-nova08:43
*** tianhui_ has quit IRC08:43
*** sapd has quit IRC08:44
*** amoralej|off is now known as amoralej08:44
*** sapd has joined #openstack-nova08:45
*** tuanla____ has quit IRC08:46
*** hiro-kobayashi has quit IRC08:46
*** tianhui has joined #openstack-nova08:46
*** tuanla____ has joined #openstack-nova08:47
*** avolkov has joined #openstack-nova08:48
*** jpena|off is now known as jpena08:48
*** tianhui has quit IRC08:51
*** tianhui has joined #openstack-nova08:55
*** masber has joined #openstack-nova08:55
*** pcaruana has joined #openstack-nova09:02
*** mdnadeem has joined #openstack-nova09:03
*** pcaruana has quit IRC09:03
*** pcaruana has joined #openstack-nova09:03
*** pcaruana has quit IRC09:05
openstackgerritjichenjc proposed openstack/nova master: z/VM Driver: Spawn and destroy function of z/VM driver  https://review.openstack.org/52765809:06
openstackgerritjichenjc proposed openstack/nova master: z/VM Driver: add snapshot function  https://review.openstack.org/53424009:06
openstackgerritjichenjc proposed openstack/nova master: z/VM Driver: add power actions  https://review.openstack.org/54334009:06
openstackgerritjichenjc proposed openstack/nova master: z/VM Driver: add get console output  https://review.openstack.org/54334409:06
*** hshiina has quit IRC09:11
*** owalsh_ is now known as owalsh09:13
*** zhurong has quit IRC09:16
*** masber has quit IRC09:17
*** gaoyan has quit IRC09:20
openstackgerritjichenjc proposed openstack/nova master: z/VM Driver: add snapshot function  https://review.openstack.org/53424009:21
openstackgerritjichenjc proposed openstack/nova master: z/VM Driver: add power actions  https://review.openstack.org/54334009:21
openstackgerritjichenjc proposed openstack/nova master: z/VM Driver: add get console output  https://review.openstack.org/54334409:21
*** Dinesh__Bhor has quit IRC09:25
gmann_mayur_ind: it would be good if you attach logs etc. on the bug. with a 500 it is not possible to check what went wrong09:26
*** masber has joined #openstack-nova09:28
*** mdbooth has joined #openstack-nova09:28
*** edmondsw has joined #openstack-nova09:30
*** mayur_ind has quit IRC09:31
*** hemna_ has quit IRC09:31
*** alexchadin has joined #openstack-nova09:34
*** edmondsw has quit IRC09:35
*** salv-orlando has joined #openstack-nova09:35
Kevin_Zhenggmann_ Hi, could you spare a few minutes and help me with a test issue? I'm working on some functional tests that have to verify the request_id; it seems the req['openstack.request_id'] is not translated to context.request_id in functional tests. Could you tell me where we generate the context in functional tests? I'm having trouble finding it09:37
*** salv-orlando has quit IRC09:38
*** salv-orlando has joined #openstack-nova09:38
*** yingjun has quit IRC09:41
*** mgoddard has joined #openstack-nova09:43
*** jichen has quit IRC09:44
Kevin_Zhenggmann_ Never mind, I got it09:45
gmann_Kevin_Zheng: ohk :), in QA meeting which is about to close09:45
*** Guest66969 is now known as asettle09:46
*** bkopilov has quit IRC09:46
*** ragiman has quit IRC09:46
*** chyka has joined #openstack-nova09:47
*** chyka has quit IRC09:51
*** ragiman has joined #openstack-nova10:02
*** mgoddard has quit IRC10:05
*** sidx64 has joined #openstack-nova10:07
*** liverpooler has quit IRC10:10
*** mdnadeem has quit IRC10:10
*** tssurya has joined #openstack-nova10:11
openstackgerritsahid proposed openstack/nova-specs master: virt: allow instances to be booted with trusted VFs  https://review.openstack.org/48552210:11
*** udesale_ has quit IRC10:16
*** gjayavelu has quit IRC10:17
*** FL1SK has joined #openstack-nova10:17
*** alexchadin has quit IRC10:26
*** masuberu has joined #openstack-nova10:27
*** sree has quit IRC10:30
*** abhishekk has quit IRC10:30
*** sree has joined #openstack-nova10:30
*** masber has quit IRC10:30
*** alexchadin has joined #openstack-nova10:31
*** bjolo has quit IRC10:32
*** Tom-Tom_ has quit IRC10:32
*** Tom-Tom has joined #openstack-nova10:33
*** derekh has joined #openstack-nova10:33
*** sree has quit IRC10:34
*** dtantsur|afk is now known as dtantsur10:36
*** mgoddard has joined #openstack-nova10:37
*** Tom-Tom has quit IRC10:38
*** bjolo has joined #openstack-nova10:39
*** vladikr has quit IRC10:39
gmann_gibi: can you check if all ok from notification wise - https://review.openstack.org/#/c/554090/210:44
gibigmann_: I opened it and I will try to check it today10:49
*** _ix has joined #openstack-nova10:54
*** AlexeyAbashkin has quit IRC11:00
*** AlexeyAbashkin has joined #openstack-nova11:00
*** logan- has quit IRC11:03
*** logan- has joined #openstack-nova11:03
*** sidx64 has quit IRC11:05
stephenfinlyarwood: When you're about, any chance you could test the combination of these two patches to see if it resolves your '[pci] passthrough_whitelist' concerns? https://review.openstack.org/#/c/554632/ https://review.openstack.org/#/c/552874/11:07
*** sar has quit IRC11:09
*** sidx64 has joined #openstack-nova11:13
*** abhishekk has joined #openstack-nova11:19
*** liverpooler has joined #openstack-nova11:21
*** sar has joined #openstack-nova11:22
*** belmore__ has quit IRC11:23
*** Zames has joined #openstack-nova11:24
*** chichi2 has joined #openstack-nova11:24
*** chichi2 has left #openstack-nova11:24
*** Zames has quit IRC11:26
gibigmann_: left a response and +2 in the review :)11:27
gmann_gibi: got it, thanks for confirmation11:33
*** liverpooler has quit IRC11:34
*** tssurya has quit IRC11:35
*** sidx64 has quit IRC11:39
*** zhurong has joined #openstack-nova11:39
openstackgerritZhenyu Zheng proposed openstack/nova master: Noauth should also use request_id from compute_req_id.py  https://review.openstack.org/55526611:39
openstackgerritYikun Jiang (Kero) proposed openstack/nova master: Add host field to InstanceActionEvent  https://review.openstack.org/55514611:39
openstackgerritYikun Jiang (Kero) proposed openstack/nova master: t  https://review.openstack.org/55526711:39
*** sidx64 has joined #openstack-nova11:42
*** sidx64 has quit IRC11:42
*** elmaciej has joined #openstack-nova11:44
*** sidx64 has joined #openstack-nova11:44
*** udesale has joined #openstack-nova11:46
*** _ix has quit IRC11:49
*** dklyle has quit IRC11:49
*** masuberu has quit IRC11:52
*** mhenkel has quit IRC11:52
*** masuberu has joined #openstack-nova11:52
*** pchavva has joined #openstack-nova11:52
*** sambetts_ is now known as sambetts11:54
*** arvindn05 has joined #openstack-nova11:54
*** zhouyaguo has joined #openstack-nova12:03
*** amoralej is now known as amoralej|lunch12:05
*** hrw has quit IRC12:05
*** ragiman has quit IRC12:06
*** odyssey4me has quit IRC12:07
*** odyssey4me has joined #openstack-nova12:08
*** mgoddard has quit IRC12:08
*** tiendc has quit IRC12:09
*** belmoreira has joined #openstack-nova12:09
*** alexchadin has quit IRC12:12
openstackgerritZhenyu Zheng proposed openstack/nova master: WIP  https://review.openstack.org/55328812:13
*** edmondsw has joined #openstack-nova12:15
*** mgoddard has joined #openstack-nova12:15
*** alexchadin has joined #openstack-nova12:15
openstackgerritRaoul Hidalgo Charman proposed openstack/nova master: Expose shutdown retry interval as config setting  https://review.openstack.org/55248312:16
*** hrw has joined #openstack-nova12:16
*** sree has joined #openstack-nova12:17
*** alexchadin has quit IRC12:18
*** alexchadin has joined #openstack-nova12:18
*** mdnadeem has joined #openstack-nova12:18
openstackgerritZhenyu Zheng proposed openstack/nova master: WIP  https://review.openstack.org/55328812:18
*** alexchadin has quit IRC12:18
*** ragiman has joined #openstack-nova12:19
*** Zames has joined #openstack-nova12:19
*** alexchadin has joined #openstack-nova12:19
*** alexchadin has quit IRC12:19
*** alexchadin has joined #openstack-nova12:20
*** alexchadin has quit IRC12:20
*** alexchadin has joined #openstack-nova12:21
*** alexchadin has quit IRC12:21
openstackgerritEric Young proposed openstack/nova master: Support extending attached ScaleIO volumes  https://review.openstack.org/55467912:21
*** alexchadin has joined #openstack-nova12:21
*** alexchadin has quit IRC12:22
*** alexchadin has joined #openstack-nova12:22
*** alexchadin has quit IRC12:22
*** alexchadin has joined #openstack-nova12:23
*** lucasagomes is now known as lucas-hungry12:23
*** Zames has quit IRC12:23
sean-k-mooneybauzas: o/12:24
*** yassine has quit IRC12:24
*** yassine has joined #openstack-nova12:25
openstackgerritZhenyu Zheng proposed openstack/nova master: WIP  https://review.openstack.org/55328812:25
efriedmorning nova12:25
sean-k-mooneyefried: morning :)12:25
openstackgerritMatthew Edmonds proposed openstack/nova master: make PowerVM capabilities explicit  https://review.openstack.org/54716912:27
*** tssurya has joined #openstack-nova12:29
openstackgerritYikun Jiang (Kero) proposed openstack/nova master: Extract generate_hostid method into utils.py  https://review.openstack.org/55528212:30
Spaz-WorkMorning daylight novaers12:31
*** vladikr has joined #openstack-nova12:31
edmondswefried lost your +1 on https://review.openstack.org/#/c/547169 with a commit message update12:31
efried...12:31
openstackgerritZhenyu Zheng proposed openstack/nova master: WIP  https://review.openstack.org/55328812:31
edmondswsince we decided to use specless bps, needed to remove the old bp reference12:31
efriededmondsw: Oh, other drivers do it this way?12:32
edmondswyep12:32
efriedcool.12:32
efried+112:32
edmondswtx12:32
edmondswstephenfin that would be a quick review if you have a minute... https://review.openstack.org/#/c/54716912:32
*** alexchad_ has joined #openstack-nova12:33
*** alexchadin has quit IRC12:33
*** cdent has joined #openstack-nova12:34
*** markvoelker has quit IRC12:34
*** liverpooler has joined #openstack-nova12:34
*** markvoelker has joined #openstack-nova12:34
sean-k-mooneyefried: fyi i left some comments on https://review.openstack.org/#/c/554305 last night/30 seconds ago. do you think allowing traits by resource_class is reasonable? not full granular support but just enough to cover 80% of use cases12:39
efriedresponding right now.12:39
efriedsean-k-mooney: done12:39
efriedsean-k-mooney: Oh, I missed your 30-seconds-ago comment - we crossed in the mail.  Looking...12:41
sean-k-mooneycool, reading. and ya, i'm aware that 99% of the time traits are specific to resource classes so you won't get conflicts12:41
efriedokay, you brought up a more viable example.  Is it possible for GPUs and CPUs to have the same traits?12:41
efriedI wonder if it makes sense for us to name those traits differently.  HW_CPU_X vs HW_GPU_X12:42
sean-k-mooneyefried: ya i think come gpus support sse instructions12:42
efriedBe interested to see how jaypipes feels about that.12:42
sean-k-mooneyefried: i see pros and cons to that12:42
efriedyeah12:42
efriedbut again, in that case maybe it makes sense to force them to use granular in the flavor for the time being, just so we can defer this discussion and get the majority of the function we need right away.12:43
sean-k-mooneyso ya, cliff-notes version: i don't think we need GPU1:trait:X=required,GPU2:trait:Y=required in the image, but gpu:trait:x=required,cpu:trait:y=required would be nice unless we namespace all the traits12:43
sean-k-mooneyefried: well images are user-creatable but flavors are not12:44
efriedHum.  I wonder if that's a problem in and of itself.12:44
efriedAren't we giving the user the power to hog resources now?12:44
sean-k-mooneyHW_CPU_X vs HW_GPU_X would "fix" this but we duplicate the traits in some cases12:45
openstackgerritZhenyu Zheng proposed openstack/nova master: WIP  https://review.openstack.org/55328812:45
sean-k-mooneyam no12:45
sean-k-mooneythe user cannot request a gpu via the image12:45
sean-k-mooneythey would have to select a flavor with a gpu as we don't have resource requests on the image, just traits12:45
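For readers following along, the two flavor extra-spec styles being contrasted look roughly like this; the trait names are illustrative, and the numbered "granular" form was still being worked on at the time:

    # un-granular: the trait must be satisfied somewhere in the
    # provider tree backing the allocation
    trait:HW_CPU_X86_AVX2=required

    # granular: the trait is tied to whichever provider satisfies
    # the numbered resource group
    resources1:VGPU=1
    trait1:CUSTOM_GPU_TRAIT_X=required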
cdentgood morning jaypipes, edleafe, efried: I believe we had some discussions about what counts as a valid uuid recently; I have some related concerns within placement: we save uuids as strings of length 36, but in most (but not all) places we json schema validate incoming uuids to accept both the hyphenated and un-hyphenated forms. In the dict format of PUTting allocations we do _not_; we only accept the hyphenated form.12:46
efrieddo we always save them with the hyphens?12:46
*** sree has quit IRC12:47
efriedeven if they're sent in without?12:47
cdentefried: as far as I know we don't process them as uuids, so we take what's given12:47
cdentthat's why I'm raising the issue12:47
efriedSo it'd be possible for me to create e.g. two separate resource providers with UUIDs A-B-C-D-E and ABCDE??12:47
sean-k-mooneycdent: the bare form is not valid for uuids, only the hyphenated one, so we should either convert or raise an exception and return a 40012:47
sean-k-mooneyefried: both of those would be invalid12:48
sean-k-mooneythe ascii representation of a uuid is not arbitrary, it has a fixed format12:48
cdentefried: That is my concern, yes, but I haven't had a chance to check it yet12:48
efriedI'm shorthanding.  A{8}-B{4}-C{4}-D{4}-E{12}12:48
efriedcdent: Okay, sounds like a thing to do.  Should be an easy enough func test to write.12:49
sean-k-mooneyefried: ah well, that format requires the hyphens12:49
*** tbachman has quit IRC12:49
cdentefried: yeah, was just checking in first before digging harder12:50
*** mriedem has joined #openstack-nova12:50
*** arvindn05 has quit IRC12:50
efriedsean-k-mooney: What cdent is saying is that the placement API is allowing either 12345678-ABCD-ABCD-ABCD-12345678ABCD or 12345678ABCDABCDABCD12345678ABCD as inputs, but may in fact be interpreting those as *different* values.12:50
efried...which would be bad.  Like crossing the streams.12:51
sean-k-mooneyefried: ya, so is placement using ovo for its data structures?12:51
*** ratailor has quit IRC12:52
sean-k-mooneyovo has 2 uuid fields: one is a strict check and the other just emits a warning if the format is invalid. if we use ovo internally in the strict form we can prevent incorrect uuids from getting to the db12:52
sean-k-mooneyefried: for the 12345678ABCDABCDABCD12345678ABCD case i would be happy if the api returned a 400 bad request in that case12:53
efriedWhich we can't do without a new microversion.12:53
sean-k-mooneyefried: ya but i would be in favor of a microversion for this12:54
efriedAlthough part of what cdent is about to find out is, even though the schema will pass that, maybe something further down will reject it.12:54
*** yamamoto has quit IRC12:54
*** arvindn05 has joined #openstack-nova12:54
sean-k-mooneyefried: well the db field is a varchar(36) so it won't catch it; it would have to be something in the python code before it hits the db layer12:55
efriedsean-k-mooney: I would too (be in favor of a new microversion to lock this down), but think about it from a consumer standpoint.  They don't have to use the new microversion in order to start passing in their UUIDs with hyphens.12:55
*** _ix has joined #openstack-nova12:55
efriedIt'd be kind of weird, like "use this new microversion so you can make sure I'm using hyphens in my UUIDs for me."12:55
sean-k-mooneyyes, the microversion is just stopping them passing without hyphens12:56
efriedI sorta doubt consumers will bother with it, considering they would then have to do a 406-and-retry-with-lower-microversion branch.12:56
sean-k-mooneyefried: well i guess we need to first check what the behavior is today12:57
efriedyuh12:57
efriediiuc, cdent is on that.12:57
sean-k-mooneyperhaps we normalise it at some point12:57
efriedif we'd just quit bugging him :P12:57
sean-k-mooney:) but if we didn't bug him that would just give him more time to get pulled into internal meetings12:58
cdentthis came about because of internal discussions12:58
* cdent squares the circle12:58
*** lyan has joined #openstack-nova12:58
* efried shouts over cdent's shoulder: "Oi, cdent's manager, this is important, cancel all his meetings!"12:58
*** lyan is now known as Guest2940012:59
efried(cdent probably works from home, huh.  I'm shouting at his dog.)12:59
*** zhurong has quit IRC13:00
cdentno dog, and apparently you weren't loud enough to wake the cat13:00
*** udesale has quit IRC13:00
*** sq4ind has joined #openstack-nova13:04
sq4indhi guys13:04
sq4indI have a problem after upgrading to queens13:04
sq4indI cannot live migrate instances; I am getting an error in nova-conductor:13:05
*** idlemind has joined #openstack-nova13:05
sq4indSetting instance to ACTIVE state.: NoValidHost: No valid host was found. Unable to move instance 220a6584-02ae-4a22-9940-6f64bbb4a1d8 to host nova0 There is not enough capacity on the host for the instance.13:05
sq4indbut there are plenty of resources13:05
sq4indin the placement-api : Over capacity for MEMORY_MB on resource provider 52c0c39e-30f9-4bd8-84e9-af5c35aac61f. Needed: 2048, Used: 175104, Capacity: 122355.013:06
*** crushil has joined #openstack-nova13:06
sq4indPlacement API returning an error response: Unable to allocate inventory: Unable to create allocation for 'MEMORY_MB' on resource provider '52c0c39e-30f9-4bd8-84e9-af5c35aac61f'. The requested amount would exceed the capacity.13:06
sq4indany idea ?13:06
efriedallocation ratio thing?13:06
sq4inddefault13:06
sq4ind1.513:06
sean-k-mooneysq4ind: did you set the allocation ratio in aggregates or in the compute node's nova.conf?13:07
sq4indon compute13:07
sean-k-mooneysq4ind: oh ok. we broke the aggregate allocation ratios13:07
*** suresh12 has joined #openstack-nova13:07
sean-k-mooneysq4ind: can you share the full resource provider info for 52c0c39e-30f9-4bd8-84e9-af5c35aac61f13:08
sq4indit looks like the resources are not being properly updated13:08
openstackgerritEric Fried proposed openstack/nova master: Change compute mgr placement check to region_name  https://review.openstack.org/55475913:09
sean-k-mooneysq4ind: well your resource usage exceeds your capacity: currently 175104 > 122355.0, a ratio of about 1.413:09
sq4indsean-k-mooney, but how:13:10
sq4ind[root@nova0 ~]# free -m13:10
sq4ind              total        used        free      shared  buff/cache   available13:10
sq4indMem:         120694        6338      113954           9         402      10772713:10
sq4indSwap:          3815           0        381513:10
sq4ind?13:10
sean-k-mooneyif the ratio is not set in the RP and is at the default of 1.0  then it would fail with that message13:10
sean-k-mooneyfree -m shows the currently in-use memory13:10
sean-k-mooneynot the reserved memory13:10
sean-k-mooneyif you are using kvm it does not preallocate the vm memory and only allocates as guests use it13:11
efriedmriedem: I went ahead and made that change -----^13:11
efried...and rechecked the devstack side - although there's no way the devstack change fails because of this tweak.13:11
sean-k-mooneysq4ind: what does the hyperviors api say is used on nova013:12
*** suresh12 has quit IRC13:12
efriedbecause now we're both setting and checking the new value.13:12
*** bjolo has quit IRC13:12
efriedmriedem: I think we'd be looking for the nova patch itself to fail tempest now, because it's using devstack with os_region_name set.13:12
*** mdnadeem has quit IRC13:13
mriedemaye aye13:13
sq4indsean-k-mooney,13:13
sq4ind| current_workload     | 0                      |13:13
sq4ind| disk_available_least | 252971                 |13:13
sq4ind| free_disk_gb         | 492681                 |13:13
sq4ind| free_ram_mb          | 110067                 |13:13
sq4ind| host_ip              | 10.252.16.190          |13:13
sq4ind| host_time            | 13:12:45               |13:13
sq4ind| hypervisor_hostname  | nova0.linguamatics.com |13:13
sq4ind| hypervisor_type      | QEMU                   |13:13
sq4ind| hypervisor_version   | 2009000                |13:13
sq4ind| id                   | 1                      |13:13
sean-k-mooneysq4ind: pastbing might be simpler13:13
openstackgerritKashyap Chamarthy proposed openstack/nova master: libvirt: Allow to specify granular CPU feature flags  https://review.openstack.org/53438413:13
sq4ind| load_average         | 0.10, 0.05, 0.06       |13:13
sq4ind| local_gb             | 492834                 |13:13
sq4ind| local_gb_used        | 153                    |13:14
sq4ind| memory_mb            | 122867                 |13:14
kashyapsq4ind: Please use pastebin :-(13:14
sq4ind| memory_mb_used       | 12800                  |13:14
sq4ind| running_vms          | 4                      |13:14
sq4ind| service_host         | nova0.linguamatics.com |13:14
sq4ind| service_id           | 12                     |13:14
sq4ind| state                | up                     |13:14
sq4ind| status               | enabled                |13:14
sq4ind| uptime               | 1:59                   |13:14
sq4ind| users                | 1                      |13:14
sq4ind| vcpus                | 16                     |13:14
sq4ind| vcpus_used           | 6                      |13:14
sq4indsorry13:14
sq4indsean-k-mooney, sorry for pasting here13:14
sq4indhttps://pastebin.com/aprda4We13:14
sean-k-mooneysq4ind: thats ok13:15
mriedemlyarwood: can you check https://review.openstack.org/#/c/555029/ before we do a queens release?13:15
*** yingjun has joined #openstack-nova13:15
mriedemmelwitt: might want to throw the spec review day on the schedule wiki https://wiki.openstack.org/wiki/Nova/Rocky_Release_Schedule#Special_review_days13:17
sean-k-mooneysq4ind: so the capacity on the RP more or less matches the hypervisor api: 122355.0 vs 12286713:17
sean-k-mooneysq4ind: but the used ram is way off 12800 vs 17510413:18
*** udesale has joined #openstack-nova13:18
sean-k-mooneyefried: any idea what could cause ^ other than leaked allocations13:19
*** amodi has joined #openstack-nova13:20
sean-k-mooneythe difference between 122355.0 and 122867 is that placement is in MiB and the hypervisor api is in MB, i think13:20
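For the record, the gap sean-k-mooney is eyeballing is exactly 512 MB, which matches nova's default reserved_host_memory_mb rather than a MiB/MB mismatch. A quick check, assuming that default and placement's capacity formula:

    capacity = (memory_mb - reserved_host_memory_mb) * allocation_ratio
             = (122867 - 512) * 1.0
             = 122355.0

That also fits efried's suspicion below: with the 1.5 ratio from nova.conf the capacity would be 183532.5, comfortably above the 175104 already used.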
efriedsean-k-mooney: TBH I never understood the allocation ration breakage issue.13:20
efrieds/ration/ratio/13:20
sean-k-mooneyefried: if you set them in the host aggregate we did not use that value to set the RP allocation ratio and instead only used the nova conf version13:21
*** dtantsur is now known as dtantsur|brb13:21
sean-k-mooneyat least i think that was the issue13:21
*** yamamoto has joined #openstack-nova13:21
efriedIn this case it's nowhere close, though, as sq4ind points out.13:22
sean-k-mooneywell the capacity is right, it's the used that is wrong13:22
efriedAlthough in the original statement it's a lot closer: "in the placement-api : Over capacity for MEMORY_MB on resource provider 52c0c39e-30f9-4bd8-84e9-af5c35aac61f. Needed: 2048, Used: 175104, Capacity: 122355.0"13:22
sean-k-mooneyso this is nothing to do with allocation ratios13:22
efriedsq4ind: Are you able to query the placement API directly?13:23
sq4indefried, let me try13:23
efriedI would like to see what placement thinks the allocation ratio is for that MEMORY_MB inventory record on provider 52c0c39e-30f9-4bd8-84e9-af5c35aac61f13:23
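A hedged sketch of the direct query efried is asking for, done the way nova's report client talks to placement via keystoneauth1; every credential and URL here is a placeholder:

    # Fetch the MEMORY_MB inventory (total, reserved, allocation_ratio)
    # for the suspect resource provider.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(auth_url='http://controller/identity/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)
    rp = '52c0c39e-30f9-4bd8-84e9-af5c35aac61f'
    resp = sess.get('/resource_providers/%s/inventories' % rp,
                    endpoint_filter={'service_type': 'placement'})
    print(resp.json()['inventories']['MEMORY_MB'])

If allocation_ratio comes back 1.0 rather than the 1.5 configured in nova.conf, the compute node never pushed its ratio to placement, which would fit the symptom.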
edleafecdent: reading scrollback13:23
sean-k-mooneyefried: yes, the original error is correct: allocating an additional 2048 on top of 175104 would violate the overcommit ratio13:23
efriedsean-k-mooney: Unless the alloc ratio is 1.513:24
efriedor higher13:24
edleafecdent: we should accept either, but normalize how we store them. Is that the change you are proposing?13:24
*** lucas-hungry is now known as lucasagomes13:24
efriededleafe: It is unclear whether we are normalizing them or not.13:24
efriedcdent is finding out.13:24
edleafeefried: gotcha. We definitely *should* be normalizing13:25
efriededleafe: Or only accepting one format.13:25
edleafeno, I don't think we need to do that13:26
cdenthere's the bug: https://bugs.launchpad.net/nova/+bug/175805713:28
openstackLaunchpad bug 1758057 in OpenStack Compute (nova) "When creating uuid-based entities we can duplicate UUIDs" [Undecided,Triaged]13:28
cdentwe do not normalize, they are treated as different resource providers13:28
sq4indefried, sorry but I am not able to query placement api directly13:29
*** awaugama has joined #openstack-nova13:30
cdentefried, edleafe: gonna have my lunch while that settles in13:31
edleafecdent: chew thoroughly!13:32
*** psachin has quit IRC13:32
*** eharney has joined #openstack-nova13:32
*** germs has joined #openstack-nova13:33
*** germs has quit IRC13:33
*** germs has joined #openstack-nova13:33
*** germs has quit IRC13:33
*** germs has joined #openstack-nova13:34
*** germs has quit IRC13:34
*** germs has joined #openstack-nova13:34
*** burt has joined #openstack-nova13:36
efriedcdent: Okay, so I think we have to fix the bug by normalizing UUIDs for all the APIs, and I don't think we need a microversion for that.  Afterwards we can consider whether we want a microversion to further restrict the acceptable input formats.13:36
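A minimal sketch of the normalization efried is proposing, assuming placement funnels every inbound UUID through one helper before it touches the database:

    import uuid

    def normalize_uuid(value):
        # uuid.UUID accepts both the hyphenated and the bare
        # 32-hex-digit forms (and raises ValueError for anything
        # else); str() re-emits the canonical lowercase,
        # hyphenated representation.
        return str(uuid.UUID(value))

    assert (normalize_uuid('12345678ABCDABCDABCD12345678ABCD')
            == normalize_uuid('12345678-abcd-abcd-abcd-12345678abcd'))

With that in place the two spellings collapse to a single provider instead of creating duplicates.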
*** amoralej|lunch is now known as amoralej13:37
sq4indis there any way to repopulate cells in the placement? (or is it safe to remove a cell and recreate it?)13:38
bauzasfolks, gentle notice I'm under the water with serious vGPU testing13:38
*** kholkina has quit IRC13:40
jaypipescdent: hey, got your question on UUIDs answered?13:41
mriedemkashyap: how's the libvirt min version bump thing going?13:42
kashyapmriedem: Hi13:43
kashyapmriedem: First fixing the last unit test of this, as we speak: https://review.openstack.org/#/c/534384/4/13:43
kashyap(Then I'll get to it.)13:43
kashyapJust duking around the last test for the conditional in driver.py.  The existing patch & tests all 'pass'13:43
kashyapmriedem: You got a deadline for me?  Or was it yesterday? :-)13:44
mriedemwould be nice to have that done by milestone 1 in case anything crops up, then we have time later in the release to deal with it13:44
kashyapAh, true.  /me goes to look when is Milestone-113:44
cdentjaypipes: it is probably a bug, so I made one: https://bugs.launchpad.net/nova/+bug/175805713:45
openstackLaunchpad bug 1758057 in OpenStack Compute (nova) "When creating uuid-based entities we can duplicate UUIDs" [Undecided,Triaged]13:45
mriedemkashyap: april 1913:45
kashyapmriedem: Ah, I'll be starting tomm or at most Monday.13:45
* kashyap even got a task reminder on phone for it this morning :P13:45
jaypipescdent: cool. looks like a relatively simple fix.13:45
cdentefried: I pretty much agree with you, but what do we do any extant providers that are the same uuid with different reps and have since diverged?13:46
cdentjaypipes: if you look above in scrollback there are some differing opinions on the right fix, but not vastly so13:46
kashyapmriedem: In your "copious free time", wonder if you can punch any holes in the above change: https://review.openstack.org/#/c/53438413:46
kashyapIf you can't get to it; it's fine.  Got enough attention so far, can wait13:46
mriedemkashyap: i already did wrt backports13:46
mriedemso don't really want to talk about that one honestly13:47
kashyapOkay, no prob; what dansmith suggested on the post is a bit more palatable.13:47
efriedcdent: That's a neat question.  Let the user clean 'em up?13:47
kashyap(But, IMHO, it just seems like everyone (even Tony) is trying to stick overly to the "letter of the law")13:47
efriedcdent: ...which entails allowing DELETE APIs to accept and use non-normalized UUIDs...13:47
lyarwoodmriedem: sorry missed your ping earlier, looking now13:48
mriedemdansmith: question in tssurya's scheduler sighup patch about locking https://review.openstack.org/#/c/550527/13:48
mriedemdansmith: i have a feeling we should be locking on that new attribute when we're resetting it13:48
*** esberglu has joined #openstack-nova13:48
efriedcdent: Gotta say it's pretty doubtful that something like this would have happened IRL, because the client - whatever it is - will generally be using one code path to create providers.  So they'll have created 'em all with the same UUID format.13:49
dansmithmriedem: okay I failed to circle back to those so I'll try to do that this morning dodging meetings13:50
*** gouthamr has joined #openstack-nova13:50
cdentefried: that's probably true, I'm just asking the questions for completeness. Also, just for the sake of the record and all that (in the sense that it is not really germane to today's reality): the whole point of having an http api is so that there are and will be multiple clients and different code13:51
efriedyup, I get that.13:51
efriedcdent: I say we just fix the glitch, and not bother making a cleanup tool until someone claims they need it.13:52
*** yamamoto has quit IRC13:53
efriedcdent, melwitt, mriedem: This would be a good bug for new contributors.13:53
edleafeefried: +1 on not worrying about it until we need to worry about it13:54
*** Tom-Tom has joined #openstack-nova13:54
*** felipemonteiro__ has joined #openstack-nova13:55
mriedemtag it with low-hanging-fruit then13:55
*** mlavalle has joined #openstack-nova13:55
*** jianghuaw has joined #openstack-nova13:56
efrieddone13:57
*** tbachman_ has joined #openstack-nova13:57
sq4indsean-k-mooney, efried any idea ?13:58
efriedsq4ind: None here, sorry.13:58
sq4indefried, thanks13:58
efriedsq4ind: But you may want to recap for jaypipes - this sounds up his alley to me.13:58
gibinova meeting starts in about a minute13:59
*** bkopilov has joined #openstack-nova13:59
openstackgerritMatt Riedemann proposed openstack/nova master: Create volume attachment during boot from volume in compute  https://review.openstack.org/54142014:03
*** tidwellr has joined #openstack-nova14:03
*** felipemonteiro_ has joined #openstack-nova14:03
efriedmriedem: Poetic patch title ^14:04
*** tuanla____ has quit IRC14:04
mriedemis it?14:04
*** felipemonteiro__ has quit IRC14:07
*** yamamoto has joined #openstack-nova14:08
*** moshele has quit IRC14:08
*** hamzy has quit IRC14:09
openstackgerritEric Berglund proposed openstack/nova master: PowerVM Driver: Network interface attach/detach  https://review.openstack.org/54681314:10
*** links has quit IRC14:10
*** itlinux has quit IRC14:11
openstackgerritZhenyu Zheng proposed openstack/nova master: Add request_id to isntance action notifications  https://review.openstack.org/55328814:13
*** yamamoto has quit IRC14:13
*** hongbin has joined #openstack-nova14:13
*** zhouyaguo has quit IRC14:13
*** dtantsur|brb is now known as dtantsur14:15
*** dklyle has joined #openstack-nova14:15
*** germs_ has joined #openstack-nova14:18
*** germs has quit IRC14:20
*** alexchad_ has quit IRC14:22
*** Spaz-Work has quit IRC14:22
*** yamamoto has joined #openstack-nova14:24
openstackgerritZhenyu Zheng proposed openstack/nova master: Add request_id to instance action notifications  https://review.openstack.org/55328814:25
*** Spaz-Work has joined #openstack-nova14:25
bauzasmriedem: come to France and you'll enjoy the day14:25
bauzaswell, if you can...14:25
*** r-daneel has joined #openstack-nova14:27
*** yamamoto has quit IRC14:28
openstackgerritZhenyu Zheng proposed openstack/nova master: Add request_id to instance action notifications  https://review.openstack.org/55328814:28
*** ccamacho1 has joined #openstack-nova14:31
*** ccamacho1 has quit IRC14:31
*** alexchadin has joined #openstack-nova14:31
*** ccamacho1 has joined #openstack-nova14:32
mriedemneed another core for the bottom 2 patches in this series to move the nova-cells-v1 job in-tree and then change it to use neutron, which is the first part of removing nova-network https://review.openstack.org/#/c/549780/14:33
*** ccamacho has quit IRC14:34
*** amodi has quit IRC14:34
stephenfinmriedem: I'll grab em14:36
*** sree has joined #openstack-nova14:37
*** yamamoto has joined #openstack-nova14:39
*** hamzy has joined #openstack-nova14:40
*** hrw has quit IRC14:41
*** yamamoto has quit IRC14:43
sq4indI think I've found why I was having issues... basically, when listing cells in placement, I had one out of three rabbitmq servers in the transport url, and only hosts connected to that rabbitmq server were updating their resource usage. Now that I've fixed the url, I am also moving VMs off hosts, removing the hosts from placement cells, and then recreating them. After that, everything works as it should14:44
sq4indhope that makes sense14:44
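What sq4ind describes is consistent with a cell mapping whose transport_url names only one of the cluster's three rabbit nodes; oslo.messaging transport URLs accept multiple comma-separated host:port pairs in a single URL, so a fixed URL might look roughly like this (hosts and credentials below are placeholders, and the cell's stored URL would then be updated, e.g. via nova-manage cell_v2 update_cell):

    rabbit://user:pass@rabbit1:5672,user:pass@rabbit2:5672,user:pass@rabbit3:5672/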
*** zhaochao has quit IRC14:45
efriedsq4ind: Glad you figured it out - sorry we couldn't be more help.14:46
*** yamamoto has joined #openstack-nova14:46
*** yamamoto has quit IRC14:46
*** jackie-truong has joined #openstack-nova14:49
*** felipemonteiro__ has joined #openstack-nova14:53
sq4indefried, no problem :D I like to dig more and more into openstack internals :D And you were very helpful: you gave me some clues about where to look :D, thanks ! :)14:55
*** felipemonteiro_ has quit IRC14:57
*** Swami has joined #openstack-nova14:57
*** gyankum has quit IRC14:58
*** alexchadin has quit IRC14:58
*** zhaochao has joined #openstack-nova14:59
*** abhishekk has quit IRC15:00
*** sree has quit IRC15:00
openstackgerritEric Fried proposed openstack/nova master: Support extending attached ScaleIO volumes  https://review.openstack.org/55467915:01
mriedemstephenfin: thanks for hitting those cells ci patches15:01
*** sree has joined #openstack-nova15:01
stephenfinmriedem: np15:03
*** tidwellr_ has joined #openstack-nova15:04
*** tidwellr has quit IRC15:04
*** dave-mccowan has joined #openstack-nova15:05
*** sree has quit IRC15:05
mriedemedmondsw: we don't need both a powervm-resize and powervm-cold-migrate blueprint15:06
mriedemif you implement resize, you have to have cold migrate15:06
mriedemso i'm going to mark the cold migrate one as superseded15:07
edmondswmriedem yep15:07
mriedemesberglu: ^15:07
edmondswrebuild and evac would go in there as well15:07
*** sree has joined #openstack-nova15:08
edmondswesberglu update the description on that one?15:08
edmondsw(resize)15:08
mriedemeh?15:08
mriedemthose are not the same as resize15:08
mriedemnor do they require a blueprint15:08
openstackgerritChris Dent proposed openstack/nova master: Use microversion parse 0.2.1  https://review.openstack.org/55026515:08
mriedemif your driver can spawn and destroy, you support rebuild15:08
*** elmaciej has quit IRC15:08
*** zhaochao has quit IRC15:09
*** dave-mccowan has quit IRC15:09
mriedemhttps://blueprints.launchpad.net/nova/rocky should be accurate now right?15:09
mriedem5 powervm-* blueprints15:09
edmondswmriedem sorry otp15:10
*** sidx64 has quit IRC15:10
*** sree has quit IRC15:12
openstackgerritRaoul Hidalgo Charman proposed openstack/nova master: Expose shutdown retry interval as config setting  https://review.openstack.org/55248315:14
mriedemdoes anyone from virtuozzo still work on nova?15:14
mriedembecause their CI is borked15:14
edmondswmriedem yeah, looks right15:14
*** moshele has joined #openstack-nova15:14
esbergluedmondsw: mriedem: Will update resize15:15
edmondswmriedem I meant evac would come only when we support migration15:15
edmondswesberglu I think mriedem wants it left as-is15:15
mriedemedmondsw: those aren't the same15:15
mriedemor related15:15
mriedemevac is not the same as cold migrate,15:15
mriedemit's a rebuild of the instance on another host15:15
mriedemusing spawn and destroy15:15
edmondswoh, right...15:16
*** dave-mccowan has joined #openstack-nova15:16
*** sidx64 has joined #openstack-nova15:16
edmondswsorry, too many different uses of the same term15:16
dansmithmriedem: should I link him or will you?15:16
dansmithedmondsw: http://www.danplanet.com/blog/2016/03/03/evacuate-in-nova-one-command-to-confuse-us-all/15:17
edmondswdansmith +115:17
*** sidx64 has quit IRC15:17
edmondswI remember reading that when it was new15:17
edmondswesberglu ^15:18
kashyapdansmith: At this point it warrants to be put in the channel topic ;-)15:18
dansmithheh15:18
kashyap"Evacuate", please see: $short-url15:18
kashyapdansmith: Then you can start chiding people: "Don't you read channel topic?!"15:19
dansmithkashyap: make a bot, that will waste more time15:19
kashyapTrue; it reminds me of the bot on #fedora-devel, where if someone just says:15:20
kashyapPerson: Ping15:20
kashyapBot: https://fedoraproject.org/wiki/No_naked_pings15:20
*** germs_ has quit IRC15:20
*** sidx64 has joined #openstack-nova15:20
*** amodi has joined #openstack-nova15:20
*** germs has joined #openstack-nova15:20
*** germs has quit IRC15:20
*** germs has joined #openstack-nova15:20
*** felipemonteiro__ has quit IRC15:21
*** sidx64 has quit IRC15:21
*** felipemonteiro__ has joined #openstack-nova15:21
efriedwow15:22
efriedTIL15:22
dansmithefried: not everyone agrees with those assertions (like myself)15:22
efried...that some people are just too damn touchy15:22
dansmithhehe15:22
dansmithyeah that15:22
eric-youngefried: thanks for the quick edit on https://review.openstack.org/55467915:22
mriedemcan we now say that nova people aren't as bad as fedora people?15:22
efriederic-young: Yahyoubetcha15:23
eric-youngefried: I am seeing another issue right now, and am testing to see if it's just me15:23
mriedemwhen it comes to the every 6 month nova hate-a-thon?15:23
efriederic-young: Mm, I didn't test or anything, just fixed obvious problem.15:23
mriedemeric-young: that requires a minimum os-brick right?15:23
eric-youngefried, yeah. silly mistake but it did expose something else.15:23
kashyapefried: Yeah, I don't mind such bare pings.  But prefer "dressed up" pings15:24
mriedemyou've been riedmeann'ed15:24
mriedemgd i can't even spell my own name15:24
mriedemffs15:24
kashyapmriedem: You slipped in "mean" there; I think they call it: "Freudian"15:25
* kashyap stops trolling15:25
*** lajoskatona has quit IRC15:28
sean-k-mooneyedleafe: just on the uuids: it's not a uuid anymore if you accept a string without hyphens. if we declare the field as an int at the api then we could accept 3c3770f6-e1a6-4c3c-ac1d-ba2aa2f231c4 as a hex int 3c3770f6e1a64c3cac1dba2aa2f231c415:28
mriedemfyi all things are going to be blocked right now https://review.openstack.org/55531415:28
mriedemuntil ^ merges15:28
*** Swami has quit IRC15:29
dansmithefried: so, on this aggregate thing15:29
dansmithefried: what if we had member_of=in:foo,bar&member_of=in:baz -- which would turn into (foo OR bar) AND baz15:30
dansmithefried: that would let me have infinite filters that AND together several "any one of these would be fine for this requirement"15:30
jaypipescfriesen: I'm dead set against "implicit" or "hidden" resources.15:30
dansmithefried: logically "tenant_aggregates AND az_aggregates AND gpu_aggregates AND shared_storage_aggregates" for some fictitious request with all those restrictions15:31
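A minimal sketch of the AND-of-ORs semantics dansmith is describing, assuming each repeated member_of=in:... value has already been parsed into a set of aggregate UUIDs (the helper below is illustrative, not placement code):

    def provider_matches(provider_aggs, member_of_groups):
        # member_of=in:foo,bar&member_of=in:baz -> [{'foo', 'bar'}, {'baz'}]
        # Repetition ANDs the groups together; each group ORs its members.
        return all(provider_aggs & group for group in member_of_groups)

    # (foo OR bar) AND baz
    assert provider_matches({'foo', 'baz'}, [{'foo', 'bar'}, {'baz'}])
    assert not provider_matches({'foo'}, [{'foo', 'bar'}, {'baz'}])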
eric-youngmriedem, yes, that patch requires an unreleased version of os-brick. Is there a way to instigate a release or just wait?15:31
efrieddansmith: Sure, that seems simple-but-powerful.  But we still need to address the question of whether to allow a queryparam key to be repeated.  We haven't so far.15:32
bauzasjianghuaw_: mriedem: FWIW, I filed a bug with a new 'vgpu' tag15:33
efriedcdent, edleafe: In your capacity as API SIGgers, what do you think?15:33
dansmithefried: yeah, but it's a common thing for query strings, which is why we have it as a list right?15:33
bauzasjianghuaw_: mriedem: https://bugs.launchpad.net/nova/+bug/175808615:33
openstackLaunchpad bug 1758086 in OpenStack Compute (nova) "nvidia driver limits to one single GPU per guest" [Low,Triaged] - Assigned to Sylvain Bauza (sylvain-bauza)15:33
bauzasso we could track all VGPU related bugs15:33
efrieddansmith: I agree it's a common thing for query strings in general.  But we've explicitly avoided doing it in placement so far.15:33
*** idlemind_ has joined #openstack-nova15:33
cdentquery parameter repetition is expected and normal, which is why under the covers the library returns a list of values15:33
efriedMeaning you can break the API with that.15:33
*** idlemind has quit IRC15:33
dansmithcdent: yeah, that15:33
dansmiththis: https://pastebin.com/pbgv9vux15:34
efriedYeah yeah, I know it's supported by HTTP and the tooling.15:34
cdentI'm more worried by what kind of impact this would have on query code that is already very challenging for mortals to understand15:34
cdents/query/database query/15:35
efrieddansmith: correct me if I'm wrong, but we can't avoid crossing that bridge *somehow*.15:35
dansmithefried: well, we can by just not doing things :)15:35
dansmithbut yeah, if we want it to be more than trivially useful...15:36
efriedI mean, we don't _have_ to implement it in a monster JOIN; we can do individual queries and do the set math in python.15:36
efriedSo cdent/edleafe y'all don't have a problem introducing a repeatable queryparam key, where we've avoided them in the past?15:36
*** itlinux has joined #openstack-nova15:37
efriede.g. resources=VCPU:3,MEMORY_MB:1024 instead of resources=VCPU:3&resources=MEMORY_MB:1024 -- if you do the latter, it does *not* work.15:37
edleafesean-k-mooney: the hyphens in a UUID are for humans. And Python doesn't require them: http://paste.openstack.org/show/708977/15:38
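For reference, what edleafe's paste presumably demonstrates (stock Python only; the exact paste content is not reproduced here):

    import uuid

    # uuid.UUID() accepts both the hyphenated and the bare hex forms...
    a = uuid.UUID('3c3770f6-e1a6-4c3c-ac1d-ba2aa2f231c4')
    b = uuid.UUID('3c3770f6e1a64c3cac1dba2aa2f231c4')
    assert a == b
    # ...and str() always renders the canonical hyphenated RFC 4122 form:
    assert str(b) == '3c3770f6-e1a6-4c3c-ac1d-ba2aa2f231c4'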
cdentwell that's kind of the issue: if we're going to allow it in place A, it would be better to allow it all places15:38
edleafeefried: reading back...15:38
efriedswhat I'm sayin15:38
dansmithefried: and is that a microversion change or a bug?15:38
dansmiththe CGI guy in me says it is a bug15:38
efrieddansmith: Oh, definitely would be a microversion change, and no it's not a bug.15:38
dansmithbecause I always forget and then have to fix my things :)15:38
dansmithokay15:39
efrieddansmith: It's explicitly the way we designed the API.15:39
*** itlinux has quit IRC15:39
mriedemeric-young: you could propose an os-brick release to the openstack/releases repo,15:39
dansmithso I guess you need to decide what it means to split them like that15:39
efriedI brought it up like a year ago and someone (possibly cdent) assured me that it was the way things were intended.15:39
mriedemeric-young: or i can, it's pretty easy,15:39
mriedemthen just need PTL sign off15:39
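The release request mriedem describes is a small YAML change in the openstack/releases repo, roughly like the following (the version number and hash here are invented for illustration):

    # deliverables/rocky/os-brick.yaml in openstack/releases
    releases:
      - version: 2.4.0
        projects:
          - repo: openstack/os-brick
            hash: <sha of the commit to release>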
*** itlinux has joined #openstack-nova15:39
dansmithwhether you just merge them as if they were one argument, or make some other assumption about why the caller is doing that15:39
cdentefried: i do not recall senator15:39
dansmithas is the case in member_of15:39
efriedjust so.15:39
dansmiths/is/would be/15:40
cdentefried: currently how do repeated params get represented in req.GET15:40
efriedOr we can go with using punctuation join15:40
cdentI think WebOb may be just "taking care of it"15:40
edleafeefried: ok, repeating a query param is not a bad thing. It's pretty much the only way to AND things15:40
efriedcdent: nope, I remember checking that when I ran across this originally.15:40
cdentin which case the doubling may be challenging15:40
eric-youngI'll take a look. no harm in me knowing how to do it :)15:40
bauzasjaypipes: are you still available for an hangout ?15:41
*** sar has quit IRC15:41
efriededleafe: Yeah, the issue is whether we deviate from the rest of the *placement* API specifically, which uses punctuation for lists, and does *not* support repeating keys.15:41
cdentefried: you certain? https://docs.pylonsproject.org/projects/webob/en/stable/api/multidict.html?highlight=multidict15:41
efriedIt's not about webob.  It's about how we process in the handlers.15:42
efriedhold on, I'll find code.15:42
*** pcaruana has joined #openstack-nova15:42
cdentyou should get the double list back on req.GET, but using items() (as done in RequestGroup parsing) may be throwing things off, dunno15:43
*** pcaruana has quit IRC15:44
dansmithcdent: efried edleafe: sorry I didn't think about this enough during spec review, but I hadn't gotten to the second use case in my head15:44
cdentsuch is the way of the world, ain't no thing15:45
efriedcdent: Well, I can't immediately suss what the handler is doing, cause it clearly thinks it's getting a single string value back from req.GET.15:45
*** jackie-truong has quit IRC15:45
efriedit's possible that GET is special and returns a single value if there's only one, but a list if there's more than one.  Which seems goofy, but sounds vaguely familiar.  And you have to use something else, like GETALL, if you want it to be a list every time.15:46
*** yamamoto has joined #openstack-nova15:47
efriedcdent: Oh, no, it looks like GET returns the first one, period.15:47
efried...where of course "first" could be anything, because dict hashing.15:48
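A sketch of the webob behaviour under discussion, assuming webob's documented Request/MultiDict API (not actual placement handler code):

    from webob import Request

    req = Request.blank('/rps?member_of=in:foo,bar&member_of=in:baz')

    # Dict-style access returns a single value, silently hiding the
    # duplicate (we don't assert here which occurrence it picks):
    one_value = req.GET['member_of']

    # MultiDict.getall() is the call that preserves every occurrence:
    assert req.GET.getall('member_of') == ['in:foo,bar', 'in:baz']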
*** pcaruana has joined #openstack-nova15:48
cdentso we know it's a fixable problem, but there's a fair bit of semantics wrangling associated with it15:49
cdentdan wants a specific meaning out of two different member_of keys, which is different from the presumed concatenation of two resources keys15:49
cdentit's that semantic difference of duplication that is a problem15:50
cdent(if it actually exists, I'm struggling to get my brain right on this because context switching)15:50
efriedcdent: "fixable problem" what are we talking about?  IMO the fact that we don't accept multiple instances of qparam keys at the moment isn't a problem - it's just the way we designed it.15:51
dansmithcdent: yeah, agree that member_of and resources should not behave in opposite ways15:51
efriedIf we're talking about the actual issue dansmith is trying to solve, then yeah, fixable.15:51
efrieddansmith: So my suggestion was to use punctuation rather than duplicating qparam keys.15:52
efriedmember_of=in:A,B;in:C15:52
dansmithyeah, I like that less, fwiw15:52
dansmithbut it's fine if so15:52
efriedOh, me too, but it doesn't make member_of and resources behave differently.15:52
dansmithyeah15:52
efriedWe can pick some other punctuation maybe.15:53
efriedBut & and + both have special meaning already :(15:53
*** yamamoto has quit IRC15:53
efried(since 'AND' is what we're trying to express)15:53
dansmiththe symbol used doesn't really matter15:53
cdentefried: "fixable problem" was the immediate thing of "being able to accept duplicates on keys in general". I agree that addressign dan's problem in the least disruptive way is probably syntax in the param15:53
dansmith(to me)15:53
cdent; can't work15:53
cdentit is equivalent to & (and actually more correct)15:53
jaypipesbauzas: yes sir15:54
dansmithcdent: in a url? really?15:54
cdentbut yeah, as long as we are consistent, doesn't matter15:54
cdentdansmith: yeah, cgi processing was modernized in the late 90s to use ; for splitting params, but it never really took off; libraries are supposed to support it anyway15:54
dansmithhuh, interesting15:54
*** chyka has joined #openstack-nova15:54
cdentit looks nicer too :)15:55
* jroll learned something new today15:55
efriedokay, so ':'?15:55
efriedoh, no.15:55
efriedcause we're using that for in:15:55
efriedsigh15:56
dansmithlet's go super wacky and use a grave accent15:56
dansmithdidn't see that coming didja!15:56
cdentunicode snowman15:56
efrieddànsmith15:57
efriedfrom now on15:57
dansmithheh15:57
cdentsince the values of member_of can't have a space, why not a + (which is space)?15:57
dansmith[08:57:29] Message(432): dànsmith Erroneous Nickname15:57
dansmithbummer15:57
*** tblakes has joined #openstack-nova15:57
bauzasjaypipes: argh, I have a meeting starting in 2 mins15:57
efriedcdent: + comes through as a space?  That could work.15:58
jaypipesbauzas: I'm free the rest of the day. just ping me when you're free.15:58
bauzasjaypipes: I just want to make sure we can discuss the direction for all the NUMA things before the next specs day15:58
efriedit's a bit hacky-overloady15:58
bauzasjaypipes: okay, cool15:58
dansmithdo not like15:58
edleafecdent: what is the reason why you want to avoid using &?15:58
cdentso we dont' have to encode it15:58
openstackgerritsahid proposed openstack/nova-specs master: libvirt: add support for virtio-net rx/tx queue sizes  https://review.openstack.org/53960515:58
cdentand it wasn't me that was someone else15:58
edleafecdent: no, I mean multiple query params15:58
efriededleafe: Because we don't support repeating queryparams elsewhere in the API where it would also make sense.  E.g. resources=15:59
efriedand the inconsistency would make sadness15:59
* edleafe wonders what "makes sense" means15:59
efriededleafe: e.g. resources=VCPU:3&resources=MEMORY_MB:102416:00
efried^ does not work.16:00
edleafeefried: there isn't a good reason why it shouldn't16:00
efriededleafe: other than that the code isn't set up to make it work.16:01
efriedIf we wanted to support that, we would have to make code changes.16:01
jrollif only we knew someone that could change code :)16:01
edleafejroll: impossible!!16:01
efriedjroll: This would be a pretty pervasive change, for IMO negative benefit.16:01
sean-k-mooneyedleafe: that is correct, python does not require them to have hyphens, but https://tools.ietf.org/html/rfc4122 specifies the requirements for the string representation on page 4, which requires the use of - when serialising the uuid as a string to be standards compliant16:01
dansmithjroll: are you helping? :)16:01
efriedNow we've got two different ways to express the same thing - double the test matrix, double the bugs, etc.16:02
cfriesenjaypipes: the emulator threads and I/O threads are separate from the vcpu threads, so it seems odd to make the user ask for a "VCPU"  to run them on.16:02
jrolldansmith: I'd be happy to make that work, it seems silly that it doesn't :)16:02
dansmithjroll: no, I mean helping by taking shots from the stands.. and I'm just joking of course :)16:02
jrollI can do that too :P16:02
sahidmriedem: btw i updated the vf trusted spec to address your suggestion about metadata16:02
dansmithheh16:02
mriedemsahid: ok16:03
jaypipescfriesen: it's all just a processor, IMO.16:03
jaypipescfriesen: either dedicated or shared CPU resource.16:03
jrollI'm just casting my vote for having the key twice in the url being the sane thing to do imho ¯\_(ツ)_/¯16:03
*** udesale has quit IRC16:03
jaypipescfriesen: if the emulator thread is dedicated to a physical host CPU, I really don't understand why that isn't considered a "resource consumption".16:03
dansmithjroll: for member_of you mean? I agree, that's what I like the best for my use case16:04
jrolldansmith: yep16:04
*** hemna_ has joined #openstack-nova16:04
sean-k-mooneycfriesen: well, normally the emulator thread floats over the vcpus of the guest, so they don't have to ask for extra cpus if they want that behavior16:04
*** harlowja has joined #openstack-nova16:04
*** felipemonteiro_ has joined #openstack-nova16:05
sahiddansmith: i updated tx/rx queue spec I hope you can have a look so I could address any issues16:05
efriedTo be completely clear on my position:16:05
efriedI love repeatable queryparam keys in general.16:05
efriedBUT16:05
efriedI'm opposed to making all the existing APIs accept repeated keys in addition to accepting their current syntax.16:05
efriedand16:05
dansmithsahid: ack, after the call16:05
efriedI'm pretty neutral on making the (new) member_of key repeatable, even though it deviates from the norm.16:05
mriedemif you're talking about AND semantics in the API, glance already has some for filtering https://developer.openstack.org/api-ref/image/v2/index.html#show-images16:05
sahiddansmith: cool thanks16:05
dansmithjohnthetubaguy: ^16:06
edleafesean-k-mooney: of course we should be strict in how we create and store them. But there is no harm in accepting a non-hyphenated UUID string - it's unambiguous16:06
cfriesenjaypipes: I fully agree that it's resource consumption if the emulator thread is running "isolated".  I'm just not sure it makes sense to use the "VCPU" resource given the name which strongly implies "virtual CPU".16:06
jrollefried: that's fair16:06
openstackgerritStephen Finucane proposed openstack/nova-specs master: Update spec to reflect reality  https://review.openstack.org/55500016:06
jaypipescfriesen: so what sean-k-mooney just said " normally the emulator tread floats over teh vcpus of the guest" is wrong then?16:06
sean-k-mooneyedleafe: i disagree, it does cause harm as we need to have normalisation code16:06
efriedmriedem: Unfortunately I don't see anything in there that helps us in this case.16:07
cfriesenjaypipes: normally the emulator threads are allowed to run on all the host cpus that the vCPU threads run on16:07
sean-k-mooneyjaypipes: also, the emulator threads are libvirt specific, so we have to be careful not to leak too much of the implementation details via the api16:07
johnthetubaguysahid: thanks for the quick turn around on that, taking a look at it again now16:08
dansmithcdent: efried: so, resources=CPU:1 AND resources=MEM:1024 would be the same semantic meaning as resources=CPU:1,MEM:1024 actually right?16:08
mriedemcoreycb: https://bugs.launchpad.net/nova/+bug/1758060 - i don't see that test in the upstream repo16:08
openstackLaunchpad bug 1758060 in nova (Ubuntu) "16.1.0 test failure UEFI is not supported" [Low,Triaged] - Assigned to Corey Bryant (corey.bryant)16:08
mriedemcoreycb: special sauce?16:08
dansmithI think someone said that wouldn't make sense, but now I'm not sure I get that16:09
*** felipemonteiro__ has quit IRC16:09
efrieddansmith: It makes sense.  I just don't think we should do it.  Because testability.16:09
jaypipessean-k-mooney: that's why I called it overhead_dedicated_Set16:09
sean-k-mooneydansmith: i would equate them as the same, from reading16:09
mriedemefried: "To find images tagged with ready and approved, include tag=ready&tag=approved in your query string.  (Note that only images containing both tags will be included in the response.)"16:09
dansmithefried: hmm, okay16:09
sahidjohnthetubaguy: ack thanks16:09
*** harlowja has quit IRC16:09
cfriesenjaypipes: currently if you enable "isolate" for the emulator then nova will internally allocate another physical host cpu to run the emulator threads16:09
jaypipescfriesen: right, and I'm opposed to that, because it f**ks with resource tracking and inventory management.16:10
cdentI said it didn't make sense, but that may be because I'm not fully understanding dan's use case16:10
*** claudiub has joined #openstack-nova16:10
dansmithcdent: I'm on a call right now, but maybe a hangout with interested parties when I'm done?16:10
cdentand also because I wasn't really thinking and on the resources case, more concatenation of value16:10
dansmithif it's really a misunderstanding thing16:11
cdentdansmith: I'd like to understand more but today's not great, i'm in a meeting right now and have been in one or another since I started today16:11
efriedThe problem is simply that we used ',' to mean 'AND' in the `resources` qparam; but ',' is already taken in `member_of` (to mean 'OR', unfortunately, but disambiguated by requiring `in:`, which consumes another nice piece of punctuation, the ';')16:11
jaypipescfriesen: I am attempting to create a new resource class that represents a physical dedicated CPU (sahid would rather this resource class be called CPU_DEDICATED insetad of PCPU) that can be used to inventory these kinds of resources (which are currently not tracked in the same way as other resources, much to my dismay)16:11
efrieds/';'/':'/16:11
efriedsigh16:11
dansmithefried: ah okay, although that's also kindof already done, but yeah I see that now16:12
sean-k-mooneyjaypipes: yes, if we track the overhead separately from the pcpu/vcpu resources it could imply that the cpus the guest sees are resources:*CPU minus the overhead, or that we ask placement for resources:*CPU plus the overhead16:12
cfriesenjaypipes: the problem is if I have X "dedicated" guest vcpus, and I want to allocate X+1 physical cpus, and run the emulator threads on a physical CPU that isn't running any vCPU thread.16:12
jaypipessean-k-mooney: this has nothing to do with "what the guest sees". this is a resource accounting issue only.16:12
efrieddansmith: so let's move ahead with repeatable member_of.  Human reading query string will interpret '&' as 'AND', as it should be.16:12
coreycbmriedem: special sauce indeeed! my bad.16:12
mriedemnice16:12
*** sar has joined #openstack-nova16:13
cdentwait, efried, did you just make a new different decision? I thought we had gone the other way ten minutes ago?16:13
efriedvay16:13
cfriesenjaypipes: I'd be fine with CPU_DEDICATED.  Would we also have CPU_SHARED instead of VCPU?16:13
efriedcdent: Where's your purple thingy?16:14
jaypipessean-k-mooney: this is only about answering the question "how many dedicated CPUs are left on the host (or on a specific NUMA node)?" and "how many shared CPUs are left on the host (or specific NUMA node)"16:14
sean-k-mooneyjaypipes: well, it could. what i'm asking is whether the resources we're counting are tracked solely by the values of resources:PCPU and resources:VCPU, and that the overhead extra spec doesn't affect how we claim in placement16:14
*** eharney has quit IRC16:14
cdentefried: do p!help16:14
efriedp!help16:14
jaypipescfriesen: no, we can't change VCPU to CPU_SHARED, unfortunately.16:14
cdentp!help16:14
efriedp!log16:14
jaypipescfriesen: which is why I wanted PCPU to be the class name for the dedicated CPU resource, since it matched VCPU16:14
cdentyou can do it in a privmsg too, so we don't have to see it. there you can leave off the p! prefix16:15
efriedcdent: http://p.anticdent.org/logs/openstack-nova?dated=2018-03-22%2016:05:18.169011#4hT516:15
*** tblakes has quit IRC16:16
efriedcdent: Basically, we collectively on average seem to hate the available options for intra-param punctuation more than we hate deviating from the rest of the API by supporting repeatable for this one new case.16:16
sean-k-mooneyjaypipes: can you clear up something for me: if i have resources:VCPU=4 resources:PCPU=1 and hw:overhead_dedicated_set=0, do i claim 5 cpus in placement (4 shared and 1 dedicated)?16:16
cdentoh, that's not how I interpreted that16:16
openstackgerritEric Young proposed openstack/nova master: Support extending attached ScaleIO volumes  https://review.openstack.org/55467916:16
cdentI got what I said here about being least disruptive: [t 3cRQ]16:17
purplerbot<cdent> efried: "fixable problem" was the immediate thing of "being able to accept duplicates on keys in general". I agree that addressing dan's problem in the least disruptive way is probably syntax in the param [2018-03-22 15:53:38.038412] [n 3cRQ]16:17
*** yamamoto has joined #openstack-nova16:17
*** ragiman has quit IRC16:17
jaypipessean-k-mooney: no, you claim 4 VCPU and 1 PCPU resources.16:17
jaypipessean-k-mooney: there's no such things as a generic "CPU resource"16:17
sean-k-mooneyjaypipes: yes, that is what i meant16:17
*** udesale has joined #openstack-nova16:18
sean-k-mooneyi just wanted to make sure we don't claim 4 VCPUs and 2 PCPUs16:18
cfriesensean-k-mooney: jaypipes: hw:overhead_dedicated_set doesn't make sense since we may want to run emulator threads separate from all vCPU threads (this is what "isolate" does today)16:18
sean-k-mooneyand in this case the guest receives 5 logical cpus and the emulator thread is pinned to the PCPU that was allocated to the vm16:19
cdentefried: it probably doesn't matter _that_ much as long as documentation etc16:19
cdent(whichever choice we make)16:19
jaypipescfriesen: ack. I'm reworking this in the spec after your feedback.16:19
*** andreas_s has quit IRC16:19
sean-k-mooneycfriesen: when you say isolate, which isolate do you mean?16:19
jaypipescfriesen, sean-k-mooney: can there ever be >1 emulator thread, though?16:20
efriedcdent, dansmith, edleafe: Okay, let's make a call one way or another.  And send an email.  And propose a spec delta.  And fix the code.  And close the books.16:20
cfriesenjaypipes: yes16:20
edleafecdent: efried: I didn't realize that we didn't accept multiple resources= params. But I do agree that changing that now is not a good use of time16:20
jaypipessean-k-mooney: he is referring to hw:emulator_threads_policy=isolate16:20
cfriesenjaypipes: and there can be multiple I/O threads too16:20
jaypipescfriesen: and how does "hw:emulator_threads_policy=isolate" account for >1 emulator thread? that was the whole point of the overhead_dedicated_set being a pinset -- allowing >1 emulator thread to be specified.16:20
sean-k-mooneyjaypipes: ah ok, hw:cpu_threads_policy=isolate is different, just making sure16:20
cfriesenjaypipes: all the emulator threads get affined to a single physical cpu16:21
jaypipescfriesen: that sounds like a libvirt-specific assumption that will likely change at any given time (as most of these things tend to do)16:21
sahidjaypipes: oh so you can't change VCPU? in that case i'm ok with PCPU16:22
*** ftersin has joined #openstack-nova16:22
cfriesenjaypipes: the emulator threads don't usually have a lot of work to do (the exception is during live migration I think).  the goal of "hw:emulator_threads_policy=isolate" is to ensure that the minimal work they do doesn't interrupt the guest cpu threads16:22
*** yamamoto has quit IRC16:22
*** ftersin has left #openstack-nova16:22
jaypipessahid: I mean, we *could*, but that would be a seriously annoying code change ;)16:22
mriedemmelwitt: if you approve a blueprint like https://blueprints.launchpad.net/nova/+spec/hurrah-for-privsep-again you should set the 'series goal' to rocky so it shows up on the actual rocky blueprints page: https://blueprints.launchpad.net/nova/rocky16:23
*** udesale has quit IRC16:23
melwittmriedem: sorry, I missed that it wasn't set. setting now16:23
jaypipescfriesen: so it never makes sense to have emulator threads on dedicated CPUs and guest vCPU threads on shared CPUs, right?16:24
cfriesenjaypipes: yes, I suppose that's qemu-specific16:24
cfriesenjaypipes: no, that wouldn't make sense.16:24
sahidjaypipes: yes, I agree with cfriesen, that never makes sense16:24
jaypipescfriesen: furthermore, from what I gather from your and sean-k-mooney's feedback, it also would never make sense to have emulator threads policy set to *anything* if the guest vCPU threads were *not* pinned to dedicated CPUs.16:25
cfriesenjaypipes: yes16:25
jaypipesk16:25
cfriesenjaypipes: but it might make sense to have vcpu threads on dedicated cpu, and emulator threads on shared cpu (lumped together with other emulator threads and shared vcpu threads)16:26
stephenfinyeah, that's one of the ideas that came up in reviews of sahid's spec ^16:27
cfriesenjaypipes: sahid: I'm just not sure if we need to keep the "allocate a whole host cpu for my emulator threads because I'm super important and don't want other guests to impact my emulator threads"16:27
sahidcfriesen: yes, you asked me, but I had not responded. I think that is not necessary16:27
stephenfinassuming we gain 'pcpu_pin_set' option (or whatever tetsuro's spec called for)16:27
*** yingjun has quit IRC16:27
sahidin some cases we just want the emulator threads to run somewhere16:28
jaypipesstephenfin: CONF.cpu_dedicated_set16:28
cfriesenIf we don't need that, then the simplest thing would be to enforce that we always have at least one shared host CPU, and if you ask for hw:cpu_threads_policy=isolate it runs there.16:28
stephenfinthat's the one16:28
jaypipesstephenfin: is my proposal, along with CONF.cpu_shared_set being the other disjoint set.16:28
stephenfinor maybe not, but the idea's the same16:28
*** masayukig has quit IRC16:29
jaypipescfriesen: you mean emulator_threads_policy, not cpu_threads_policy.16:29
*** gjayavelu has joined #openstack-nova16:29
cfriesenbah, yes16:29
sahidcfriesen: i see your point, we can't have cpu_shared_set empty16:29
jaypipesyet another reason all this stuff is so confusing...16:29
stephenfinSo we'd completely remove the ability to have one core per guest for emulator threads (e.g. what we currently have)16:30
stephenfinnot that I'm against that. Just making sure I understand16:30
sahidstephenfin: yes, no need of that anymore16:30
cfriesenstephenfin: that's really an implementation detail currently, but yes16:30
stephenfin(y)16:30
jaypipesstephenfin: I don't understand that last statement...16:31
jaypipesstephenfin: why would we be removing that ability?16:31
cfriesenjaypipes: if we assume that the emulator thread work scales with the number of vcpus, then we don't need to account for it separately, it's already in cpu_allocation_ratio.  This would simplify things since you wouldn't need to ask for an extra VCPU from placement.16:32
stephenfinjaypipes: Because you can't account for the extra per-guest CPU16:32
cfriesenjaypipes: we're saying we'd remove the code that allocates a whole extra host CPU to run the emulator threads16:32
*** yamamoto has joined #openstack-nova16:32
stephenfinwhat cfriesen says16:32
jaypipescfriesen: that's exactly *not* what I want.16:32
cfriesenjaypipes: sorry, which conversation thread is that for?16:33
cfriesen:)16:33
jaypipescfriesen: *something* needs to account for CPU resources consumed by emulator threads. right now, nothing is accounted for in placement.16:33
*** moshele has quit IRC16:33
cfriesenjaypipes: there are two possibilities.  1) emulator thread work scales with the number of guests.  2) emulator thread work scales with the number of vcpus16:33
*** andreas_s has joined #openstack-nova16:33
stephenfinjaypipes: Does it though? I mean, if those threads are running on cores meant for 'shared' workloads, do we care?16:34
*** AlexeyAbashkin has quit IRC16:34
cfriesenjaypipes: if 1 is true, then yes, we need to account for the work done by the emulator threads16:34
stephenfinAfter all, we don't account for the overhead of emulator threads _without_ the policy applied. We could just ignore it16:34
openstackgerritMatt Riedemann proposed openstack/osc-placement master: Resolve nits from I552688b9ee32b719a576a7a9ed5e4d5aa31d7b3f  https://review.openstack.org/53797116:34
jaypipesstephenfin: what's the difference between an emulator thread running on a shared CPU and a guest vCPU thread running on a shared CPU, then?16:34
stephenfinYou charge for the latter16:35
jaypipesstephenfin: we account for the latter, not the former.16:35
cfriesenjaypipes: if 2 is true, then the work done by the emulator thread is "pooled" with the work done by the vCPU threads and it's all accounted for in cpu_allocation_ratio16:35
jaypipesstephenfin: but that's my point... we're not accounting for the former when we should be.16:35
stephenfinAlso, the potential cycles consumed by emulator thread <<< guest vCPU thread16:36
jaypipesstephenfin: except during live migration?16:36
*** masuberu has quit IRC16:36
stephenfingood point16:36
jaypipesmy point being, we're currently not accounting for any of this stuff.16:36
*** eharney has joined #openstack-nova16:36
*** yamamoto has quit IRC16:36
cfriesenjaypipes: stephenfin: I think the interesting case would be if you have a bunch of "dedicated" guest cpus, and the emulator thread running on the "shared" host cpu pool16:37
*** crushil has quit IRC16:37
cfriesenbecause in that case we really don't account for the emulator thread work currently16:37
mriedemjohnthetubaguy: if you're still around, https://review.openstack.org/#/c/520248/16:37
stephenfincfriesen: I thought we did. sahid extended some 'overhead' function to do that16:37
stephenfin*currently did16:37
johnthetubaguymriedem: been meaning to catch you about that16:38
cfriesenstephenfin: right now the emulator threads for an instance run on a dedicated host cpu16:38
cfriesenand we do account for that16:38
mriedemstephenfin: he did https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L80416:38
cfriesen(if "isolate" is enable)16:38
stephenfincfriesen: Ah, ok. I missed that nuance16:38
cfriesenstephenfin: but we're talking about running the emulator threads on the "shared" host cpus now16:39
mriedemnote that we don't account for overhead in placement at all16:39
mriedemsince it's per-compute, and done on the compute16:39
jaypipescfriesen: well, *all* emulator threads run on the same dedicated CPU? or *each* emulator thread runs on a dedicated host CPU?16:39
cfriesenjaypipes: all on the same16:39
openstackgerritsahid proposed openstack/nova-specs master: virt: allow instances to be booted with trusted VFs  https://review.openstack.org/48552216:39
jaypipescfriesen: k16:39
stephenfincfriesen: At the moment?16:39
jaypipesmriedem: this is a different overhead...16:40
johnthetubaguymriedem: ah https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/remove-configurable-hide-server-address-feature.html would fix that16:40
cfriesenjaypipes: but each instance with "isolate" enabled gets a separate host CPU16:40
jaypipescfriesen: that's what I just asked you...16:40
stephenfinjaypipes: A guest can have multiple emulator threads16:40
cfriesenjaypipes: all the emulator threads for a given instance run on a single dedicated host CPU16:40
jaypipescfriesen: ok, so it's each instance gets a separate dedicated host CPU for its emulator thread if emulator_threads_policy=isolate.16:40
stephenfinso your question was confusing :)16:40
stephenfinjaypipes: *for its emulator threads16:41
stephenfin(plural)16:41
jaypipesstephenfin: we don't currently support >1 emulator thread.16:41
cfriesenjaypipes: qemu has multiple emulator threads, so we do16:41
mriedemjohnthetubaguy: yeah mentioned that in reply to the patch16:41
stephenfinindeed16:41
* stephenfin pipes down16:41
cfriesenjaypipes: but they're all pinned the same16:41
jaypipescfriesen: but *we* (i.e. nova libvirt virt driver) never sets >1 right?16:41
johnthetubaguymriedem: yeah, that is where I got that from16:41
cfriesenjaypipes: it's not up to libvirt, I don't think.  it's internal to qemu.16:42
stephenfinthis is where sahid should butt in16:42
cfriesenjaypipes: the existence of multiple threads is an implementation detail that is unimportant to the present discussion.16:42
sahidjaypipes: by default when using cpu_policy:dedicated we pin the emulator threads to the set of pCPUs dedicated to the guest16:43
johnthetubaguymriedem: I think I am about to +W this given we hide it during building, which was my main concern, that seem reasonable?16:43
sahidif that is your question16:43
mriedemjohnthetubaguy: i think you +Wing my patch is certainly reasonable yes16:43
cfriesenjaypipes: libvirt will only  let you set a single emulator thread affinity for a given domain16:43
mriedemthank you for being british16:43
johnthetubaguymriedem: heh16:43
cfriesenjaypipes: you *can* set individual IO thread affinities, but we don't muck with those at all at the moment16:43
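For context, the libvirt domain XML elements being referenced look roughly like this (cpuset values invented for illustration): a single emulatorpin affinity per domain, alongside per-vCPU and per-I/O-thread pins:

    <cputune>
      <vcpupin vcpu='0' cpuset='4'/>
      <vcpupin vcpu='1' cpuset='5'/>
      <emulatorpin cpuset='6'/>      <!-- one affinity for all emulator threads -->
      <iothreadpin iothread='1' cpuset='7'/>  <!-- individually settable -->
    </cputune>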
stephenfincfriesen: Agreed - it's not important. Sorry for bogging us down there16:44
stephenfinjaypipes: So yeah, one guest = one extra dedicated CPU16:44
jaypipesHOW MANY FRIGGIN CPU RESOURCES IS THIS GUEST CONSUMING? <-- why is it so hard to answer this damn question.16:44
*** gjayavelu has quit IRC16:44
stephenfinwithout emulator and CPU thread policies, N16:45
cfriesenjaypipes: with qemu you could easily have a dozen or more host threads for an instance with a single vCPU16:45
*** yamahata has quit IRC16:45
stephenfinwith emulator thread policy but no CPU thread policy, N + 116:45
jaypipescfriesen: all I care about is the resource accounting.16:45
* cdent blinks and steps slowly away16:45
stephenfinwith emulator thread policy and CPU thread policy == isolate, (N * M) + 116:45
stephenfinwhere N = requested CPUs for guest16:45
stephenfinand M is size of the sibling sets of the host16:46
*** cdent has left #openstack-nova16:46
stephenfintypically 2 for x86 platforms (HyperThreading)16:46
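Restating stephenfin's arithmetic as a hypothetical helper (names invented; this is not nova code):

    def host_cpus_consumed(n, m, emulator_isolate, cpu_threads_isolate):
        # n: guest vCPUs requested; m: host thread siblings per core
        consumed = n * m if cpu_threads_isolate else n
        if emulator_isolate:
            consumed += 1  # one extra dedicated host CPU per guest
        return consumed

    assert host_cpus_consumed(4, 2, False, False) == 4  # N
    assert host_cpus_consumed(4, 2, True, False) == 5   # N + 1
    assert host_cpus_consumed(4, 2, True, True) == 9    # (N * M) + 1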
jaypipescfriesen: I don't care how the instance knows which of its virtual processors are pinned to dedicated pCPUs. I don't care about which OS threads are used for emulator processing. All I care about is **how many CPU resources** is the guest consuming.16:46
* stephenfin is impressed by cdent's commitment there16:46
*** yamamoto has joined #openstack-nova16:47
*** andreas_s has quit IRC16:47
cfriesenjaypipes: stephenfin's comments are basically valid, I think.    The reason why it's slippery is that the amount of work done by the emulator thread is usually quite small, except for exceptions like live migration.16:47
jaypipescfriesen: if there isn't a way to calculate that simple resource accounting question, then something is completely f**ked about all of this.16:47
sean-k-mooneyjaypipes: well, as stephenfin said, it's (N * M) + 1 for emulator thread policy and CPU thread policy == isolate, but N and M depend on the compute host that is selected16:48
jaypipescfriesen: furthermore, if there isn't a simple way to do resource accounting, all of this smells like over-engineering to me.16:49
sean-k-mooneywell, actually N is the number of guest cpus but M is host dependent16:49
jaypipesugh16:49
stephenfinjaypipes: It's totally hardware specific and we don't want to leak that level of detail to the user16:50
sean-k-mooneyjaypipes: to your previous point however, if we have a PCPU resource and a VCPU resource then we can get rid of the host dependency16:50
sean-k-mooneyactually no we can't. we would still need to get the host info to the scheduler filter16:50
cfriesenjaypipes: currently with dedicated cpus it's deterministic because the emulator threads get a whole separate host CPU16:50
jaypipesstephenfin: I *also* do not want to leak *any* of this crap to the end user. My whole spec is intended to convey a simple to understand concept about consumable CPU resources.16:51
jaypipesstephenfin: i.e. the user wants a dedicated CPU, they ask for that. if they want 4 shared CPUs, they ask for that.16:51
cfriesenjaypipes: If you set the CPU threading policy to "ISOLATE", then the number of host CPUs actually consumed will depend on whether the host has hyperthreading enabled or not.16:52
*** yamamoto has quit IRC16:52
jaypipesstephenfin: the problem is that for some reason, everyone wants to complicate the situation endlessly with hardware-specific doodads that nobody outside of Intel and a couple folks at Red Hat even understand.16:52
jaypipesstephenfin: sorry if I'm frustrated, but by God, why is this stuff so complicated?16:53
sahidjaypipes: i think you are making it more complicated by mixing VCPU/PCPU, and also those two options overhead_dedicated/shared_set. we should keep all of that easy: you ask for PCPU or VCPU, and you ask whether or not to isolate the overhead16:53
stephenfinYup ^ ISOLATE gives us an additional case, where a user asks for an entire CPU including thread siblings16:53
sahidone case runs the same as the guest pCPUs assigned, and the other runs on CONF.cpu_shared_set16:53
cfriesenjaypipes: realistically, it's complicated because people can use the complexity to get better performance.16:53
jaypipessahid: how is that simpler than just having the user ask for a quantity of dedicated CPU and a quantity of shared CPU resources?16:53
stephenfincores, in 'lscpu' terminology16:53
sean-k-mooneystephenfin: ya, isolate is the real issue here16:53
sahidbecause how are you doing the pinning16:54
stephenfinWe could kill the 'ISOLATE' feature, but then folks, me included, are going to wonder what we're gaining for the people that use this16:54
jaypipessahid: again, I don't care about the pinning. all I care about here is the resource accounting. how *many* of dedicated CPU and shared CPU resources are being consumed on the host.16:54
*** wolverineav has joined #openstack-nova16:54
sean-k-mooneyjaypipes: and jay, ya i know, this is hardware-defined infrastructure at its best. sorry to bring up the edge cases16:54
*** wolverineav has quit IRC16:54
*** wolverineav has joined #openstack-nova16:55
sean-k-mooneystephenfin: well, we could replace isolate with avoid, which would mean we just make sure you land on a host that has no hyperthreads so you get the same performance16:55
cfriesenjaypipes: for the simple case I think it should be possible to make it simple.16:55
sean-k-mooneythat could be modelled as a trait16:55
stephenfinjaypipes: and it's understandably frustrating but it's telcos that are the main driver of this stuff. We're just enabling it16:55
jaypipessahid: and by "I", I mean "this spec doesn't care about the pinning". Not that the virt driver doesn't care. Just that the placement service doesn't care or know at all which guest CPU is pinned to which host CPU.16:56
sahidjaypipes: i think we care about the pinning, because the end user will not see the CPU that will be dedicated for running emulator threads16:56
cfriesensean-k-mooney: then you're back to compute-node level granularity and may as well use aggregates16:56
jaypipesstephenfin: telcos are *not* demanding an utterly incomprehensible way of configuring workloads. they are demanding that their high performance workloads be set up in a way that takes advantage of the hardware as much as possible.16:57
stephenfinaye, what sahid said is the main thing bothering me. The user isn't getting a CPU. They're getting their emulator threads offloaded16:57
dansmithmriedem: sorry I keep missing those comments.. unintentional16:57
sean-k-mooneycfriesen: ya i know...16:57
jaypipesstephenfin: trust me, nobody in Verizon HQ Planning likes trying to understand these CONF and extra spec options. In fact, they pretty much have to rely on RH's triple-o people to configure stuff for them because nobody can understand any of it.16:58
cfriesenstephenfin: agreed.  but *internally* we could model that by bumping the VCPU resource count (assuming we ran the emulator thread on the "shared" host cpus).16:58
cfriesenstephenfin: this would basically be giving a rough estimate that the emulator work is roughly equal to a "shared" vcpu16:59
jaypipescfriesen: isn't that precisely what my spec is proposing? bumping the # of requested VCPU resources to account for emulator threads?16:59
sean-k-mooneyjaypipes: true, but ONAP, ECOMP, OSM and OpenMANO all use these things today so we can't break existing users. well, we can, but we need to know what we are breaking16:59
johnthetubaguyjaypipes: I still think we should have a guide that says "make me go predictably fast and isolated" and another that says "let me go as dense with the packing as I can to get max utilisation"16:59
cfriesenjaypipes: the difference is that I don't want the extra "VCPU" resource to be specified in the flavor.  I want nova to add it if the flavor asks for emulator threads to be isolated.17:00
jaypipessean-k-mooney: ONAP and eCOMP both go around nova and inventory the hosts entirely externally of nova. :(17:00
*** gyee has joined #openstack-nova17:00
jaypipescfriesen: what is the difference?17:00
jaypipesjohnthetubaguy: totes agree.17:00
sean-k-mooneyjaypipes: ya i hate that too but they use the flavour extra specs also for things like emulator threads and cpu pinning17:00
jaypipesjohnthetubaguy: clearly you've never read the RH NFV OpenStack tuning guide.17:01
cfriesenjaypipes: clarity to the user.  If I look at a flavor that specifies 5 cpus, but then the resource in the extra specs asks for 6 cpus, I'm going to think there's a mistake17:01
johnthetubaguyjaypipes: nope, should I?17:01
jaypipesjohnthetubaguy: no17:01
sahidcfriesen: yes, that is why the additional CPU used when using policy=isolate is not taken into account in quota17:02
johnthetubaguyjaypipes: cool, gtk17:02
sean-k-mooneycfriesen: that depends. if the resources ask for 6 cpus, but 5 are for the vm (which the vm sees) and 1 is for the emulator thread (which it does not), i think it's ok17:02
stephenfinjaypipes: Heh, yeah. I'm not saying it's _not_ all crap as it is (quite the opposite, tbh) but I just don't see yet how we can abstract this kind of stuff better than we do17:02
*** yamamoto has joined #openstack-nova17:02
jaypipessahid: and that is a resource accounting problem, no?17:02
*** fragatina has joined #openstack-nova17:02
sahidwe ask for 4 dedicated pCPUs but actually 5 are dedicated17:02
stephenfinI'm just seeing "if we do this, stuff that does work at the moment breaks"17:02
*** idlemind_ has quit IRC17:02
johnthetubaguystephenfin: what about having those two options I just mentioned?17:02
sahidjaypipes: it is yes, but saying I want 5 vCPUs when the user only has 4 vCPUs is also a mistake17:03
sean-k-mooneycfriesen: we accounted for the extra cpu for the emulator in placement and the guest still got what it expected17:03
*** idlemind has joined #openstack-nova17:03
*** fragatin_ has joined #openstack-nova17:03
stephenfinjohnthetubaguy: The problem is, what VZW wants is different from what E/// want17:03
openstackgerritMerged openstack/osc-placement master: Do not depend on jenkins user in devstack gate  https://review.openstack.org/55247617:03
sahidwe don't have any unit in our resource management to take that extra CPU into account17:03
cfriesensean-k-mooney: I want to distinguish between what's in the flavor  and what nova asks from placement.   If the flavor says "4 VCPU and I want isolated emulator threads" then I'm fine with nova asking for 5 VCPU.17:04
cfriesensean-k-mooney: but I don't want the flavor to have to ask for 5 VCPU17:04
stephenfinWe could also stop treating emulator threads as a VCPU and add another resource class to that17:04
stephenfin*for that17:04
sean-k-mooneysahid: again it depends: if the flavour vcpu field is set to 4, that means the guest will see 4 cpus. if resources:VCPU is 5 we will claim 5 cpus in placement, 4 of which would be used by the guest and one that could be used for the emulator17:04
jaypipessahid: so perhaps the way forward is indeed to have a completely separate CPU_SHARED and CPU_DEDICATED resource class, separte from VCPU. VCPU would represent guest vCPU threads that are using *shared* host CPUs. CPU_SHARED would represent emulator threads (and anything else not guest-related) that are on shared host CPUs, and CPU_DEDICATED would be the amount of dedicated CPUs.17:04
stephenfin(for the emulator_threads_policy issue)17:04
johnthetubaguystephenfin: I guess I am trying to suggest we document the two extreme cases really well, then document a few ways you might want to dial back in either direction from the extreme?17:05
cfriesenstephenfin: just to simplify things. :)17:05
stephenfinSo CPU_SHARED, CPU_DEDICATED, and CPU_EMULATOR_THREADS17:05
jaypipesstephenfin: CPU_EMULATOR_THREADS is not a consumable resource, though...17:05
jaypipesstephenfin: CPU_SHARED is the consumable resource that emulator threads consume.17:06
cfriesenstephenfin: fundamentally right now, emulator threads run on the same host CPUs as the guest CPUs do by default.17:06
sahidjaypipes: yes, basically i thought that was your idea from the beginning :)17:06
stephenfinjaypipes: It could be though, if we added the 'emulator_pin_set' list to nova.conf that sahid suggests17:06
dansmithsahid: I left you minor comments in that spec17:06
*** yamamoto has quit IRC17:06
sahidthanks dansmith i will address them17:07
cfriesenstephenfin: I don't like the idea of having a separate dedicated pool just for emulator threads when the "shared" pool is perfectly fine17:07
*** fragatina has quit IRC17:07
jaypipesstephenfin: you are conflating two different things. one is allocation of resources. the other is assignment of emulator threads to a particular set of host CPUs.17:07
cfriesenI think accounting for the emulator threads as a VCPU (specifically if all the cpus in a guest are "dedicated") is a reasonable approximation17:08
cfriesenIf any of the cpus in a guest are "shared", then we can run the emulator threads with them.17:08
jaypipesomg, chris you're killing me.17:09
stephenfinjaypipes: Am I? We're saying the resources for CPU_DEDICATED will come from CONF.cpu_dedicated_set. I'm saying the resources for CPU_EMULATOR_THREADS could come from CONF.cpu_emulator_thread_set17:09
*** fragatin_ has quit IRC17:09
sean-k-mooneyjaypipes: can i suggest we create an etherpad and start listing examples, and then clearly state what each means in terms of host, guest and placement allocations17:09
cfriesenjaypipes: ah, forget it.  just account for the emulator threads as a VCPU.17:09
stephenfinThen, as a guest, I want N CPU_EMULATOR_THREADS17:09
stephenfinsean-k-mooney: Not a bad idea17:10
jaypipesstephenfin: but according to cfriesen, you can't request that.17:10
jaypipesstephenfin: you can't say "I want 3 emulator threads".17:10
bauzasjaypipes: stephenfin: I'm done with my meeting17:10
bauzascan we jump into a hangout ?17:10
jaypipesbauzas: uhm,....17:10
cfriesenstephenfin: yeah, there's just some random number of emulator threads that are all pinned the same.17:11
jaypipesbauzas: you've missed quite a bit.17:11
bauzasjaypipes: yeah, that's what I just see...17:11
*** jackie-truong has joined #openstack-nova17:12
stephenfinRiight. OK, forget that17:12
sean-k-mooneymaybe this would help https://libvirt.org/formatdomain.html#elementsCPUTuning17:12
bauzasjaypipes: stephenfin: any possible tl;dr ?17:12
sean-k-mooneyand this https://libvirt.org/formatdomain.html#elementsIOThreadsAllocation17:12
jaypipessean-k-mooney: that's the problem...17:12
stephenfinYou can tell libvirt what CPUs the emulator threads will be placed on. Currently we say to pin only to one dedicated CPU17:12
jaypipessean-k-mooney: I am not interested in the pinning questions. I am only concerned with how to do resource accounting.17:12
stephenfinBut in this new world, we'd be affining to the entire pool of shared CPUs17:13
stephenfinThat first bit was what I was mixing up17:13
sean-k-mooneyjaypipes: yes i know.17:13
cfriesenjaypipes: if every guest CPU is "shared" then the emulator threads just float across the same set of host cpus.  It's when you have "dedicated" guest CPUs that the question of how to account for the emulator thread work arises.17:13
jaypipescfriesen: ack, understood.17:13
openstackgerritNguyen Hai proposed openstack/nova-specs master: Enhance nova-specs webpage and clean up repo  https://review.openstack.org/55180217:13
sean-k-mooneyso looking at the docs, you can only have 1 emulator thread. you can have multiple io threads and multiple vcpu threads17:13
cfriesensean-k-mooney: don't we spawn another emulator thread when live migrating?17:14
stephenfinbauzas: We're enraging jaypipes to the point that I fear for his pugs' lives17:14
jaypipescfriesen: but it's not like that is something dynamically calculated, right? I mean, a guest either has dedicated CPUs or it doesn.t'17:14
jaypipeshttps://tenor.com/view/angry-panda-mascot-mad-gif-345663817:14
sean-k-mooneycfriesen: i don't know, but in that case do we care?17:14
cfriesenjaypipes: correct.17:14
bauzasstephenfin: hold my beer17:14
sean-k-mooneycfriesen: it's live migrating; all SLAs are out the window at that point17:14
stephenfinbauzas: Also, we're trying to figure out how we map things like emulator threads policy and the ISOLATE CPU threads policy features to placement17:14
bauzask17:15
jaypipescfriesen: so it would be a request for, say resources:CPU_DEDICATED=4 (the vCPU threads) and resources:CPU_SHARED=1 (the emulator thread)17:15
*** belmoreira has quit IRC17:15
openstackgerritNguyen Hai proposed openstack/nova-specs master: Enhance nova-specs webpage and clean up repo  https://review.openstack.org/55180217:15
cfriesenjaypipes: if we assume that the emulator thread is doing a non-trivial amount of work then that would be accurate.17:15
*** mgoddard has quit IRC17:15
jaypipescfriesen: and no request for resources:VCPU, since there's no "VCPU" resources being consumed by the guest.17:15
cfriesenjaypipes: if we assumed the emulator thread is doing a trivial amount of work then we could avoid allocating resources for the emulator thread entirely17:16
*** yamahata has joined #openstack-nova17:16
jaypipescfriesen: it's not about whether the emulator thread is doing a lot of work or not. it's about whether we want to *account* for it -- which obviously would be up to the operator, yes?17:16
jaypipescfriesen: if the operator didn't care, they wouldn't put resources:CPU_SHARED=1 in the flavor extra spec17:17
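Under jaypipes' proposal, the flavor for cfriesen's example (4 pinned guest vCPUs plus accounted emulator overhead) would carry extra specs roughly like the following; note CPU_DEDICATED and CPU_SHARED are the proposed, not-yet-existing resource classes:

    # hypothetical flavor extra specs under the proposal being discussed
    extra_specs = {
        'resources:CPU_DEDICATED': '4',  # guest vCPU threads, pinned
        'resources:CPU_SHARED': '1',     # emulator-thread overhead, shared pool
    }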
*** yamamoto has joined #openstack-nova17:17
cfriesenjaypipes: okay, I get you.17:17
cfriesenkeeping resource allocation separate from the nitty-gritty of actually affining things, I think what you say is true17:17
jaypipes++17:17
stephenfinI wonder which operators _would_ care about this? It's not exactly a thing you'd use in a bog-standard cloud17:17
sean-k-mooneyjaypipes: i don't think we should assume resources:CPU_SHARED=1 is the emulator thread though17:18
* bauzas sits at the back and listens17:18
stephenfinAnother idea: we could add sahid's emulator_thread_pin_set option and forget the whole accounting thing17:18
jaypipessean-k-mooney: we're not assuming that.17:18
stephenfinSame way we don't account for things that currently run on the host outside of 'vcpu_pin_set'17:18
*** afaranha has quit IRC17:18
jaypipessean-k-mooney: it could be anything associated with the guest that is consuming some amount of shared CPU resources.17:19
*** mdbooth has quit IRC17:19
stephenfinThat's a variant of the "use the shared pool and ignore the impact" idea, but this would ensure our emulator threads wouldn't impact the guests running on said shared pool17:19
sean-k-mooneyok can i suggest we document this here https://etherpad.openstack.org/p/cpu-resource-accounting17:19
stephenfine.g. during a live migration17:19
sean-k-mooneythen we can pull it into the spec17:19
jaypipessure17:20
cfriesenjaypipes: the thing here though is that "emulator_thread_policy=isolate" is what we currently ask for.  I'm not enthusiastic about saying that they *also* need to set CPU_SHARED=1 or similar17:20
jaypipesI need to take a quick walk to screw my head back on, though.17:20
cfriesenI need coffee17:20
jaypipescfriesen: they wouldn't *need* to. if they didn't, the resource accounting would just continue to be messed up and inaccurate.17:20
bauzasI need to catch up on the convo...17:20
jaypipesanyway, I will now take a walk.17:21
openstackgerritsahid proposed openstack/nova-specs master: libvirt: add support for virtio-net rx/tx queue sizes  https://review.openstack.org/53960517:21
sean-k-mooneybauzas: the reason i suggested the etherpad is i think we all need to catch up and get it down on "paper"17:21
*** yamamoto has quit IRC17:22
bauzassean-k-mooney: etherpads are good, but I tend to think hangouts are way better17:22
*** archit has joined #openstack-nova17:22
bauzasfor reaching to a conclusion17:22
sahidhum.. I just noticed your comments johnthetubaguy, let me update the spec17:22
*** amodi has quit IRC17:22
stephenfinbauzas: I think we need to get our ideas in order first though17:22
*** archit is now known as amodi17:22
* bauzas opening a new tab then17:22
sean-k-mooneybauzas: we can do both17:22
stephenfinAlso, it's almost home time for me and I have dinner plans this evening :)17:23
*** dtantsur is now known as dtantsur|afk17:23
bauzasyour fault17:23
bauzasI just spent the whole day between meetings and troubleshooting17:23
bauzasI just feel I need to take my nerves out on something17:23
*** sridharg has quit IRC17:24
bauzasah, and kids-sitting, thanks to our glorious country that doesn't like work17:24
*** danpawlik has quit IRC17:27
*** sambetts is now known as sambetts|afk17:27
sahidmikal: sorry to annoy you each time I have a CI problem, but our CI is really slow, no?17:30
*** mgoddard has joined #openstack-nova17:30
sahidwrong channel sorry17:30
*** tblakes has joined #openstack-nova17:31
*** yamamoto has joined #openstack-nova17:32
* stephenfin has to bounce. Will pick up on https://etherpad.openstack.org/p/cpu-resource-accounting and jaypipes' spec in the morning17:35
*** derekh has quit IRC17:35
*** damien_r has quit IRC17:36
*** yamamoto has quit IRC17:37
rybridgesHello. Got a question for you guys. I am trying to setup routed provider networks with Neutron. I created a network with a segment and a subnet associated with that segment. Then the host aggregate for the segment gets created automatically in nova when I do those things. However, when I do openstack aggregate show on the aggregate, I don't see any hosts associated with the aggregate that gets created. Is this normal? Or a bug? Or some problem on our end?17:37
rybridgesI am running the openvswitch agent on my hypervisors. Don't have any other pre-existing aggregates17:37
*** sahid has quit IRC17:38
*** esberglu_ has joined #openstack-nova17:39
*** lucasagomes is now known as lucas-afk17:40
*** esberglu has quit IRC17:41
sean-k-mooneystephenfin: by the way we should keep https://etherpad.openstack.org/p/cpu-resource-accounting and turn it into docs for this stuff17:43
*** READ10 has joined #openstack-nova17:47
*** yamamoto has joined #openstack-nova17:47
*** tblakes has quit IRC17:48
openstackgerritMerged openstack/osc-placement master: Resolve nits from I552688b9ee32b719a576a7a9ed5e4d5aa31d7b3f  https://review.openstack.org/53797117:49
*** Anticimex has joined #openstack-nova17:50
melwittrybridges: I'm not familiar with the details of the feature but at a high level, I would think you (the admin) would have to add hosts to the host aggregates. that is, how would neutron know which hosts to put in which aggregates for you? mlavalle, can you confirm how one is expected to configure on the nova side to use routed provider networks? ^17:51
*** jackie-truong has quit IRC17:51
*** yamamoto has quit IRC17:52
*** danpawlik has joined #openstack-nova17:52
rybridgesThanks so much for the response melwitt. I was thinking nova might know because of the bridge_mappings in the openvswitch conf. Since all of the bridge mappings on my hypervisor point to the same physical network, and I created the neutron network with that physical network, I was thinking nova / neutron might be able to pick that up.17:53
rybridgesWe can add the hosts manually to the aggregate. I just wanted to make sure that is normal and we aren't expecting them to be added for us17:53
melwittyeah, understood. hopefully mlavalle can confirm when he's around. there are some docs and summit videos about the feature (https://etherpad.openstack.org/p/nova-ptg-rocky L240) but from skimming them I don't see mention of what to do on the nova side17:56
openstackgerritKashyap Chamarthy proposed openstack/nova master: libvirt: Allow to specify granular CPU feature flags  https://review.openstack.org/53438417:56
*** felipemonteiro has joined #openstack-nova17:56
*** ccamacho1 has quit IRC17:56
*** danpawlik has quit IRC17:57
*** felipemonteiro_ has quit IRC17:59
*** yankcrime has quit IRC18:00
*** gjayavelu has joined #openstack-nova18:00
*** jpena is now known as jpena|off18:00
openstackgerritMerged openstack/nova stable/pike: Revert "Refine waiting for vif plug events during _hard_reboot"  https://review.openstack.org/55381818:02
*** yamamoto has joined #openstack-nova18:02
rybridgesYa we found a talk about it from a summit in 2016 here: https://youtu.be/HwQFmzXdqZM?t=33m5s  The guy does a demo and shows that when you create the network and subnet, the aggregate is automatically created and has the compute hosts in there18:04
*** yamamoto has quit IRC18:06
*** yamamoto has joined #openstack-nova18:06
*** yamamoto has quit IRC18:06
*** mgoddard has quit IRC18:10
*** suresh12 has joined #openstack-nova18:10
*** boris_42_ has joined #openstack-nova18:13
*** harlowja has joined #openstack-nova18:15
*** AlexeyAbashkin has joined #openstack-nova18:15
openstackgerritDan Smith proposed openstack/nova master: Add AggregateList.get_by_metadata() query method  https://review.openstack.org/54472818:16
openstackgerritDan Smith proposed openstack/nova master: Add aggregates list to Destination object  https://review.openstack.org/54472918:16
openstackgerritDan Smith proposed openstack/nova master: Add request filter functionality to scheduler  https://review.openstack.org/54473018:16
openstackgerritDan Smith proposed openstack/nova master: Make get_allocation_candidates() honor aggregate restrictions  https://review.openstack.org/54799018:16
openstackgerritDan Smith proposed openstack/nova master: Add require_tenant_aggregate request filter  https://review.openstack.org/54500218:16
openstackgerritDan Smith proposed openstack/nova master: WIP: Honor availability_zone hint via placement  https://review.openstack.org/54628218:16
sean-k-mooneybauzas: oh by the way are you around? regarding the nic feature based scheduling. https://review.openstack.org/#/c/545951/ if i was to start it from scratch today i would totally use placement and traits and model the requests as traits on the neutron port for a resource class of type VIF18:16
sean-k-mooneybauzas: that said i was only meant to work on this feature this cycle, as it was assumed that little to no code change would be needed since it was approved for the last 2 cycles.18:17
sean-k-mooneybauzas: so i'm not really sure what to do with https://review.openstack.org/#/c/545951/.18:17
*** harlowja_ has joined #openstack-nova18:17
sean-k-mooneybauzas: 50% of the feature (all the discovery and storage of the nic features in the nova db) has been merged since pike. the only bit that was missing was using it in the scheduler and the change to the pcirequest spec.18:18
*** oomichi has joined #openstack-nova18:18
*** harlowja has quit IRC18:19
*** AlexeyAbashkin has quit IRC18:20
*** dave-mccowan has quit IRC18:20
openstackgerritmelissaml proposed openstack/nova master: fix a typo in provider_tree.py  https://review.openstack.org/55540418:21
*** danpawlik has joined #openstack-nova18:25
*** danpawlik has quit IRC18:30
*** fragatina has joined #openstack-nova18:31
*** ralonsoh has quit IRC18:31
*** tesseract has quit IRC18:32
openstackgerritDan Smith proposed openstack/nova-specs master: Amend the member_of spec for multiple query sets  https://review.openstack.org/55541318:32
dansmithefried: cdent: edleafe: ^ proposal to argue over18:32
*** mdurrant_ is now known as mdurrant18:33
*** gjayavelu has quit IRC18:33
*** gjayavelu has joined #openstack-nova18:34
openstackgerritSurya Seetharaman proposed openstack/nova master: Allow scheduling only to enabled cells (Filter Scheduler)  https://review.openstack.org/55052718:34
openstackgerritSurya Seetharaman proposed openstack/nova master: Modify nova-manage cell_v2 list_cells to display "disabled" column  https://review.openstack.org/55541518:34
openstackgerritSurya Seetharaman proposed openstack/nova master: Add --enable and --disable options to  nova-manage update_cell  https://review.openstack.org/55541618:34
openstackgerritSurya Seetharaman proposed openstack/nova master: Add disabled option to create_cell command  https://review.openstack.org/55541718:34
openstackgerritMatt Riedemann proposed openstack/nova stable/queens: Always deallocate networking before reschedule if using Neutron  https://review.openstack.org/55541818:36
*** germs has quit IRC18:36
*** germs has joined #openstack-nova18:37
*** germs has quit IRC18:37
*** germs has joined #openstack-nova18:37
kmallocdansmith: i wanted to ask you a question, would you (as a nova team/core) be opposed to the concept of an API that (say for pure administration purposes) would let you cleanup all resources (notably for the case of "keystone project has been deleted, but i want all instances cleaned up") -- letting nova schedule the "cleanup" (/cleanup-all-resources-for-project/<project-id>). Trying to avoid a "let someone scope to18:38
kmallocwhatever project, even if it doesn't exist" api, to let folks do that kind of cleanup/iterate over the instances in nova.18:38
kmallocmriedem: ^ cc (same question for you)18:38
jrollrybridges: not sure if this was in ocata, but routed networks should be using placement aggregates for that, might be worth hitting the placement api and checking those. (again that's definitely how it is in queens, idk about ocata though)18:39
*** germs has quit IRC18:39
openstackgerritBalazs Gibizer proposed openstack/nova-specs master: Network bandwidth resource provider  https://review.openstack.org/50230618:39
*** germs has joined #openstack-nova18:39
*** germs has quit IRC18:39
*** germs has joined #openstack-nova18:39
*** lpetrut_ has quit IRC18:39
jaypipeskmalloc: I would be opposed to that, yes. there are notification event queues that external agents/workers can listen to and do the needful cleanup via the Compute API.18:40
mriedemkmalloc: IMO that is something that can be dealt with in mistral18:40
dansmithkmalloc: you can do that today with a few lines of bash and novaclient right?18:40
melwittkmalloc: that's not something that's going to be in nova (or any of the other projects themselves) but in I think this openstackclient purge CLI does what you're describing https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/project-purge.html18:40
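The "few lines of bash and novaclient" approach, sketched in Python. The endpoint, credentials, and project UUID are placeholders, and it assumes admin credentials that allow all-tenants listing:

    from keystoneauth1 import loading, session
    from novaclient import client

    # Placeholder admin credentials.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://keystone.example.com:5000/v3',
        username='admin', password='secret', project_name='admin',
        user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)
    nova = client.Client('2.1', session=sess)

    # Delete every instance owned by the (possibly already deleted) project.
    doomed = 'PROJECT_UUID_HERE'
    for server in nova.servers.list(
            search_opts={'all_tenants': 1, 'tenant_id': doomed}):
        server.delete()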
*** READ10 has quit IRC18:40
*** tssurya has quit IRC18:41
sean-k-mooneymelwitt: there is nothing preventing an external service/agent/cli tool with admin privileges from doing that in principle, however, correct. it's just not implemented18:41
kmallocmelwitt: right, but if you can't get a scoped token for a project anymore...18:41
mriedemadmin?18:41
jaypipeskmalloc: an administrative token can be procured, no?18:41
kmalloci was looking at the forward idea of system-scope (for administrative actions)18:42
kmallocit is tying to eliminate the concept of an "admin" project you scope to for this kind of thing18:42
kmallocbut if nova can be made aware of system-scope for the "admin" cases, i guess that works just as well18:42
*** felipemonteiro has quit IRC18:42
sean-k-mooneykmalloc: e.g. an admin token that is only allowed to make requests against a specific host18:42
kmallocok, thanks!18:43
*** felipemonteiro has joined #openstack-nova18:43
*** tbachman_ has quit IRC18:43
kmalloc(again, forward looking things)18:43
mriedemhttps://review.openstack.org/#/c/525772/18:43
kmallocyeah18:43
kmallocty!18:43
kmallocjaypipes: sortof, system-scope is designed to not let you do things like make resources (create a VM) - but still do actions like [for example] disable a hypervisor (clearly not a project-scoped action, very much infrastructure admin) [these are examples, not meaning that api would exist]18:45
jaypipeskmalloc: not sure how forward looking it is, considering the bug report is from 2012-03-29.18:45
jaypipes;)18:45
kmallocjaypipes: forward looking in keystone-and-ks-middleware support vs "when the bug was filed and we suck at fixing that across openstack's history" ;)18:46
sean-k-mooneykmalloc: ah so a system-scoped token could set the datapath_down field on a neutron port or admin_state but not delete or create a neutron port18:47
kmallocsean-k-mooney: right. that would be the kind of idea18:48
*** tblakes has joined #openstack-nova18:48
sean-k-mooneykmalloc: would you also envision it allowing migrate actions, or just service-level apis18:48
kmallocsean-k-mooney: the plan is really for things that are meant to be done administratively vs by an end user. Migrate is a bit gray18:49
kmalloci've seen it used in non-administrative ways.18:49
sean-k-mooneykmalloc: i'm just trying to think of why a full cloud admin token would not be correct in this case18:49
kmallocthough haven't looked at common cases lately (my knowledge there is out dated)18:49
openstackgerritDan Smith proposed openstack/nova master: Add require_tenant_aggregate request filter  https://review.openstack.org/54500218:49
openstackgerritDan Smith proposed openstack/nova master: WIP: Honor availability_zone hint via placement  https://review.openstack.org/54628218:49
*** moshele has joined #openstack-nova18:50
kmallocsean-k-mooney: trying to break the "manage infrastructure" and "admin can do things for an account/domain/project"18:50
sean-k-mooneykmalloc: ah ok makes sense.18:50
kmallocyeah, it's something we've needed for a long time18:51
kmallocbecause admin via a magic project scope is ... kindof awful18:51
kmallocit leads to a lot of potential "oops, this scope bled through and let something happen", making the concept of say "domain admin" hard.18:51
jaypipeskmalloc: kinda awful that is the same as it was >6 years ago...18:51
kmallocjaypipes: yes, and through many many many many iterations... we have successfully moved the needle to "yeah we still need to fix it"18:52
kmallocjaypipes: which makes me a bit more than sad.18:52
jaypipeskmalloc: not saying it's not a good idea, just that clearly everyone has worked around it or lived with admin-ness for time eternal at this point.18:52
sean-k-mooneykmalloc: https://review.openstack.org/#/c/525772 will result in an api microversion change also, right18:52
kmallocsean-k-mooney: hm.18:52
*** amoralej is now known as amoralej|off18:53
*** cdent has joined #openstack-nova18:53
kmallocsure? but how are old microversions managed in policy... wait wait, going to stop going down this rabbithole for now18:53
mlavallerybridges, melwitt: Neutron creates the aggregates for you: https://github.com/openstack/neutron/blob/master/neutron/services/segments/plugin.py#L21618:53
openstackgerritMatt Riedemann proposed openstack/nova master: Always deallocate networking before reschedule if using Neutron  https://review.openstack.org/52024818:54
mriedemoomichi: can you re-approve this? ^ it sat so long that the test needed to be updated.18:54
kmallocjaypipes: right, but i think it's time to provide a real fix rather than relying on "everyone has their own workaround"18:54
sean-k-mooneykmalloc: do unscoped tokens work for all scoped apis? if so then you probably don't need one, but it might be nice to have a microversion18:54
kmallocjaypipes: and with policy-in-code, and scope-types we're really moving the right way18:54
kmallocsean-k-mooney: no, unscoped tokens don't work anywhere18:54
kmallocexcept to get a scoped token in keystone.18:54
jaypipeskmalloc: ok18:54
kmallocsean-k-mooney: we're trying to allow RBAC for system-scope actions, unscoped tokens are explicitly no-roles.18:55
sean-k-mooneykmalloc: ok so ya that would be a microversion bump then for backwards compatibility.18:55
melwittmlavalle: right, but what about compute hosts that go into those aggregates? is that something neutron can do automatically or is it left up to the admin to have to do that?18:55
*** tblakes has quit IRC18:55
melwittthanks for chiming in, btw :)18:55
kmallocsean-k-mooney: yeah i'm not sure how a microversion affects policy enforcement -- it becomes weird.18:55
jaypipeskmalloc: I struggle to have confidence in the firmness of the ideas after seeing keystone v3 domains, federation, PKI and other things go back and forth.18:55
kmallocjaypipes: well, please chime in on the scope types stuff.18:56
mlavallemelwitt, rybridges: Neutron adds the hosts to the aggregate: https://github.com/openstack/neutron/blob/master/neutron/services/segments/plugin.py#L22318:56
sean-k-mooneymlavalle: that is for routed networks correct18:57
mlavallesean-k-mooney: yes it is18:57
kmallocjaypipes: there are a number of things we have not changed once other services lean on it. [also not sure what you mean firmness of federation, that's stayed the same and been built on, it hasn't been taken out]18:57
melwittmlavalle: a-ha, okay. so rybridges is seeing behavior where no hosts get added to the aggregate18:58
kmallocjaypipes: but i'd rather have more eyes/comments than less on the scope-types work.18:58
*** avolkov has quit IRC18:58
rybridgesHey mlavalle! Thanks so much for the reply. We just checked the segmenthostmappings table in the DB and we see that the host mappings are in the table and our compute hosts are listed. But they are not added in the output of aggregate show18:58
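A quick way to confirm this symptom from the nova side, reusing the authenticated novaclient from the earlier sketch; with routed networks, Neutron creates one aggregate per segment and is expected to populate its hosts:

    # Reusing `nova` from the earlier novaclient sketch.
    for agg in nova.aggregates.list():
        # An aggregate with an empty hosts list here matches the
        # behavior being reported.
        print(agg.name, agg.hosts)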
kmallocsean-k-mooney: i'll have to see how the policy stuff shakes out with microversions, i'm sure it'll be 100% ok, just not clicking right this moment.18:58
*** moshele has quit IRC18:59
*** danpawlik has joined #openstack-nova18:59
kmallocsean-k-mooney: which means, i need to poke at things more. hehe :)18:59
*** itlinux has quit IRC18:59
*** itlinux has joined #openstack-nova18:59
kmallocsean-k-mooney: thanks for your time!18:59
*** tbachman has joined #openstack-nova19:00
cfriesenjaypipes: cpu thread isolation is actually a valid issue for resource accounting, because the amount of resources consumed is variable depending on whether the host has enabled HT or not.19:00
*** tblakes has joined #openstack-nova19:00
sean-k-mooneykmalloc: ya no worries, i just did not notice any microversion bumps in https://review.openstack.org/#/c/525772/4 and since it has +1/+2 already i thought i would ask, since it probably should not merge until that is checked19:00
*** voelzmo has joined #openstack-nova19:00
jaypipescfriesen: I don't doubt that. I'm saying that the cpu_threads_policy has nothing to do with resource accounting.19:01
kmallocsean-k-mooney: yeah, it might also not require a microversion, because it doesn't actually change anything yet, the change might need a microversion.19:01
cfriesenjaypipes: but it does, since it determines whether we need to allocate additional resources on some compute nodes19:01
kmallocsean-k-mooney: i'll make sure to get some time to ensure we aren't introducing any breaking changes there (i don't think we are) that would require microversion bumps and if we are... make sure it's added19:01
sean-k-mooneykmalloc: oh adding the scope_types=['project'] does not change the behavior19:02
kmallocsean-k-mooney: yeah it shouldn't.19:02
jaypipescfriesen: whether the host has HT enable impacts the reporting of CPU inventory for the host (or its NUMA nodes), but cpu_threads_policy only affects which host processors the guest is willing to be pinned to.19:02
*** r-daneel_ has joined #openstack-nova19:02
*** tbachman_ has joined #openstack-nova19:02
jaypipescfriesen: are you referring to cpu_allocation_policy?19:02
*** r-daneel has quit IRC19:02
*** r-daneel_ is now known as r-daneel19:02
jaypipesor cpu_threads_policy?19:02
kmallocsean-k-mooney: yeah it's meant to just allow for richer policy allowing for defining the scope-types in policy as well19:02
*** gjayavelu has quit IRC19:03
cfriesenjaypipes: no.  If I ask for 1 PCPU and set cpu_thread_policy=ISOLATE then it will actually consume two host CPUs (the main one and the HT sibling)19:03
kmallocsean-k-mooney: but that should just be allowing for more specific policy enforcement, not actually making a change to behavior.19:03
kmallocsean-k-mooney: that is why i was feeling confused, but you know, i am 100% willing to assume I was mis-understanding something since it's in nova's code base and i don't spend as much time there.19:04
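For context, registering a rule with scope_types in oslo.policy looks roughly like this; the rule name and check string here are illustrative rather than the exact ones in the patch:

    from oslo_policy import policy

    # Illustrative rule. scope_types is the new metadata being discussed;
    # on its own it does not change how the rule is enforced.
    rule = policy.DocumentedRuleDefault(
        name='os_compute_api:os-services',
        check_str='rule:admin_api',
        description='Enable or disable a compute service.',
        operations=[{'path': '/os-services/enable', 'method': 'PUT'}],
        scope_types=['system'])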
*** tbachman has quit IRC19:04
*** tbachman_ is now known as tbachman19:04
sean-k-mooneykmalloc: ok cool, sounds like you have it well in hand in any case. just thought i would ask for my own knowledge; i avoid api changes unless i have no choice19:04
*** danpawlik has quit IRC19:04
kmallocsean-k-mooney: cool. and again ty very much!19:04
cfriesenjaypipes: basically by enabling cpu_thread_policy=ISOLATE I'm asking for the entire host core, which maps to two ht siblings.    But each of those siblings was exposed to placement as a PCPU.19:04
jaypipescfriesen: it doesn't always map to 2 HTs.19:05
cfriesenjaypipes: true, it could be more19:05
jaypipescfriesen: yet more hardware vendor undefined randomness.19:05
sean-k-mooneyjaypipes: unfortunately yes19:05
cfriesenjaypipes: or it could be 1, if HT is disabled on the host19:05
jaypipescfriesen: no, that is the host inventory of things.19:06
sean-k-mooneyor 8 HT if its powerpc19:06
jaypipescfriesen: again, I understand the host inventory part of this.19:06
jaypipescfriesen: my problem is with flavors that consume different amounts of resources on different hosts.19:06
cfriesenyep, that's exactly what this does, since it depends on the host config19:06
*** yamamoto has joined #openstack-nova19:07
sean-k-mooneyjaypipes: ya thats what happens for cpu_thread_policy=ISOLATE  today19:07
jaypipescfriesen: which, due to cpu_threads_policy being a string of "isolate|prefer|share", makes it entirely impossible to predict an integer amount of CPU resources that will *actually* be consumed by the guest.19:07
sean-k-mooneyjaypipes: yes. this was not an issue in icehouse but it's biting us now19:08
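The host-dependence being objected to, written out as arithmetic; a sketch, with the thread counts as examples:

    def host_cpus_consumed(requested_pcpus, threads_per_core):
        # With hw:cpu_thread_policy=isolate, each requested pCPU claims a
        # whole core, i.e. all of its thread siblings: 1 with HT off, 2 on
        # typical x86 with HT, up to 8 on SMT-8 POWER. The scheduler cannot
        # know this multiplier before a host is chosen.
        return requested_pcpus * threads_per_core

    assert host_cpus_consumed(4, 1) == 4   # HT disabled
    assert host_cpus_consumed(4, 2) == 8   # typical x86 with HT on
    assert host_cpus_consumed(4, 8) == 32  # SMT-8 ppc64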
mlavallerybridges, melwitt: Look at slide 28 in https://www.slideshare.net/MiguelLavalle/routed-networks-sydney. That shows you what you should see in Placement. For each segment in a routed network, that is the Placement structure that you should see19:08
cfriesenjaypipes: the original goal was to improve flexibility by enabling hyperthreads, while allowing instances to ask for whole cores if they need it for performance.19:08
tblakesmriedem: For bug https://bugs.launchpad.net/nova/+bug/1756360, it looks like we're going to need to implement __repr__ for NovaExceptions. Do you have any input on the format we want to return?19:08
openstackLaunchpad bug 1756360 in OpenStack Compute (nova) "Serializer strips Exception kwargs" [Undecided,Incomplete] - Assigned to Tyler Blakeslee (tblakes)19:08
sean-k-mooneyjaypipes: cfriesen ya the original intel proposal made pinning a host config option with no flavor extra specs. then you would just use aggregates to make the decision19:09
cfriesenjaypipes: actually, it *is* possible to predict what will be consumed by the guest given the host information19:09
cfriesenjaypipes: it's just that it could be different from one compute node to another19:09
mlavallerybridges, melwitt: a routed network is a network where segments are associated to its subnets. In other words, if you do a GET of those subnets, all of them should have a value in their 'segment_id' attribute19:09
sean-k-mooneycfriesen: not before the placement allocation_candidates request19:10
*** voelzmo has quit IRC19:10
cfriesensean-k-mooney: correct, unless we wanted to model siblings in placement. :)19:10
*** Swami has joined #openstack-nova19:10
* cfriesen ducks19:11
*** voelzmo has joined #openstack-nova19:11
mriedemtblakes: can you reply to gibi's question about reproducing this in comment 119:11
sean-k-mooneycfriesen: let's not, unless it's a trait19:11
melwittrybridges: did you verify whether the nova_api.aggregate_hosts table has anything in it? in case it's some kind of scoping issue with doing the 'openstack aggregate show' not showing it?19:11
sean-k-mooneyeven then it's a stateful trait, which is kind of a bad thing19:11
cfriesensean-k-mooney: actually, a trait of "number of HT siblings" might make sense19:11
openstackgerritJulia Kreger proposed openstack/nova master: WIP: Add microversion to ironic client wrapper call  https://review.openstack.org/55476219:11
sean-k-mooneycfriesen: it could but it's misusing traits19:12
sean-k-mooneyand it changes the request19:12
*** yamamoto has quit IRC19:13
openstackgerritMatt Riedemann proposed openstack/nova-specs master: virt: allow instances to be booted with trusted VFs  https://review.openstack.org/48552219:13
sean-k-mooneyi guess you could tag the PCPU RP with HW_1_HT and HW_2_HT if HT was on19:13
*** sdeath has joined #openstack-nova19:14
sean-k-mooneythen maybe combine that with forbidden traits to avoid HT hosts19:14
sdeathQ:  upgrade from Ocata->Pike; now services are showing up as duplicates (same ID, UUID, etc).  Any ideas as to what might be causing it?19:14
sdeathnova service-list, openstack compute servicec list19:15
sdeath(asked last couple days running on #openstack)19:15
sean-k-mooneyit's a stretch jay, i'm assuming you would not like traits for HT amount19:15
sdeathinteresting development today:  evidently I can't delete them, bombs out, "Service id X refers to multiple services" (although the UUID is the same)19:15
sean-k-mooneymelwitt: ^ is this the db race you were debugging?19:16
cfriesenhow bad would it be if the initial pre-check didn't account for siblings properly but the accounting after we picked a compute node did?19:16
sean-k-mooneymelwitt: the reader/writer lock upgrade thing19:16
cfriesenthe NUMATopology filter would still check for siblings properly19:16
cfriesensean-k-mooney: jaypipes: ^19:17
melwittsean-k-mooney: hm, I thought that bug was preventing services without uuids from receiving new uuids. not resulting in duplicates of them?19:18
sdeaththere are no duplicate entries in nova.services; I suspect a join against that table is returning duplicate rows, maybe related to the upgrade?  two versions of the API present at once?19:18
sean-k-mooneymelwitt: oh ok, i thought i saw duplicate uuid in the title, maybe not19:18
dansmithsdeath: do you have two cells defined pointing at the same db?19:18
sean-k-mooneysdeath: is this an ironic deployment?19:18
sdeathdansmith:  I do, it turns out…  that my problem?19:18
dansmithsdeath: yup19:18
sdeathah - very well then; cure for this being, then…?19:19
dansmithsdeath: delete one19:19
sean-k-mooneydansmith: wait sdeath do you also have duplicate host names across cells19:19
sean-k-mooneydansmith: how would you get the same uuid otherwise?19:19
dansmithsean-k-mooney: two cell mappings pointing at the same db will cause us to list from it twice19:20
sdeathsean-k:  it returns the same rows both times…  I can paste what I pasted to #openstack (and got kickbanned, thank you Freenode, I AM NOT A SPAMMER grumble mumble bah)19:20
jaypipescfriesen: "it *is* possible to predict what will be consumed by the guest given the host information" <-- but it's not possible to do scheduling that way.19:20
dansmithsdeath: pastebin, yo19:20
melwittuse paste.openstack.org19:20
melwittor pastebin19:20
sean-k-mooneydansmith: ah ok so it's not two compute services with the same uuid, it's two cell mappings19:20
melwittall of that said, you might hit the bug sean-k-mooney mentioned after that, and if so, the fix is up for review currently and will be backported https://bugs.launchpad.net/nova/+bug/174650919:21
openstackLaunchpad bug 1746509 in OpenStack Compute (nova) "TypeError: Can't upgrade a READER transaction to a WRITER mid-transaction" [Medium,In progress] - Assigned to melanie witt (melwitt)19:21
sdeathso OK…  I can't kill the cell because it's got hosts in it...19:21
sdeathevidently…19:21
cfriesenjaypipes: right, so what if we ignore siblings for the placement prefiltering, run through the scheduler filters as usual (which will look at siblings properly), then once we pick a host we update the actual allocations in placement based on the knowledge we have of the host.19:21
dansmithsdeath: you'll have to delete it from sql I guess19:21
sdeathDS:  is that safe, then?19:21
sdeathif I remove from the nova.cells table?19:21
*** voelzmo has quit IRC19:22
dansmithsdeath: from nova_api.cell_mappings19:22
jaypipescfriesen: gross.19:22
sean-k-mooneycfriesen: well that is what we were going to do anyway in jay's current spec, right?19:22
mriedemdansmith: sdeath: we have --force flag on delete-cell i thought?19:22
cfriesensean-k-mooney: except for the final allocations part at the end19:22
rybridgesmlavalle: melwitt: nova_api.aggregate_hosts is empty in the db. that is likely our problem. But as I said earlier the segmenthostmappings table in the neutron db has the right hosts in it..19:22
dansmithmriedem: that will delete all the hosts19:22
dansmithmriedem: which he doesn't want19:22
jaypipessean-k-mooney: we weren't going to dynamically adjust the number of resources requested in the allocation requests depending on which host was picked!19:22
sean-k-mooneycfriesen: the allocation still happens in the scheduler, right19:22
mriedemthe mappings, right19:23
mriedemnvm19:23
sean-k-mooneyjaypipes: ah ya, but it could be done in the scheduler and ask placement to validate it19:23
sean-k-mooneyjaypipes: we would have to have a down call to the compute host however to figure out if we need to adjust the claim amount19:24
dansmithsdeath: figure out which one is the right one and delete the other19:24
cfriesensean-k-mooney: isn't that already in the host information?19:24
mlavallerybridges: segmenthostmappings has a lot of rows that will never make it to Placement. Only those segments that are part of a routed network will have a RP in Placement19:24
jaypipessean-k-mooney: no. I'm not willing to change the design of the placement service (and the allocation request atomicity/guarantees) just to meet these wack-o requirements.19:24
dansmithsdeath: hopefully all your instance and host mappings refer to one of the two cell mappings19:24
melwittrybridges: yeah, so that implies the nova API call to add the host must be failing https://github.com/openstack/neutron/blob/master/neutron/services/segments/plugin.py#L224 else segment_host_mappings is empty there19:24
sean-k-mooneycfriesen: what, how many HTs the host has?19:24
sdeathm:  let's see if —force exists...19:24
sean-k-mooneycfriesen: it's in the numa topology blob, kind of, but not really19:25
*** tidwellr_ has quit IRC19:25
dansmithsdeath: you do not want --force19:25
mlavallerybridges: in other words. Have you created a routed network yet?19:25
dansmithsdeath: --force on that is actually --recursive19:25
*** tidwellr has joined #openstack-nova19:25
cfriesenjaypipes: once we select a compute node in the scheduler we need to make a call to placement to actually consume the resources, right?19:25
sdeathds:  so maybe we won't do that then19:25
sdeath[backs slowly away from angry bomb with ominous red light glowing on it]19:25
dansmithsdeath: PSA: I dunno what irc client you're using but I bet it supports tab nick completion19:25
melwittheh19:25
cfriesenjaypipes: oh, but we tell it that we want to actually consume a specific allocation from earlier,19:26
sdeathadium, evidently it does19:26
sdeathso, backtracking: nova_api.cell_mappings19:26
cfriesenjaypipes: I think I see what you mean, we can't retroactively change the size of an earlier allocation request19:26
sean-k-mooneycfriesen: ya so in this case we would have to make another allocation_candidates request and then find the host in that again and consume that19:27
cfriesensean-k-mooney: right.   Or else we throw out the cpu_thread_policy option in the flavor, or else we model siblings in placement.19:27
jaypipescfriesen: right. the whole point of the allocation candidates list was "these are the nodes that met your request for resources". if we then just modify that resource amount, we invalidate the result of allocation candidates. :(19:28
sdeathdansmith: OK, rows delete (from host_mappings and from cell_mappings; host_mappings due to foreign_key constraint)19:28
sdeathrestarting nova… let's see if this works.19:28
sdeathhey, how about that.  only one row per hypervisor!19:28
sdeathOK, so...19:28
dansmithsdeath: so you'll need to do discover_hosts again, but it probably won't find them all because they've been marked as already mapped19:29
dansmithsdeath: you should have changed the id on the others to be the cell you kept19:29
sdeathwill it barf if I manually reinsert?19:30
dansmithsdeath: no19:30
sean-k-mooneycfriesen: i don't really see a good way to keep cpu_thread_policy=isolate without having an allocation on the compute host19:30
tblakesmriedem: I responded to his comment in bug 1756360. He was seeing that issue because he wasn't passing in a kwarg that nova passes in.19:30
jaypipescfriesen: or... we just go with my spec and just track dedicated CPUs as their own thing and deal with the workloads that can't tolerate having their dedicated CPUs be hyperthreads as the snowflakey thing that it is when we get to the destination host the scheduler selected.19:30
sdeathdansmith: actually, map_cell_and_hosts seems to have picked them back up..19:30
openstackbug 1756360 in OpenStack Compute (nova) "Serializer strips Exception kwargs" [Undecided,Incomplete] https://launchpad.net/bugs/1756360 - Assigned to Tyler Blakeslee (tblakes)19:30
sdeathI see fresh IDs and happiness19:30
dansmithsdeath: that's not what you wanted I think19:30
cfriesenjaypipes: so we'd get to the compute node and find out "oops, it doesn't have as many PCPU resources as we thought" and fail the claim?19:31
dansmithsdeath: didn't that create a new cell mapping again?19:31
sdeathdansmith:  nope19:31
sean-k-mooneycfriesen: well the claim would happen in the scheduler before getting to the compute host19:31
openstackgerritEd Leafe proposed openstack/nova master: Add unit test for non-placement resize  https://review.openstack.org/53761419:31
sean-k-mooneycfriesen: we could at the very end try to extend the claim and only fail if we could not extend19:32
sdeathcell that I wanted to keep is there; new host_mapping IDs mapping the previously-mapped compute nodes to that cell; nova service-list and openstack compute service list show the expected list of hypervisors.19:32
dansmithsdeath: ah, I see where it went, okay19:32
dansmithsdeath: so you're good now?19:32
sdeathnova-manage cell_v2 list_cells shows right number of cells...19:32
sdeathI think it's good.19:33
sdeathso...19:33
cfriesensean-k-mooney: I'm talking about ResourceTracker.instance_claim()19:33
*** danpawlik has joined #openstack-nova19:33
dansmithsdeath: cool, now read the channel topic :)19:33
jaypipescfriesen: no... it *would* have as many pCPUs as we thought... it's just *that snowflake of a workload considers hyperthreads not to be good enough pCPUs for it*.19:33
cfriesenjaypipes: heh...that's one way of thinking about it. :)19:34
sean-k-mooneyjaypipes: well one example of that is realtime cpus and hyperthreads don't mix19:34
jaypipescfriesen: it's the only way to think about it. but I digress..19:34
sdeathApologies; I tried #openstack two days running, no soap…  wasn't sure where to head from there. Is there a better "general guidance" channel?19:34
jaypipessean-k-mooney: I don't care?19:34
tblakesmriedem: I responded to the comment in https://bugs.launchpad.net/nova/+bug/1756360. They were seeing that issue because they were not passing in a kwarg that nova passes in.19:34
openstackLaunchpad bug 1756360 in OpenStack Compute (nova) "Serializer strips Exception kwargs" [Undecided,Incomplete] - Assigned to Tyler Blakeslee (tblakes)19:34
cfriesenjaypipes: I think that'd work, it wouldn't be any more racy than it is now.19:34
dansmithsdeath: nope, and I feel your pain, but still.. normally we run people out of here _before_ answering their questions19:35
sean-k-mooneyjaypipes: similar feeling...19:35
cfriesensean-k-mooney: the numatopologyfilter would still catch stuff like that I think19:35
jaypipessean-k-mooney: no, in all seriousness, it doesn't impact the accounting of CPU resources.19:35
jaypipescfriesen: yes, it would indeed.19:35
sean-k-mooneyjaypipes: yes that is true, it should be placement19:35
sean-k-mooneyit does not care because it doesn't impact the accounting of CPU resources. the numa topology filter might19:36
*** awaugama has quit IRC19:36
sdeathdansmith: then thanks for the assistance!  there any objection to lurking?19:37
dansmithsdeath: nope19:37
sean-k-mooneyjaypipes: so jay, for the snowflake: it lands on a host with HT off and boots fine. it lands on a host with HT on. what happens? retry?19:37
melwittsdeath: fwiw the openstack@lists.openstack.org is better than the channel for getting responses about deployment/config issues, IMHO19:37
*** danpawlik has quit IRC19:37
melwittmailing list19:38
cfriesenjaypipes: so if we ignore ht in placement we don't get the atomicity of reservation benefits, but it's no worse than it is currently.  we might fail ResourceTracker.instance_claim() or rebuild_claim() or resize_claim() once we get to the compute node.19:38
jaypipessean-k-mooney: NUMATopologyFilter has no impact on resource accounting whatsoever, for two reasons: a) the results of numa_fit_instance_to_host() are immediately discarded and b) it looks at *assignment*, not allocation.19:38
melwittgah, more POST_FAILURE on zuul >:|19:38
*** patriciadomin has quit IRC19:38
sdeathmelwitt: will give that one a shot if/when needed...  thanks!19:38
cfriesensean-k-mooney: if it lands on a host with HT on, but there's enough free host CPUs, then we behave as now and we're good19:39
jaypipessean-k-mooney: see what cfriesen just said... the NUMATopologyFilter will catch the HT-snowflake stuff in the scheduler before going to the compute host.19:39
sean-k-mooneyjaypipes: yes i know. i'm asking about the use case "boot me a vm with 2 pinned cpus and thread policy isolate". will that still work without a retry if the host has hyperthreads on19:39
*** _nick has joined #openstack-nova19:39
cfriesensean-k-mooney: actually no, we'd need to account in placement for the extra CPUs we consume19:39
cfriesenjaypipes: given that the snowflake instance will actually consume more CPUs, we need to account for those extra ones in placement19:40
*** tbachman has quit IRC19:40
*** _nick is now known as yankcrime19:40
sean-k-mooneycfriesen: jaypipes yes so there are two options at that point: 1) we fail to boot and retry, or 2) we ask placement to extend the allocation and fail if it would not fit19:40
jaypipescfriesen, sean-k-mooney: the fundamental problem with the cpu_thread_policy is that it leads to non-deterministic amounts of requested resources.19:40
cfriesenjaypipes: it's deterministic, but it depends on the host19:40
jaypipessean-k-mooney: I'm not going to change the allocation request.19:40
jaypipescfriesen: omg, I'm gonna slap you.19:41
jaypipes:)19:41
cfriesenjaypipes: what about a whole new allocation request19:41
jaypipescfriesen: it's non-deterministic from the viewpoint of the scheduler19:41
sean-k-mooneyjaypipes: ok so we just define it as a retry, and maybe you could use a weigher to minimize the chance it would happen19:41
sean-k-mooneye.g. you land on a host, figure out you need more resources to run on that host than you asked for, and retry on the next host19:42
cfriesensean-k-mooney: so now we're saying that ISOLATE can't possibly run on a host with HT enabled.  how is this different from an aggregate?19:42
jaypipescfriesen: for a whole new allocation request, we'd need to re-submit to GET /allocation_candidates with a new requested resource amount, which would give us back a different set of compute hosts, which we would send to the NUMATopologyFilter, which would re-work the allocation request again, and we'd end up in an infinite loop of sadness.19:42
cfriesenjaypipes: can we specify a particular compute node when doing the allocation request?19:42
sean-k-mooneycfriesen: actually you can make it work, but it doubles the number of flavors19:43
jaypipescfriesen: no.19:43
cfriesenjaypipes: how do we handle specifying the compute node when doing a migration?19:43
sean-k-mooneycfriesen: you set flavor.vcpu=4 and resources[vcpu]=8 and it will work only on HT systems19:43
jaypipescfriesen: you can ask for an aggregate, an amount of resources, required traits, but not a specific provider (since that would defeat the entire purpose of the GET /allocation_candidates endpoint).19:43
cfriesensean-k-mooney: that'd work on non-ht as well, technically19:43
*** esberglu_ is now known as esberglu19:44
jaypipescfriesen: we don't call GET /allocation_candidates when specifying a compute node (force_host).19:44
sean-k-mooneycfriesen: yes but it's really expensive; maybe add trait:HT_COUNT_2=required to avoid that19:44
rybridgesmlavalle: This is how I am creating the network / segment / subnet: http://paste.openstack.org/show/708997/  As you can see the aggregate host list is empty. I was under the impression that this process creates a routed network. Is there something I am missing?19:45
jaypipessean-k-mooney: that's a trait that looks suspiciously like a quantity of resources.19:45
sean-k-mooneyjaypipes: the trait is based on how we said we would do cpu frequency19:45
jaypipessean-k-mooney: you mean vGPU display heads?19:45
sean-k-mooneye.g. tag it with multiple traits, so a 4GHz cpu would have 1GHz, 2GHz, 3GHz and 4GHz traits applied19:46
mriedemmelwitt: that's a known issue http://status.openstack.org/elastic-recheck/#175805419:46
mriedemeverything is blocked until that's merged19:46
sean-k-mooneyjaypipes: ya i think that does the same thing, but i did not look at that too closely19:46
jaypipessean-k-mooney: but in this case, the amount of resources being *requested* changes depending on which host a workload ends up on :( that's the whole problem with this...19:46
melwittmriedem: ah, thanks19:47
dansmithis gerrit sucking hard for everyone else?19:47
*** tblakes has quit IRC19:47
sean-k-mooneyjaypipes: yes so if you wanted an isolated vm on a host with HT on you would have a flavor like this19:47
cfriesenjaypipes: what do we call when specifying a compute node on a migration?19:48
jaypipesdansmith: I'm too busy wanting to shoot myself in the head with a bazooka to feel any pain from gerrit.19:48
sean-k-mooneyflavor.vcpu=4,resources[vcpu]=8:traits:HT_count_2=forbid thread_policy=isolate19:48
dansmithjaypipes: roger that19:48
cfriesenjaypipes: and why couldn't we do that in the scheduler if we realize after selecting a host that we need to account for some extra PCPU resources?19:49
cfriesensean-k-mooney: at that point you may as well use a host aggregate19:49
jaypipescfriesen: we call PUT /allocations/{migration_uuid} to reserve resources on the source host for the migration and PUT /allocations/{instance_uuid} to consume the instance resources on the destination host.19:49
*** r-daneel_ has joined #openstack-nova19:49
jaypipescfriesen: we don't go through the scheduler at all when force_host.19:49
cfriesenjaypipes: we do for migrations (when it's really a "suggested host" rather than force)19:50
*** tblakes has joined #openstack-nova19:50
*** r-daneel has quit IRC19:50
*** r-daneel_ is now known as r-daneel19:50
sean-k-mooneycfriesen: well you can have flavor.vcpu=4,resources[vcpu]=8:traits:ht_count=require thread_policy=isolate  and  flavor.vcpu=4,resources[vcpu]=4:traits:HT_count_2=forbid thread_policy=isolate19:51
sean-k-mooneycfriesen: it should have been HT_count_2=require when resources[vcpu]=8, not forbid, originally19:51
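Sean's two-flavor workaround, written out. The resources: override syntax exists for flavors in Queens, but the HT-count traits are invented in-channel (shown here as custom traits); this is a sketch of the idea, not a recommendation:

    # Flavor for hosts with HT on: the guest sees 4 vCPUs (flavor.vcpus=4)
    # but claims all 8 thread siblings. CUSTOM_HT_COUNT_2 is hypothetical.
    isolate_ht = {
        'hw:cpu_thread_policy': 'isolate',
        'resources:VCPU': '8',
        'trait:CUSTOM_HT_COUNT_2': 'required',
    }
    # Flavor for hosts with HT off: 4 vCPUs claim 4 host CPUs.
    isolate_no_ht = {
        'hw:cpu_thread_policy': 'isolate',
        'resources:VCPU': '4',
        'trait:CUSTOM_HT_COUNT_2': 'forbidden',
    }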
*** sidx64 has joined #openstack-nova19:52
jaypipessean-k-mooney: that just doesn't seem right to me.19:52
cfriesenI really don't want to have multiple extra-spec keys that depend on the value of other extra-spec keys19:52
*** sidx64 has quit IRC19:52
jaypipesand certainly isn't very understandable to me.19:52
*** rgb00 has quit IRC19:52
cfriesenagreed, that's a mess. :)19:52
mriedemwe don't go through the scheduler when a host is forced, but conductor does the resource allocation 'claim'19:53
mriedemfor live migrate and evacuate19:53
*** suresh12 has quit IRC19:53
sean-k-mooneyso we keep coming back to: we deprecate and remove thread_policy=isolate, or extend the allocation on the compute host, which breaks the workflow19:53
sean-k-mooneyor we change isolate to mean "host with HT off"19:54
cfriesenmriedem: for "cold migrate to this compute node" we need to be going through the scheduler to do cpu pinning, pci, etc.19:55
mriedemcfriesen: we do go through the scheduler for cold migrate19:57
mriedemthere is no force for cold migrate19:57
mriedemremember19:57
mriedem-519:57
cfriesenjaypipes: is there an actual API spec somewhere for placement?  https://docs.openstack.org/nova/latest/user/placement.html doesn't seem to document the HTTP calls.19:57
mriedemcfriesen: https://developer.openstack.org/api-ref/placement/19:57
mriedemhttps://docs.openstack.org/nova/latest/user/placement.html#rest-api goes to ^19:58
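What "here is my request" looks like on the wire; a sketch using keystoneauth, with the resource amounts and microversion picked for illustration:

    from keystoneauth1 import adapter

    # `sess` is an authenticated keystoneauth session, as in the earlier sketch.
    placement = adapter.Adapter(sess, service_type='placement',
                                default_microversion='1.17')
    resp = placement.get(
        '/allocation_candidates?resources=VCPU:4,MEMORY_MB:2048,DISK_GB:20')
    body = resp.json()
    # allocation_requests: candidate allocation sets satisfying the request;
    # provider_summaries: inventory/usage for every provider involved.
    print(len(body['allocation_requests']))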
cfriesenmriedem: we allow an optional host, which is not a "force" but is a suggested destination19:58
mriedemcfriesen: yeah, and19:58
mriedem?19:58
mriedemit goes through the scheduler filters19:58
mriedemand placement19:58
cfriesenmriedem: right, but for that placement call do we ask for all the possible allocation candidates or do we specify a particular host?19:58
jaypipessean-k-mooney: at this point, I'd much prefer a single trait called HW_CPU_HYPERTHREADING whose absence indicates that hyperthreads are not enabled on the host.19:59
mriedemcfriesen: placement doesn't know about 'hosts'19:59
jaypipessean-k-mooney: and using the forbidden traits stuff to find compute hosts that don't have hyperthreading enabled.19:59
mriedemcfriesen: we say, 'hey placement, here is my request, give me your shit'19:59
mriedemand then we restrict to just that requested host for the filtering19:59
jaypipesmriedem: and placement goes 'sorry, I don't speak jive'.20:00
mriedemis that a 418?20:00
cfriesenmriedem:  okay, so the call to placement still filters all the RPs, then the scheduler filters narrow it down to the requested host?20:00
jaypipescorrect.20:00
mriedemcfriesen: yes20:00
jaypipescfriesen: yes20:00
sean-k-mooneyjaypipes: ya i'm leaning that way too. just get rid of hw:cpu_thread_policy as technical debt, but i know that will piss off several people20:00
*** damien_r has joined #openstack-nova20:00
jaypipessean-k-mooney: if by several people you do accurately mean 2-3 people in the world, I'm willing to live with that.20:01
mriedemdoesn't stephenfin have a tattoo for that extra spec?20:01
sean-k-mooneymriedem: i dont think so20:01
cfriesenjaypipes: can we update the allocation?20:01
*** gjayavelu has joined #openstack-nova20:01
jaypipesmriedem: no, but dansmith told me he just got an NFV tattoo.20:02
sean-k-mooneymriedem: if he does, that's a life lesson in itself20:02
jaypipeson his butt cheek.20:02
mriedemoh i was thinking of https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html20:02
dansmithdo not do no20:02
sean-k-mooneycfriesen: we could, but this would be the only place we update the allocation from the compute node in the future20:02
sean-k-mooneycfriesen: hence why we should not20:02
cfriesensean-k-mooney: I was thinking update it from the scheduler after picking the destination20:03
*** eharney has quit IRC20:03
sean-k-mooneycfriesen: but then again we need to down-call to the compute node to know if we need to update it, and then have to go to placement and make another allocation_candidates request20:03
cfriesensean-k-mooney: no, I'm pretty sure that the scheduler has information on number of ht siblings20:04
sean-k-mooneycfriesen: in the host numa topology blob yes20:04
cfriesensean-k-mooney: I'm looking at the PUT /allocations/{consumer_uuid} operation, which seems to allow updating an existing allocation20:05
sean-k-mooneybut im not sure we still have the host_state object at the point we are selecting the node20:05
mlavallerybridges: what microversion of Placement you are using?20:05
*** suresh12 has joined #openstack-nova20:05
cfriesensean-k-mooney: that's easy to solve20:05
* mriedem goes back to focusing on harakiri20:06
sean-k-mooneyjaypipes: oh and when i say several people i mainly mean my management chain, so meh20:06
jaypipescfriesen: no, we really can't update the allocation...20:06
cfriesenjaypipes: what is that PUT operation then?20:06
*** danpawlik has joined #openstack-nova20:06
jaypipescfriesen: are you referring to within the scheduler or somewhere else?20:06
*** tssurya has joined #openstack-nova20:06
cfriesenhttps://developer.openstack.org/api-ref/placement/#update-allocations20:06
sean-k-mooneyjaypipes: yes, he means update the claim from the scheduler by checking the numa topology blob from the host state object for the selected host, at the very end before we down-call to the compute node20:07
jaypipescfriesen: that call overwrites the allocations that were made for an instance against one or more providers.20:08
*** josecastroleon has joined #openstack-nova20:08
sean-k-mooneycfriesen: it's really for things like interface or volume attach in the future20:08
*** yamamoto has joined #openstack-nova20:09
cfriesenjaypipes: right, so we do the initial allocation, then if HT is enabled on the selecte host and the flavor has cpu_thread_policy=ISOLATE we update the PCPU resource allocation to use twice as many20:09
jaypipessean-k-mooney: well... not really. it exists to fully replace the set of allocations for a consumer.20:09
sean-k-mooneyjaypipes: yes, but we do an atomic update using the generation ids, so the caller is responsible for doing the merge20:09
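The replace-not-patch semantics in a sketch, reusing the placement adapter from the earlier sketch; the UUIDs are placeholders and the payload shape is the early (pre-1.12) list form:

    # PUT replaces the consumer's entire allocation set, so a caller that
    # wants to "update" must merge old and new allocations itself; resource
    # provider generations make the write fail cleanly if another writer
    # raced it.
    payload = {
        'allocations': [{
            'resource_provider': {'uuid': 'RP_UUID_HERE'},
            'resources': {'VCPU': 4, 'MEMORY_MB': 2048},
        }],
        'project_id': 'PROJECT_UUID_HERE',
        'user_id': 'USER_UUID_HERE',
    }
    placement.put('/allocations/CONSUMER_UUID_HERE', json=payload)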
jaypipescfriesen: that is technical debt I do not wish to continue carrying.20:10
jaypipescfriesen: I do not wish to get into the business of treating some special snowflake workloads differently -- with regards to the allocation behaviour -- from every other instance.20:11
*** suresh12 has quit IRC20:11
mriedemmelwitt: small things for https://review.openstack.org/#/c/555092/20:11
sean-k-mooneycfriesen: technically you could probably do it with an out-of-tree weigher if you really wanted to. sort by hosts without ht and if there are none left increase the allocation...20:11
cfriesenfair enough.  we have a decent number of clients with all-in-one single-node installs that will be unhappy if we tell them that HT is going to be an all-or-nothing kind of thing.20:11
melwittmriedem: thx20:11
jaypipescfriesen: all-in-one single-node installs of a "cloud". hmm...20:12
*** sidx64 has joined #openstack-nova20:12
*** danpawlik has quit IRC20:12
mriedemsean-k-mooney: cfriesen: which of you is going to summarize the last 4 hours of talking about this on the ML tonight?20:12
mriedemper the PTG retrospective?20:12
cfriesenhttps://etherpad.openstack.org/p/cpu-resource-accounting20:12
sean-k-mooneymriedem: we have a partial summary here20:12
sean-k-mooney:) we need to add the last case though e.g. deprecate cpu_thread_policy20:13
cfriesenjaypipes: I suspect a number of distrib-cloud folks would be not thrilled with HT becoming a per-compute-node thing.20:13
jaypipescfriesen: technically, with the HW_CPU_HYPERTHREADING trait solution, one could feasibly set the trait against only one NUMA node provider on the single compute host and effectively carve out one NUMA node for HT-tolerant workloads and the other NUMA node for non HT-tolerant workloads...20:14
sean-k-mooneycfriesen: well there is one other way around this20:14
cfriesenyeah, that's true20:14
cfriesenbetter than per compute node20:14
sean-k-mooneyyou can exclude the HTs from the new shared and dedicated pin sets20:14
*** yamamoto has quit IRC20:14
mriedem555314 has been promoted to top of gate now btw20:14
cfriesensean-k-mooney: doesn't buy you anything because then nothing can run on the ones you excluded20:14
sean-k-mooneycfriesen: it means you can have HT for you shared cores and no HT for your dedicated cores20:15
jaypipesI still think the "apply the HW_CPU_HYPERTHREADING trait to one of the NUMA nodes" is a better, simpler solution.20:16
sean-k-mooneyjaypipes: oh, i had not read that when i typed that. um, yes that would work20:16
jaypipesand have the flavors that cannot tolerate their pCPUs being on hyperthread siblings add a forbidden=HW_CPU_HYPERTHREADING to the request to placement.20:17
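The trait-based alternative, sketched. HW_CPU_HYPERTHREADING is a real os-traits name; the forbidden-trait flavor and query syntax shown here was still working its way through specs at the time:

    # Flavor side: refuse any provider that exposes the hyperthreading trait.
    extra_specs = {
        'resources:PCPU': '4',  # PCPU as proposed in the accounting spec
        'trait:HW_CPU_HYPERTHREADING': 'forbidden',
    }
    # Placement side, the equivalent query (forbidden traits get a '!' prefix):
    #   GET /allocation_candidates?resources=PCPU:4&required=!HW_CPU_HYPERTHREADING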
sean-k-mooneyjaypipes: you would not even have to do it at the numa level20:17
cfriesenjaypipes: something else...looking at line 80 in the etherpad.  if you have a multi-numa-node guest with some of the vcpus being "shared" and some being "dedicated", it actually matters which are which because it affects how many of each you need for the numa-specific RPs20:17
sean-k-mooneyyou could have 2 RPs of VCPU under the same numa node with different traits20:17
*** sshwarts has quit IRC20:17
*** suresh12 has joined #openstack-nova20:17
jaypipescfriesen: sure, but doesn't the CONF.cpu_dedicated_set and CONF.cpu_shared_set pinning strings allow you to specify that properly?20:18
sean-k-mooneycfriesen: the numa topology filter will fix that20:18
jaypipessean-k-mooney: keep it simple...20:18
sean-k-mooneyjaypipes: well i was assuming ... actually no, i was going to say the operator creates the RPs, but the virt driver/compute agent does20:19
jaypipessean-k-mooney: right.20:19
cfriesenjaypipes: no.  suppose I ask for two numa nodes, with 3 "shared" and 1 "dedicated" in virtual numa node 1, and 1 "shared" and 3 "dedicated" in virtual numa node 2.20:19
cfriesenjaypipes: each virtual numa node must map to a physical numa node20:19
sean-k-mooneycfriesen: yes following so far. what part of that will the numa topology filter not validate20:20
jaypipescfriesen: no, that's not how things currently work, according to sahid and sean-k-mooney at least... multiple virtual NUMA nodes do *not* necessarily map to multiple *host* NUMA nodes.20:20
cfriesenjaypipes: that's true, but each virtual numa node must not cross physical numa node boundaries20:21
jaypipescfriesen: ah, yes, for sure.20:21
sean-k-mooneyjaypipes: in the libvirt driver they do, but it's not required to. it's a result of an icehouse bug20:21
jaypipescfriesen: but, as sean-k-mooney says, the NUMATopologyFilter will catch that stuff.20:21
jaypipescfriesen: none of that will impact the *amount* of PCPU or VCPU resources that are requested.20:21
cfriesenjaypipes: sure it does, because it affects how many of each I need to ask for from the RP that represents a single numa node20:22
sean-k-mooneycfriesen: but jay is suggesting we dont ask placement for x pcpus in numa node 1 and y on node 2 we just ask for x+y pcpus20:23
jaypipessean-k-mooney: we *could* ask for 1 PCPU on one provider and 2 PCPU on another provider using granular request groups if we wanted.20:24
cfriesenjaypipes: suppose I have vcpus 0-3 on virtual numa node 0, and vcpus 4-7 on virtual numa node 1.    If I specify that I want vpus 0-3 to be shared, that's fundamentally a different request than if I say I want 0,4 to be shared20:24
openstackgerritJulia Kreger proposed openstack/nova master: WIP: Add microversion to ironic client wrapper call  https://review.openstack.org/55476220:24
sean-k-mooneyjaypipes: yes we could. and as a step 2 we might want to, for the numa reasons, but what you're proposing today would not block us doing that in the future20:24
jaypipescfriesen: could we do granular request groups for that? i.e. resources1:VCPU=1, resources2:VCPU=120:24
jaypipescfriesen: would say "find me a provider tree that has providers that can serve 1 VCPU each"20:25
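A hedged sketch of the granular request group syntax jaypipes is quoting, per the granular-resource-requests spec being worked on at the time (resources within one numbered group must all be satisfied by a single provider):

    # flavor extra specs: one VCPU from each of two providers
    resources1:VCPU=1
    resources2:VCPU=1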
cfriesenis that how we're representing virtual numa nodes?20:25
sean-k-mooneyjaypipes: that's kind of what i was thinking of with the numa stuff on line 203 of https://etherpad.openstack.org/p/cpu-resource-accounting20:26
jaypipesk20:26
jaypipessean-k-mooney: ack20:26
cfriesenbecause we're going to need to be able to ask for "X PCPU, Y VCPU, Z memory pages of size A" all on one host NUMA node20:26
*** sidx64_ has joined #openstack-nova20:26
sean-k-mooneycfriesen: that should be fine with granular requests20:27
sean-k-mooneyi think20:27
jaypipescfriesen: we haven't considered memory pages yet, but yeah, that's what granular request groups are for.20:27
*** ircuser-1 has joined #openstack-nova20:27
jaypipesbtw, I proposed standardizing memory pages as resource classes a year ago: https://review.openstack.org/#/c/442718/20:27
sean-k-mooneyjaypipes: the mem_page_size can just be a trait20:28
jaypipesplease no.20:28
openstackgerritmelanie witt proposed openstack/nova master: Add functional regression test for bug 1746509  https://review.openstack.org/55509220:28
openstackbug 1746509 in OpenStack Compute (nova) "TypeError: Can't upgrade a READER transaction to a WRITER mid-transaction" [Medium,In progress] https://launchpad.net/bugs/1746509 - Assigned to melanie witt (melwitt)20:28
openstackgerritmelanie witt proposed openstack/nova master: Move _make_instance_list call outside of DB transaction context  https://review.openstack.org/55509320:28
jaypipessean-k-mooney: are they consumable things?20:28
sean-k-mooneyjaypipes: the mempages are20:28
jaypipesright, so they should be resource classes.20:28
cfriesenokay.  so I think the end-user would find it convenient to specify *which* guest CPUs are shared/dedicated.  If we specify only "how many" then they have to discover it at boot time and have sufficiently flexible software to handle setting up dynamic affinity based on what they discovered.20:28
jaypipessean-k-mooney: but, meh, another day for that discussion on memory pages.20:29
sean-k-mooneyso we have a rp with an inventory of mempages with a 2MB trait as a child of the numa node20:29
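Roughly the provider tree shape sean-k-mooney is sketching here (the resource class and trait names are hypothetical; none of this existed in placement at the time):

    compute node RP
    └── NUMA node 0 RP     inventory: VCPU, PCPU
        └── mempages RP    trait: HW_MEM_PAGE_SIZE_2MB (hypothetical)
                           inventory: MEMORY_PAGE_2MB (hypothetical)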
*** sidx64 has quit IRC20:29
*** tbachman has joined #openstack-nova20:29
jaypipescfriesen: no disagreement from me, but again.... not related to placement/resource accounting.20:29
sean-k-mooneyjaypipes: oh i was agreeing the mempages should be a resource class. i just don't know if we want mempage2MB and mempage1G20:30
jaypipessean-k-mooney: let's discuss the mempages stuff another day, eh? :)20:30
sean-k-mooneyjaypipes: sure20:30
sean-k-mooneycan we put this in https://etherpad.openstack.org/p/cpu-resource-accounting then copy it into the spec20:31
cfriesenjaypipes: so if we allow the end user to specify something like "hw:shared_vcpus=0,1,4,5", it seems to me that nova should internally map that to the desired granular request rather than needing to specify it explicitly in the flavor extra spec.20:31
cfriesenjaypipes: given this, it is implied that we want 4 VCPU resources, with the remainder being PCPU20:32
*** syjulian has quit IRC20:33
*** MikeG451 has quit IRC20:33
cfriesenjaypipes: alternately, if we explicitly specify the VCPU and PCPU count, and make them discover the mapping at boot time, then we wouldn't need the "hw:shared_vcpus" extra spec20:33
jaypipescfriesen: lemme make sure I understand you...20:33
*** awaugama has joined #openstack-nova20:33
jaypipescfriesen: so you are saying that, just for VCPU and PCPU resource classes, that if nova sees the magic hw:shared_vcpus extra spec, that nova should figure out how many VCPU and how many PCPU it should ask for from placement instead of requiring the admin to put a "resources:VCPU=X" and "resources:PCPU=X" extra spec in the flavor?20:34
sean-k-mooneycfriesen: so flavor.vcpus=8, hw:shared_vcpus=0,1,4,5, resources[vcpu]=4, resources[pcpu]=4, meaning guest cpus 0,1,4,5 are shared and the rest (those not in that list) are dedicated cores.20:35
cfriesenjaypipes: I think that would be the most convenient option for the end user20:36
sean-k-mooneycfriesen: the hw:shared_vcpus=0,1,4,5 does not change the placement request however, right? it's just for the virt driver/numa topology filter20:37
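A minimal Python sketch of the mapping being discussed; the hw:shared_vcpus extra spec and this helper are hypothetical (nothing like it existed in nova at this point):

    def shared_dedicated_counts(flavor_vcpus, shared_vcpus_spec):
        """Map an hw:shared_vcpus extra spec to VCPU/PCPU resource counts."""
        shared = {int(v) for v in shared_vcpus_spec.split(',')}
        assert shared <= set(range(flavor_vcpus)), 'spec lists unknown vcpus'
        vcpu = len(shared)          # guest CPUs listed are shared -> VCPU
        pcpu = flavor_vcpus - vcpu  # the remainder are dedicated -> PCPU
        return {'VCPU': vcpu, 'PCPU': pcpu}

    # e.g. flavor.vcpus=8, hw:shared_vcpus='0,1,4,5' -> {'VCPU': 4, 'PCPU': 4}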
jaypipescfriesen: I prefer to have them discover the mapping at boot time and just specify VCPU and PCPU counts.20:37
cfriesenjaypipes: that would be more generic, yes.20:37
jaypipescfriesen: that gives the virt driver the freedom to map the guest CPUs whatever way it needs, and once the virt driver makes that mapping decision, it could just write it to the instance metadata for the guest to read on boot.20:38
cfriesenjaypipes: in that case I'd suggest that we make the lower-numbered vCPUs in a virtual numa node be "shared", and the higher-numbered ones "dedicated"20:38
*** danpawlik has joined #openstack-nova20:39
cfriesenbut yeah, up to the virt driver20:39
*** moshele has joined #openstack-nova20:39
rybridgesmlavalle: {"versions": [{"min_version": "1.0", "max_version": "1.4", "id": "v1.0"}20:39
jaypipescfriesen, sean-k-mooney: ok, I'm going to update my spec and attempt as best as possible to recap the above decisions.20:40
cfriesenwe'd have to persist the mapping somewhere to preserve it over live migration.  and it'd be nice to keep it over cold migration/evacuate too20:40
jaypipescfriesen: instance metadata...20:40
jaypipescfriesen: just like how we save device metadata/tags today, right?20:40
cfriesenshould work, I think20:41
*** boris_42_ has quit IRC20:41
sean-k-mooneycfriesen: you say per numa node but you realise libvirt does not map guest cores to virtual numa nodes, right20:42
sean-k-mooneyactually you kind of can.20:42
sean-k-mooney<numa>20:42
sean-k-mooney    <cell id='0' cpus='0-3' memory='512000' unit='KiB'/>20:42
sean-k-mooney</numa>20:43
*** danpawlik has quit IRC20:43
sean-k-mooneythe cpus attribute in the cell element lists the guest logical cpus20:43
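For context, a fuller hand-written example of the <numa> element sean-k-mooney pasted above, with two cells mapping guest logical CPUs to virtual NUMA nodes (values illustrative):

    <cpu>
      <numa>
        <cell id='0' cpus='0-3' memory='512000' unit='KiB'/>
        <cell id='1' cpus='4-7' memory='512000' unit='KiB'/>
      </numa>
    </cpu>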
*** damien_r has quit IRC20:45
*** cdent has quit IRC20:45
*** AlexeyAbashkin has joined #openstack-nova20:45
sean-k-mooneyjaypipes: the gotcha with the metadata presenting the mapping is that on live migration we would want to make sure the mapping did not change, as the running workload would not likely call the metadata api again20:45
cfriesensean-k-mooney: yeah, that's doable20:46
sean-k-mooneythat said NFV + live migration does not mix no matter how much telcos want it to20:46
cfriesensean-k-mooney: it just means that NUMATopologyFilter needs to check for that case20:47
jaypipessean-k-mooney: we'd still want to update the instance metadata to reflect the pinning on the destination host, though, so when the workload rebooted, it was able to configure itself.20:47
sean-k-mooneycfriesen: meaning the numa topology filter would have to query the metadata service20:47
cfriesenjaypipes: no, we'd need to keep the "which vcpus are dedicated" mapping the same over the live migration20:47
sean-k-mooneycfriesen: maybe put that in its own filter.20:47
cfriesenjaypipes: we don't need to store the virtual-to-physical mapping in the metadata20:48
*** damien_r has joined #openstack-nova20:48
sean-k-mooneycfriesen: ok, i should have left 2 hours ago. let's leave the edge case of live migrating snowflake vnfs to another spec?20:49
cfriesensean-k-mooney: lol...me too.  got an issue to debug20:49
sean-k-mooneycfriesen: it has no effect on placement, just the numatopology+everything_else_super filter20:49
sean-k-mooneyjaypipes: are you ok writing this up in the spec?20:50
cfriesensean-k-mooney: correct20:50
*** damien_r has quit IRC20:51
sean-k-mooneyok i'm off tomorrow but if you ping me i'll probably check irc at some point20:52
*** AlexeyAbashkin has quit IRC20:52
*** edmondsw has quit IRC20:56
*** edmondsw has joined #openstack-nova20:56
cfriesenif we report the "which vcpus are dedicated" mapping via metadata, what happens if the virtual routing is such that the guest has no access to the metadata server?  will the config drive get suitably updated?20:57
*** damien_r has joined #openstack-nova20:57
sean-k-mooneyconfig drive is read only so no20:57
sean-k-mooneyyou have to proxy to metadata via the dhcp server if it's an isolated neutron network20:57
sean-k-mooneyconfig drive might get updated on cold migrate or hard reboot20:58
cfriesensean-k-mooney: readonly is fine, the guest isn't going to be updating it20:58
cfriesenand the mapping of which guest cpus are shared/dedicated shouldn't change dynamically once booted20:59
sean-k-mooneyyes, but qemu can't unplug it and update it on live migrate either20:59
*** josecastroleon has quit IRC20:59
cfriesenwe need to keep it constant over live migrate20:59
cfriesenso no problem20:59
*** damien_r has quit IRC21:00
mlavallerybridges: the microversion required on the Neutron side is 1.1. So it seems right. At the time of creating your subnet, do you see any tracebacks in the Neutron server log? If so, please share it in a paste21:01
sean-k-mooneysure. it just means more complicated code: when we calculate the pinning for the dest we have to maintain the shared/dedicated logical core mappings, which is technical debt, but ok21:01
*** damien_r has joined #openstack-nova21:01
*** edmondsw has quit IRC21:01
cfriesensean-k-mooney: not really technical debt.  logically the guest shouldn't need to be constantly checking which guest cpus are shared/dedicated.  should only need to happen at boot time, or ideally on creation/rebuild21:02
dansmithefried: excellent use of proper ITU phonetics21:02
*** hamzy has quit IRC21:03
*** sidx64_ has quit IRC21:03
sean-k-mooneycfriesen: yes but it does mean that this is yet another thing the fit_instance_to_host function needs to enforce21:04
cfriesensean-k-mooney: yes, but it's required as soon as you allow mixed shared/dedicated in one instance.21:04
*** tblakes has quit IRC21:05
*** damien_r has quit IRC21:05
sean-k-mooneycfriesen: yes, which we don't allow today, so once it's added that function needs to be extended to allow passing in a set of mappings and then validate that they would still be correct for the new host.21:05
*** suresh12 has quit IRC21:08
sean-k-mooneycfriesen: again, it's doable, but https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4364-L4510 is not exactly the most pleasant code to debug currently. anyway this time i'm really leaving21:08
sean-k-mooneyo/21:08
*** suresh12 has joined #openstack-nova21:08
*** suresh12 has quit IRC21:10
*** suresh12 has joined #openstack-nova21:10
*** damien_r has joined #openstack-nova21:10
*** yamamoto has joined #openstack-nova21:10
*** danpawlik has joined #openstack-nova21:14
*** itlinux has quit IRC21:14
*** yamamoto has quit IRC21:16
*** AlexeyAbashkin has joined #openstack-nova21:17
cfriesenjaypipes: so is your overall goal to get rid of the current hw:numa_mem.X=Y, hw:numa_cpu.X=Y, hw:mem_page_size=2048, etc. and specify it all explicitly as placement resources?21:17
cfriesenjaypipes: because the alternative would be to keep them as-is and have nova calculate the allocation21:18
dansmithcfriesen: fwiw, I haven't been following all of this today,21:18
dansmithbut I'm highly allergic to plans for us trying to automate that fitting of things onto numa nodes21:19
dansmithI don't like that it's manual, but it's a headache I don't really want to own21:19
*** tidwellr has quit IRC21:19
*** danpawlik has quit IRC21:19
*** salv-orlando has quit IRC21:19
dansmithso convincing me is going to take some damn fine, simple code and a lot of tests21:19
*** gouthamr has quit IRC21:19
*** salv-orlando has joined #openstack-nova21:19
cfriesendansmith: so are you suggesting we keep what we have now or require explicit resource specification in the flavor?21:20
dansmithI would expect we'd take the thing we have now and turn it into a granular resource request based on the total, and the pieces you specified per node21:20
dansmithcfriesen: I don't really have a suggestion, I'm just saying, every time I think I could genericify that someone brings up a case they want to support which is hard21:21
*** AlexeyAbashkin has quit IRC21:21
dansmithand I expect people that need/use that level of fine-grained control over what the system looks like are doing it for suuuuper specific reasons21:22
openstackgerritPatricia Domingues proposed openstack/nova master: load up the volume drivers by checking architecture  https://review.openstack.org/54139321:22
dansmithlike "I have a custom irreplaceable application that only runs on windows nt 4.0 which has a never-going-to-be-fixed bug where it requires 2MB of memory on the same node as the cdrom's IDE controller or my application fails to read from the light pen"21:23
*** salv-orlando has quit IRC21:24
cfriesendansmith: nice.21:25
* dansmith takes a bow21:25
*** salv-orlando has joined #openstack-nova21:26
cfriesenIf we're going to allow shared/dedicated CPUs within a single instance then we'll need to add some way of specifying at a minimum how many "shared" or "dedicated" vcpus we want in each virtual numa node.  Whether that's explicitly via resources or via some other thing that nova factors into the resource allocation request is sort of up in the air.21:28
cfriesenBut if we keep the existing fields we have now (which would be nice to avoid breaking people) nova will need to translate that into allocation requests for VCPUS or PCPUS (to use jay's terminology)21:30
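To make cfriesen's point concrete, a hypothetical sketch of the translation nova would have to do, reusing the numbered-group syntax from above (all names illustrative; mixed shared/dedicated instances were not supported at the time):

    def numa_specs_to_granular(vcpus_per_node, dedicated_per_node):
        """Translate per-virtual-NUMA-node CPU counts into numbered
        placement request groups (one group per guest NUMA node)."""
        extra_specs = {}
        for node, total in sorted(vcpus_per_node.items()):
            pcpu = dedicated_per_node.get(node, 0)
            vcpu = total - pcpu
            if pcpu:
                extra_specs['resources%d:PCPU' % (node + 1)] = pcpu
            if vcpu:
                extra_specs['resources%d:VCPU' % (node + 1)] = vcpu
        return extra_specs

    # cfriesen's earlier example: node 0 has 3 shared + 1 dedicated,
    # node 1 has 1 shared + 3 dedicated:
    # numa_specs_to_granular({0: 4, 1: 4}, {0: 1, 1: 3})
    # -> {'resources1:PCPU': 1, 'resources1:VCPU': 3,
    #     'resources2:PCPU': 3, 'resources2:VCPU': 1}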
melwittdansmith, mriedem: I was just looking at https://blueprints.launchpad.net/nova/+spec/libvirt-cpu-model-extra-flags again. is the ability to enable various cpu flags a feature that we would want anyway regardless of the meltdown mitigation? wondering why one of the choices isn't just "enable_pcid = True/False" and leave it at that, for the backport and going forward? that way there's no migration path nor concern about opening up more21:31
melwitt bug possibilities for stable branches21:31
melwittsigh21:31
dansmithmelwitt: yeah I think people have wanted that for other reasons21:32
dansmithmelwitt: like, choose a lower base cpu but add in avx221:32
dansmithor something like that21:32
melwittokay, I see21:32
rybridgesmlavalle: We are not seeing any stacktraces in neutron-server.log or in nova-api.log or in nova-placement-api.log. It looks like the association is not working on this line: https://github.com/openstack/neutron/blob/stable/ocata/neutron/services/segments/plugin.py#L225  Nothing after that line is being executed. But we dont see errors or stack traces. When we look at the resource_providers in21:32
rybridgesplacement, the segment is there and registered as a resource provider, but the aggregate is not associated with it.21:32
dansmithmelwitt: you can already configure your cpu model to be different and break live migration, so this wouldn't be any different21:33
melwittokay. if it's a feature that has utility going forward (specifying various flags) then I'm inclined to favor the idea of backporting it as the full-fledged feature being that the risk is lower than the upgrade pain for operators to go from a [workarounds] option -> cpu_model_extra_flags21:35
dansmithmelwitt: did you see my latest suggestion on the patch?21:35
melwittno, looking now21:36
dansmithmelwitt: backporting it with a restriction that pcid is the only thing you can put in that option would eliminate the general-purpose use of it without causing the deployment pain21:36
melwittah, I see. that's a nice idea21:36
melwittI think that addresses all of the concerns21:37
dansmithme too21:37
melwittnoice21:37
melwittwhat do you think of that mriedem?21:38
*** moshele has quit IRC21:39
mriedemso the backport has the choices kwarg with a single item?21:39
mriedemand we drop choices in master?21:39
dansmiththat's one way yeah21:39
dansmithwell, I'm asking right now if we want a list anyway21:40
*** artom has quit IRC21:40
dansmithor some sort of reasonable validation21:40
dansmithif we can get the list of all options from libvirt or something21:40
*** artom has joined #openstack-nova21:40
mriedemcpu_map.xml comes back in the libvirt host capabilities stuff we already have i think21:41
mriedemwe dump it on startup to the logs21:41
melwittyeah, I was thinking maybe we have choices and there's only one choice in the backport and going forward there will be a set of choices we know are good? or are there just too many to reasonably do it that way21:41
dansmithyeah, so I would think we should validate what they give us anyway, so we don't just generate broken xml21:41
dansmithbut that would be later than in conf choices21:41
mriedemhttp://logs.openstack.org/84/534384/9/check/tempest-full/a66bf1a/controller/logs/screen-n-cpu.txt.gz#_Mar_22_13_34_01_14693121:42
mriedemi would say, (1) backportable version has just a single choice, pcid or whatever, and then (2) another patch, master-only, drops choices and we validate on startup of the driver based on the host capabilities21:43
mriedemnot in config21:43
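A sketch of the backport-friendly option dansmith and mriedem are converging on, using oslo.config's ListOpt with a restricted item type (option name per the libvirt-cpu-model-extra-flags blueprint; the help text is illustrative):

    from oslo_config import cfg
    from oslo_config import types

    libvirt_opts = [
        cfg.ListOpt(
            'cpu_model_extra_flags',
            item_type=types.String(choices=['pcid'], ignore_case=True),
            default=[],
            help='Extra CPU feature flags to add to the guest CPU model. '
                 'The backportable version restricts the choices to "pcid"; '
                 'a follow-up on master would drop the restriction and '
                 'validate against host capabilities instead.'),
    ]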
dansmiththat's cool,21:43
dansmithalthough I think just landing it with the validation and restricted choice would be fine too21:43
dansmithbut if you like it being smaller with no validation then that's cool21:43
*** esberglu has quit IRC21:44
mriedemoh i don't care if we add the validation in the backport change too21:44
mriedemif that's reliable21:44
mriedemi'm not sure what in ^ is the thing we would use to validate21:44
*** esberglu has joined #openstack-nova21:44
mriedemthe cpu features?21:44
mriedemhttp://logs.openstack.org/84/534384/9/check/tempest-full/a66bf1a/controller/logs/screen-n-cpu.txt.gz#_Mar_22_13_34_01_14851621:45
melwittyeah, if there's a chance that pcid could fail validation, then it would be good to validate it before going ahead21:45
dansmithI think so, but kashyap did say at one point it wasn't obvious and that we couldn't automate the enabling of this either,21:45
dansmithso we might be barking up the wrong tree21:45
dansmitheither way it doesn't matter.. if we can, we should, if we can't, then we won't :)21:45
dansmithI already asked on the review so we'll see what he says21:45
mriedemgood luck21:45
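A rough sketch of the startup validation being discussed, assuming the driver can pull the feature list out of the parsed host capabilities it already logs (attribute names illustrative, not a real nova API):

    def validate_extra_flags(extra_flags, caps):
        """Reject configured CPU flags the host does not expose.

        caps: parsed host capabilities (e.g. nova's LibvirtConfigCaps),
        where caps.host.cpu.features is assumed to be an iterable of
        feature objects with a .name attribute.
        """
        host_features = {f.name for f in caps.host.cpu.features}
        unknown = set(extra_flags) - host_features
        if unknown:
            raise ValueError('cpu_model_extra_flags not supported by the '
                             'host CPU: %s' % ', '.join(sorted(unknown)))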
melwittokay, I'll approve the bp with a summary of what we agreed on the approach21:46
dansmithoh I thought we had already agreed to approve it21:46
melwittwe did, I meant more the approach part21:46
melwittI hadn't approved it yet while we were debating how to go forward21:47
dansmithalright21:47
*** tssurya has quit IRC21:47
*** danpawlik has joined #openstack-nova21:49
*** esberglu has quit IRC21:49
*** suresh12 has quit IRC21:51
*** Drankis has quit IRC21:52
*** pcaruana has quit IRC21:53
sean-k-mooneydiscussing https://review.openstack.org/#/c/534384/ ?21:53
mlavallerybridges: that's odd. I have a meeting in 5 min and then I have to run. can I ping you tomorrow?21:53
*** danpawlik has quit IRC21:54
*** yamamoto has joined #openstack-nova21:55
rybridgesmlavalle: sure dude! Thanks for the help today21:55
mlavallerybridges: :-) what's your time zone? I'm in US Central (Austin)21:56
*** suresh12 has joined #openstack-nova21:59
openstackgerritJulia Kreger proposed openstack/nova master: WIP: Add microversion to ironic client wrapper call  https://review.openstack.org/55476222:02
*** amodi has quit IRC22:03
*** awaugama has quit IRC22:04
*** archit has joined #openstack-nova22:06
rybridgesmlavalle PST (California)22:08
rybridgesI'll be in 8am-7pm PST22:09
mlavalleok cool22:09
*** itlinux has joined #openstack-nova22:10
*** burt has quit IRC22:12
*** archit has quit IRC22:12
*** wolverineav has quit IRC22:17
*** damien_r has quit IRC22:18
*** wolverineav has joined #openstack-nova22:18
*** damien_r has joined #openstack-nova22:20
*** danpawlik has joined #openstack-nova22:23
*** crushil has joined #openstack-nova22:25
*** crushil has left #openstack-nova22:26
*** danpawlik has quit IRC22:29
*** pchavva has quit IRC22:29
openstackgerritMatt Riedemann proposed openstack/nova master: Teardown networking when rolling back live migration even if shared disk  https://review.openstack.org/55548122:32
openstackgerritPatricia Domingues proposed openstack/nova master: load up the volume drivers by checking architecture  https://review.openstack.org/54139322:34
*** rcernin has joined #openstack-nova22:34
*** damien_r has quit IRC22:34
*** suresh12 has quit IRC22:35
*** suresh12 has joined #openstack-nova22:36
*** suresh12 has quit IRC22:44
openstackgerritMatt Riedemann proposed openstack/nova master: Teardown networking when rolling back live migration even if shared disk  https://review.openstack.org/55548122:44
openstackgerritMatt Riedemann proposed openstack/nova master: DRY up test_rollback_live_migration_set_migration_status  https://review.openstack.org/55548922:44
*** edmondsw has joined #openstack-nova22:46
*** andreas_s has joined #openstack-nova22:47
*** andreas_s has quit IRC22:52
*** liverpooler has quit IRC22:52
*** masber has joined #openstack-nova22:56
*** hongbin has quit IRC23:01
*** felipemonteiro has quit IRC23:03
*** danpawlik has joined #openstack-nova23:03
*** mlavalle has quit IRC23:03
*** mlavalle has joined #openstack-nova23:03
*** Guest29400 has quit IRC23:04
*** suresh12 has joined #openstack-nova23:07
*** danpawlik has quit IRC23:07
*** elmaciej has joined #openstack-nova23:08
*** suresh12 has quit IRC23:11
*** r-daneel has quit IRC23:12
*** masuberu has joined #openstack-nova23:13
*** AlexeyAbashkin has joined #openstack-nova23:16
*** masber has quit IRC23:17
*** masber has joined #openstack-nova23:18
*** masuberu has quit IRC23:18
*** AlexeyAbashkin has quit IRC23:20
*** fragatina has quit IRC23:22
*** fragatina has joined #openstack-nova23:23
*** archit has joined #openstack-nova23:24
*** Swami has quit IRC23:26
*** masber has quit IRC23:29
*** masber has joined #openstack-nova23:29
*** chyka has quit IRC23:32
*** danpawlik has joined #openstack-nova23:33
*** danpawlik has quit IRC23:38
*** mlavalle has quit IRC23:38
*** _ix has quit IRC23:44
SpazmoticMorning folks23:45
*** calebb has quit IRC23:50
*** calebb has joined #openstack-nova23:52
*** danpawlik has joined #openstack-nova23:54
*** esberglu has joined #openstack-nova23:54
*** liverpooler has joined #openstack-nova23:55
*** xinliang has quit IRC23:58
*** esberglu has quit IRC23:59
*** danpawlik has quit IRC23:59
