Friday, 2019-03-22

sean-k-mooneyplacement is broken by this however, as are a few others I have seen so far00:00
sean-k-mooneyoh actually no, placement has the fix00:00
smcginnissean-k-mooney: What is the issue with the codesearch and upper constraints?00:02
smcginnisNearly every repo should have that.00:02
sean-k-mooneysmcginnis: yes, but if you have that and you have a lower-constraints tox env and don't override the install_command, then the lower-constraints env uses upper-constraints00:03
sean-k-mooneyso nova's lower-constraints job is broken, as is congress's git.openstack.org/cgit/openstack/congress/tree/tox.ini00:04
sean-k-mooneyand tripleo-validations00:05
sean-k-mooneythere are not that many projects that actually have a lower-constraints tox env00:05
smcginnisReally? Most should have that. It was one of those big blasts out to update the world.00:06
smcginnishttps://review.openstack.org/#/q/topic:requirements-stop-syncing+(status:open+OR+status:merged)00:06
smcginnisCongress does. At the bottom of tox.ini00:06
sean-k-mooneyyes, I know00:07
*** wolverineav has joined #openstack-nova00:07
smcginnishttps://opendev.org/openstack/congress/src/branch/master/tox.ini#L103-L10800:07
sean-k-mooney yes but it's broken00:07
sean-k-mooneyhttps://opendev.org/openstack/congress/src/branch/master/tox.ini#L900:07
sean-k-mooneythat install command means its lower-constraints env is actually installing the upper-constraints instead00:08
*** liuyulong has quit IRC00:08
smcginnisI believe with the deps set there, that makes sure it also uses the lower-constraints.txt00:09
sean-k-mooneysmcginnis: it does not00:10
smcginnisAt least it has been working that way for nearly a year now.00:10
smcginnisWhere do you see it not?00:10
sean-k-mooneyin the output of the gate logs00:10
sean-k-mooneygive me a sec and I'll grab some00:10
*** gyee has quit IRC00:11
sean-k-mooneyso this is a random nova change00:12
sean-k-mooneyhttp://logs.openstack.org/52/626952/12/check/openstack-tox-lower-constraints/7bc660b/tox/lower-constraints-1.log00:12
*** mriedem_afk is now known as mriedem00:12
sean-k-mooneyif you scroll to the end you will see it's installing zVMCloudConnector-1.4.000:12
sean-k-mooneythe lower constraint is 1.1.100:12
sean-k-mooneythe lower constraint of psycopg2 is 2.6.8; it's using psycopg2-2.7.700:14
sean-k-mooneydoing the same with congress00:16
sean-k-mooneyhttp://logs.openstack.org/13/643113/1/check/openstack-tox-lower-constraints/b00ac73/tox/lower-constraints-1.log00:16
sean-k-mooneythat job used testtools-2.3.000:17
sean-k-mooneytheir lower constraint is 2.2.0 https://github.com/openstack/congress/blob/master/lower-constraints.txt#L13900:17
*** irclogbot_1 has joined #openstack-nova00:17
smcginnisHmm, this had been working fine. I wonder if a tox upgrade or something like that has caused a change in behavior.00:17
sean-k-mooneysmcginnis: I'm not sure if it has been working fine or not00:18
sean-k-mooneythe issue is we are passing two constraint files to pip00:18
sean-k-mooneycmdargs: '/home/zuul/src/git.openstack.org/openstack/congress/.tox/lower-constraints/bin/pip install -c/home/zuul/src/git.openstack.org/openstack/requirements/upper-constraints.txt -U -c/home/zuul/src/git.openstack.org/openstack/congress/lower-constraints.txt -r/home/zuul/src/git.openstack.org/openstack/congress/test-requirements.txt00:18
sean-k-mooney-r/home/zuul/src/git.openstack.org/openstack/congress/requirements.txt'00:18
sean-k-mooneyI don't think that is actually allowed00:19
smcginnisIt had been working.00:19
melwittsean-k-mooney: doesn't lower constraint mean the lower bound? how can you tell it's not working if it installs a version newer than the lower bound?00:19
sean-k-mooneymelwitt: the lower-constraints job is meant to install the oldest versions that we say should work00:20
melwittoh, the job. I see00:21
sean-k-mooneyright, so the job has not been running using our actual lower bounds, so we don't actually know the lower bounds work.00:22
melwittgotcha. thanks00:22
sean-k-mooneyand in nova's case I know our mocking is broken on the lower bounds, as we are actually trying to connect to rabbitmq in some of the tests and it's not mocked00:22
*** wolverineav has quit IRC00:23
sean-k-mooneyif I bumped the version of oslo.messaging we have in the lower bounds, our tests would pass with the fix to the tox.ini, but I don't think we actually need the newer version; it's just broken tests00:24
gmannit seems a few deps are taken from lower-constraints.txt, like hacking00:25
sean-k-mooneysmcginnis: anyway, this is either a change in pip's behavior or this was always broken, unfortunately00:25
sean-k-mooneygmann: I think what is happening is any dep that is not in upper-constraints but is in lower-constraints is being honored00:26
sean-k-mooneybut I think pip is taking the constraint from the first file it finds it in00:26
sean-k-mooneythat is just a guess00:26
gmannseems so. and that is why we have upper-constraints.txt also in the dep list, but the picking order might have changed on the tox/pip side00:28
sean-k-mooneywell tox just runs pip, so tox is not really involved00:29
sean-k-mooneybut for this env specifically we should not include upper-constraints.txt, or at least if we do, it should be after lower00:30
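For reference, the approach cinder uses (linked later in this log) sidesteps the ordering question entirely by overriding install_command so upper-constraints.txt is never passed to pip for this env. A minimal sketch, assuming the usual openstack tox layout; not the exact patch under review:

```ini
[testenv:lower-constraints]
# drop the default install_command (which injects upper-constraints.txt)
# and constrain via deps instead, so only the lower bounds apply
install_command = pip install {opts} {packages}
deps =
  -c{toxinidir}/lower-constraints.txt
  -r{toxinidir}/test-requirements.txt
  -r{toxinidir}/requirements.txt
```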
sean-k-mooneylooking at the pip changelog I don't immediately see anything related to this00:31
sean-k-mooneythe -U flag probably isn't helping much either in this case00:32
gmannsean-k-mooney: we might need upper-constraints.txt for packages not in lower-constraints.txt, in case the latest versions of them would create issues.00:41
sean-k-mooneygmann: we may, yes, but we need to validate the ordering we pass them in00:42
gmannyeah, ordering is important00:42
sean-k-mooneythe docs for pip don't state anything about ordering, so I'm going to look at the code tomorrow00:43
sean-k-mooneythat said, it's already tomorrow, i.e. it's after midnight, so I should probably leave this until after I sleep for several hours00:44
sean-k-mooneygmann: ok, I'm actually going now but I think this is the issue. https://github.com/pypa/pip/blob/d4217f0cc30ea2012911be1cb22a74d8cc4a365a/src/pip/_internal/req/req_set.py#L133-L13600:54
sean-k-mooneythe way it processes constraints, if a constraint is repeated the first one is used00:55
*** ileixe has joined #openstack-nova00:58
sean-k-mooneylooking at git blame, the logic seems to be unchanged, bar minor changes, for at least 4 years00:59
sean-k-mooneyhttps://github.com/pypa/pip/blame/0bc7a974fac35ebc0b38beeba4de59017166b1bc/src/pip/_internal/req/req_set.py#L109-L13000:59
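A toy illustration of the "first constraint wins" behavior being described; this mirrors the linked req_set logic only in spirit, not pip's actual code:

```python
# Constraints are keyed by project name; a name already seen in an
# earlier constraints file is not overridden by a later one.
constraints = {}

def add_constraint(name, specifier):
    if name in constraints:
        return  # already constrained by an earlier file; later entry ignored
    constraints[name] = specifier

# upper-constraints.txt is processed first on the broken command line...
add_constraint("testtools", "===2.3.0")
# ...so the lower-constraints.txt entry is silently dropped
add_constraint("testtools", "===2.2.0")

print(constraints)  # {'testtools': '===2.3.0'}
```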
*** igordc has quit IRC01:01
gmannhumm, I think the upper-constraints.txt dep was added in tox before the lower-constraints.txt things happened, which means it never worked as expected01:02
openstackgerritGhanshyam Mann proposed openstack/nova master: WIP: Make os-services policies granular  https://review.openstack.org/64542701:16
openstackgerritmelanie witt proposed openstack/nova master: DNM Re-enable testing of console with TLS in nova-next job  https://review.openstack.org/64543201:29
*** awalende has joined #openstack-nova01:31
*** itlinux has joined #openstack-nova01:34
*** awalende has quit IRC01:36
openstackgerritGhanshyam Mann proposed openstack/nova master: WIP: Make os-services policies granular  https://review.openstack.org/64542701:45
*** tbachman has quit IRC01:55
*** hongbin has joined #openstack-nova02:22
*** wolverineav has joined #openstack-nova02:24
openstackgerritGhanshyam Mann proposed openstack/nova master: WIP: Make os-services policies granular  https://review.openstack.org/64542702:28
*** wolverineav has quit IRC02:28
openstackgerritGhanshyam Mann proposed openstack/nova master: WIP: Make os-services policies granular  https://review.openstack.org/64542702:29
openstackgerritGhanshyam Mann proposed openstack/nova master: Introduce scope_types in os-services, lock policies  https://review.openstack.org/64545202:46
*** psachin has joined #openstack-nova02:48
openstackgerritMerged openstack/nova master: Require python-ironicclient>=2.7.0  https://review.openstack.org/64286302:55
*** udesale has joined #openstack-nova03:21
openstackgerritGhanshyam Mann proposed openstack/nova master: WIP:Introduce scope_types in os-services, lock policies  https://review.openstack.org/64545203:21
*** marst has joined #openstack-nova03:21
openstackgerritMatt Riedemann proposed openstack/nova master: Stop running tempest-multinode-full  https://review.openstack.org/64545603:24
mriedemkeeel it ^03:24
*** mriedem has quit IRC03:29
openstackgerritBoxiang Zhu proposed openstack/nova-specs master: Scheduler filters evaluated even forced host  https://review.openstack.org/64545803:30
*** marst has quit IRC03:37
*** whoami-rajat has joined #openstack-nova03:41
*** marst has joined #openstack-nova03:43
openstackgerritBoxiang Zhu proposed openstack/nova-specs master: Scheduler filters evaluated even forced host  https://review.openstack.org/64545803:46
openstackgerritOpenStack Release Bot proposed openstack/nova stable/stein: Update .gitreview for stable/stein  https://review.openstack.org/64546103:50
openstackgerritOpenStack Release Bot proposed openstack/nova stable/stein: Update UPPER_CONSTRAINTS_FILE for stable/stein  https://review.openstack.org/64546203:50
openstackgerritOpenStack Release Bot proposed openstack/nova master: Update master for stable/stein  https://review.openstack.org/64546303:50
*** nicolasbock has quit IRC04:03
*** hongbin has quit IRC04:12
*** erlon has quit IRC04:13
*** zhubx007 has quit IRC04:17
*** ileixe has quit IRC04:18
*** zhubx007 has joined #openstack-nova04:18
*** mlavalle has quit IRC04:23
*** hamzy_ has joined #openstack-nova04:31
*** TheJulia has quit IRC04:31
*** wxy-xiyuan has quit IRC04:31
*** coreycb has quit IRC04:31
*** wxy-xiyuan has joined #openstack-nova04:31
*** TheJulia has joined #openstack-nova04:31
*** fyx has quit IRC04:31
*** dustinc has quit IRC04:31
*** spsurya has quit IRC04:31
*** lxkong has quit IRC04:32
*** yonglihe has quit IRC04:32
*** yonglihe has joined #openstack-nova04:32
*** awestin1 has quit IRC04:32
*** adrianreza has quit IRC04:32
*** kmalloc has quit IRC04:32
*** hogepodge has quit IRC04:32
*** johnsom has quit IRC04:32
*** jbryce has quit IRC04:33
*** jbernard has quit IRC04:33
*** hamzy has quit IRC04:33
*** jbernard has joined #openstack-nova04:33
*** coreycb has joined #openstack-nova04:34
*** spsurya has joined #openstack-nova04:34
*** fyx has joined #openstack-nova04:34
*** adrianreza has joined #openstack-nova04:34
*** lxkong has joined #openstack-nova04:34
*** awestin1 has joined #openstack-nova04:34
*** hogepodge has joined #openstack-nova04:34
*** johnsom has joined #openstack-nova04:35
*** jbryce has joined #openstack-nova04:38
*** dustinc has joined #openstack-nova04:39
*** kmalloc has joined #openstack-nova04:39
*** udesale has quit IRC04:47
*** udesale has joined #openstack-nova04:47
*** lpetrut has joined #openstack-nova04:52
*** yankcrime has joined #openstack-nova04:54
*** ileixe has joined #openstack-nova05:03
*** janki has joined #openstack-nova05:04
*** marst has quit IRC05:08
*** lpetrut has quit IRC05:25
*** sridharg has joined #openstack-nova05:43
*** dustinc has quit IRC05:50
*** tkajinam has quit IRC05:59
*** tkajinam has joined #openstack-nova05:59
*** takashin has joined #openstack-nova06:10
*** lpetrut has joined #openstack-nova06:17
*** cfriesen has quit IRC06:18
*** psachin has quit IRC06:20
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove deprecated 'default_flavor' config option  https://review.openstack.org/64547606:22
*** wolverineav has joined #openstack-nova06:24
*** sapd1_x has joined #openstack-nova06:25
*** psachin has joined #openstack-nova06:25
openstackgerritOpenStack Proposal Bot proposed openstack/nova master: Imported Translations from Zanata  https://review.openstack.org/64547806:26
*** david-lyle has joined #openstack-nova06:27
*** dklyle has quit IRC06:27
*** david-lyle has quit IRC06:29
*** dklyle has joined #openstack-nova06:29
*** wolverineav has quit IRC06:29
*** ivve has joined #openstack-nova06:29
*** lpetrut has quit IRC06:30
openstackgerritTakashi NATSUME proposed openstack/nova master: Add description about sort order in API ref guideline  https://review.openstack.org/62728206:33
openstackgerritTakashi NATSUME proposed openstack/nova master: Add description about sort order in API ref guideline  https://review.openstack.org/62728206:33
*** dims has quit IRC06:39
*** dims has joined #openstack-nova06:41
*** Luzi has joined #openstack-nova06:49
*** brault has quit IRC06:50
*** rcernin has quit IRC07:07
*** whoami-rajat has quit IRC07:11
*** lpetrut has joined #openstack-nova07:11
*** vrushali_kamde has joined #openstack-nova07:14
*** vrushali_kamde_ has joined #openstack-nova07:16
*** vrushali_kamde_ has left #openstack-nova07:18
*** vrushali_kamde_ has joined #openstack-nova07:19
*** vrushali_kamde has left #openstack-nova07:20
*** sapd1_x has quit IRC07:20
*** vrushali_kamde_ has left #openstack-nova07:20
*** vrushali_kamde has joined #openstack-nova07:21
*** dpawlik_ is now known as dpawlik07:21
*** pcaruana has joined #openstack-nova07:27
*** whoami-rajat has joined #openstack-nova07:30
*** takashin has left #openstack-nova07:30
*** rpittau|afk is now known as rpittau07:33
*** phasespace has quit IRC07:35
*** luksky has joined #openstack-nova07:49
*** tosky has joined #openstack-nova07:53
*** lpetrut has quit IRC08:04
*** xek_ has joined #openstack-nova08:08
*** ttsiouts has joined #openstack-nova08:13
*** helenaAM has joined #openstack-nova08:17
*** tesseract has joined #openstack-nova08:25
*** ttsiouts has quit IRC08:26
*** ttsiouts has joined #openstack-nova08:27
*** phasespace has joined #openstack-nova08:27
*** ttsiouts has quit IRC08:31
*** dtantsur|afk is now known as dtantsur08:33
*** ttsiouts has joined #openstack-nova08:33
*** tkajinam has quit IRC08:34
*** ttsiouts has quit IRC08:45
*** ttsiouts has joined #openstack-nova08:46
*** ttsiouts has quit IRC08:50
*** ralonsoh has joined #openstack-nova08:59
*** ttsiouts has joined #openstack-nova09:04
*** tssurya has joined #openstack-nova09:04
*** tobias-urdin has joined #openstack-nova09:15
*** ttsiouts has quit IRC09:21
*** ttsiouts has joined #openstack-nova09:22
*** ttsiouts has quit IRC09:26
openstackgerritMatthew Booth proposed openstack/nova master: Eventlet monkey patching should be as early as possible  https://review.openstack.org/62695209:27
openstackgerritLee Yarwood proposed openstack/nova master: Use migration_status during volume migrating and retyping  https://review.openstack.org/63722409:28
*** IvensZambrano has joined #openstack-nova09:37
*** derekh has joined #openstack-nova09:42
*** ttsiouts has joined #openstack-nova09:44
openstackgerritBoxiang Zhu proposed openstack/nova master: [WIP] Scheduler filters evaluated even forced host  https://review.openstack.org/64552009:45
kashyapmdbooth_: Nice work on that eventlet patch (although I'm not fully qualified to comment on it).09:50
kashyap"I started with what seemed like the simplest and most obvious change, and moved on as I discovered more interactions which broke."09:50
kashyap:D09:50
kashyap(Also kudos for the "commit message mini essay")09:51
*** mdbooth_ is now known as mdbooth09:51
*** cdent has joined #openstack-nova09:51
mdboothkashyap: Heh, thanks. The commit message has also grown over time :)09:51
kashyapYes, I can see.  I'm sure you spent many hours fine-tuning it as you discovered things.09:51
mdboothI feel that enormous comments are an indicator of code smell, though, and nova.monkey_patch is almost all comment.09:53
mdboothMost code should not require that much explanation.09:53
openstackgerritMerged openstack/nova master: Cleanup comments around claim_resources method  https://review.openstack.org/64457609:56
openstackgerritMerged openstack/nova master: Address old TODO in claim_resources_on_destination  https://review.openstack.org/64459609:56
*** IvensZambrano has quit IRC09:57
*** lpetrut has joined #openstack-nova10:04
*** jangutter has joined #openstack-nova10:19
*** IvensZambrano has joined #openstack-nova10:19
*** lpetrut has quit IRC10:23
*** dtantsur is now known as dtantsur|brb10:28
lyarwoodmdbooth: re https://review.openstack.org/#/c/637224/ - good news, I can't reproduce the cinder weirdness locally after rebasing and recloning everything this morning. Just pushed https://review.openstack.org/#/c/637527/ again, hopefully this also passes in the check gate now.10:30
lyarwoods/gate/queue/g10:30
kashyapmdbooth: Yeah, was about to remark on that, too.  The large commentary section.10:33
*** nicolasbock has joined #openstack-nova10:40
*** ttsiouts has quit IRC10:42
*** ttsiouts has joined #openstack-nova10:43
*** ttsiouts_ has joined #openstack-nova10:48
*** ttsiouts has quit IRC10:48
*** johnsom has quit IRC10:59
*** adrianreza has quit IRC10:59
*** johnsom has joined #openstack-nova10:59
*** whoami-rajat has quit IRC10:59
*** spsurya has quit IRC10:59
*** yonglihe has quit IRC10:59
*** yonglihe has joined #openstack-nova11:00
*** awestin1 has quit IRC11:00
*** spsurya has joined #openstack-nova11:00
*** adrianreza has joined #openstack-nova11:00
*** wolverineav has joined #openstack-nova11:00
*** whoami-rajat has joined #openstack-nova11:01
*** awestin1 has joined #openstack-nova11:02
*** phasespace has quit IRC11:04
*** wolverineav has quit IRC11:05
*** kaisers has quit IRC11:12
*** kaisers has joined #openstack-nova11:13
*** ileixe has quit IRC11:21
*** IvensZambrano has quit IRC11:27
*** IvensZambrano has joined #openstack-nova11:29
*** luksky has quit IRC11:32
*** pcaruana has quit IRC11:53
openstackgerritMatthew Booth proposed openstack/nova master: Fix incomplete instance data returned after build failure  https://review.openstack.org/64554611:53
*** ttsiouts_ has quit IRC11:54
*** ttsiouts has joined #openstack-nova11:55
*** ttsiouts has quit IRC11:59
*** luksky has joined #openstack-nova12:05
*** dtantsur|brb is now known as dtantsur12:05
*** arne_wiebalck_ has joined #openstack-nova12:06
*** csatari_ has joined #openstack-nova12:07
*** Cardoe_ has joined #openstack-nova12:07
*** Guest12731 has joined #openstack-nova12:08
*** ttsiouts has joined #openstack-nova12:09
*** eharney has quit IRC12:14
*** logan- has quit IRC12:14
*** arne_wiebalck has quit IRC12:14
*** ab-a has quit IRC12:14
*** csatari has quit IRC12:14
*** med_ has quit IRC12:14
*** Cardoe has quit IRC12:14
*** johnthetubaguy has quit IRC12:14
*** Guest12731 is now known as logan-12:14
*** Cardoe_ is now known as Cardoe12:14
*** csatari_ is now known as csatari12:14
*** arne_wiebalck_ is now known as arne_wiebalck12:14
*** johnthetubaguy has joined #openstack-nova12:15
*** weshay is now known as weshay|rover12:17
*** eharney has joined #openstack-nova12:23
*** erlon has joined #openstack-nova12:28
*** udesale has quit IRC12:47
*** udesale has joined #openstack-nova12:48
*** pcaruana has joined #openstack-nova12:54
kaisersmdbooth: Hi! If possible, could you revisit the driver bugfix at https://review.openstack.org/#/c/554195/ ? Needs a second +2 and you already looked into it some time ago. :)12:55
mdboothkaisers: I don't have that mojo, but I can take a look.12:55
kaisersmdbooth: still fine, thanks12:56
*** altlogbot_0 has quit IRC13:01
*** irclogbot_1 has quit IRC13:01
*** yonglihe has quit IRC13:02
*** irclogbot_2 has joined #openstack-nova13:02
*** altlogbot_3 has joined #openstack-nova13:02
openstackgerritMerged openstack/nova master: Imported Translations from Zanata  https://review.openstack.org/64547813:04
*** dklyle has quit IRC13:07
*** erlon has quit IRC13:08
*** lbragstad has joined #openstack-nova13:10
openstackgerritSurya Seetharaman proposed openstack/nova-specs master: Support adding the reason behind a server lock  https://review.openstack.org/63862913:11
*** dklyle has joined #openstack-nova13:12
tssuryacdent, edleaf, mriedem: thanks for the spec review ^ I have updated it with respect to your comments.13:13
cdenttssurya: great, thank you. will look again soon.13:13
tssuryaedleafe*13:13
edleafetssurya: :)13:13
tssuryamriedem: I have a question regarding the notification change: https://review.openstack.org/#/c/638629/1/specs/train/approved/add-locked-reason.rst@168 - I'm not sure particularly what/why it should change13:15
gibitssurya: I think it is not mandatory to add the new fields to the notification payload but it would be nice to add13:17
gibitssurya: if we are not adding now, then we can add it later13:17
*** jmlowe has quit IRC13:18
tssuryagibi: ah ok, yea I can add them in this spec itself, np, I wasn't sure of the workflow since I haven't dealt with notification changes before13:18
tssuryaso that means I need to add the fields into the InstancePayload13:19
tssuryaand then send that via the notifications event right ?13:19
*** lbragstad is now known as elbragstad13:19
*** eharney has quit IRC13:20
gibitssurya: yes, you extend the InstancePayload with the new fields and make sure that the values of the new fields are populated from the instance13:20
tssuryahttps://github.com/openstack/nova/blob/b33fa1c054ba4b7d4e789aa51250ad5c8325da2d/nova/compute/utils.py#L447 somewhere there ?13:20
*** ttsiouts has quit IRC13:20
tssuryaoh ok13:20
gibitssurya: there is a schema def in the InstancePayload ovo that defines how to copy data from the Instance ovo, so if you fill in the schema then the copy is automatic13:21
*** ttsiouts has joined #openstack-nova13:21
*** ivve has quit IRC13:21
gibihttps://github.com/openstack/nova/blob/master/nova/notifications/objects/instance.py#L5913:21
tssuryagibi: ah okay that's great13:22
tssuryasure then I will add this change too to the spec13:22
tssuryathanks gibi! :)13:22
gibitssurya: here is a patch from the past that extended the InstancePayload https://github.com/openstack/nova/commit/2bca6431e69bf2c6e657736b7fe11f5a2fbb943313:22
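An abridged, hypothetical sketch of the kind of change gibi is pointing at: extending the payload ovo and its SCHEMA so the new field is copied from the Instance automatically. The real InstancePayload has many more fields and a VERSION that must be bumped with any addition; the locked_reason field here is only the proposal being discussed:

```python
from nova.notifications.objects import base
from nova.objects import fields


class InstancePayload(base.NotificationPayloadBase):
    # SCHEMA maps payload field -> (source object alias, source field);
    # populate_schema() uses it to copy values from the Instance ovo.
    SCHEMA = {
        'locked': ('instance', 'locked'),
        'locked_reason': ('instance', 'locked_reason'),  # hypothetical new field
    }
    fields = {
        'locked': fields.BooleanField(),
        'locked_reason': fields.StringField(nullable=True),  # hypothetical
    }
```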
*** ttsiouts has quit IRC13:26
*** mrch_ has quit IRC13:26
tssuryagibi: nice, I can follow that workflow :)13:26
gibitssurya: and you can invite me to review the patch when it is up. :)13:27
openstackgerritMohammed Naser proposed openstack/nova master: bdm: store empty object as connection_info by default  https://review.openstack.org/64535213:27
*** marst has joined #openstack-nova13:27
tssuryagibi: absolutely! :D thank you13:27
gibinp13:27
mnaseri added a note to this bug.  it's been around for a few cycles ^13:27
smcginnissean-k-mooney: I see now what you were saying with the lower-constraints issue yesterday.13:32
smcginnissean-k-mooney: Cinder has this: https://opendev.org/openstack/cinder/src/branch/master/tox.ini#L15-L1813:32
smcginnissean-k-mooney: That's what you were saying the nova tox.ini needed to be changed to, right?13:32
*** ivve has joined #openstack-nova13:34
cdentsmcginnis , sean-k-mooney I have had a patch pending to fix lower constraints in nova for months and months https://review.openstack.org/#/c/622972/13:34
cdentit's so old now it is probably out of date13:34
smcginnisWell wouldya look at that.13:35
smcginnisI do think we should probably have the deps set separately from install_command though.13:35
smcginnisSince that's the variable between these targets.13:35
smcginnisBut two ways to do the same thing I suppose.13:36
openstackgerritTakashi NATSUME proposed openstack/nova master: Update contributor guide for Train  https://review.openstack.org/64558113:39
*** marst has quit IRC13:40
openstackgerritMerged openstack/nova master: Trivial: remove unused var from policies.base.py  https://review.openstack.org/64538013:47
*** BjoernT has joined #openstack-nova13:47
*** BjoernT has quit IRC13:49
*** mriedem has joined #openstack-nova13:51
*** ivve has quit IRC13:51
*** BjoernT has joined #openstack-nova13:54
*** gbarros has joined #openstack-nova13:55
*** mlavalle has joined #openstack-nova13:56
*** awaugama has joined #openstack-nova13:56
*** efried is now known as fried_rice13:58
fried_riceo/13:58
*** bnemec is now known as beekneemech13:59
*** jaosorior has quit IRC14:01
kaisersLooking for a second +2 for another driver bugfix at https://review.openstack.org/#/c/522245/ , anyone interested? :)14:04
*** marst has joined #openstack-nova14:05
mdboothOoh, that's fun.14:06
*** eharney has joined #openstack-nova14:06
mdboothIn instance.save() we do:14:06
mdboothif not updates:14:06
mdbooth   ...14:06
mdbooth   return14:06
*** BjoernT_ has joined #openstack-nova14:06
mdboothWhich, I think, means that if 2 different threads do:14:07
mdboothinstance.task_state='foo'14:07
mdboothinstance.save(expected_task_state=[None])14:07
mdboothThey will *both* succeed, because the current state will never be evaluated for the second one14:07
mdboothAnd that would break a ton of stuff in nova api14:08
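A schematic of the race being described, as a runnable toy model; this is not nova's actual Instance.save(), just the shape of the short-circuit:

```python
class FakeInstance:
    """Toy model of the ovo save() short-circuit under discussion."""
    def __init__(self):
        self.db_task_state = None   # stands in for the DB row
        self.task_state = None      # in-memory copy
        self._dirty = set()

    def set_task_state(self, value):
        if self.task_state != value:
            self._dirty.add('task_state')
        self.task_state = value

    def save(self, expected_task_state):
        if not self._dirty:
            # short-circuit: we never reach the DB, so
            # expected_task_state is never verified
            return
        if self.db_task_state not in expected_task_state:
            raise RuntimeError('UnexpectedTaskStateError')
        self.db_task_state = self.task_state
        self._dirty.clear()


inst = FakeInstance()
inst.set_task_state('foo')
inst.save(expected_task_state=[None])  # compare-and-swap against DB: ok
inst.set_task_state('foo')             # same value -> nothing marked dirty
inst.save(expected_task_state=[None])  # returns early and "succeeds",
                                       # though the DB row is already 'foo'
```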
cdentI feel like this is a known issue, but I can't remember why I think that14:08
*** erlon has joined #openstack-nova14:08
mdboothI'm on my second interesting race condition of the day. I'm in race condition heaven.14:09
*** BjoernT has quit IRC14:09
mdboothmriedem: Speaking of which you brought up gate failure https://bugs.launchpad.net/nova/+bug/1820337 yesterday, which I think is resolved by https://review.openstack.org/#/c/645546/14:13
openstackLaunchpad bug 1820337 in OpenStack Compute (nova) "test_bfv_quota_race_local_delete intermittently fails with testtools.matchers._impl.MismatchError: ['bfv'] != []" [High,In progress] - Assigned to Matthew Booth (mbooth-9)14:13
*** jmlowe has joined #openstack-nova14:15
mriedemmdbooth: comment inline14:18
mriedemi also noted that that bug doesn't show up on master for some reason14:18
mriedembut yeah this seems reasonable14:18
mgoddardhi nova14:20
mgoddardhas anyone seen this after sending SIGHUP to nova-compute:14:20
fried_riceo/ mgoddard14:20
mgoddardIn shutdown, no new events can be scheduled14:20
mdboothmriedem: Yeah, the rocky/master thing is weird. My bet is it'll be some random quirk of eventlet scheduling, caused by an additional context-switching operation added somewhere random and unimportant.14:20
mgoddardhttp://logs.openstack.org/40/616640/29/check/kolla-ansible-centos-source-upgrade/34ac7d4/primary/logs/system_logs/kolla/nova/nova-compute.txt.gz#_2019-03-22_13_48_58_62114:21
fried_ricemgoddard: Does it go away after a little while?14:21
mgoddardfried_rice: I'm not sure, but I don't think so14:21
mgoddardfried_rice: how little a while?14:21
fried_riceIt wouldn't surprise me if SIGHUP borked anything that was in flight. Was that vif creation in flight at the time?14:22
mgoddardfried_rice: no, it's a post upgrade instance boot test14:23
fried_ricelooks like the SIGHUP happens here: http://logs.openstack.org/40/616640/29/check/kolla-ansible-centos-source-upgrade/34ac7d4/primary/logs/system_logs/kolla/nova/nova-compute.txt.gz#_2019-03-22_13_43_57_19214:24
fried_ricenot sure if the errors after that are "normal" or not.14:24
fried_riceyour spawn is five minutes later, which should be plenty of time, yah.14:26
fried_riceThis sounds like a job for...14:26
fried_ricegibi: ^  any ideas?14:26
gibilooking14:27
mriedemmdbooth: maybe, i guess that eventlet thing you're trying to fix was master-only14:27
mdboothmriedem: Yes, that's master only. Nova-api didn't need eventlet in Rocky, and wasn't monkey patched.14:28
mdboothThe gate thing is in functional, which is monkey patched separately, same on both branches. The change causing the difference in scheduling could be literally anywhere, though. Doesn't even have to be vaguely related.14:29
openstackgerritSurya Seetharaman proposed openstack/nova-specs master: Support adding the reason behind a server lock  https://review.openstack.org/63862914:30
mgoddardfried_rice: gibi: do the grenade jobs use a SIGHUP?14:30
* fried_rice has no idea14:31
cdentmgoddard: unlikely14:31
gibiI'm not sure what nova is expected to do at SIGHUP14:31
mgoddardrolling upgrade docs say to SIGHUP to refresh upgrade levels: https://docs.openstack.org/nova/stein/user/upgrade.html#rolling-upgrade-process14:31
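For context, the upgrade-levels pinning that the SIGHUP is meant to refresh lives in nova.conf; a sketch with an illustrative value, per the linked doc:

```ini
[upgrade_levels]
# pin compute RPC to the older release during a rolling upgrade;
# SIGHUP is supposed to make the service re-read this without a restart
compute = rocky
```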
gibimgoddard: I understand the first log http://logs.openstack.org/40/616640/29/check/kolla-ansible-centos-source-upgrade/34ac7d4/primary/logs/system_logs/kolla/nova/nova-compute.txt.gz#_2019-03-22_13_48_58_621 as nova getting an error from neutron, so it would be worth checking what is in the neutron log14:32
dansmithgibi: there's a bug about this.. oslo.service is killing us during a sighup instead of letting us do graceful things14:34
gibimgoddard: ^^14:34
dansmithgibi: mgoddard:  https://review.openstack.org/#/c/641907/14:35
dansmithI was looking at that with him early-on but I've lost all the context on it14:35
gibidansmith: thanks14:35
mgoddardgibi: I think it's a red herring. I can't see any errors in neutron. I think that the VIF plug event pushes nova through a path that exposes this half-shutdown state14:35
mgoddarddansmith: thanks14:35
dansmithmgoddard: it's because nova has been restarted and has dropped a bunch of state on the ground14:36
dansmithI have to run off for a sec, biab14:36
artomhttp://logs.openstack.org/81/644881/2/check/tempest-full-py3/3b506d2/controller/logs/screen-n-cpu.txt.gz?level=WARNING#_Mar_20_20_39_18_72355914:36
artomWell crap, looks like there's still something missing from that whole revert resize external events mess14:36
*** artom is now known as temka14:37
mgoddardIt seems like the workaround here is going to be to restart nova-compute rather than HUP, I'll go with that14:40
mgoddardthanks fried_rice gibi dansmith14:41
sean-k-mooneycdent: regarding your lower-constraints patch: it is similar to mine but it is also technically wrong14:49
*** shilpasd has quit IRC14:49
sean-k-mooneycdent: placement and os-vif only use lower-constraints in our lower-constraints job14:49
cdentsean-k-mooney: and?14:50
sean-k-mooneybut if we have a transitive dependency that is capped in upper-constraints but not in our lower-constraints, then it will be uncapped14:50
sean-k-mooneygiven os-vif and placement have relatively few dependencies we are probably fine14:51
sean-k-mooneybut for nova or larger projects we actually should use both, but lower-constraints needs to be passed first14:51
mgoddarddansmith: qq for when you return - would you recommend a full restart of all nova services rather than HUP, including those where we haven't seen issues with HUP (i.e. all except compute)?14:51
cdentmy changes to nova's lower-constraints job make it so the upper-constraints file is not included in the test (which it had been before), and that caused conflicts14:52
sean-k-mooney yes, and that is more correct, but the actual fix should be to include lower-constraints first followed by upper14:52
dansmithmgoddard: yes to restart of compute instead of hup, although it'll still cause in-flight boots to fail.. don't think it's really a concern for most of the other services, so probably okay to hup them14:53
sean-k-mooneythe pip code uses the first constraint it finds for a dep14:53
*** Sundar has joined #openstack-nova14:53
sean-k-mooneycdent: the main issue I have found is that mocking does not work correctly with the lower-constraints deps, as I think things moved14:53
sean-k-mooneycdent: I'll pull down your version and rebase and see if I see the same issue14:54
mgoddarddansmith: at this point it's just been upgraded so will have restarted anyway - another restart won't hurt too much. I'll go with that, thanks14:54
dansmithack14:54
sean-k-mooneyI can combine both patches and see if I can get something that works fully14:54
*** Luzi has quit IRC14:56
sean-k-mooneysmcginnis: and yes, the cinder code https://opendev.org/openstack/cinder/src/branch/master/tox.ini#L15-L18 is what we do in os-vif and it's almost right; they should also have upper-constraints listed here https://opendev.org/openstack/cinder/src/branch/master/tox.ini#L198 after lower.14:57
smcginnissean-k-mooney: I don't think we want upper constraints listed in the lower constraints job on line 198. The point of that is to make sure it only uses the lower constraints.14:59
sean-k-mooneysmcginnis: the way os-vif, placement and cinder execute lower-constraints is actually probably fine 99% of the time, as they are using lower-constraints for all their direct dependencies14:59
*** Luzi has joined #openstack-nova14:59
sean-k-mooneysmcginnis: if lower-constraints does not list every dep we install, then not having it on line 198 will leave deps uncapped14:59
sean-k-mooneyassuming those deps are capped in upper15:00
sean-k-mooneythat is the only case where it would be needed15:00
smcginnisBut lower should list every dependency.15:00
sean-k-mooneyit should, but I don't think we enforce that15:00
smcginnisAnything in requirements.txt should be in lower-constraints.txt15:00
*** Luzi has quit IRC15:00
smcginnisThe point of the job is to capture and validate all the requirements' minimum allowable versions.15:01
sean-k-mooneysmcginnis: yes, I know, but we don't validate that everything listed in pip freeze is in lower-constraints15:01
sean-k-mooneythe original file was generated that way, but there is nothing to enforce it, so it could be out of sync15:02
openstackgerritMatt Riedemann proposed openstack/nova master: Fix return param docstring in check_can_live_migrate* methods  https://review.openstack.org/64561015:02
sean-k-mooneylower-constraints also needs to have our dependencies' dependencies listed, as we do not use our deps' lower-constraints recursively15:02
sean-k-mooneyanyway, I'll email the list once I have a working fix for nova that works with master15:03
sean-k-mooneyI have a partial fix, but as I said our unit test mocking does not work with lower constraints, as we end up using the oslo.messaging amqp driver instead of the fake driver in some tests and the mocking does not work15:04
openstackgerritSurya Seetharaman proposed openstack/nova master: [WIP] Add 'power-update' external event to listen to ironic  https://review.openstack.org/64561115:07
*** cfriesen has joined #openstack-nova15:07
*** dpawlik has quit IRC15:14
*** Sundar is now known as DarthNau15:14
mriedemdansmith: do you see any reason to not add a locked_reason to the instances table vs system_metadata or something? https://review.openstack.org/#/c/638629/1/specs/train/approved/add-locked-reason.rst@4215:19
mriedemI get a bit of the heebie-jeebies shoving more non-filterable/indexable columns in the instances table for just system-metadata-type purposes, but if it doesn't cause any issues for indexing and the other locked/locked_by columns are already in there, then I suppose it's fine15:20
mriedemi'd ask jay but he's gone15:20
dansmithand requires rewriting the instances table, for what is just metadata15:20
*** marst has quit IRC15:21
mriedemright - it's a schema migration on a large table15:21
*** altlogbot_3 has quit IRC15:21
dansmithi.e. non-critical information that won't be there for some instances anyway15:21
dansmithright15:21
mriedemi'm not advocating a big ass data migration to move the existing locked/locked_by to another table as a blocker on this, but seems we shouldn't pile on15:21
dansmithyeah15:21
mriedemgranted, it will be confusing to have a new locked_* field in a different place now, but the Instance object abstracts that15:22
dansmithmoving it just means we have a collapse we have to do later (or leave it dangling) so moving is less good than just putting new things in the right-er place15:22
dansmithand really, you only need that info in the api, right? you put it on the instance and you're loading it all the time...15:22
mriedemcorrect15:22
dansmithhow long of a reason are we proposing?15:23
mriedempurely non-queryable meta15:23
mriedem25515:23
dansmithbecause we get a free 255 char limit with sysmeta15:23
dansmithack15:23
mriedemwhich is exactly what system_metadata value is15:23
dansmithyeah15:23
dansmithseems all around better to put it in sysmeta to me, aside from the purely-developer reason of wanting them to look like they're next to each other in one data model at the bottom of the stack15:25
dansmith...which I get of course15:25
dansmithbut.15:25
*** altlogbot_3 has joined #openstack-nova15:26
dansmiththe argument of putting it in instances because the other locked_by is there is like saying we should have stored the pre-resize vm_state in instances because that's where vm_state is, which would be insane reasoning15:26
dansmithand since most instances are unlocked for most of their life....15:26
*** irclogbot_2 has quit IRC15:30
mriedemok are you going to -1? if not i'll just leave a comment with a link to this conversation15:31
tssuryamriedem, dansmith: I'll update the spec to add it to the metadata table15:31
*** irclogbot_3 has joined #openstack-nova15:32
dansmithtssurya: I think that's the way to go, especially since it's just changing the spec to remove the db migration, schema migration and just say "stuff it into sysmeta" :)15:32
dansmithit'll be much easier to merge that way too15:33
tssuryadansmith: haha sure, for some reason I wanted to have all the lock related info in the same place, but yea sure we could split it to the metadata15:33
tssuryahopefully I won't have to move the locked_by right ?15:33
tssuryalet that stay there ?15:34
dansmithtssurya: no don't move the existing stuff15:34
tssuryacool! :)15:34
mriedemtssurya: i'm leaving comments on PS315:35
mriedemso hold up15:35
tssuryamriedem: okay15:36
*** irclogbot_3 has quit IRC15:36
*** maciejjozefczyk has quit IRC15:37
*** irclogbot_2 has joined #openstack-nova15:38
*** igordc has joined #openstack-nova15:42
*** gaoyan has joined #openstack-nova15:42
mriedemtssurya: done15:44
tssuryathanks15:44
mriedemelbragstad: so we've got this silly locked_by field on the instance object in our db which is an enum of 'admin' or 'owner',15:45
mriedemhttps://github.com/openstack/nova/blob/master/nova/compute/api.py#L401615:45
mriedemi guess in this case admin is a non-project admin, so like a system scope admin15:46
mriedemanyway, there is a proposal to expose that out of the rest api https://review.openstack.org/#/c/638629/3/specs/train/approved/add-locked-reason.rst@11715:46
mriedemdo you see any issues with future policy scope creep weirdness with that?15:46
dansmithsure seems like that should be a userid and not an enum15:46
dansmith(before we expose it I mean)15:47
dansmithor15:47
mriedemit's hella old15:47
dansmithI know15:47
gmannmriedem: dansmith: as in the future it can be locked by system.admin and project.admin as well, we should make it a user-id.15:47
dansmithso when we expose it, maybe we should make it possible for it to be owner|admin|other, so that later if we store a user-id, we can say "other" there15:47
gmannotherwise we will not know it is locked by project admin or system admin15:48
mriedemgmann: today if you lock as project admin it's 'owner'15:48
mriedemit's only 'admin' if the request comes from an admin in a different project15:48
dansmithtssurya: just added a comment about that15:49
gmannmriedem: no, it is 'admin' if a project admin locks any non-admin user's server in the same project15:50
tssuryadansmith: checking15:50
mriedemlooks like it was added in https://review.openstack.org/#/c/38196/15:50
dansmithgod the world was so much simpler in 201315:50
mriedemgmann: ? is_owner = instance.project_id == context.project_id15:50
mriedemas long as the project id matches, it's owner15:50
mriedemdespite the role15:50
gmannohh yeah we only match project id. soory15:51
gmannsorry15:51
dansmith"soory" correct canadian for "sorry"15:51
mriedemhe's fitting right in15:52
gmann:)15:52
gmannso we assume  'admin' is system-admin which again might be confusing15:52
mriedemhmm, i wonder if locked_by is still used on unlock https://review.openstack.org/#/c/38196/13/nova/compute/api.py@249015:52
dansmithit's what gates whether or not you can unlock right?15:52
gmannI am adding lock as [system, project] scope_type in my PoC of scope_type - https://review.openstack.org/#/c/645452/2/nova/policies/lock_server.py15:52
mriedemah it is15:53
gmannbut we have the unlock override also; I think that is for an admin to unlock an owner server15:53
*** gaoyan has quit IRC15:53
gmannowner locked server15:53
mriedemos_compute_api:os-lock-server:unlock:unlock_override15:54
mriedemyuck15:54
mriedembaking policy into the db15:54
dansmithready for friday to be over already?15:54
gmannyeah, we should remove that also. not sure any reason to keep that15:54
gmannare we going to expose 'locked_by' also? I have not checked the spec yet15:56
*** igordc has quit IRC15:57
tssuryadansmith: umm, you want to add "other" where? as in the locked_by enum schema needs to change? or just in the response API terminology for future needs?15:57
dansmithtssurya: just the api schema for the future15:57
tssuryaokay15:57
dansmithwe should get some agreement on that (and the actual enum value) first though15:58
tssuryayea15:58
mriedemdansmith: i replied to that comment,15:58
dansmithand mriedem is too busy barfing15:58
mriedemInstance.locked_by is just a string15:58
mriedemwhich will get spewed back out of the api response15:58
mriedemso if we allow storing a user_id in there in the future, it's a db schema change but shouldn't really mean any change to the instance object or api15:58
dansmithoh sorry, I assumed it was an enum in the json schema too15:58
mriedemit might be in tempest...checking15:58
mriedemoh nvm, we don't expose it today15:59
gmannwe only expose 'lock' as string in API response15:59
dansmithit says admin or owner15:59
*** igordc has joined #openstack-nova15:59
mriedemright15:59
gmannwe do not expose 'locked_by'15:59
mriedemtempest is the only thing that would have a response schema check15:59
dansmithmriedem: right the thing she's adding says "admin or owner"15:59
mriedemi know, but that's just the possible values in the db schema today15:59
dansmithso I assumed that meant a schema enum15:59
mriedemtempest could just allow 'string'15:59
mriedemnova doesn't have any response schema15:59
dansmithwell let's be clear about it then because it looks like those are the only two values16:00
dansmithI mean in the spec16:00
mriedemsure16:00
*** jangutter has quit IRC16:01
mriedem11am and i forgot what i was actually going to try and get done today...16:01
mriedemneed a stable core to hit this https://review.openstack.org/#/c/633493/16:04
mriedemhttps://review.openstack.org/#/c/640116/ is also trivial on queens16:05
mriedemfor fast approval16:05
mriedemand https://review.openstack.org/#/c/637391/16:05
mriedemonce we get those we could probably release queens16:05
mriedemthanks dansmith16:07
* mriedem goes to lunch16:07
*** mriedem is now known as mriedem_afk16:07
dansmithuar16:08
*** dustinc has joined #openstack-nova16:08
*** gaoyan has joined #openstack-nova16:09
*** helenaAM has quit IRC16:15
*** imacdonn has quit IRC16:19
*** imacdonn has joined #openstack-nova16:20
*** gaoyan has quit IRC16:21
*** janki has quit IRC16:22
mdboothhttps://bugs.launchpad.net/nova/+bug/182137316:24
openstackLaunchpad bug 1821373 in OpenStack Compute (nova) "Most instance actions can be called concurrently" [Undecided,New]16:24
* mdbooth will try to work on that next week, btw16:24
melwittdansmith: want to hit these release bot changes? https://review.openstack.org/645461 and https://review.openstack.org/64546316:25
sean-k-mooneymdbooth: I assume you are going to remove the logic that short-circuits if the value is set to the same value, and have it hit the db16:25
mdboothsean-k-mooney: Probably, but I'll have to meditate carefully on the safety of whatever.16:26
mdboothsean-k-mooney: And right now I'm a bit hungry and a bit checked out for the weekend :)16:26
sean-k-mooneymdbooth: fixing db corruptions is definitely not a friday evening activity16:26
mdboothDon't try to fix db races on an empty stomach :)16:27
dansmithmelwitt: I think you're safe to fast-approve things like that :)16:27
melwittcan't be too careful16:27
melwitt(joke)16:28
*** fried_rice is now known as fried_rolls16:30
dansmithmdbooth: hmm, check_instance_state has a task state to try to prevent some of what you describe, not sure why that's not being used (although it won't close it entirely)16:31
dansmithbut I agree, I think doing a dedicated task_state check in the db, if we're going to punt due to no-updates, will retain as much of the optimization as possible and close that window16:31
dansmithletting it all go through even if no updates is an option, but it's heavy16:32
dansmith(i.e. what we were doing before)16:32
*** mrch_ has joined #openstack-nova16:38
*** rpittau is now known as rpittau|afk16:39
mdboothdansmith: Yeah. I started thinking about whether we can safely bypass the trip to the db, but that's not a Friday afternoon thought-train.16:40
dansmithI don't think we can entirely based on your reasoning, but we can make it efficient16:41
dansmithbut yes, a tuesday conversation16:41
*** dpawlik has joined #openstack-nova16:42
dansmithfried_rolls: are you not approving this because of your comment or for sequencing reasons? https://review.openstack.org/#/c/644625/516:44
dansmithfried_rolls: if the former, I'll just add it to unblock progress, but it seems entirely duplicative to me16:44
*** cfriesen has quit IRC16:45
*** dpawlik has quit IRC16:47
*** BjoernT_ has quit IRC16:50
openstackgerritmelanie witt proposed openstack/nova-specs master: Move Stein implemented specs  https://review.openstack.org/64574916:52
*** udesale has quit IRC17:00
*** tbachman has joined #openstack-nova17:09
*** dtantsur is now known as dtantsur|afk17:11
*** cdent has left #openstack-nova17:18
*** cdent has joined #openstack-nova17:24
*** cdent has left #openstack-nova17:24
*** eharney has quit IRC17:26
*** derekh has quit IRC17:26
gibimriedem_afk: could you hit the bandwidth spec update when you are back https://review.openstack.org/#/c/644810/ ?17:26
*** tbachman has quit IRC17:29
*** tssurya has quit IRC17:30
openstackgerritKashyap Chamarthy proposed openstack/nova-specs master: Add "CPU selection with hypervisor consideration" spec  https://review.openstack.org/64581417:33
gmanndansmith: mriedem_afk: I think we should expose the project_id in the locked_by field, which can be easily fetched for the 'owner' case (based on the DB enum value and context), but I'm not sure how to fetch the admin project-id when the server is locked by an admin and GET by the owner17:34
dansmithgmann: yeah that's the obviously difficult one17:35
gmanndo we need to change the locked_by DB column from enum to UUID?17:35
gmannshould we also change the DB schema in this spec so that we always store the project-id in locked_by during the lock API call17:35
dansmithwe could just start throwing a locked_by_uuid in sysmeta too, and maybe just add "id" to the enum,17:35
mnaserkashyap: regarding your arm change to support 'virt' machine type, does that ship in as specific release of qemu?17:36
dansmithbut since it's an enum in the schema, it's not easy to just convert it to a freeform string17:36
elbragstadmriedem_afk just catching up17:36
kashyapmnaser: Hi17:36
mnaserunder centos 7 with qemu-kvm, i can't find a 'virt' machine type (this is the rhel shipped one)17:36
gmanndansmith: ok.17:36
kashyapmnaser: Which change do you refer to?17:36
mnaserkashyap: https://github.com/openstack/nova/commit/e155baefb0bbaa0aa78e54fe9e0f10c12336c01a17:36
kashyap(/me is dazed due to death-by-meetings; and is almost ready to check out)17:37
elbragstadmigrating from enum in the schema is pretty disruptive, isn't it?17:37
* kashyap clicks17:37
dansmithelbragstad: yes17:37
mnaseri'm assuming this controls the resulting `-M` parameter in the qemu-kvm command which is the machine type17:37
elbragstadwe hit that with something in keystone's database once, it was a total pain17:37
openstackgerritmelanie witt proposed openstack/nova-specs master: Move Stein implemented specs  https://review.openstack.org/64574917:37
mnaserto which I don't see the `virt` option available under centos 7.6 right now17:37
kashyapmnaser: Off the top of my head, I need to check which version introduced the 'virt' board for ARM/AArch6417:37
kashyapmnaser: Are you checking with the right QEMU binary?17:38
mnaserokay, I was curious to see how `virt` works for x86 machines too but just noticed it's missing.17:38
mnaserkashyap: on a centos 7 machine => /usr/libexec/qemu-kvm -M help17:38
mnasergets me this: http://paste.openstack.org/show/748268/17:38
kashyapmnaser: That is referring to x86.17:38
mnaserkashyap: let me check an arm box quickly17:39
kashyapmnaser: You need to pull in the AArch64 binary; `dnf search` for it17:39
mnaseri thought virt was a generic qemu-kvm machine type available for all architectures17:39
kashyapmnaser: No.  Only for AArch64/ARM17:39
kashyapmnaser: But ...17:39
mnaserkashyap: ok, you're right, I see it under aarch64 on an arm box17:39
elbragstadmriedem_afk re: your first question - i would probably write that policy check string to be something like "(role:admin and system_scope:all) or rule:owner)"17:40
kashyapmnaser: ... there has been some talk at last year's KVM Forum about adding a similar 'virt' board for x8617:40
gmanndansmith: elbragstad: so we can just extend the current enum to store 'id' also (do not modify the column from enum->string), and that is what we expose in the API, right?17:40
openstackgerritmelanie witt proposed openstack/nova-specs master: Move Stein implemented specs  https://review.openstack.org/64574917:40
mnaserkashyap: ok, cool, sorry for the firedrill, misinformed on my side then :)17:40
kashyap(Also "Kata Containers" folks, who forked QEMU, also want it)17:40
kashyapmnaser: No worries :-)17:40
elbragstadgmann enum definitions are kind of immutable i think17:40
dansmithgmann: we could, but I'm not sure how important this really is, and it's not fair to saddle the other effort with too much I think17:40
mnaserkashyap: yeah, i saw NEMU which seems like it is better solved by adding a virt machine type.17:40
dansmithelbragstad: I think adding one thing to an enum isn't bad17:40
dansmithelbragstad: they're stored as an int, IIRC, so adding one doesn't cause too much disruption as long as you're not removing one or adding one that causes you to need the next size up of int17:41
dansmithbut some database person should config17:41
dansmith*confirm17:41
elbragstadaha17:41
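For illustration, a hedged alembic-style sketch of appending a value to such an enum column; nova's migrations at this point actually used sqlalchemy-migrate, and 'id' is only the hypothetical new value being floated here:

```python
import sqlalchemy as sa
from alembic import op


def upgrade():
    # appending at the END preserves the stored positions of 'owner'
    # and 'admin', which is why adding a value is cheap while removing
    # or reordering values is not
    op.alter_column(
        'instances', 'locked_by',
        existing_type=sa.Enum('owner', 'admin'),
        type_=sa.Enum('owner', 'admin', 'id'),
    )
```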
gmanndansmith: but the current string 'admin' will be confusing for the case of scope_type as system and project17:41
mnaserkashyap: aha, from the NEMU readme "NEMU also introduces a new QEMU x86-64 machine type: virt. It is a purely virtual platform, that does not try to emulate any existing x86 chipset or legacy bus (ISA, SMBUS, etc) and offloads as many features to KVM as possible. This is a similar approach as the already existin AArch64 virt machine type and NEMU will only support the two virt machine types."17:41
mnaserthat's where i saw it from.17:41
kashyapmnaser: There are several issues with "NEMU", and "better solved" is also debatable :-).  Upstream QEMU folks are working with them to abandon the idea of a fork17:42
kashyapAnd to work with upstream.17:42
gmanni mean from API response side17:42
dansmithgmann: you mean for the proposed spec to expose it?17:42
mnaserkashyap: absolutely, what i meant was instead of creating nemu, we upstream all that stuff into a custom machine type :)17:42
gmanndansmith: in current spec-  https://review.openstack.org/#/c/638629/3/specs/train/approved/add-locked-reason.rst@11717:42
kashyapmnaser: Yep, that is a good idea; and x86 will "soon" get it.  There have been some mind-showering sessions on the QEMU upstream list17:42
dansmithgmann: if you're really concerned about that and are willing to do the work underneath to make it not have to, then okay17:42
gmannyeah17:42
kashyapmnaser: So expect to see a "virt" board for x86 (that is expressly designed for guests, including of course Nova instances).17:43
mnaserkashyap: cool, that would be super neat17:43
* kashyap --> hungry; be back later17:43
gmannbecause this is the only API where we say 'admin' or 'owner' in the response, and 'admin' will be extended in scope_type to system-admin or project-admin17:44
*** luksky has quit IRC17:44
dansmithgmann: ack, well, raise that on the spec I guess17:44
*** ganso has joined #openstack-nova17:45
mnaseris "urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='198.72.124.41', port=443): Read timed out. (read timeout=60)" under tempest.scenario.test_shelve_instance.TestShelveInstance a common failure?17:47
elbragstadi think i stumbled across an article like this a while ago when we were trying to add an attribute to an enum http://komlenic.com/244/8-reasons-why-mysqls-enum-data-type-is-evil/17:47
*** psachin has quit IRC17:48
*** tesseract has quit IRC17:48
*** BjoernT has joined #openstack-nova17:51
*** gmann is now known as gmann_afk17:52
*** mriedem_afk is now known as mriedem17:55
elbragstadmriedem: I think keeping the `admin` or `owner` details out of the database would be a good thing moving forward17:56
elbragstada lot of the work we're doing with scope types introduces different types of "admins", so we can't necessarily assume "admin" is always the cloud administrator/operator17:57
dansmithelbragstad: right, that's why we're talking about this:18:01
dansmithit's already in the db, but not exposed to the user18:01
dansmiththe spec is proposing exposing that admin|user notion to the api user18:02
dansmithso we either need to make the api sufficiently generic or change a bunch of stuff before we can do the other part, which is locked reason18:02
elbragstadso a user would be able to see who locked an instance?18:02
dansmithgmann_afk: elbragstad mriedem: What if instead of exposing the person that locked it for now, we just expose a boolean which is "can I unlock it?"18:02
*** IvensZambrano has quit IRC18:02
dansmithelbragstad: right18:02
dansmithbut we could paper over a lot of details by just saying "you can unlock this" or "you can not unlock this"18:03
elbragstaddansmith the "can I unlock it" bit seems like a good middle ground?18:03
dansmithright18:03
mriedemdansmith: that depends on policy though, which could change between the time they have that response and when they try to unlock18:03
elbragstadas a user - I probably care less about who locked my stuff and I just want to know if I can unlock it18:03
dansmithat least lets us push the refactoring of the data store away from the current spec18:03
mriedemgranted, i wouldn't expect policy to change often18:03
dansmithmriedem: sure, but same for exposing admin|user .. you have to know what the policy is to know whether it matters18:04
mriedemthe 'can i unlock' is a policy check18:04
elbragstadwell - to calculate that boolean you'd just need to run the context through that policy check - then populate that accordingly in the data model (and response?)18:04
dansmithright18:04
mriedemnot the data model, but the response yes18:04
dansmithbut if you don't use the boolean, and expose "locked_by: admin"18:04
dansmiththe user has to know whether policy considers them an admin or not right?18:05
dansmithwhereas "no you can't unlock this" is pretty clear18:05
openstackgerritMerged openstack/nova stable/queens: libvirt: Add workaround to cleanup instance dir when using rbd  https://review.openstack.org/62872618:05
elbragstad++18:05
openstackgerritMerged openstack/nova stable/queens: Avoid BadRequest error log on volume attachment  https://review.openstack.org/64011618:05
dansmiththe fact that it could change between check and attempt is pretty much the case for everything :)18:05
elbragstad"admin" is kinda confusing, what is locked by a project admin, a domain admin or a system admin?18:05
elbragstads/what/was/18:06
mriedemin this case project admin == 'owner'18:06
mriedemsame as if the owner of the instance locked it themselves18:06
mriedemit's the system admin case that is wonky18:06
elbragstadi'm missing the part why the system-admin case is wonky18:07
mriedemi mean the case where the locked_by is 'admin' today is if someone outside the project that owns the instance locks the instance18:07
mriedemproject admin locking the instance treats locked_by as 'owner'18:07
elbragstad... interesting18:08
mriedemidk how i feel about returning an at-the-time-of-request policy check boolean saying if you can or cannot unlock it with a subsequent unlock call, i don't think we have anything else like that in the api18:08
elbragstadsomeone outside the project?18:08
dansmithI totally don't understand the concern over the policy thing18:08
dansmithwe return data (or not) based on policy18:09
mriedemfor faults and event traceback yeah18:09
mriedemtrue18:09
mriedemand the numa blob that is being proposed18:09
dansmithand we used to for things like flavor extra specs, iirc18:09
dansmithand hostnames if you're admin18:09
dansmithor fake host ids if you're not18:10
mriedemit's slightly different though, those are just exposing things or not18:10
dansmithif it really matters, then unlockable:true or omit it entirely, but I don't think that's better :)18:10
mriedemi'd be much more comfortable with this locked_reason spec if we didn't have to deal with gd locked_by in the first place18:10
*** gmann_afk is now known as gmann18:11
mriedemlocked_by is just internal meta and i'm not sure why we need to expose it, and it's only related to a policy check on unlock18:11
mriedemif you can't unlock you find out with a 403 as normal18:12
dansmithwell, we can do that too18:12
dansmithbut I don't understand the concern18:12
mriedemi wouldn't block on it18:12
mriedemjust seems like we're maybe trying to solve a problem we're making for ourselves18:12
mriedemwhich we maybe don't need to care about18:13
dansmithI tell you who's making problems.. mriedem  :)18:13
mriedemi've said a million times that guy is an insufferable asshole18:13
dansmithheh18:13
elbragstaddumb question: does exposing the attribute actually have an impact on someone being able to unlock or lock an instance?18:14
mriedemit's an indication of whether they can unlock it18:14
gmanni agree that 'can i unlock' is a policy-controlled thing. the benefit i see in exposing locked_by is that, as a user, i know my instance is locked by someone else like an admin (system-admin or project-admin) and i can ask them to unlock it18:15
dansmithI imagine the impetus for this whole thing is, someone tries to do something, can't because locked, so they don't understand why and have to call support to figure out why.. if there was a "locked by an admin, reason: because you didn't pay" then .. they know18:15
elbragstadbut if that attribute wasn't exposed to them, could they still build the same request and send it to nova and have it behave the same way?18:15
dansmithelbragstad: yes18:15
dansmithelbragstad: but they may not understand why it fails if they don't see it was locked by someone else18:15
mriedemright i see the value in the reason field for sure18:15
gmannthen maybe we can skip exposing locked_by18:16
openstackgerritMerged openstack/nova stable/queens: Allow utime call to fail on qcow2 image base file  https://review.openstack.org/63349318:16
openstackgerritMerged openstack/nova master: Update master for stable/stein  https://review.openstack.org/64546318:16
mriedemelbragstad: this is the check code on unlock https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/lock_server.py#L4718:16
openstackgerritMerged openstack/nova stable/stein: Update .gitreview for stable/stein  https://review.openstack.org/64546118:16
mriedemhttps://github.com/openstack/nova/blob/master/nova/compute/api.py#L402518:17
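(Paraphrase of the linked check code, condensed for readability; see the links above for the real thing:)

    # Unlock succeeds outright if the lock is 'expected' for this
    # caller (same project); otherwise the override policy must pass,
    # and a failure surfaces as the usual 403.
    if not compute_api.is_expected_locked_by(context, instance):
        context.can(ls_policies.POLICY_ROOT % 'unlock:unlock_override')
    compute_api.unlock(context, instance)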
*** xek_ has quit IRC18:17
dansmithgmann: you're saying you think exposing "can I unlock" is bad right?18:17
dansmithwhat if we just expose locked_by_me:True|False or locked_by=user|other18:17
gmannyeah. that is all controlled by policy18:17
dansmithokay I don't understand that at all, but whatever :)18:17
gmannif locked by other then policy should say 40318:17
dansmithgmann: right, but you're going to make them try to determine that instead of just doing the check ourselves? we can ask policy about it, just like we do for the conditional exposure of fault18:18
dansmithtry the put I mean18:18
elbragstadif the policy is written like "(role:admin and system_scope:all) or rule:owner" and locked_by_me is True|False, is locked_by useful?18:18
mriedemif i'm following this unlock logic, if an admin in my project locks my server, i can unlock it. but if an admin outside my project (global god like admin) locks my server, then i can only unlock it if i pass the override policy check right?18:18
elbragstadoh - maybe i was missing ^ that bit18:19
gmannmriedem: right18:19
mriedemand the override policy defaults to rule:is_admin i.e. system level god admin18:19
mriedemin legacy terms18:19
gmannelbragstad: we compare only project_id (not user id) for deciding whether it is the owner or not.18:20
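(For illustration, elbragstad's example rule expressed as an oslo.policy default; this is hypothetical — as mriedem notes above, nova's actual override default is the legacy rule:is_admin:)

    from oslo_policy import policy

    # Hypothetical scoped default for the unlock override rule.
    unlock_override = policy.RuleDefault(
        name='os_compute_api:os-lock-server:unlock:unlock_override',
        check_str='(role:admin and system_scope:all) or rule:owner')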
mriedemso with dan's proposal, on response, we'd be checking whether you pass is_expected_locked_by and if not, whether you pass the override policy, exactly like the unlock code,18:21
mriedemare we going to want to perform that policy check when listing servers with details? ^ for each server?18:21
elbragstadmmm - so owner == anyone with a token scoped to the project that contains the instance18:21
gmannhumm, no we should not perform that check in show.18:21
mriedemelbragstad: yes18:22
elbragstadand admin == anyone with the `admin` role on any project18:22
gmannyeah18:22
openstackgerritMerged openstack/nova-specs master: Update Network Bandwidth resource provider spec  https://review.openstack.org/64481018:22
mriedemgmann: "no we should not perform that check in show." is what we would have to do to return an "can i unlock this server" boolean18:22
dansmithand it's what we do when we expose (or not) fault in instance18:22
mriedemi don't agree on that exactly being the same18:23
dansmithis there some api guideline that says "only reveal if someone can do something when they try" ?18:23
mriedemwith fault traceback it's just a show/hide18:23
gmannyeah, seems like.18:23
mriedemmaybe we do a policy check on every instance when listing, for faults i mean, would have to look18:23
dansmithyou guys keep saying "we shouldn't do that" but I haven't seen any *reason* :)18:23
mriedemanywho18:23
*** jdillaman has quit IRC18:25
mriedemwell i guess we don't do a context.can check when listing servers to see if we can show the fault traceback https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/views/servers.py#L550 but it's close, just based on the hard-coded is_admin flag in the context18:25
dansmithalso, locked_by_me=True|False is something we can do today, something we can do in the future if we store the locking user, and something that doesn't require a policy check on show.. why is that not okay?18:25
mriedemwhich elbragstad wants to remove18:25
*** _alastor_ has quit IRC18:25
*** erlon has quit IRC18:25
mriedemdansmith: you mean locked_by_owner yeah?18:26
dansmithsure18:26
mriedemwhere the logic would just be locked_by_owner = instance.locked_by == 'owner'18:26
dansmithor locked_by=Owner|NotOwner18:26
dansmithright18:26
dansmithdon't say admin, just say "not you"18:26
elbragstad++18:27
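(Sketch of that compromise in view-builder terms, derived from the existing locked_by column; the helper name is hypothetical:)

    # Never say 'admin': expose only whether the locker was the owner.
    def locked_by_for_response(instance):
        if not instance.locked:
            return None
        return 'owner' if instance.locked_by == 'owner' else 'other'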
* dansmith is still waiting on the reasoning for not doing a policy check18:27
mriedemi'd be more ok with doing locked_by=owner|other18:27
mriedemi'm trying to express this...18:27
* dansmith expects a simpsons reference youtube any second18:28
mriedembut doing a policy check when building a response based on policy for something that would be checked during a subsequent unlock request, seems very weird and capability-ish to me, which we don't do elsewhere. it's true we check policy when building a response to hide things for non-admins, like system details (hosts, fault traceback, etc), but those aren't capability things18:28
gmannI am also ok for locked_by=owner|other if we cannot expose the UUID.18:28
mriedemgmann: we don't store the uuid anywhere for who locked the instance18:28
elbragstadif instances stored user information instead of project ids, we wouldn't need two lock properties, would we?18:28
gmannmriedem: yeah,  i mean if we do not want to change that as it will be complex18:29
dansmithmriedem: so the reason is we don't do exactly this elsewhere and it "feels wrong" ?18:29
mriedemcorrect18:29
dansmithI mean, I understand that things feel wrong.. I don't understand why this one does, but that's okay18:29
dansmithalright18:29
mriedemi could argue it'd be nice to have a "can_be_resized" flag in the server response18:29
mriedemthat doesn't mean we're going to add that18:29
dansmithpeople have asked for that :)18:29
mriedemi'm sure they have18:29
dansmithbased on all the reasons you *can't* do that18:30
mriedemand we've talked about that capabilities API for several years18:30
dansmithhow's that coming? :)18:30
mriedemit's mostly rotted flesh in my storage room18:30
dansmithbecause to me,18:30
mriedemdon't tell the cops please18:30
dansmiththe capabilities api is the same thing, but all in one place, which IMHO lends further support to it being a good idea, but just not doable until we have enough other ones for critical mass :)18:31
dansmiths/doable/palatable/18:31
dansmithand that's unfortunate18:31
dansmithbut anyway, exposing locked_by=owner|other seems like a reasonable compromise18:31
mriedemi'd be fine with a GET /servers/{server_id}/capabilities API18:32
mriedemand we start rolling shit into that like this18:32
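(Hypothetical shape for such an endpoint — nothing like this exists today:)

    # GET /servers/{server_id}/capabilities (hypothetical), with each
    # value computed at request time from policy and instance state.
    capabilities_response = {
        'capabilities': {
            'unlock': False,   # caller fails the override policy
            'resize': True,    # e.g. the relevant checks pass right now
        },
    }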
dansmithto be clear,18:32
dansmiththat means someone has to get the server, see that it's locked and the reason, then go fetch the capabilities of the server to see if they can unlock it18:32
dansmithwhich is fine, but...18:32
mriedemthey could just try to unlock it18:32
mriedemget the 40318:33
mriedemlike always18:33
dansmithsure18:33
mriedemif locked_by=other18:33
elbragstad^ that's what people do now with other APIs (not necessarily in nova)18:33
mriedemthen there is a reasonable guess they might not be able to unlock it18:33
mriedemand the reason should tell them why it's locked18:33
dansmithI dunno, I'm just having trouble following all the logic spaghetti, but I don't really need to care18:33
elbragstaddoes the reason include something like "this was locked by an admin, pay your bill pls"18:34
gmannyeah lock_reason can tell them, e.g. "this is locked by admin/system, ask them to unlock after xyz condition/time"18:34
gmannelbragstad: yeah, i think it can be18:34
dansmithelbragstad: I know they do, and I'm sure it doesn't set off any alarms to hit 403s, but it feels like trying sudo vi /etc/passwd on a system to see if you can, and if not, it's reported :)18:34
gmannIMO the lock_reason info in the response can explain the locked_by things. till now we did not have lock_reason, so maybe locked_by was important to know18:35
elbragstadyeah - true... the same argument has been made to have a capabilities-like api in keystone, but we're obviously not there yet, either18:35
dansmithlock_reason is for humans, locked_by is for computers, IMHO18:35
dansmithso not really a substitute18:35
mriedemi think i'm going to go lock myself in the garage with the car running18:36
dansmithelbragstad: last time I went jiggling door handles in my neighborhood to see who has theirs locked, I was asked to not do that18:36
elbragstadmake sure you set locked_by=owner18:36
dansmithmriedem: unfortunately if you have a catalytic converter that's going to take a really long time18:36
gmanndansmith: ok so the computer-facing purpose can still be fulfilled by locked_by=other|owner.18:37
dansmithgmann: yes18:37
elbragstaddansmith yeah - not the best u-x, i agree18:37
*** tssurya has joined #openstack-nova18:37
gmanni am thinking exposing the exact UUID of the person who locked it (admin etc) could be a security leak?18:38
gmannso just saying whether you can unlock it or not via locked_by=owner|other seems appropriate to me now18:38
gmannfor example exposing a system admin's id to an end user etc18:39
*** sridharg has quit IRC18:40
*** eharney has joined #openstack-nova18:40
*** tbachman has joined #openstack-nova18:40
openstackgerritmelanie witt proposed openstack/nova-specs master: Move Stein implemented specs  https://review.openstack.org/64574918:45
*** pcaruana has quit IRC18:45
melwitthm, wanted to get this other spec update done before moving specs but it's in merge conflict https://review.openstack.org/#/c/639033/118:45
mriedemsomeone want to send this queens change through https://review.openstack.org/#/c/643219/18:47
melwitter... I guess it's not. just somehow it puts the move specs patch in merge conflict18:47
mriedemi'll start the release request18:47
mriedemmelwitt: i'll get that spec update18:47
melwittmriedem: thanks. I'll get the queens patch18:47
*** rchurch has quit IRC18:48
*** eharney has quit IRC18:52
*** eharney has joined #openstack-nova18:54
openstackgerritMerged openstack/nova-specs master: Update alloc-candidates-in-tree  https://review.openstack.org/63903319:01
*** awaugama has quit IRC19:01
openstackgerritMerged openstack/nova stable/pike: Fix bug case by none token context  https://review.openstack.org/60304419:05
*** tbachman_ has joined #openstack-nova19:05
*** tbachman has quit IRC19:05
*** tbachman_ is now known as tbachman19:06
*** fried_rolls is now known as fried_rice19:07
temkaUh, do we timestamp our logs in local time?19:09
fried_ricedansmith: I thought to give more people time to look at it and comment.19:10
*** tjgresha_nope has joined #openstack-nova19:10
*** tjgresha has joined #openstack-nova19:10
dansmithfried_rice: roger19:11
fried_ricedansmith: Also, the thing mriedem is working on with the multiattach capability has raised an interesting question that we're still working through. Namely, where do you put the required trait (which request group)?19:11
fried_riceNot that that necessarily should hold up your spec, but it's giving me pause.19:11
mriedemfried_rice: did you see my change on that patch?19:11
dansmithyeah, would rather save that for implementation19:11
mriedemi'm not using a request group anymore19:11
mriedemi'm hitching a ride on the RequestSpec.flavor19:12
mriedemcash, grass or ..., no traits ride for free19:12
fried_ricemriedem: been afk for the past couple hours, if that's when you changed it. But eventually you're adding it to a request group, albeit maybe indirectly.19:12
dansmithmriedem: I was trying to decide if I like or hate that19:12
mriedemfried_rice: right indirectly19:12
openstackgerritmelanie witt proposed openstack/nova-specs master: Move Stein implemented specs  https://review.openstack.org/64574919:12
dansmithon the one hand it saves the "how do we request it" for the deeper layers, which might be good,19:12
fried_riceon the other hand, it isn't part of the flavor19:13
dansmithbut also modifying the flavor, while it's the reason we added it and store it separately, strikes me as "this doesn't FEEL right"19:13
fried_ricethat's my gut too, but I haven't really thought about it19:13
mriedemi did it for expediency b/c i couldn't figure out the semantics of the internal request group stuff19:14
dansmithyeah and the fact that you have to also worries me19:14
mriedemit kept trying to add an empty numbered resources dict, and there are no resources in my request group19:14
mriedemit's likely a bug in the resource request / group stuff internally19:14
fried_ricewhere should it add it?19:15
mriedemit was also just one of those, "oh hey i have like 2 hours left today to do something, how about i hack this together and see how it goes" things19:15
dansmithdoesn't the base unnumbered group correspond to the whole request?19:16
mriedemfried_rice: this is what i was getting19:16
mriedemGET /placement/allocation_candidates?limit=1000&required1=COMPUTE_VOLUME_MULTI_ATTACH&resources=MEMORY_MB%3A512%2CVCPU%3A1&resources1=19:16
dansmithsomething that is a trait on the host that is fundamental seems like it should be easy to add19:16
mriedemand i'd get a 400 on that empty resources119:16
mriedemthe unnumbered resources there are from the flavor19:16
fried_ricemhm, because the API refuses to allow you to specify requiredN without a corresponding resourcesN19:16
mriedemthe required1/resources1 were from the group i added19:16
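(URL-decoded, that request was:)

    GET /allocation_candidates?limit=1000
        &resources=MEMORY_MB:512,VCPU:1          <- unnumbered, from the flavor
        &required1=COMPUTE_VOLUME_MULTI_ATTACH   <- the added group's trait
        &resources1=                             <- empty, hence the 400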
mriedemok,19:17
fried_riceso whoever wrote that didn't see a need to not send the resourcesN through every time.19:17
mriedemso the problem is the RequestGroup for the flavor gets built long after my code19:17
mriedemi didn't want the requiredN, but i couldn't figure out how to not make it add the number19:17
*** weshay|rover is now known as weshay19:17
dansmithmriedem: right but they should be in the unnumbered one regardless of where they came from in this case19:17
fried_riceright, but the problem is, where *do* you want to put the trait?19:17
mriedemfried_rice: alongside the flavor19:18
mriedemflavor group19:18
fried_riceOkay, but if we say that, then we're *requiring* the VCPU/MEMORY_MB to come from the unnumbered group19:18
fried_ricewhich is not (yet) a requirement otherwise.19:18
mriedem*head explodes*19:18
dansmithmriedem: if I cut the cat off your car can I sit in your garage with you?19:18
fried_riceand it's going to explode (like mriedem's head) when we do numa.19:18
mriedemdansmith: there is plenty of room, laura is at work19:19
mriedemi'll get the lawn chairs down19:19
* dansmith heads over with his hacksaw19:19
mriedemno need, i've got one19:19
dansmitheven better, won't have to check a bag19:19
* fried_rice took a few lines to realize we were talking about a catalytic converter19:19
* fried_rice imagined fur getting caught in the hacksaw teeth and stuff19:19
dansmithindeed, but catalytic is so hard to type when you only have hours left on this planet19:19
fried_riceSo go with me on this: Can't I theoretically today put resources1:VCPU=1, resources1:MEMORY_MB=2048 into my flavor extra specs? And it'll ignore the base flavor values for vcpus and ram?19:22
*** phasespace has joined #openstack-nova19:23
fried_ricedown the road, I'll get a request with no unnumbered group19:23
fried_riceso I'll have required=MULTIATTACH but no resources=*19:24
fried_ricewhich I think will result in the same failure19:24
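(A sketch of the theoretical flavor fried_rice describes, with numbered request groups in the extra specs; values illustrative:)

    # With everything in a numbered group, the eventual placement
    # request can end up with no unnumbered resources at all --
    # only required=MULTIATTACH -- hence the same failure.
    extra_specs = {
        'resources1:VCPU': '1',
        'resources1:MEMORY_MB': '2048',
    }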
* fried_rice goes to look at the latest thing cdent did...19:24
mriedemclearly we have outsmarted ourselves19:26
mriedemmission accomplished19:26
fried_riceyeah, because this19:27
fried_ricehttps://github.com/cdent/placecat/blob/d3241a6fb0d8701072986c169615787f7484a3b7/gabbits/trait-pairs.yaml#L10919:27
fried_ricedoesn't work, we've got an issue to work through.19:27
fried_riceI have to go do phone things now :(19:27
mriedemwhole bunch of stable/pike changes we could move through https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike+label:Code-Review=119:32
openstackgerritMatt Riedemann proposed openstack/nova stable/pike: Refix disk size during live migration with disk over-commit  https://review.openstack.org/63137219:33
*** igordc has quit IRC19:34
elbragstadi assume by the previous discussion, capabilities in nova stalled out?19:36
*** jmlowe has quit IRC19:40
*** mriedem has quit IRC19:46
*** DarthNau has quit IRC19:46
*** BjoernT has quit IRC19:47
EmilienMhi19:47
EmilienMwe need https://review.openstack.org/#/c/626952/ in Stein, please19:47
*** mriedem has joined #openstack-nova19:47
mriedemelbragstad: yup19:48
EmilienMif https://review.openstack.org/#/c/626952/ doesn't land, we'll have to carry that patch downstream19:48
EmilienMwe can't deploy Nova with SSL atm19:48
EmilienMnot sure why it's being postponed to Train, really19:48
gmannmelwitt: ack your comment on policy improvement spec. I am doing QA release things today. I will ping you on monday to discuss that and probably the next plan.19:48
EmilienMmelwitt ^19:48
elbragstadmriedem is the main reason just because there aren't enough people to work on it or was it something else? i tried looking for a spec in review to find the answer but didn't see any19:49
*** luksky has joined #openstack-nova19:50
melwittEmilienM: I don't think it's being postponed, should go into rc2. were you ok with that mriedem?19:50
mriedemmelwitt: idk about rc2. mdbooth said as long as it's merged on master (Train) he's happy to backport internally rather than upstream.19:50
EmilienM+1 for rc2, thanks19:50
dansmithlast I asked, fried_rice said it was waiting until after rc something19:50
dansmithright, that ^19:50
melwittoh19:50
dansmithbut I really don't know19:51
EmilienMI would rather avoid downstream only backports19:51
EmilienMbut I don't really care as I don't maintain this rpm downstream19:51
mriedemand i'd rather avoid putting some crazy eventlet thing in rc2 that we don't see in the gate19:51
EmilienMit's just ugly that the rest of users will have the same issue as we have19:51
melwittfrom what mdbooth said it is a regression though, for those running an unlucky combo of package versions right19:51
mriedemwe gate on bionic which is py36 and don't see this upstream in CI, so i can't explain the difference19:51
mriedemyeah maybe19:52
mriedemi've not followed the ML thread19:52
dansmithmdbooth also said it was fine for train19:52
fried_ricedansmith: At this point I'm waiting for you to re+219:53
fried_ricesince you had done it before19:53
melwittok... I was just thinking if it's a regression it would be nice to fix it for stein ppl but if most think it should be train-only, then I'll go with the consensus19:53
dansmithfried_rice: I haven't reviewed it in its current form and you said you'd ping me if you needed something19:53
fried_riceokay. ping.19:53
dansmithfried_rice: I'm not super comfortable with it at this point (I reviewed it initially a while ago)19:53
dansmithso I say let melwitt do it if she thinks it's a good idea19:53
melwittcool gmann, thanks19:54
mriedemi have to spend my requisite 2 hours per month to look at fixing https://launchpad.net/bugs/1790204 before writing up my monthly internal report...19:54
openstackLaunchpad bug 1790204 in OpenStack Compute (nova) "Allocations are "doubled up" on same host resize even though there is only 1 server on the host" [High,Triaged]19:54
dansmithten revisions ago and a week prior19:54
mriedemhonestly my +2 on that is mostly, (1) it's not in rc1, (2) it's eventlet and i'm not ever comfortable with eventlet stuff, (3) lots of people have been working on it and say it fixes the problem in a reproduction system so (4) i'm going to trust mdbooth on it19:55
fried_riceyes, and I'm willing to do the same, but I wanted to give dansmith the opportunity to look at it again because he seemed to understand it (10 patch sets and a week ago).19:56
melwittyeah, I was going to say, that's the best my +2 could be too, for the same reasons19:56
melwitteventlet monkey patch ordering is not really my forte19:56
dansmithfried_rice: don't wait on me19:56
fried_riceokay.19:56
fried_riceEmilienM: +A19:57
EmilienMfried_rice: thank you19:58
mriedemi'm still not really comfortable with pushing this into an rc220:00
mriedemi'd be much cooler with landing it in train, and then if non-OSP people come along after stein GA saying they have the same issue (maybe zigo - he tends to tickle weird stuff like this), then we say ok let's backport and do a quick 19.0.1 on stable20:01
mriedemor thode or coreycb20:02
mriedemyou know, other distro gremlins :)20:02
mriedemi say that with the utmost affection20:02
*** tbachman has quit IRC20:04
*** EmilienM has left #openstack-nova20:07
coreycbmriedem: we'll be ramping up our testing including SSL in the next couple of weeks. so we haven't hit that yet.20:14
mriedemcoreycb: ack cool20:17
coreycbmriedem: will keep you posted. we're py3 only at this point so i imagine that we'll hit it as well.  grrr eventlet.20:18
*** igordc has joined #openstack-nova20:19
*** jmlowe has joined #openstack-nova20:20
mriedempy3 what?20:21
mriedem3.5?20:21
mriedemor bionic 3.6?20:21
mriedemoh you mean just py3 support20:21
mriedemyeah apparently osp didn't hit issues with py3.520:21
coreycbpy3.6 for bionic-stein. and py3.7 too but that is for disco-stein so not a common target.20:22
openstackgerritMerged openstack/nova master: Fix return param docstring in check_can_live_migrate* methods  https://review.openstack.org/64561020:41
*** ralonsoh has quit IRC21:06
*** mdbooth_ has joined #openstack-nova21:06
*** mdbooth has quit IRC21:09
*** igordc has quit IRC21:16
*** igordc has joined #openstack-nova21:16
*** mchlumsky has quit IRC21:30
mriedemso i guess if your cloud supports strict affinity groups + resize, you *have* to enable allow_resize_to_same_host yeah? otherwise you could just never resize any servers in an affinity group21:35
mriedemwhich is fun21:35
mriedemseems allow_resize_to_same_host should really just be a weigher rather than a true/false config21:35
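(For reference, the existing option in question, set in nova.conf on the computes:)

    [DEFAULT]
    # Boolean, defaults to false. With strict affinity groups, leaving
    # it false can make servers in the group permanently un-resizable.
    allow_resize_to_same_host = true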
*** mgoddard has quit IRC21:47
*** mgoddard has joined #openstack-nova21:47
*** zhubx007 has quit IRC21:57
*** zhubx007 has joined #openstack-nova21:57
openstackgerritMerged openstack/nova stable/queens: Fix resource tracker updates during instance evacuation  https://review.openstack.org/64321922:09
*** cfriesen has joined #openstack-nova22:13
mriedemdansmith: i don't suppose you remember off the top of your head why we used force_hosts/nodes for the rebuild to same host run through the scheduler where we still want to use the filters, iow, why didn't we use requested_destination?22:14
mriedemthe former skips filters, the latter restricts to a host but runs filters22:14
mriedemi think we could have avoided this https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L58422:15
mriedemi know that was all a crazy mess at the time and we were just trying to get something working22:15
dansmithmriedem: because we have to run the numa filter right?22:28
dansmithI mean, I barely remember this, but I remember we needed to run the numa filter even though we knew where it was going to validate that the new image still works with the requested numa topology on the host or whatever22:28
mriedemyeah that's not my point, i'm just wondering why we used force_hosts over requested_destination22:29
mriedemrequested_destination will still run the filters22:29
mriedemand avoid needing to look for that secret hint here https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L58422:29
dansmithoh I thought you said one doesn't run the filters22:29
mriedemforce_hosts doesn't normally22:29
mriedemwhich is why that condition had to be added22:29
dansmith[15:14:55]  <mriedem>the former skips filters, the later restricts to a host but runs filters22:29
mriedemwe could have just avoided that by using requested_destination22:29
dansmithoh I see22:30
dansmiththen I dunno22:30
dansmith:D22:30
mriedemjust rushing i'm guessing22:30
dansmithcould be22:30
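(A sketch of the alternative mriedem describes, assuming a RequestSpec and instance in scope; this restricts scheduling to one host but still runs the enabled filters, unlike force_hosts:)

    from nova import objects

    # Pin the scheduling request to the instance's current host/node
    # without bypassing the filters (and without the secret hint in
    # host_manager.py linked above).
    request_spec.requested_destination = objects.Destination(
        host=instance.host, node=instance.node)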
mriedemi'm basically needing to do some similar hackery for resize to same host and got digging into that code is all22:30
dansmithwe did that in dublin under threat of snowpocalypse (i.e. 1" of snow)22:30
openstackgerritMerged openstack/nova master: Eventlet monkey patching should be as early as possible  https://review.openstack.org/62695222:31
openstackgerritmelanie witt proposed openstack/nova-specs master: Move Stein implemented specs  https://review.openstack.org/64574922:37
*** whoami-rajat has quit IRC22:41
openstackgerritMatt Riedemann proposed openstack/nova master: WIP: Alternative fix to bug 1790204 (does not work yet)  https://review.openstack.org/64595422:48
openstackbug 1790204 in OpenStack Compute (nova) "Allocations are "doubled up" on same host resize even though there is only 1 server on the host" [High,Triaged] https://launchpad.net/bugs/179020422:48
mriedemfried_rice: i tried my alternative and it just blasted me right in my stupid face ^22:48
mriedemdansmith: oh yeah teehee https://review.openstack.org/#/c/645954/1/nova/compute/api.py22:49
mriedemi think it's time for me to start working on a project where i'm just doing application development on top of something else where the shit can roll downhill rather than breaking my head working on essentially an operating system22:54
*** ivve has joined #openstack-nova22:58
*** tosky has quit IRC23:01
*** mriedem has quit IRC23:04
*** luksky has quit IRC23:20
*** tbachman has joined #openstack-nova23:24
*** tbachman has quit IRC23:24
*** tbachman has joined #openstack-nova23:25
*** igordc has quit IRC23:27
*** igordc has joined #openstack-nova23:27
*** tssurya has quit IRC23:29
*** igordc has quit IRC23:33
*** N3l1x has quit IRC23:34
