Wednesday, 2019-06-19

*** takashin has joined #openstack-placement  [00:59]
*** mriedem_away has quit IRC  [01:01]
*** tetsuro has joined #openstack-placement  [01:10]
*** tetsuro has quit IRC  [01:48]
*** tetsuro has joined #openstack-placement  [01:52]
*** tetsuro has quit IRC  [02:47]
*** tetsuro has joined #openstack-placement  [03:29]
*** tetsuro has quit IRC  [03:34]
<zzzeek> efried: "Batches" mean you run several SELECT queries, or whatever it is you're running, with a portion of the IN list in each one  [04:50]
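A minimal sketch of the batching zzzeek describes, assuming SQLAlchemy Core; the table, column, and batch size are illustrative, not the actual placement query:

    def rows_in_batches(conn, table, column, values, batch_size=500):
        # Run one SELECT per chunk of the IN list, rather than a single
        # query with an enormous IN (...) clause.
        for i in range(0, len(values), batch_size):
            chunk = values[i:i + batch_size]
            query = table.select().where(column.in_(chunk))
            for row in conn.execute(query):
                yield row

    # Usage sketch (names hypothetical):
    #   for row in rows_in_batches(conn, allocations,
    #                              allocations.c.consumer_id, consumer_uuids):
    #       ...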
*** tetsuro has joined #openstack-placement  [05:07]
*** e0ne has joined #openstack-placement  [05:15]
*** e0ne has quit IRC  [05:20]
*** dklyle_ has joined #openstack-placement  [05:24]
*** david-lyle has quit IRC  [05:27]
*** purplerbot has quit IRC  [05:27]
*** purplerbot has joined #openstack-placement  [05:28]
*** alex_xu has quit IRC  [05:30]
*** irclogbot_1 has quit IRC  [05:30]
*** irclogbot_0 has joined #openstack-placement  [05:31]
*** alex_xu has joined #openstack-placement  [05:33]
*** yikun has joined #openstack-placement  [06:08]
*** tetsuro has quit IRC  [06:10]
*** belmoreira has joined #openstack-placement  [06:40]
*** tetsuro has joined #openstack-placement  [07:33]
*** tssurya has joined #openstack-placement  [07:38]
*** helenafm has joined #openstack-placement  [07:38]
*** belmoreira has quit IRC  [07:38]
*** tetsuro has quit IRC  [07:38]
*** ttsiouts has joined #openstack-placement  [07:42]
*** belmoreira has joined #openstack-placement  [07:44]
*** takashin has left #openstack-placement  [08:00]
*** ttsiouts has quit IRC  [08:06]
*** ttsiouts has joined #openstack-placement  [08:07]
*** ttsiouts has quit IRC  [08:11]
*** ttsiouts has joined #openstack-placement  [08:16]
*** e0ne has joined #openstack-placement  [08:34]
*** tetsuro has joined #openstack-placement  [08:55]
*** tetsuro has quit IRC  [09:11]
*** cdent has joined #openstack-placement  [09:15]
*** belmoreira has quit IRC  [09:25]
<stephenfin> What version was the first one that placement was required in? Newton or Ocata?  [09:37]
<openstackgerrit> Chris Dent proposed openstack/placement master: Nested provider performance testing  https://review.opendev.org/665695  [09:46]
<cdent> stephenfin: i'm pretty sure ocata was where it was required, newton optional  [09:48]
<cdent> i'd need to dig to confirm that though  [09:49]
<stephenfin> cdent: That's what the Queens docs were alluding to but I wasn't sure myself https://docs.openstack.org/nova/queens/user/placement.html  [09:49]
<stephenfin> "The placement-api service must be deployed at some point after you have upgraded to the 14.0.0 Newton release but before you can upgrade to the 15.0.0 Ocata release."  [09:50]
<cdent> stephenfin: sorry got distracted. yeah, that corresponds with my memory  [10:14]
<cdent> gibi, efried: a simple case of over logging: https://storyboard.openstack.org/#!/story/2005918 ?  [10:14]
<gibi> cdent: is this log simply says home many canadidate is found per RP tree per request?  [10:24]
<gibi> s/home/how/  [10:25]
<gibi> I guess the per RP tree part makes it noisy  [10:25]
<gibi> do we have a log that states how many candidates are found per request in total?  [10:25]
<gibi> I guess that would be enough  [10:26]
<cdent> gibi: I think it is simply saying "for root provider X, there are N allocation requests"  [10:26]
<cdent> there are more informative logs nearby about the entire result set  [10:27]
<gibi> cdent: then I think we can drop this log  [10:28]
<cdent> this one doesn't really provide enough info to distinguish one AR under the same tree from another  [10:28]
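A hedged sketch of the direction gibi suggests, with made-up names (the real removal is the review cdent posts below): keep one summary line per request rather than a DEBUG line per root provider.

    import logging

    LOG = logging.getLogger(__name__)

    def log_candidate_summary(alloc_requests_by_root):
        # alloc_requests_by_root: {root provider uuid: [allocation requests]}
        # One line per request is plenty; a DEBUG line for every root
        # provider in the result set mostly adds noise.
        total = sum(len(reqs) for reqs in alloc_requests_by_root.values())
        LOG.debug("found %d allocation requests across %d provider trees",
                  total, len(alloc_requests_by_root))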
*** ttsiouts has quit IRC  [10:47]
*** ttsiouts has joined #openstack-placement  [10:48]
*** ttsiouts has quit IRC  [10:52]
*** ttsiouts has joined #openstack-placement  [11:32]
*** ttsiouts has quit IRC  [13:00]
*** ttsiouts has joined #openstack-placement  [13:01]
<openstackgerrit> Chris Dent proposed openstack/placement master: Remove overly-verbose allocation request log  https://review.opendev.org/666281  [13:03]
*** ttsiouts has quit IRC  [13:05]
<bauzas> do folks remember if we had some problems by Queens about zombie allocations?  [13:11]
*** mriedem has joined #openstack-placement  [13:11]
<bauzas> I remember we stopped using _heal_allocations IIRC  [13:11]
<bauzas> but when? I don't remember  [13:11]
<cdent> bauzas: there have been multiple buglets throughout various parts of nova which have led to different styles of orphaned allocations. many of them have been fixed by mriedem, but he keeps finding more, i'm not sure of the timing on when they've all happened  [13:12]
<cdent> gibi has also done some related work  [13:12]
<bauzas> :/  [13:12]
<gibi> I haven't really worked with zombie allocations  [13:13]
<cdent> I think I'm thinking of healing  [13:13]
<gibi> yeah  [13:13]
<bauzas> s/zombie/orphaned if you prefer  [13:13]
<gibi> I'm still working on healing missing port allocations  [13:13]
<bauzas> I have an internal bug about it  [13:14]
<cdent> i've got to leave but will be back soonish  [13:15]
*** ttsiouts has joined #openstack-placement  [13:18]
*** cdent has quit IRC  [13:21]
*** tssurya has quit IRC  [14:06]
*** cdent has joined #openstack-placement  [14:11]
<bauzas> folks, back with my issue but with a bit more detail: do we know if we delete consumer records when the corresponding instances are deleted?  [14:26]
<bauzas> I'd dare say no  [14:26]
<edleafe> bauzas: https://github.com/openstack/placement/blob/master/placement/objects/consumer.py#L68  [14:30]
<bauzas> thanks  [14:30]
<gibi> bauzas: do you see empty consumer records? or do those consumers consume some resources?  [14:31]
<bauzas> gibi: actually, the problem is more than that: my customer sees some orphaned allocations after upgrading from Ocata to Queens  [14:32]
<bauzas> so I guess it's normal that the consumers table isn't purged  [14:32]
<bauzas> the root cause being the allocations themselves  [14:32]
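As a hedged aside for anyone checking this themselves: GET /allocations/{consumer_uuid} in the placement API returns an empty allocations object once a consumer no longer holds anything, so querying a deleted instance's UUID shows whether its allocations are really gone. The endpoint and token below are placeholders:

    import requests

    PLACEMENT = "http://placement.example.com/placement"  # placeholder
    HEADERS = {"X-Auth-Token": "<token>"}                  # placeholder

    def consumer_allocations(consumer_uuid):
        # Returns {} when the consumer (e.g. a deleted instance) no longer
        # holds any allocations in placement.
        resp = requests.get("%s/allocations/%s" % (PLACEMENT, consumer_uuid),
                            headers=HEADERS)
        resp.raise_for_status()
        return resp.json()["allocations"]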
<cdent> bauzas: can you copy your internal bug upstream (to nova, not placement, probably)? It's hard to get a good sense of what the problem is without something to read  [14:34]
<bauzas> I'll first ask for more details  [14:34]
<bauzas> from my customer  [14:34]
<bauzas> FWIW, the bug is https://bugzilla.redhat.com/show_bug.cgi?id=1721068 but I'm triaging it down to something just about allocation records not being deleted  [14:35]
<openstack> bugzilla.redhat.com bug 1721068 in openstack-nova "allocations database is not properly cleaned" [Unspecified,New] - Assigned to nova-maint  [14:35]
<bauzas> the rest isn't a real problem  [14:35]
<cdent> bauzas: the fact that that mentions migrations and upgrades suggests some of the issues that mriedem has fixed are involved here  [14:37]
<bauzas> I don't have the whole knowledge of all mriedem's fixes  [14:38]
<cdent> nor do I  [14:39]
<mriedem> nor I  [14:39]
<bauzas> \o/  [14:40]
<bauzas> ... and that's a Queens customer  [14:41]
<bauzas> but heh, will do what I can  [14:41]
*** dklyle_ has quit IRC  [14:42]
<cdent> bauzas: if you've got unconfirmed migrations across an upgrade, that is likely a factor  [14:42]
*** dklyle has joined #openstack-placement  [14:42]
<cdent> in which case, since you're back in queens, a manual cleanup of the database may be the way to go  [14:43]
<cdent> but I'm totally guessing. placement will do what nova asks. If nova forgets to ask...  [14:43]
<bauzas> I wonder if that could be somehow related to API microversion 1.8  [14:43]
<mriedem> if someone wants to summarize that bug for me then maybe i know the issue, but i don't feel like digging into reading that right now  [14:43]
<bauzas> mriedem: don't worry, I'll try to dig into it first and make a better statement if I need your help  [14:44]
<mriedem> the best way i've found to debug anything related to nova + placement is to write a functional test to recreate the scenario before trying to wrap my head around what the fix needs to be  [14:44]
<mriedem> i.e. https://review.opendev.org/#/c/663737/  [14:44]
<bauzas> I think I may have a possible reason  [14:45]
<bauzas> if you upgrade from Ocata to Queens by some kind of FFU  [14:45]
<bauzas> then your allocation records don't have project_id/user_id  [14:46]
<bauzas> since https://docs.openstack.org/placement/latest/placement-api-microversion-history.html#require-placement-project-id-user-id-in-put-allocations is only for Pike  [14:46]
<bauzas> in this case, you could hit the problem that was fixed by https://review.opendev.org/#/c/574488/ but only for Rocky  [14:47]
<bauzas> cdent: mriedem: am I correct with the assumption that we don't heal allocations in Queens if they don't have user_id/project_id?  [14:48]
<bauzas> since https://review.opendev.org/#/c/574488/ isn't backported to Queens and Pike?  [14:48]
<mriedem> well you're correct that https://review.opendev.org/#/c/574488/ isn't backported to queens  [14:50]
<mriedem> and the commit message implies that people might backport it, "Note that we should be using Placement API version 1.28 with consumer_generation when updating the allocations, but since people might backport this change the usage of consumer generations is left for a follow up patch."  [14:50]
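For reference, the 1.8 requirement bauzas mentions is about the PUT /allocations/{consumer_uuid} payload: from that microversion on, the body must carry project_id and user_id, which Ocata-era writers never sent. A hedged illustration with invented values (shown in the 1.12+ dict form; 1.8 itself used a list of resource_provider entries, with the same project_id/user_id requirement):

    # Illustrative only -- UUIDs and amounts are invented.
    payload = {
        "allocations": {
            "<resource provider uuid>": {
                "resources": {"VCPU": 1, "MEMORY_MB": 512, "DISK_GB": 20},
            },
        },
        # Mandatory since placement microversion 1.8; allocations written
        # before Pike carry no such data, hence the incomplete consumers.
        "project_id": "<project uuid>",
        "user_id": "<user uuid>",
    }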
<mriedem> there was an online data migration for the consumer stuff when it was added though - why wasn't that run as part of the FFU?  [14:51]
<bauzas> good call, I can ask  [14:51]
<bauzas> the customer said it did  [14:51]
<bauzas> but I'll ask them to show the records  [14:51]
<bauzas> anyway, I got a clue, thanks  [14:52]
<mriedem> https://review.opendev.org/#/c/567678/12/nova/cmd/manage.py  [14:52]
<mriedem> this is all rocky code...  [14:52]
<mriedem> so i'm not sure why it would be a problem in queens  [14:52]
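One hedged way to check whether that online data migration actually ran is to look for allocations that have no matching consumer record. This sketch assumes the Queens-era nova_api schema, where allocations.consumer_id stores the consumer UUID and consumers.uuid maps it to a project/user; verify table and column names against the real database before relying on it:

    import sqlalchemy as sa

    # Placeholder connection string.
    engine = sa.create_engine("mysql+pymysql://nova:<password>@dbhost/nova_api")

    INCOMPLETE_CONSUMERS = sa.text("""
        SELECT a.consumer_id, COUNT(*) AS num_allocations
          FROM allocations a
     LEFT JOIN consumers c ON c.uuid = a.consumer_id
         WHERE c.uuid IS NULL
      GROUP BY a.consumer_id
    """)

    with engine.connect() as conn:
        for consumer_id, num_allocations in conn.execute(INCOMPLETE_CONSUMERS):
            print(consumer_id, num_allocations)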
*** belmoreira has joined #openstack-placement  [14:53]
<bauzas> we require project_id and user_id to be part of the allocation in Pike  [14:53]
<mriedem> in that bz, the request spec cleanup during archive was something that's been addressed for deleted instances, but probably isn't in queens  [14:53]
<mriedem> https://review.opendev.org/#/q/I483701a55576c245d091ff086b32081b392f746e i guess it is  [14:54]
<bauzas> mriedem: yup, I already found it, and I have a comment for it  [14:55]
<bauzas> and nope, that was merged in Pike, so Queens got it  [14:55]
<mriedem> as the bz says, migrations could fail and nova could fail to clean up the allocations held by the migration consumer, ala https://review.opendev.org/#/c/661349/  [14:56]
<mriedem> which is a backport that recently merged in queens but isn't released yet  [14:56]
<mriedem> the consumer records should be automatically deleted in placement when their last remaining allocation is deleted,  [14:57]
<mriedem> so the thing to probably figure out / confirm is whether the leftover consumers are tied to allocations for migration records, rather than instances  [14:57]
<mriedem> b/c we don't do such a great job of cleaning up migration-held allocations on failure  [14:58]
<mriedem> nor do we have tooling/periodics that scan for those and clean them up  [14:58]
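In the absence of such tooling, a hedged sketch of the scan mriedem describes: walk every resource provider, collect the consumer UUIDs holding allocations against it, then cross-check those UUIDs against nova's instances and migrations (the cross-check is left as a comment since it depends on the deployment). The endpoint and token are placeholders; the two GET calls are standard placement API:

    import requests

    PLACEMENT = "http://placement.example.com/placement"  # placeholder
    HEADERS = {"X-Auth-Token": "<token>"}                  # placeholder

    def consumers_by_provider():
        # {provider name: sorted consumer uuids holding allocations there}.
        # Any consumer UUID that is neither a live instance nor an
        # in-progress migration in nova is a candidate orphan.
        rps = requests.get(PLACEMENT + "/resource_providers",
                           headers=HEADERS).json()["resource_providers"]
        result = {}
        for rp in rps:
            url = "%s/resource_providers/%s/allocations" % (PLACEMENT, rp["uuid"])
            allocs = requests.get(url, headers=HEADERS).json()["allocations"]
            result[rp["name"]] = sorted(allocs)
        return result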
<bauzas> mriedem: the consumer records are only deleted when allocations are gone as of Rocky, not Queens Ic2b82146d28be64b363b0b8e2e8d180b515bc0a0  [14:58]
<bauzas> oops  [14:58]
<bauzas> mriedem: https://review.opendev.org/#/c/581086/  [14:58]
<bauzas> so we could backport this one  [14:58]
<mriedem> i guess, that's not going to help the customer that already has a bunch of stale consumer records  [15:00]
<bauzas> yeah they'll need to purge those anyway  [15:00]
<mriedem> and hopefully not f up and delete consumers that have allocations  [15:00]
<bauzas> or if we backport the change, the table will heal at least for allocations that are removed after*  [15:01]
*** Sundar has joined #openstack-placement  [15:24]
*** helenafm has quit IRC  [15:52]
*** belmoreira has quit IRC  [15:57]
*** ttsiouts has quit IRC  [16:02]
*** ttsiouts has joined #openstack-placement  [16:03]
*** ttsiouts has quit IRC  [16:07]
<openstackgerrit> jacky06 proposed openstack/os-traits master: Sync Sphinx requirement  https://review.opendev.org/666386  [16:32]
<cdent> efried, mriedem: Would be great to get this new nested-perfload to some form of "useful" before we get too much further with the nested magic work. it's pretty close now, but some feedback needed. I left some prompts on the review. https://review.opendev.org/#/c/665695/  [16:52]
<openstackgerrit> Chris Dent proposed openstack/placement master: DNM: See what happens with 10000 resource providers  https://review.opendev.org/657423  [16:59]
<cdent> mriedem: you might like this one (or at least feel vaguely qualified to care): https://review.opendev.org/663945  [17:01]
<efried> cdent: ack  [17:03]
*** e0ne has quit IRC  [17:20]
*** cdent has quit IRC  [17:22]
<openstackgerrit> Colleen Murphy proposed openstack/placement master: Update SUSE install documentation  https://review.opendev.org/666408  [17:28]
*** melwitt is now known as jgwentworth  [17:55]
<efried> aspiers: As a suse-er, do you have the ability to validate ^ ?  [18:03]
<aspiers> yes  [18:04]
<efried> thanks  [18:08]
<openstackgerrit> Colleen Murphy proposed openstack/placement master: Update SUSE install documentation  https://review.opendev.org/666408  [18:09]
*** e0ne has joined #openstack-placement  [18:35]
*** e0ne has quit IRC  [18:36]
*** e0ne has joined #openstack-placement  [19:05]
*** jgwentworth is now known as melwitt  [19:52]
*** artom has quit IRC  [20:02]
*** e0ne has quit IRC  [20:11]
*** belmoreira has joined #openstack-placement  [20:20]
*** e0ne has joined #openstack-placement  [20:55]
*** e0ne has quit IRC  [20:59]
*** e0ne has joined #openstack-placement  [21:00]
*** e0ne has quit IRC  [21:39]
*** mriedem has quit IRC  [21:59]
*** belmoreira has quit IRC  [22:13]
*** belmoreira has joined #openstack-placement  [22:16]
<openstackgerrit> Eric Fried proposed openstack/placement master: Add a test for granular member_of flowing down  https://review.opendev.org/666460  [22:37]
*** Sundar has quit IRC  [22:44]
*** belmoreira has quit IRC  [22:45]
<openstackgerrit> Eric Fried proposed openstack/placement master: Spec for nested magic 1  https://review.opendev.org/662191  [23:08]
<openstackgerrit> Eric Fried proposed openstack/placement master: Microversion 1.35: root_required  https://review.opendev.org/665492  [23:27]
<openstackgerrit> Eric Fried proposed openstack/placement master: Miscellaneous doc/comment/log cleanups  https://review.opendev.org/665691  [23:27]
