Friday, 2019-05-17

03:08 *** ttsiouts has joined #openstack-placement
03:41 *** ttsiouts has quit IRC
04:55 *** bhagyashris has joined #openstack-placement
05:39 *** ttsiouts has joined #openstack-placement
06:12 *** ttsiouts has quit IRC
06:32 *** belmoreira has joined #openstack-placement
06:56 *** belmoreira has quit IRC
07:50 *** mcgigglier has joined #openstack-placement
07:53 *** belmoreira has joined #openstack-placement
07:57 *** belmoreira has quit IRC
08:01 *** ttsiouts has joined #openstack-placement
08:05 *** bauzas has left #openstack-placement
08:06 *** bauzas has joined #openstack-placement
08:13 *** belmoreira has joined #openstack-placement
08:38 *** tssurya has joined #openstack-placement
09:04 *** ttsiouts has quit IRC
09:06 <openstackgerrit> Qiu Fossen proposed openstack/placement master: Cap sphinx for py2 to match global requirements  https://review.opendev.org/659762
09:08 *** belmoreira has quit IRC
09:10 *** belmoreira has joined #openstack-placement
09:17 *** e0ne has joined #openstack-placement
09:36 *** bhagyashris has quit IRC
09:46 <openstackgerrit> Merged openstack/placement master: Cap sphinx for py2 to match global requirements  https://review.opendev.org/659762
09:47 *** belmoreira has quit IRC
09:51 *** belmoreira has joined #openstack-placement
10:00 *** belmoreira has quit IRC
10:21 *** belmoreira has joined #openstack-placement
10:30 *** cdent has joined #openstack-placement
10:49 *** belmoreira has quit IRC
11:01 *** ttsiouts has joined #openstack-placement
11:34 *** ttsiouts has quit IRC
12:48 *** cdent has quit IRC
12:50 *** dklyle has joined #openstack-placement
13:01 *** cdent has joined #openstack-placement
13:05 *** alex_xu has joined #openstack-placement
13:20 *** belmoreira has joined #openstack-placement
13:21 *** mriedem has joined #openstack-placement
14:54 *** belmoreira has quit IRC
14:58 *** belmoreira has joined #openstack-placement
15:05 *** mcgigglier has quit IRC
15:36 *** tssurya has quit IRC
15:38 <cdent> latest pupdate: https://anticdent.org/placement-update-19-19.html
15:49 <stephenfin> Is it possible to list all allocations against a given resource provider via osc-placement? I was expecting to see an 'openstack resource provider allocation list' command but there doesn't seem to be one
15:51 <stephenfin> Also, calling 'openstack resource provider allocation show' with an invalid UUID doesn't result in any error output, which is annoying (I used a compute node UUID instead of an instance UUID, by mistake)
15:54 <efried> stephenfin: Were you expecting that to be a 404?
15:55 <stephenfin> Yeah
15:55 <efried> stephenfin: At what microversion?
15:55 <stephenfin> I mean, there's no allocation corresponding to that UUID, right?
15:55 <efried> Yeah, but `ls` of an empty directory isn't an error
15:56 <efried> `ls` of a *nonexistent* directory is an error
15:56 <stephenfin> Um, whatever Devstack has given me. I'm not providing the --os-placement-api-version param
15:56 <efried> but the tracking of consumers is...
15:56 <stephenfin> Why would the allocation exist?
15:56 <efried> okay, iirc osc-placement is going to default to 1.0, where we didn't track consumers at all yet
15:56 <efried> not the allocation. The consumer.
15:57 <stephenfin> Oh, yeah
15:57 <efried> Before we tracked consumers separately (would have to look up when that happened) the only cue to their existence was allocations existing.
15:57 <efried> But just because there's no allocation doesn't mean the consumer doesn't exist
15:57 <cdent> there is no difference between an identified allocation and a consumer. they are exactly the same thing
15:58 <stephenfin> $ openstack --os-placement-api-version 1.17 resource provider allocation show foobar
15:58 <stephenfin>
15:58 <stephenfin> ---
15:58 <stephenfin> that seems wrong
15:58 <cdent> when a group of allocations goes away, the consumer goes away
15:58 <cdent> stephenfin: I agree with you that it is confusing
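
The empty output matches the API's behaviour: GET /allocations/{consumer_uuid} returns 200 with an empty allocations object for a consumer placement has never heard of, rather than a 404. A minimal sketch of checking that directly, assuming hypothetical $TOKEN and $PLACEMENT environment variables for the auth token and placement endpoint:

    # Ask placement for the allocations of an unknown consumer.
    # The response is 200 with an empty "allocations" map, not a 404,
    # which is why the CLI prints nothing for it.
    curl -s \
        -H "X-Auth-Token: $TOKEN" \
        -H "OpenStack-API-Version: placement 1.17" \
        "$PLACEMENT/allocations/foobar"
    # => {"allocations": {}}
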
15:58 <cdent> in fact the entire command is confusing
15:58 <stephenfin> Is there any way to fix it?
15:59 <cdent> because 'resource provider allocation' is really not at all the same as 'consumer allocation show'
15:59 <stephenfin> What I want is to find out all the things consuming resources from a provider
15:59 <stephenfin> i.e. all the instances using disk on a given node
15:59 <stephenfin> *e.g.
15:59 <cdent> that command, as far as I can tell, does not currently exist in osc-placement, but you're right that that name would have suggested it
15:59 <cdent> one moment, I'm looking at the code
16:00 <stephenfin> Is there a thing on the placement server side that would let us do that?
16:00 <stephenfin> This looks like what I want https://developer.openstack.org/api-ref/placement/?expanded=list-resource-provider-allocations-detail#resource-provider-allocations
16:01 <stephenfin> Any reason I couldn't hook that up to the 'openstack resource provider allocation list' command?
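
That endpoint, GET /resource_providers/{uuid}/allocations, lists every consumer's allocations against a single provider. A hedged sketch of calling it directly, reusing the assumed $TOKEN and $PLACEMENT variables with a placeholder provider UUID; the response values are illustrative only:

    # List all allocations made against one resource provider.
    curl -s \
        -H "X-Auth-Token: $TOKEN" \
        -H "OpenStack-API-Version: placement 1.17" \
        "$PLACEMENT/resource_providers/<provider_uuid>/allocations"
    # => {"allocations": {"<consumer_uuid>": {"resources": {"DISK_GB": 12}}},
    #     "resource_provider_generation": 5}
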
16:01 <cdent> I think it already is, I'm just looking to decode the code
16:01 <cdent> here we go: https://docs.openstack.org/osc-placement/latest/cli/index.html#resource-provider-show
16:02 <cdent> stephenfin: are you writing automation, or doing human exploring?
16:02 <stephenfin> Ah, I need the '--allocations' parameter
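
So the invocation being described would look something like this, with <provider_uuid> as a placeholder:

    # Show a resource provider together with all allocations against it.
    openstack --os-placement-api-version 1.17 \
        resource provider show --allocations <provider_uuid>
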
16:03 <cdent> yeah, not at all obvious. Which is how I feel about all of osc-placement, but I tend to think in terms of the API, not the grammar of osc
16:03 <stephenfin> cdent: Exploring. A customer has switched to OSP13 and is complaining that placement is rejecting their requests due to insufficient storage
16:03 <stephenfin> and I'm thinking that placement is counting stuff actually stored elsewhere (a Ceph cluster?) against the local compute node
16:03 <cdent> ah, yeah. that seems to happen quite a bit. support folk in vmware have resorted to making a db query tool that reports a summary of all resource providers, their inventory, their usage
16:04 <cdent> (which is not too overwhelming because there's usually only a few resource providers in a cluster setup)
16:04 <stephenfin> yeah, I think we need to integrate it into our sosreport tool if it's not there already
16:05 <stephenfin> Can you remind me: the oft-quoted "shared storage" problem would result in us thinking we have more storage than we actually do, right?
16:05 <cdent> I assume you've already determined that placement is returning 0 results to the scheduler and it's not something filtering them out
16:05 <cdent> yes, more than we actually do
16:06 <cdent> well
16:06 <stephenfin> Aye. I'm seeing "Over capacity for DISK_GB on resource provider a5c17d03-56fb-4078-a88e-1c1eaf2f561c. Needed: 12, Used: 832, Capacity: 837.0"
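
The check behind that message is plain arithmetic: a request is rejected when used + needed exceeds capacity, where capacity is derived from the provider's inventory as (total - reserved) * allocation_ratio. With the numbers from the log:

    used + needed = 832 + 12 = 844
    capacity      = 837.0
    844 > 837.0   ->  "Over capacity for DISK_GB"
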
16:07 <stephenfin> (I'd have closed with "this is working as expected" but the fact that this apparently worked before placement (Newton) prevents that :( Fun )
16:08 <cdent> the problem in the ceph situation is that everyone is consuming that disk but with no central accounting (which is why shared providers need to happen), so I would have thought that the problem is more likely workloads trying to place and then discovering that when placement thought there was space, there wasn't (because some other compute node ate it)
16:09 <cdent> Is there a chance the allocation ratios are wrong for some mounted disk? Such that the disk itself has plenty of space (because thin) but placement thinks it is consumed?
16:09 <stephenfin> Yeah, that was my understanding of the situation too. Thanks for confirming
16:10 <cdent> the situation you have seems to be placement thinks there's no space, but the customer thinks there is. is that right?
16:10 <stephenfin> Correct
16:10 <stephenfin> I thought of allocation_ratio and asked for logs, but alas they're controller logs so not much use to me. I'll have to go back and ask for compute node logs
16:11 <stephenfin> *disk_allocation_ratio
16:11 <stephenfin> The output of 'openstack resource provider inventory list' would be beneficial too
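
A sketch of what that output might look like; the column set follows the osc-placement docs, but the DISK_GB row is hypothetical, with numbers chosen so the derived capacity matches the 837.0 above ((930 - 93) * 1.0 = 837):

    $ openstack resource provider inventory list <provider_uuid>
    +----------------+------------------+----------+----------+----------+-----------+-------+
    | resource_class | allocation_ratio | min_unit | max_unit | reserved | step_size | total |
    +----------------+------------------+----------+----------+----------+-----------+-------+
    | DISK_GB        |              1.0 |        1 |      930 |       93 |         1 |   930 |
    +----------------+------------------+----------+----------+----------+-----------+-------+
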
16:11 <cdent> aye
16:11 *** belmoreira has quit IRC
16:11 <cdent> this back and forth stuff is so not fun
16:12 <stephenfin> Not at all
16:12 <cdent> I always want to be like: either grant me access to the machine and root right now, or go away
16:12 <stephenfin> But still, I'm generally happy to have osc-placement. It does make things a good deal easier than previously
16:12 <cdent> true
16:13 <stephenfin> When you can get access to the machine/can interact in real-time with someone at the machine
16:13 <cdent> anyway it is time for me to go home. let me know how it turns out. I'm curious to identify patterns on these sorts of things.
16:13 <cdent> bbl
16:13 *** cdent has quit IRC
16:13 <stephenfin> Will do o/
16:49 *** mriedem is now known as mriedem_away
17:17 *** e0ne has quit IRC
17:26 *** ttsiouts has joined #openstack-placement
18:23 *** mriedem_away is now known as mriedem
19:43 *** efried has quit IRC
19:43 *** efried has joined #openstack-placement
19:59 *** ttsiouts has quit IRC
20:15 *** ttsiouts has joined #openstack-placement
20:20 *** ttsiouts has quit IRC
20:51 *** ttsiouts has joined #openstack-placement
20:56 *** ttsiouts has quit IRC
21:28 *** ttsiouts has joined #openstack-placement
22:01 *** ttsiouts has quit IRC
22:20 *** mriedem has quit IRC
23:53 *** artom has quit IRC
23:59 *** ttsiouts has joined #openstack-placement
