Friday, 2019-01-25

*** efried_2dmtg has quit IRC  00:35
*** bhagyashris has joined #openstack-placement  02:20
*** bhagyashris has quit IRC  02:57
*** bhagyashris has joined #openstack-placement  04:06
<openstackgerrit> Tetsuro Nakamura proposed openstack/placement master: Use local config for placement-status CLI  https://review.openstack.org/632600  05:57
<openstackgerrit> Tetsuro Nakamura proposed openstack/placement master: Add upgrade status check for missing root ids  https://review.openstack.org/632599  05:57
*** takashin has left #openstack-placement  06:45
*** e0ne has joined #openstack-placement  07:38
*** helenaAM has joined #openstack-placement  08:12
*** s10 has joined #openstack-placement  08:50
*** rubasov has quit IRC  09:06
*** rubasov has joined #openstack-placement  09:06
*** bhagyashris has quit IRC  09:57
*** s10 has quit IRC  10:11
*** gibi has quit IRC  10:54
*** gibi has joined #openstack-placement  10:54
*** e0ne has quit IRC  11:59
*** ioni has joined #openstack-placement  12:15
<ioni> hello guys  12:15
<ioni> i was redirected from #openstack-nova to this channel  12:15
<ioni> i have these warnings on a couple of compute nodes: https://paste.xinu.at/ecVs8/  12:15
<ioni> I was wondering what's the right way to fix deallocated resources in openstack queens  12:16
*** cdent has joined #openstack-placement  12:19
*** tssurya has joined #openstack-placement  12:30
*** e0ne has joined #openstack-placement  12:41
*** avolkov has joined #openstack-placement  13:02
*** mriedem has joined #openstack-placement  13:12
*** cdent has quit IRC  13:14
<jaypipes> ioni: good morning  13:19
<ioni> jaypipes, hello  13:20
<jaypipes> ioni: are you seeing this in your nova-compute logs on restart of the nova-compute service?  13:20
<ioni> jaypipes, yes  13:20
<jaypipes> ioni: and the nova-compute you are seeing it on was the source host of an instance that was *live* migrated to another host?  13:21
*** cdent has joined #openstack-placement  13:21
<ioni> jaypipes, or migrations that failed at the end because the port could not be allocated to the host, or other various problems that I fixed after that  13:21
<ioni> jaypipes, i don't usually use live migration  13:22
<jaypipes> ioni: ah, ok, that's the issue then.  13:22
<jaypipes> ioni: can you please provide me the UUIDs of the compute nodes for source and destination host as well as the instance's UUID? I will then give you some SQL statements to run.  13:22
<ioni> jaypipes, is the uuid of the compute node the ID from openstack hypervisor list, or the host id?  13:23
<jaypipes> ioni: if the hypervisor list ID field is a UUID-looking thing, then yes. :)  13:25
<ioni> | a921d193-74d3-4b81-b473-6705c955e15c | cloudbox11 |  13:25
<ioni> | f6ac25cb-5ce9-4b91-a559-aa0a8d976255 | cloudbox13 |  13:25
<jaypipes> ioni: yeah, that's it.  13:25
<jaypipes> k. and 11 is the destination, 13 is the source?  13:25
<ioni> so the first one is the host with the warning  13:25
<ioni> 13 is the destination  13:25
<jaypipes> ah, yes  13:26
<jaypipes> and f5d12428-555f-4b5d-a555-60d36b85d73a is the instance UUID.  13:26
<ioni> indeed  13:26
<jaypipes> ok, hold please while I generate some SQL for you :)  13:26
<ioni> ok, i'll figure out what to do for each host in particular  13:26
<jaypipes> ioni: question for you...  13:27
<jaypipes> ioni: when you said "migrations that failed" and "other various problems that I fixed after that", can you elaborate a bit on what those other problems were? a failed migration should really be cleaning up any placement records that might have been changed during migration..  13:28
<ioni> jaypipes, so, one way for a migration to fail is to have an instance that has a port with a subnet with a service-type like compute:foo  13:28
<ioni> because i didn't find a way to set up neutron to not schedule from a specific subnet  13:29
<ioni> so when i migrate that instance, it will fail to finish the migration, because the port cannot be moved  13:29
<ioni> i simply reset the state from error and start the instance  13:29
<ioni> and it works  13:29
<ioni> now it's on the new node  13:30
<ioni> i don't remember another case of failing  13:30
<ioni> there was a case where nova simply returned that it doesn't have permission to the image (i have images that are marked as deactivated)  13:31
<jaypipes> ioni: how many instances are we talking about here?  13:31
<ioni> i have like 20 nodes and a couple have warnings  13:32
<ioni> not all of them  13:32
<ioni> but i can manage them manually  13:32
<jaypipes> ioni: can you tell me what the results of the following SQL statement are? (execute this against the nova_api database please): SELECT a.* FROM allocations AS a JOIN resource_providers AS rp ON a.resource_provider_id = rp.id WHERE rp.uuid = 'a921d193-74d3-4b81-b473-6705c955e15c'\G  13:34
<jaypipes> ioni: sorry, hold up :)  13:35
<ioni> ok, i'll hold  13:35
<ioni> i was about to paste the result  13:35
<jaypipes> SELECT a.* FROM allocations AS a JOIN consumers AS c ON a.consumer_id = c.id JOIN resource_providers AS rp ON a.resource_provider_id = rp.id WHERE rp.uuid = 'a921d193-74d3-4b81-b473-6705c955e15c' AND c.uuid = 'f5d12428-555f-4b5d-a555-60d36b85d73a'\G  13:36
<jaypipes> ioni: ^  13:36
<ioni> Empty set, 84 warnings (0.00 sec)  13:36
<ioni> let me check to see clearly if it was cloudbox11  13:37
<jaypipes> ioni: can you pastebin the results of the first query pls?  13:37
* jaypipes wonders if we remove the "-" from UUIDs....  13:37
<ioni> jaypipes, https://paste.xinu.at/hjBZex/  13:37
<ioni> yes, it's cloudbox11 with destination cloudbox13  13:38
<jaypipes> oh, duh, yeah, this is queens...  13:38
<ioni> yep  13:38
<jaypipes> we don't have the consumers table populated yet.  13:38
<jaypipes> OK, hold a minute. more SQL coming your way...  13:38
<ioni> just updated from pike to queens  13:39
<ioni> soon to rocky  13:39
<ioni> but wanted to make sure that everything is working fine  13:39
<jaypipes> ioni: SELECT a.* FROM allocations AS a  13:41
<jaypipes> JOIN resource_providers AS rp ON a.resource_provider_id = rp.id  13:41
<jaypipes> WHERE rp.uuid = 'a921d193-74d3-4b81-b473-6705c955e15c'  13:41
<jaypipes> AND a.consumer_id = 'f5d12428-555f-4b5d-a555-60d36b85d73a';  13:41
<ioni> jaypipes, https://paste.xinu.at/YbNiJ/  13:42
<jaypipes> ioni: and then please execute this (just want to verify the allocations don't exist on the destination before we update anything...)  13:43
<jaypipes> SELECT a.* FROM allocations AS a  13:43
<jaypipes> JOIN resource_providers AS rp ON a.resource_provider_id = rp.id  13:43
<jaypipes> WHERE rp.uuid = 'f6ac25cb-5ce9-4b91-a559-aa0a8d976255'  13:43
<jaypipes> AND a.consumer_id = 'f5d12428-555f-4b5d-a555-60d36b85d73a';  13:43
<ioni> yeah, i figured out that you wanted that  13:43
<ioni> it's allocated to cloudbox13  13:43
<ioni> https://paste.xinu.at/5SQ/  13:43
<ioni> so now i have to delete 4567-4569  13:44
<ioni> there was a resize involved, that's why it has more ram and disk  13:44
<ioni> on resize, most of the time, nova wants to migrate  13:44
<jaypipes> ioni: yes. I just needed to make sure it was safe to do that. so, you can execute this now: DELETE FROM allocations WHERE resource_provider_id = 8 AND consumer_id = 'f5d12428-555f-4b5d-a555-60d36b85d73a';  13:45
<jaypipes> ioni: for the other instance that was affected by the failed migration, perform the same steps. just make sure you don't DELETE before checking that the allocations table contains records for both the source and destination resource provider :)  13:46
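
For reference, the per-instance cleanup described above can be condensed into the following sketch against the Queens-era nova_api schema (where allocations.consumer_id holds the instance UUID directly). The angle-bracketed values are placeholders, not values from this log: fill in the source provider UUID, destination provider UUID, and instance UUID for each affected instance, and only run the DELETE once the first query shows the stale rows and the second shows the instance's live allocations on the destination.

    -- 1) stale allocations left behind on the source provider
    SELECT a.* FROM allocations AS a
    JOIN resource_providers AS rp ON a.resource_provider_id = rp.id
    WHERE rp.uuid = '<source-provider-uuid>'
      AND a.consumer_id = '<instance-uuid>';

    -- 2) the instance's current allocations on the destination provider
    SELECT a.* FROM allocations AS a
    JOIN resource_providers AS rp ON a.resource_provider_id = rp.id
    WHERE rp.uuid = '<destination-provider-uuid>'
      AND a.consumer_id = '<instance-uuid>';

    -- 3) only after both checks look right, delete the stale source rows
    DELETE FROM allocations
    WHERE resource_provider_id = <source-provider-internal-id>
      AND consumer_id = '<instance-uuid>';
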
<ioni> jaypipes, is it not safe to use the id?  13:47
<ioni> delete from allocations where id='4567' ?  13:47
<jaypipes> ioni: oh, yes, you can do that too.  13:47
<jaypipes> DELETE FROM allocations WHERE id BETWEEN 4567 AND 4569;  13:48
<ioni> jaypipes, because i don't know what the resource_provider_id is :D  13:48
<ioni> it doesn't seem to be unique  13:48
<jaypipes> ioni: that's the source host provider's internal ID.  13:48
<jaypipes> ioni: my original DELETE expression is saying "delete the allocation for this particular instance on the source host"  13:49
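
If it is unclear which host an allocations.resource_provider_id points at, a quick lookup against resource_providers maps the internal id back to the hypervisor; a minimal sketch, assuming the same nova_api schema as above:

    -- map an internal provider id back to its uuid and name
    -- (8 is the source provider's internal id in this conversation)
    SELECT id, uuid, name FROM resource_providers WHERE id = 8;
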
<ioni> jaypipes, cool. thanks for the hints  13:49
<ioni> i can now resolve my issue  13:49
<jaypipes> ioni: glad to be of assistance. let us know if you need any further help.  13:49
<ioni> i'll lurk around if you don't mind  13:50
<jaypipes> not a problem :) always happy to have folks lurk!  13:50
*** jaypipes is now known as leakypipes  13:50
*** e0ne has quit IRC  14:50
*** cdent has quit IRC  15:00
*** cdent has joined #openstack-placement  15:37
*** e0ne has joined #openstack-placement  15:55
*** efried has joined #openstack-placement  16:02
*** efried is now known as efried_mtg  16:05
*** e0ne has quit IRC  16:39
*** tssurya has quit IRC  16:43
*** helenaAM has quit IRC  17:04
*** mriedem is now known as mriedem_afk  17:10
<melwitt> is the placement api-ref being published in a new place? https://developer.openstack.org/api-ref/placement/ doesn't have the aggregates API, for example  18:03
<melwitt> oh, nevermind, it does  18:04
<cdent> melwitt: the ordering is perhaps a bit unintuitive  18:17
<melwitt> I was looking for something that said Aggregates but it's Resource provider aggregates. it's just me  18:19
<melwitt> I was too hasty  18:20
*** mriedem_afk is now known as mriedem  18:41
*** dklyle has joined #openstack-placement  19:17
*** cdent has quit IRC  19:24
*** e0ne has joined #openstack-placement  19:26
*** dklyle has quit IRC  19:51
*** avolkov has quit IRC  20:07
*** dklyle has joined #openstack-placement  20:13
*** e0ne has quit IRC  20:18
*** efried_mtg has quit IRC  20:31
*** dklyle has quit IRC  20:55
*** dklyle has joined #openstack-placement  20:59
*** dklyle has quit IRC  21:13
*** dklyle has joined #openstack-placement  23:13
*** dklyle has quit IRC  23:20
*** efried has joined #openstack-placement  23:23
