Tuesday, 2018-09-04

*** mlavalle has quit IRC00:01
*** hamzy has quit IRC00:01
openstackgerritBrin Zhang proposed openstack/nova master: Need further updates, no need to review  https://review.openstack.org/59927600:06
*** hamzy has joined #openstack-nova00:14
prometheanfireany clue on this? https://gist.github.com/prometheanfire/d1d2d93d7b1c97e4186389b664301a8000:38
prometheanfirerocky nova-compute is not starting :|00:38
*** brinzhang has joined #openstack-nova00:39
*** brinzh has joined #openstack-nova00:39
*** gbarros has joined #openstack-nova00:44
*** hoangcx has joined #openstack-nova00:45
*** tbachman has quit IRC00:46
*** brinzh has quit IRC00:49
*** tbachman has joined #openstack-nova00:49
* prometheanfire gives up00:51
*** Dinesh_Bhor has quit IRC00:54
*** Nel1x has joined #openstack-nova00:57
*** hongbin has joined #openstack-nova00:58
*** med_ has quit IRC01:05
*** med_ has joined #openstack-nova01:06
openstackgerritZhenyu Zheng proposed openstack/nova-specs master: Make scheduling weight more granular  https://review.openstack.org/59930801:08
*** Dinesh_Bhor has joined #openstack-nova01:16
*** med_ has quit IRC01:21
*** rcernin has quit IRC01:24
*** rcernin has joined #openstack-nova01:24
*** bzhao__ has joined #openstack-nova01:27
*** fried_rice is now known as efried01:27
*** med_ has joined #openstack-nova01:27
*** erlon has quit IRC01:31
*** tetsuro has joined #openstack-nova01:47
*** tetsuro has quit IRC01:53
*** Dinesh_Bhor has quit IRC02:03
*** dave-mccowan has quit IRC02:13
*** Dinesh_Bhor has joined #openstack-nova02:29
*** erlon has joined #openstack-nova02:38
*** psachin has joined #openstack-nova02:42
openstackgerritNaichuan Sun proposed openstack/nova master: xenapi(N-R-P): support compute node resource provider update  https://review.openstack.org/52104102:47
*** jiapei has joined #openstack-nova02:55
openstackgerritfupingxie proposed openstack/nova master: Support list for alias in pci section in nova.conf  https://review.openstack.org/59224302:58
openstackgerritfupingxie proposed openstack/nova master: Add an example to add more pci devices in nova.conf  https://review.openstack.org/59224303:01
*** Nel1x has quit IRC03:14
*** erlon has quit IRC03:17
*** ykarel has joined #openstack-nova03:24
*** r-daneel has joined #openstack-nova03:34
*** ykarel has quit IRC03:37
*** ykarel has joined #openstack-nova03:51
*** Dinesh_Bhor has quit IRC03:59
*** Dinesh_Bhor has joined #openstack-nova04:01
*** psachin has quit IRC04:04
*** med_ has quit IRC04:06
*** psachin has joined #openstack-nova04:06
*** Dinesh_Bhor has quit IRC04:07
*** ykarel has quit IRC04:08
*** udesale has joined #openstack-nova04:12
*** gbarros has quit IRC04:13
openstackgerritjichenjc proposed openstack/nova master: WIP: add check for deleted flag  https://review.openstack.org/59949204:16
openstackgerritjichenjc proposed openstack/nova master: Move str to six.string_types  https://review.openstack.org/59949304:21
*** Bhujay has joined #openstack-nova04:33
*** ykarel has joined #openstack-nova04:37
*** Dinesh_Bhor has joined #openstack-nova04:39
*** janki has joined #openstack-nova04:42
*** hongbin has quit IRC04:58
*** kaliya has quit IRC05:15
*** stakeda has joined #openstack-nova05:20
openstackgerritgaryk proposed openstack/nova master: Docs: update link for remote debugging  https://review.openstack.org/59131605:32
*** lei-zh has joined #openstack-nova05:39
*** links has joined #openstack-nova05:41
*** pas-ha has quit IRC05:48
*** pas-ha has joined #openstack-nova05:48
*** dtroyer has quit IRC05:49
*** mugsie has quit IRC05:49
*** zigo has quit IRC05:49
*** dtroyer has joined #openstack-nova05:49
*** Luzi has joined #openstack-nova05:57
openstackgerritgaryk proposed openstack/nova master: Docs: update link for remote debugging  https://review.openstack.org/59131606:03
*** elod has joined #openstack-nova06:06
*** sridharg has joined #openstack-nova06:07
*** Dinesh_Bhor has quit IRC06:07
*** links has quit IRC06:09
*** Bhujay has quit IRC06:16
*** sahid has joined #openstack-nova06:20
*** hoonetorg has joined #openstack-nova06:24
*** psachin has quit IRC06:25
*** Dinesh_Bhor has joined #openstack-nova06:30
*** psachin has joined #openstack-nova06:30
*** rcernin has quit IRC06:33
*** lpetrut has joined #openstack-nova06:33
*** Bhujay has joined #openstack-nova06:38
*** luksky has joined #openstack-nova06:58
jiapeiGood afternoon novaers, I don't know if any of you have encountered such a problem when installing stein with devstack. The problem is "ComputeHostNotFound_Remote". My procedure is: 1. Jenkins pulls the devstack-stein code, 2. Jenkins runs ./stack.sh to install 3. install succeeds 4. the next Jenkins job comes, Jenkins runs ./unstack.sh and ./clean.sh to uninstall, then runs ./stack.sh to install 5. failed; The logs are07:02
jiapeihttp://paste.openstack.org/show/729382/07:02
*** Dinesh_Bhor has quit IRC07:03
*** lpetrut has quit IRC07:08
*** sridharg has quit IRC07:08
*** Dinesh_Bhor has joined #openstack-nova07:09
openstackgerritsahid proposed openstack/nova stable/rocky: libvirt: skip setting rx/tx queue sizes for not virto interfaces  https://review.openstack.org/59950607:09
*** dtantsur|afk is now known as dtantsur07:26
*** ccamacho has joined #openstack-nova07:30
*** holser_ has joined #openstack-nova07:32
*** alexchadin has joined #openstack-nova07:33
*** ykarel is now known as ykarel|lunch07:40
*** moshele has joined #openstack-nova07:43
*** tssurya has joined #openstack-nova07:45
*** gibi has joined #openstack-nova07:45
*** jpena|off is now known as jpena07:45
kashyapstephenfin: Morning, thanks for the outputs; looking now07:46
*** helenafm has joined #openstack-nova07:47
gibigood morning nova07:47
*** Dinesh_Bhor has quit IRC07:56
*** Dinesh_Bhor has joined #openstack-nova08:01
openstackgerritfupingxie proposed openstack/nova master: Delete allocations for instances that have been moved to another node  https://review.openstack.org/58289908:08
*** priteau has joined #openstack-nova08:12
openstackgerritfupingxie proposed openstack/nova master: Delete allocations for instances that have been moved to another node  https://review.openstack.org/58289908:14
openstackgerritfupingxie proposed openstack/nova master: Delete allocations for instances that have been moved to another node  https://review.openstack.org/58289908:18
*** alexchadin has quit IRC08:19
*** Dinesh_Bhor has quit IRC08:27
*** derekh has joined #openstack-nova08:28
*** cdent has joined #openstack-nova08:34
openstackgerritgaryk proposed openstack/nova master: Docs: update link for remote debugging  https://review.openstack.org/59131608:37
*** ykarel|lunch is now known as ykarel08:45
*** psachin has quit IRC08:50
*** tonyb has quit IRC08:50
*** ttsiouts has joined #openstack-nova08:51
*** psachin has joined #openstack-nova09:03
*** ttsiouts has quit IRC09:06
*** ttsiouts has joined #openstack-nova09:06
*** Dinesh_Bhor has joined #openstack-nova09:12
*** davidsha has joined #openstack-nova09:19
*** stakeda has quit IRC09:21
*** sayalilunkad has joined #openstack-nova09:32
*** helenafm has quit IRC09:32
*** lei-zh has quit IRC09:33
openstackgerritNaichuan Sun proposed openstack/nova master: xenapi(N-R-P)(WIP): support compute node resource provider update  https://review.openstack.org/52104109:37
*** jaosorior has joined #openstack-nova09:47
*** holser_ has quit IRC09:48
*** holser_ has joined #openstack-nova09:52
*** Dinesh_Bhor has quit IRC09:52
*** tonyb has joined #openstack-nova09:53
*** panda|rover has quit IRC09:54
*** panda has joined #openstack-nova09:59
*** panda has quit IRC10:04
*** psachin has quit IRC10:06
openstackgerritChen proposed openstack/nova master: doc: update info for hypervisors  https://review.openstack.org/59955410:13
openstackgerritBalazs Gibizer proposed openstack/nova master: WIP: Use placement from separate repo in functional test  https://review.openstack.org/59955610:14
openstackgerritfupingxie proposed openstack/nova master: Delete allocations for instances that have been moved to another node  https://review.openstack.org/58289910:16
*** ttsiouts has quit IRC10:26
*** dave-mccowan has joined #openstack-nova10:41
*** tbachman has quit IRC10:42
*** mugsie has joined #openstack-nova10:46
*** cdent has quit IRC10:54
stephenfinprometheanfire: Oh, yeah, I'm around now (and for the next 6 hours or so - I'm GMT)10:56
*** erlon has joined #openstack-nova10:58
*** takamatsu has joined #openstack-nova11:00
*** jpena is now known as jpena|lunch11:02
*** luksky has quit IRC11:05
*** andreaf has joined #openstack-nova11:06
*** ttsiouts has joined #openstack-nova11:23
*** udesale has quit IRC11:24
kashyapstephenfin: Most excellent:11:29
kashyap    <controller type='pci' index='1' model='pcie-root-port'>11:30
kashyap      <model name='pcie-root-port'/>11:30
kashyap      ...11:30
kashyap    </controller>11:30
kashyapstephenfin: That's the bit I was looking for.11:30
kashyapI was wondering if we (wrongly) set an explicit controller model like: <model name='ioh3420'/>11:31
*** beagles has joined #openstack-nova11:31
*** cdent has joined #openstack-nova11:33
*** panda has joined #openstack-nova11:34
*** alexchadin has joined #openstack-nova11:34
*** luksky has joined #openstack-nova11:40
sean-k-mooneykashyap: ya perhaps.11:41
*** jaypipes has joined #openstack-nova11:44
*** cdent has quit IRC11:49
*** sambetts_ has quit IRC11:55
*** dims has joined #openstack-nova11:56
*** sambetts_ has joined #openstack-nova11:57
*** jpena|lunch is now known as jpena12:03
*** ttsiouts has quit IRC12:04
*** ttsiouts has joined #openstack-nova12:05
*** ajo has joined #openstack-nova12:05
*** eharney has joined #openstack-nova12:06
*** ttsiouts_ has joined #openstack-nova12:07
*** trozet has joined #openstack-nova12:08
*** ttsiouts_ has quit IRC12:09
*** ttsiouts has quit IRC12:09
efriedGood morning nova12:10
*** ttsiouts has joined #openstack-nova12:10
*** med_ has joined #openstack-nova12:13
*** tbachman has joined #openstack-nova12:14
*** tbachman has quit IRC12:14
*** tbachman has joined #openstack-nova12:16
kashyapsean-k-mooney: Nova is not; so we're good there.12:19
*** sahid has quit IRC12:23
*** sahid has joined #openstack-nova12:23
*** helenafm has joined #openstack-nova12:24
mnaserfwiw: rocky has been stable in sjc1 with a bunch of new clients coming on and being part of nodepool at ~50 vms (+ all bfv)12:25
mnaserso congrats :)12:25
*** trozet has quit IRC12:29
openstackgerritSurya Seetharaman proposed openstack/nova master: Merge security groups extension response into server view builder  https://review.openstack.org/58547512:29
openstackgerritSurya Seetharaman proposed openstack/nova master: Merge extended_status extension response into server view builder  https://review.openstack.org/59209212:29
openstackgerritSurya Seetharaman proposed openstack/nova master: Add scatter-gather-single-cell utility  https://review.openstack.org/59494712:29
openstackgerritSurya Seetharaman proposed openstack/nova master: Merge extended_volumes extension response into server view builder  https://review.openstack.org/59628512:30
openstackgerritSurya Seetharaman proposed openstack/nova master: Making instance/migration listing skipping down cells configurable  https://review.openstack.org/59242812:30
openstackgerritSurya Seetharaman proposed openstack/nova master: Add get_by_cell_and_project() method to InstanceMappingList  https://review.openstack.org/59165612:30
openstackgerritSurya Seetharaman proposed openstack/nova master: Return a minimal construct for nova list when a cell is down  https://review.openstack.org/56778512:30
*** ttsiouts has quit IRC12:32
efriedmnaser: ++ !12:33
*** ykarel is now known as ykarel|away12:34
*** ccamacho has quit IRC12:34
kashyapsean-k-mooney: Hey, when you're about: from yesterday's XML snippet from stephenfin, when _not_ setting any 'num_pcie_ports', why do we get 4 root ports: http://paste.openstack.org/show/729350/12:35
gibimnaser: thanks for the good news12:36
kashyapsean-k-mooney: Ah, never mind, saw this: https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L5114-L511912:37
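(As context for the option being discussed: a minimal sketch of overriding it, assuming the libvirt driver with a q35 machine type; the exact default/minimum behaviour is whatever the driver code linked above does, which appears to be where the 4 ports in the paste come from.)
    # nova.conf on the compute node
    [libvirt]
    # number of PCIe root ports to create for the guest; when unset the driver
    # falls back to its own minimum
    num_pcie_ports = 8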
mnaserthe next fun step is going to be upgrading our mtl region12:39
mnaserthat will be the fun one12:39
*** ttsiouts has joined #openstack-nova12:41
*** gbarros has joined #openstack-nova12:41
*** ykarel|away has quit IRC12:42
openstackgerritStephen Finucane proposed openstack/nova-specs master: Re-propose numa-aware-live-migration spec  https://review.openstack.org/59958712:47
openstackgerritBrin Zhang proposed openstack/nova master: Need further updates, no need to review  https://review.openstack.org/59927612:52
*** mriedem has joined #openstack-nova12:53
*** mdrabe has joined #openstack-nova12:56
*** gbarros has quit IRC13:04
*** gbarros has joined #openstack-nova13:05
*** awaugama has joined #openstack-nova13:06
*** munimeha1 has joined #openstack-nova13:06
*** lbragstad has joined #openstack-nova13:19
*** moshele has quit IRC13:25
openstackgerritJay Pipes proposed openstack/nova-specs master: allow transferring ownership of instance  https://review.openstack.org/59959813:26
jaypipesmelwitt: ^^ example of needed coordination between nova and placement. /me hopes the extraction won't be too much of a distraction13:27
*** brinzhang has quit IRC13:30
openstackgerritJay Pipes proposed openstack/nova-specs master: allow transferring ownership of instance  https://review.openstack.org/59959813:44
*** ccamacho has joined #openstack-nova13:46
*** tbachman has quit IRC13:46
mriedemjaypipes: can you push the spec and such against the old existing bp for the same thing? https://blueprints.launchpad.net/nova/+spec/transfer-instance-ownership13:54
sean-k-mooneyjaypipes: i have been asked how to do that in the past. mainly for teaching cases where we wanted to be able to prepare a bunch of vms for people then give them the vm.13:56
mriedemthis is more than just placement coordination,13:57
mriedemit's also cinder, neutron, glance, castellan/barbican right?13:57
openstackgerritJay Pipes proposed openstack/nova-specs master: allow transferring ownership of instance  https://review.openstack.org/59959813:58
mriedemplus maybe whatever is managing the vm? trove/heat?13:58
jaypipesmriedem: done.13:58
sean-k-mooneytrove/heat should really only need to expose an api to initiate the transfer; the rest should be handled in nova, right?13:59
*** udesale has joined #openstack-nova13:59
jaypipesmriedem: I'm not trying to coordinate between cinder, neutron or glance in the spec. only placement and nova. I note the other integration points and why I'm not trying to add orchestration functionality to nova.13:59
*** ttsiouts has quit IRC13:59
jaypipesmriedem: I'm afraid absolutely nothing would get done at all if we try to boil the ocean like previous attempts have done.14:00
mriedemthat would effectively break us14:00
mriedemif you create a vm which creates a volume and a port,14:00
jaypipesmriedem: what would effectively break us?14:00
mriedemand then change the owner of the vm, we'll fail to delete the volume/port14:00
jaypipesmriedem: why would you delete the volume/port?14:00
mriedemb/c that's what we do14:00
mriedemdelete_on_termination=true for bfv,14:01
sean-k-mooneyjaypipes: from a placement perspective did we settle on the idea that neutron and cinder resources would be consumed by the instance, e.g. the instance uuid is used as the consumer rather than the neutron port uuid etc.14:01
mriedemand nova cleans up the ports it creates14:01
jaypipesmriedem: who said anything about terminating anything?14:01
mriedemwe can assume that someone would eventually try to delete these resources14:01
mriedemif this is strictly baremetal instances b/c oath, then let's be clear about that14:01
jaypipesmriedem: let's discuss this on the review, eh14:02
mriedembut even baremetal instances can boot from volume now14:02
jaypipesmriedem: this is not strictly bm instances for oath, no...14:02
mriedemsure14:02
jaypipesI'm really not sure why you think that.14:02
jaypipesI'm not sure what about the spec as written gave you that impression.14:02
*** johnsom has joined #openstack-nova14:06
*** r-daneel has quit IRC14:07
tobias-urdinhm is there any easy way to figure out if an instance is volume backed using novaclient?14:08
mriedemyes14:09
mriedemimage_ref is ''14:09
mriedem*image14:10
mriedemnormally it's a dict with an id and link,14:10
mriedembut for volume-backed, it's just ''14:10
mriedemhttps://github.com/openstack/nova/blob/master/nova/api/openstack/compute/views/servers.py#L33214:11
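(A minimal python-novaclient sketch of that check, assuming an existing keystoneauth session `sess`; the empty-string behaviour is what the view builder linked above produces.)
    from novaclient import client

    nova = client.Client('2.1', session=sess)
    server = nova.servers.get(server_id)
    # 'image' is a dict ({'id': ..., 'links': [...]}) for image-backed servers
    # and an empty string for volume-backed ones
    is_volume_backed = not server.image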
*** mlavalle has joined #openstack-nova14:13
*** itlinux has quit IRC14:15
mriedemcomments inline14:15
mriedemjaypipes:14:15
*** Bhujay has quit IRC14:17
*** cdent has joined #openstack-nova14:18
tobias-urdinmriedem: thanks!14:18
*** sapd1_ has joined #openstack-nova14:19
*** eharney has quit IRC14:20
*** alexchadin has quit IRC14:22
tobias-urdinmriedem: and if you need to get a list of all attached volume and know which one is the root volume?14:23
tobias-urdinsorry got it get_server_volumes()14:25
tobias-urdinif it's one volume that's easy, otherwise should you rely on device=/dev/vda always being the one that is booting14:25
tobias-urdin?14:26
*** alexchadin has joined #openstack-nova14:26
*** ttsiouts has joined #openstack-nova14:27
mriedemdevice name doesn't really mean anything,14:27
mriedemnova ignores it if supplied14:27
mriedemboot_index is what you'd want, but i don't think we expose that out of the api14:27
mriedemwe certainly could, and probably should if we ever want to get device_name out of the API for volumes14:28
tobias-urdinhm ouch so the boot index can't be accessed through any api calls listing server info or block device mappings or similar14:30
tobias-urdincan the boot_index be changed from nova's perspective? because the only way I could work around that would be relying on the creation date of the volumes14:31
sean-k-mooneytobias-urdin: if you need a reliable way to associate volumes with devices in the guest you need to use tags14:31
tobias-urdinsean-k-mooney: ok, don't think that helps what i'm trying to do. i need to get the root volume if its a volume backed instance14:33
tobias-urdini guess other than creation date i could check if the volume was created from an image with the cinder api, but that could fail as well if somebody attaches a volume for recovery14:34
*** jiapei has quit IRC14:34
sean-k-mooneytobias-urdin: e.g. nova's boot from volume from an image, or boot with a precreated volume14:34
tobias-urdinafter checking i can't see nova populating the image field even when booting from a volume + image during creation14:36
sean-k-mooneymdbooth: wasn't someone working on ^^14:36
mriedemwhich image field?14:37
mriedemi forgot about tags - yes you could use tags to say which is the root volume during boot from volume, but we don't expose the bdm tags out of the API either :)14:37
tobias-urdinwhichever Server.image from novaclient provides14:37
mriedemi have related specs for both of those things i think14:37
mriedemyes that's on purpose - the server.image is '' if volume-backed14:38
mriedemthe image backing the root volume is in the volume metadata14:38
*** mchlumsky has joined #openstack-nova14:38
mriedemin "volume_image_metadata"14:38
mriedemhttps://review.openstack.org/#/c/452546/ is related to getting device_name out of the API,14:39
tobias-urdinis that exposed out of the api and novaclient?14:39
*** r-daneel has joined #openstack-nova14:39
sean-k-mooneyvolume_image_metadata is a copy of the glance metadata for an image, plus i think an image ref of some kind?14:39
*** tbachman has joined #openstack-nova14:39
mriedemtobias-urdin: which? volume_image_metadata?14:39
mriedemtobias-urdin: that's on the volume, so nova doesn't expose it, cinder does,14:40
mriedembut yes14:40
mriedemhttps://review.openstack.org/#/c/393930/ also related to exposing bdm tags out of the compute api14:41
mdboothvolume_image_metadata doesn't need to have a corresponding glance image, btw.14:41
mriedemwhich overlaps with https://review.openstack.org/#/c/452546/14:41
mdboothBut it's used in the same way and has the same semantics.14:41
mdboothmriedem: Yeah, it's weird we don't expose that.14:43
mdboothtags via the rest api, that is.14:44
mriedemjust hasn't been done - i've had the specs, been defeated, then found new agreement but haven't found new motivation for doing the work14:44
* mdbooth knows that story :(14:45
*** sridharg has joined #openstack-nova14:45
tobias-urdinfound what i needed in volume_image_metadata at least, just have to make some ugly assumptions for now based on volume_image_metadata and creation date since i can't access boot_index or block device mappings info14:46
tobias-urdinthanks for helping out :)14:46
sean-k-mooneytobias-urdin: this kind of thing is something that could be a feature request to the openstacksdk/shade teams, as getting the root volume for boot from volume is likely one of those things that will change a lot depending on your env and could be handled in a shade proxy api14:49
mdboothtobias-urdin: In practice, it's probably going to be the one with volume_image_metadata.14:49
mdboothAlthough I was reviewing a v2v tool the other day which added volume_image_metadata to multiple volumes, which would have broken that assumption, but I can't imagine that's common.14:50
mdboothCreation date is less reliable, especially if you have persistent data on volume X, and created a root volume to contain an app to manipulate it some time later.14:51
mdboothUnlike the first case, I'd expect that to be likely to happen in the wild.14:51
tobias-urdinsean-k-mooney: sort of pressing to get this stuff done (as always...) otherwise it would've been optimal to check the sdk first14:52
tobias-urdinmdbooth: yeah, i was thinking about combining them: if there are multiple volumes with volume_image_metadata just assume the first created.. at least i would always get something14:53
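(A rough sketch of the heuristic being described: prefer the attached volume that carries volume_image_metadata, and fall back to creation date. Attribute names such as volumeId and created_at are assumptions about the attachment/volume representations and should be verified.)
    from cinderclient import client as cinder_client

    cinder = cinder_client.Client('3', session=sess)
    attachments = nova.volumes.get_server_volumes(server_id)
    volumes = [cinder.volumes.get(a.volumeId) for a in attachments]
    # volumes created from an image carry volume_image_metadata
    image_backed = [v for v in volumes if getattr(v, 'volume_image_metadata', None)]
    candidates = image_backed or volumes
    root_volume = sorted(candidates, key=lambda v: v.created_at)[0]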
prometheanfirestephenfin: hi :D14:56
stephenfinprometheanfire: o/14:56
mdboothNot strictly related, but afaict we attach volumes in a non-deterministic order on restart, except for the root volume. And that's ok. We should probably deliberately randomise it :)14:57
*** dklyle has joined #openstack-nova14:58
tobias-urdinwhile I'm at it... here's a sad question, can I somehow block the creation of image backed instances? (i.e images_type backed instances)14:58
*** tbachman has quit IRC14:59
tobias-urdinpolicy, super simple api hack or similar, everything's allowed but the best would be to not really touch anything critical14:59
prometheanfirestephenfin: maybe my deployment is just broken, gimme a few to get out of this meeting and I'll update you14:59
tobias-urdinI was hoping on setting images_type to None but that errors out upon initialization14:59
mdboothtobias-urdin: Don't deploy glance?15:01
tobias-urdin(oh how I wish there was a cinder backend for images_type right now, wish i was familiar enough with the codebase to drive such work)15:01
tobias-urdinmdbooth: how do you mean? hm, setting [glance]/api_servers to something invalid?15:02
sean-k-mooneytobias-urdin: well if you have ceph you could set nova to image_type ceph15:02
tobias-urdinsean-k-mooney: yea that's what we do today, however we don't want users going that way at all, weird i know but we want to default to cinder15:03
efriedbauzas, mriedem: https://review.openstack.org/#/c/598365/ just needs config helps updated?15:03
mriedemhaven't looked at reviews on it yet today15:04
sean-k-mooneytobias-urdin: its not that weird; having a generic image_type of cinder has come up at the dublin ptg and last time in denver15:04
bauzasefried: yep, IMHO15:04
sean-k-mooneyits just not that simple.15:04
bauzasefried: mriedem: once done, we could modify the default values15:05
mriedemi thought ovh already had a patch for that15:05
tobias-urdinsean-k-mooney: yea :(15:05
efriedbauzas, mriedem: It's blocking our CI, so I would like to get it merged as soon as possible.15:05
bauzasorly N15:05
bauzas?15:05
bauzasif so, let me +W it15:05
mriedembauzas: https://review.openstack.org/#/c/532924/15:06
sean-k-mooneymriedem: do you know if anyone is proposing or looking at a cinder image type for stein or making bfv the default?15:06
efriedbauzas: Okay, thanks. We can do the conf helps in a fup?15:06
*** beekneemech is now known as bnemec15:06
sean-k-mooney*image_type=cinder15:06
mriedemefried: i can look and update15:06
bauzasefried: if needed, yep15:06
mriedemsean-k-mooney: as in the fabled libvirt cinder image backend of lore?15:06
mriedemno, no one is working on that15:07
bauzasmriedem: would it be possible for you to push a new revision now, or just a new change later?15:07
bauzasif the latter, no worries15:07
kashyapsean-k-mooney: When you get a minute, please remind me again: we can't set PCIe root ports via flavor metadata property, can we?15:07
sean-k-mooneymriedem: ya that is what i had remembered from dublin15:07
mriedembauzas: efried: i'll update it in a minute15:07
bauzasmriedem: and yeah, I remember this change15:07
prometheanfirestephenfin: ya, I made sure no pycache/pyc/pyo15:07
sean-k-mooneykashyap: if we can, the glance metadefs have not been created to document it. i wish we could, and did not have this in the nova config. ill check15:08
kashyapsean-k-mooney: Right, I presume we _can't_ today; I'll go look the code15:08
bauzasmriedem: and I also remember the spec https://review.openstack.org/#/c/552105/3/specs/rocky/approved/default-allocation-ratios.rst15:09
stephenfinprometheanfire: There's definitely some form of caching going on or your source is located somewhere else. That's the only reason for that stuff to happen15:09
bauzasmriedem: but I think it's a separate issue15:09
kashyapsean-k-mooney: A libvirt dev was asking that question: can Nova set the root ports via flavor; or just through a global knob15:09
*** itlinux has joined #openstack-nova15:09
prometheanfirestephenfin: even odder, I ran the py27 version of nova-compute, got the traceback containing py3515:10
prometheanfirethe main traceback was py27 though15:10
stephenfinprometheanfire: Can you paste the output of that?15:11
prometheanfireI think the real issue was that portion15:11
sean-k-mooneykashyap: this is the config generation code but github is not finding its usage https://github.com/openstack/nova/blob/c6218428e9b29a2c52808ec7d27b4b21aadc0299/nova/virt/libvirt/config.py#L1713-L172815:12
sean-k-mooneykashyap: codesearch did however http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py#n511715:13
sean-k-mooneykashyap: the driver uses the conf value directly15:13
kashyapsean-k-mooney: So no metadata property15:13
kashyapWonder if we should file a blueprint to add it15:13
*** dklyle has quit IRC15:13
*** dklyle has joined #openstack-nova15:14
sean-k-mooneykashyap: i would be +1 on that especially if we can deprecate and remove the conf option15:14
kashyapsean-k-mooney: Yeah, we should be able to do that; instead of the global config.15:14
kashyapDamn, since I 'discovered' the bug, I get the pleasure of filing the Blueprint I guess :P15:14
kashyapsean-k-mooney: More seriously, paperwork question: does this require a spec?  Since it's user-impacting?15:14
kashyapOr a spec-less blueprint is reasonable enough?15:15
sean-k-mooneykashyap: spec-less blueprint. extra specs are a gray area as they are not technically part of the versioned api but are user facing, so mriedem or someone else will likely comment on the blueprint if a spec is needed15:16
prometheanfirestephenfin: https://gist.githubusercontent.com/prometheanfire/6512134e799ec8c08c3f080150f60d19/raw/7cdb3a9a350c19540b1c930077e164786226636b/gistfile1.txt15:17
kashyapsean-k-mooney: Yep, noted; thanks for the discussion.15:17
sean-k-mooneyby the way i have been using http://codesearch.openstack.org/ a lot more recently instead of using github to search for these things; its pretty good15:18
kashyapAh, nice.15:19
kashyapsean-k-mooney: BTW, seems like "hw:machine_type" isn't documented as a flavor extra spec here: https://docs.openstack.org/nova/latest/user/flavors.html15:19
kashyapIs that so?15:19
sean-k-mooneykashyap: correct it is documented here https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L34-L3815:20
sean-k-mooneyaccording to the glance metadef registry its only valid on images, not flavors15:20
*** openstackgerrit has quit IRC15:20
kashyapsean-k-mooney: Hmm, but the syntax is slightly different: hw:machine_type  vs hw_machine_type (in nova.conf)15:21
kashyapRight, this seem to work: `openstack flavor set --property hw:machine_type=x86_64=q35 test.q35`15:21
stephenfinprometheanfire: Based on that, it seems oslo.service package in your virtualenv is starting a thread using the system oslo.service package. I've no idea why that would happen15:21
*** Luzi has quit IRC15:21
stephenfinprometheanfire: Might be worth asking on #openstack-oslo to see if anyone else has seen this before15:21
sean-k-mooneykashyap: hw: is the namespaced flavor syntax; images dont have namespaces so the namespace prefix uses an _ instead of a :15:22
sean-k-mooneykashyap: https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L7-L11 tell you these are only valid in the image15:22
stephenfinprometheanfire: You've patched 'usr/lib64/python3.5/site-packages/nova', I assume?15:22
kashyapsean-k-mooney: Nod; I'll send a doc patch to document it here as well: https://docs.openstack.org/nova/latest/user/flavors.html15:22
sean-k-mooneykashyap: if its valid in both it looks like https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt.json#L7-L1615:23
prometheanfirestephenfin: that file didn't exist at the time :P15:23
prometheanfireI uninstalled nova/oslo-service/oslo-db system-wide15:23
prometheanfireonly available in the venv15:23
sean-k-mooneykashyap: documenting it there is fine but the authoritative source is glance15:23
prometheanfirelike I said, broken :P15:23
stephenfinprometheanfire: Very :)15:24
kashyapsean-k-mooney: Yep, noted.  (And it seems to be valid for both, IIUC)15:24
stephenfinprometheanfire: Yeah, I'm not sure how much I can help with that. There's something funky going on with venvs that I don't understand. I don't think it's anything to do with the patch itself15:24
*** gbarros has quit IRC15:24
*** gbarros has joined #openstack-nova15:25
prometheanfirestephenfin: ya, at this point justmergeit15:27
sean-k-mooneykashyap: doing a code search i only see code for using it from the conf or the image not the flavor15:27
sean-k-mooneykashyap: http://codesearch.openstack.org/?q=machine_type&i=nope&files=&repos=nova15:27
kashyapsean-k-mooney: Yeah, I've just done a test, indeed it's so15:28
kashyapstephenfin: Hey, when you get a moment, yesterday you said this worked for you:15:28
kashyap    $ openstack flavor create test.q3515:28
kashyap    $ openstack flavor set --property hw:machine_type=x86_64=q35 test.q3515:28
kashyap    $ openstack server create --flavor test.q35 --image test \15:28
kashyap        --nic net-id=$NIC_UUID test-q3515:28
kashyapstephenfin: Did the second command really take effect?  It shouldn't have worked.15:29
stephenfinkashyap: No, it didn't. I misread your comments15:29
kashyapI guess you had 'q35' via other means, like config15:29
kashyapAh-ha!15:29
stephenfinHence the second set of pastes15:29
sean-k-mooneykashyap: i think stephenfin had to use the config15:29
*** Miouge has left #openstack-nova15:29
kashyapYep, it's all clear now.15:31
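(To recap the two paths that do work, as hedged sketches only -- per the glance metadefs linked above the property is image-only, and there is a matching option in the libvirt section of nova.conf; exact semantics should be checked against the nova docs.)
    # per-image property:
    openstack image set --property hw_machine_type=q35 my-image

    # or host-wide on the compute node, in nova.conf:
    [libvirt]
    hw_machine_type = x86_64=q35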
*** amarao has quit IRC15:33
*** alexchadin has quit IRC15:35
*** alexchadin has joined #openstack-nova15:36
*** alexchadin has quit IRC15:36
*** alexchadin has joined #openstack-nova15:36
*** alexchadin has quit IRC15:37
*** alexchadin has joined #openstack-nova15:37
*** alexchadin has quit IRC15:37
*** alexchadin has joined #openstack-nova15:38
*** alexchadin has quit IRC15:38
*** moshele has joined #openstack-nova15:40
*** eharney has joined #openstack-nova15:45
*** ccamacho has quit IRC15:53
*** gbarros has quit IRC15:57
*** jlvilla-viva is now known as jlvillal15:58
*** openstackgerrit has joined #openstack-nova15:58
openstackgerritSurya Seetharaman proposed openstack/nova master: Return a minimal construct for nova list when a cell is down  https://review.openstack.org/56778515:58
*** gbarros has joined #openstack-nova15:59
*** macza has joined #openstack-nova16:08
*** alexchadin has joined #openstack-nova16:10
*** mdrabe has quit IRC16:10
*** ttsiouts has quit IRC16:13
openstackgerritMatt Riedemann proposed openstack/nova master: Document unset/reset wrinkle for *_allocation_ratio options  https://review.openstack.org/59967016:14
*** ttsiouts has joined #openstack-nova16:14
mriedemefried: bauzas: jaypipes: ^ follow up for config option wording16:14
*** alexchadin has quit IRC16:14
*** holser_ has quit IRC16:15
bauzasmriedem: thanks, and bingo16:15
efriedmriedem: +A, nice one guv16:17
*** Bhujay has joined #openstack-nova16:17
*** ttsiouts has quit IRC16:19
prometheanfirehuh, nova destroyed an instance when I was messing with placement stuff16:19
*** tbachman has joined #openstack-nova16:19
openstackgerritMatt Riedemann proposed openstack/nova stable/rocky: Don't persist zero allocation ratios in ResourceTracker  https://review.openstack.org/59967216:21
openstackgerritMatt Riedemann proposed openstack/nova stable/rocky: Document unset/reset wrinkle for *_allocation_ratio options  https://review.openstack.org/59967316:21
jaypipesmriedem: I'd rather have someone like mgagne look at that patch and give advice.16:22
jaypipessince we're not operators...16:22
mriedemsure, hence the big todo comment in the bottom change16:22
sean-k-mooneyprometheanfire did you delete its allocation or something?16:24
*** gyee has joined #openstack-nova16:24
sean-k-mooneyprometheanfire: i did not think we could kill nova instance by messing with placement so that sounds... unintended16:25
prometheanfiresean-k-mooney: no, it looks like libvirt forgot it exists16:25
prometheanfireor something16:25
sean-k-mooneyis the instance still listed in openstack? e.g. openstack server list16:25
prometheanfire2018-09-04 16:11:44.844 4079 INFO nova.compute.manager [req-ab55e9f6-b2a8-48ac-b50b-fde5b7af0892 - - - - -] [instance: 0e9aa374-3627-48ac-a410-4abd65564a80] Deleting instance as it has been evacuated from this host16:25
prometheanfireno clue why it was evacuated :|16:26
prometheanfirethat's the first log line on start of nova-compute16:26
sean-k-mooneyoh ok am ill go with ghosts16:26
sean-k-mooneyor your other admins16:26
prometheanfireI'm the only admin :P16:26
sean-k-mooneythen ill stick with my first answer16:26
*** janki has quit IRC16:26
prometheanfirewhat triggers a evacuation?16:26
*** janki has joined #openstack-nova16:27
sean-k-mooneyyou dont have watcher deployed or one of the other ha services do you?16:27
prometheanfirenope16:27
sean-k-mooneyprometheanfire: as far as i knew evacuate was an admin only api call16:27
sean-k-mooneyso without manually invoking it i did not think we had a way to auto evacuate16:27
prometheanfireghosts then16:28
*** sahid has quit IRC16:28
sean-k-mooneymriedem: jaypipes dansmith  any idea what could cause an evacuation to happen without an admin doing it?16:28
mgagnejaypipes, mriedem: This change is already Workflow+1, wording is fine for me anyway. (re https://review.openstack.org/#/c/599670/)16:29
*** helenafm has quit IRC16:30
prometheanfiresean-k-mooney: https://gist.githubusercontent.com/prometheanfire/76fc4693feed8b99118feaebbadfaea4/raw/c2a1b60e7181d7b864af35730e9f7eadfa365fd1/gistfile1.txt16:30
prometheanfiresean-k-mooney: it looks like it tries to see if it was evacuated, gets the timeout/traceback and sees that as ok?16:30
* sean-k-mooney im really glad that alt-l allow me to go to raw mode and click long links16:30
prometheanfiresean-k-mooney: github gist links do suck16:31
prometheanfire2018-09-04 16:34:15.499 4079 WARNING nova.compute.manager [req-2f9b1170-6748-4c94-af5f-00e8fc70d0e9 - - - - -] While synchronizing instance power states, found 4 instances in the database and 3 instances on the hypervisor.16:31
prometheanfirelol16:31
sean-k-mooneyprometheanfire: had you previously evacuated instances from that host? if so when the nova compute agent comes back up it will clean up any instances that were not deleted16:31
prometheanfireis there a way to get nova to recreate the libvirt domain?16:31
prometheanfiresean-k-mooney: nope16:32
prometheanfirenot that I can remember at all16:32
jaypipesmgagne: yes, I recognize the change is already +W. I just wanted a real operator to double-check the wording. :)16:32
sean-k-mooneyprometheanfire: well it depends; if you do openstack server show 0e9aa374-3627-48ac-a410-4abd65564a80 i assume it is gone?16:32
mgagnejaypipes: +116:33
sean-k-mooneyprometheanfire: if not then it should be running somewhere else in your cloud16:33
prometheanfiresean-k-mooney: the server show still works, when I do a reboot I get this16:34
prometheanfire2018-09-04 16:23:57.779 4079 ERROR nova.compute.manager [req-e7f9f2a5-7cbc-4776-a085-88245450abac bcebdc7b8dfd4d43b036d1b73df6d377 5488a33661454bd792ff8c62d31d07a0 - default default] [instance: 0e9aa374-3627-48ac-a410-4abd65564a80] Cannot reboot instance: Instance 0e9aa374-3627-48ac-a410-4abd65564a80 could not be found.: nova.exception.InstanceNotFound: Instance16:34
prometheanfire0e9aa374-3627-48ac-a410-4abd65564a80 could not be found.16:34
sean-k-mooneyprometheanfire: am, from the admin view can you check the host its running on and see if libvirt sees it?16:35
prometheanfirevirsh list --all doesn't show it16:36
prometheanfirethis had to be my dns server too16:36
sean-k-mooneyprometheanfire: im guessing its partially deleted. you could try a force reset of the vm status followed by a hard reboot16:37
prometheanfiresean-k-mooney: where is the libvirt.xml stuff stored now?16:38
*** janki has quit IRC16:38
prometheanfireI could recreate the domain and it'd probably work16:38
*** moshele has quit IRC16:38
prometheanfire/etc/libvirt/qemu/ it looks like16:39
sean-k-mooneyyes. if you know the instance name the xml might still be there; if not, the qemu args will be in /var/log/libvirt/qemu/instance...16:40
*** moshele has joined #openstack-nova16:40
prometheanfireya, that helps some16:44
prometheanfirethe xml isn't there, but I can register a domain with it and it should be picked up then, hopefully...16:44
kashyapstephenfin: Hey, if you still have that env, can I ask to do one last test, please?16:44
sean-k-mooneyprometheanfire: nova will recreate the domain for you if you can start the vm.16:44
kashyapstephenfin: It is the following:16:45
kashyapstephenfin: Boot a guest w/ Q35, but now with _8_ PCIe root ports, using 'num_pcie_ports=8' in nova.conf16:45
sean-k-mooneyprometheanfire: the easiest thing to do would be to reset the state to active, issue a shutdown, then reset to active again if needed and start the instance16:45
prometheanfiresean-k-mooney: in that case reset state and start16:45
prometheanfireya16:45
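(Roughly, the recovery sequence sean-k-mooney describes, run as admin -- hedged, since juggling state like this depends on what actually happened to the instance.)
    nova reset-state --active 0e9aa374-3627-48ac-a410-4abd65564a80
    openstack server stop 0e9aa374-3627-48ac-a410-4abd65564a80   # if it still thinks it is running
    nova reset-state --active 0e9aa374-3627-48ac-a410-4abd65564a80
    openstack server start 0e9aa374-3627-48ac-a410-4abd65564a80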
prometheanfireand we're back, odd that it happened, but ok16:49
sean-k-mooneyprometheanfire: was your instance on ceph or cinder storage?16:50
prometheanfireno16:50
prometheanfirejust a basic instance16:50
*** udesale has quit IRC16:51
sean-k-mooney:( in that case the evacuate deleted your data as it did a force rebuild to a different host16:51
prometheanfiresean-k-mooney: ssh was happy, so it didn't16:52
prometheanfireno clue what happened, but I'm fine16:52
prometheanfireeven if it did, it's all in puppet for this node16:52
*** luksky has quit IRC16:53
prometheanfiremy compute nodes still aren't reporting to placement though, not sure they ever did that right16:53
sean-k-mooneyprometheanfire: well ssh would be fixed via cloud init16:53
*** tssurya has quit IRC16:54
sean-k-mooneyif your data is intact however thats a good thing :)16:54
prometheanfireI meant the host key is the same16:54
*** dkehn has quit IRC16:56
prometheanfiresean-k-mooney: btw, did you see my comment about the database connection string in the apidb not being urlencoded?16:56
openstackgerritMerged openstack/nova master: Combine error handling blocks in _do_build_and_run_instance  https://review.openstack.org/54596016:56
prometheanfiretoo far in backlog now16:57
prometheanfireupdate cell_mappings set database_connection='URLENCODED_CONNECTION_STRING' where uuid='0000'16:58
*** SamYaple has joined #openstack-nova16:58
*** dkehn has joined #openstack-nova16:59
SamYaplewhat actions, if any, can a user do that will trigger an 'instance.update' notification? having trouble figuring it out from the code17:01
*** derekh has quit IRC17:01
prometheanfireThere are no compute resource providers in the Placement service but there are 2 compute nodes in the deployment.  This means no compute nodes are reporting into the Placement service and need to be upgraded and/or fixed.17:03
*** ykarel has joined #openstack-nova17:03
prometheanfirebut the compute nodes have the placement info in their config, not sure what else to do there17:03
*** sapd1_ has quit IRC17:05
mriedemSamYaple: do you have notify_on_state_change set?17:05
SamYaplemriedem: yes17:05
mriedemhttps://docs.openstack.org/nova/latest/configuration/config.html#notifications.notify_on_state_change17:05
mriedemto what?17:05
SamYapleprometheanfire: just checked scrollback, i just dealt with that issue! on a few nodes. It was caused by reprovisioning my compute nodes with teh same names but migrating the instances off with evacuate. when it came back the database was messed up17:05
mriedemprometheanfire: are you on pike or master/17:06
mriedem?17:06
sean-k-mooneyprometheanfire: jaypipes or cdent might be able to help with that. im guessing you're missing the placement client and/or config section in the nova config but honestly that is just a guess17:06
prometheanfireSamYaple: oh?17:06
prometheanfiremriedem: rocky17:06
SamYaplemriedem: so it *only* sends updates with state changes? if i read that correctly? (sorry, chasing down a bug in an inherited notifications reader with bad comments, so im trying to be explicit)17:07
SamYapleprometheanfire: yes, it had to do with the old instances that had been in an ERROR state before the node went down17:08
prometheanfiresean-k-mooney: there's a separate placement client? as in pypi type thing?17:08
mriedemprometheanfire: i relatively recently talked with dansmith about trying to parse/encode/decode the db connection string in the cell_mappings table and i think the consensus was if you have special stuff in the url, you need to encode it beforehand17:08
sean-k-mooneySamYaple: that kind of makes sense. since you used the same host name the compute agent will get the compute service record for the previous install17:08
openstackgerritBalazs Gibizer proposed openstack/nova master: WIP: Use placement from separate repo in functional test  https://review.openstack.org/59955617:08
SamYapleprometheanfire: ill try to pull my sql queries to fix it17:08
prometheanfiremriedem: ah, somehow I got it in the db17:08
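(What "encode it beforehand" amounts to in practice -- a small stdlib sketch with a hypothetical password; the encoded result is what belongs in cell_mappings.database_connection.)
    import urllib.parse

    password = urllib.parse.quote_plus('p@ss/word#1')  # hypothetical credentials
    connection = 'mysql+pymysql://nova:%s@db.example.org/nova_cell1?charset=utf8' % password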
*** davidsha has quit IRC17:09
sean-k-mooneyprometheanfire: there is a placement osc plugin but when i said placement client i was referring to the devstack placement client service, but honestly dont know what that does17:09
mriedemSamYaple: it looks like a hacky way to trigger the instance.update notification is to change metadata on the instance17:09
SamYaplesean-k-mooney: would a possible workaround for my use case possibly be solved simply by deleting the service record between reprovisioning?17:10
SamYaplemriedem: ah! perfect! ok. so thats the hack this code is trying to tap into. that aligns on my end. thank you17:10
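(A hedged sketch of both levers mentioned above -- the config option and the metadata-change trick; on older releases such as ocata the option may live in [DEFAULT] rather than [notifications].)
    # nova.conf: emit instance.update on vm and/or task state transitions
    [notifications]
    notify_on_state_change = vm_and_task_state

    # one way to force an instance.update without a state change: touch the metadata
    openstack server set --property poke=now <server-uuid>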
prometheanfiresean-k-mooney: ok, thought the nova-compute service handled registering itself17:10
mriedemprometheanfire: it does17:10
openstackgerritBalazs Gibizer proposed openstack/nova master: WIP: Use placement from separate repo in functional test  https://review.openstack.org/59955617:10
mriedemSamYaple: which release are you on?17:10
SamYaplemriedem: ocata/pike/queens across a few environments17:11
sean-k-mooneySamYaple: ya perhaps, never tried it. with all the foreign keys on the table however it might be more hassle than its worth17:11
mriedemok, because we didn't until recently clean up after ourselves wrt placement when deleting a nova-compute service record17:11
mriedemthat's fixed now, but you'd have to make sure you have it, otherwise you have to clean up placement entries yourself17:11
sean-k-mooneyprometheanfire: if you have the right setting in your conf the nova-compute agent will take care of populating the placement api for you17:12
SamYapleah, yes. this process has only been done on the older ocata clusters. We should be queens everywhere by end of year though.17:12
prometheanfiresean-k-mooney: nova-status upgrade check      still shows the warning17:12
SamYaplemriedem: my sql queries are me cleaning up the placement entries :)17:12
mriedemfyi https://review.openstack.org/#/q/I7b8622b178d5043ed1556d7bdceaf60f47e5ac8017:12
SamYaplemriedem: oh b-e-a-utiful. Super helpful. gonna do an internal backport on that for our ocata stuff. got a few hundred more nodes to go17:13
sean-k-mooneyprometheanfire: for devstack the placement section looks like this http://paste.openstack.org/show/729431/17:13
prometheanfiresean-k-mooney: yep, looks right (looks like mine)17:14
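(The paste above isn't reproduced here; for reference, a typical [placement] section is just keystoneauth credentials pointing at the placement endpoint, roughly like the sketch below -- all values are placeholders.)
    [placement]
    auth_type = password
    auth_url = http://controller.example.org/identity
    project_name = service
    project_domain_name = Default
    username = placement
    user_domain_name = Default
    password = <secret>
    region_name = RegionOne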
sean-k-mooneyprometheanfire: strange, and openstack resource provider list does not have the compute nodes?17:15
prometheanfiresean-k-mooney: http://paste.openstack.org/show/729432/17:15
mriedemSamYaple: well, be warned, it's a hefty series of backports17:16
*** jpena is now known as jpena|off17:16
mriedemthis is just pike https://review.openstack.org/#/q/topic:bug/1756179+(status:open+OR+status:merged)+branch:stable/pike17:16
prometheanfiresean-k-mooney: openstack service provider list17:16
prometheanfirethat?17:16
sean-k-mooneyno "openstack resource provider list"17:16
sean-k-mooneyif you do not have that you are missing the placement osc plugin17:17
prometheanfireok17:17
mriedemprometheanfire: are there errors in the nova-compute logs?17:17
prometheanfiremriedem: which errors you looking for?17:17
sean-k-mooneyprometheanfire: https://github.com/openstack/osc-placement for future reference. it should be on pypi too17:17
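(Hedged usage sketch: osc-placement is a normal openstackclient plugin, so it only needs to be installed wherever you run the openstack CLI, not on the compute hosts.)
    pip install osc-placement
    openstack resource provider list
    openstack resource provider inventory list <provider-uuid>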
mriedemprometheanfire: umm, any errors?17:17
mriedempresumably "failed to create resource provider in placement" or something like that17:18
prometheanfiremriedem: none now, I rebuilt all my systems from the ground up to fix the odd python issue17:18
SamYaplemriedem: sorry, i said backport. i mean hack something in based on that commit. i stay pretty close to upstream17:18
*** tetsuro has joined #openstack-nova17:18
prometheanfiremriedem: testing now via '   su - nova -c "/usr/lib/python-exec/python3.5/nova-compute --config-file /etc/nova/nova-compute.conf --config-file /etc/nova/nova.conf"   '17:18
mriedemSamYaple: ok, well, good luck. :) if that series of backports on pike also applies to ocata (i'm not sure if they all do), we could also backport them to ocata upstream17:19
mriedemall official like17:19
sean-k-mooneyprometheanfire: you should get something like "ResourceProviderCreationFailed" if the placement info is missing/wrong in the conf17:20
prometheanfiremriedem: http://paste.openstack.org/show/729433/17:20
prometheanfirelgtm17:21
*** med_ has quit IRC17:21
melwitt.17:21
sean-k-mooneyprometheanfire: thats what i get when i comment it out but the agent keeps running, which i was surprised by17:21
*** openstackgerrit has quit IRC17:22
*** openstackgerrit has joined #openstack-nova17:22
openstackgerritMerged openstack/nova master: Removing pip-missing-reqs from default tox jobs  https://review.openstack.org/59944217:22
mriedemprometheanfire: it's likely logged at debug17:23
prometheanfireah, k17:23
mriedemdo you see the resource provider in placement now?17:23
prometheanfiremriedem: I'm packaging osc-placement now17:23
SamYaplemriedem: ack. will do17:24
*** med_ has joined #openstack-nova17:24
sean-k-mooneymriedem: no its logging at error if it fails17:24
prometheanfiresean-k-mooney: mriedem does osc-placement need to be installed on the compute host?17:26
cdentprometheanfire: no, it's only an openstack client plugin, for humans17:27
prometheanfirethat's what I thought17:27
*** Bhujay has quit IRC17:31
sean-k-mooneyprometheanfire: its not needed but it makes debugging placement issues simpler as you dont have to dive into the db17:32
prometheanfireopenstack resource provider list       - shows my two nodes17:34
sean-k-mooneyprometheanfire: then placement is happy with them. what was the warning you were getting with nova status-check?17:34
prometheanfireone is generation 76 and one is generation 34 though17:35
prometheanfiresean-k-mooney: http://paste.openstack.org/show/729434/17:35
sean-k-mooneycdent: do you know if nova-status upgrade check uses the api or tries to connect to the db directly?17:36
cdentsean-k-mooney: api. it talks to the db for nova stuff, api for placement stuff17:37
cdentsean-k-mooney: yes, it is useful for debugging, but it doesn't have to be _on_ the compute node...17:37
sean-k-mooneyprometheanfire: cdent yes. im wondering if nova-status is not finding the placement endpoint or something in prometheanfire's case17:38
sean-k-mooneycdent: sorry ^ was meant for you17:38
sean-k-mooneyprometheanfire: do you have the placement settings in the nova.conf also or just /etc/nova/nova-cpu.conf?17:39
prometheanfiresean-k-mooney: I have it in nova.conf for all hosts (nova-api and compute hosts)17:40
prometheanfireonline-data migrations fail too, I think it's looking in the wrong db here17:41
sean-k-mooneyok i was going to suggest using --config-file=/path/to/conf to make sure its not reading values from another location17:41
cdentsorry sean-k-mooney, I haven't been tracking prometheanfire's situation, just jumping at random points. in too many conversations at once17:41
sean-k-mooneycdent: no worries.17:42
sean-k-mooneyprometheanfire: i would try "nova-status --debug --config-dir /etc/nova/nova.conf upgrade check"17:42
prometheanfiresame17:43
prometheanfiresean-k-mooney: does placement look at the cell_mappings table of the api db?17:44
sean-k-mooneyprometheanfire: i dont think placement does but if you have not done the cellsv2 discover hosts thing this might have issues17:45
sean-k-mooneyprometheanfire: actually i think the output is misleading17:46
sean-k-mooneyprometheanfire: i get http://paste.openstack.org/show/729436/17:47
prometheanfiresean-k-mooney: that's different than mine17:48
prometheanfirenova-manage cell_v2 list_hosts17:49
prometheanfireshows my hosts as not mapped to a cell17:49
sean-k-mooneyyes it is, but 1) i got a success and its an all-in-one devstack that is passing tempest tests so it works17:49
sean-k-mooneyso i think the test might be wrong17:49
sean-k-mooneyhum ok well ill try to look into this more tomorrow. my brain is fried for today so im going to call it a day o/17:51
prometheanfirek17:52
*** alexchadin has joined #openstack-nova17:56
*** gbarros has quit IRC17:57
prometheanfirelist cells backtraces17:57
prometheanfirehttp://paste.openstack.org/show/729437/17:57
mriedemprometheanfire: you still don't have a resource provider in placement but no errors in the n-cpu logs?17:57
prometheanfiremriedem: shows my two nodes17:58
mriedemhuh your cells must not have names17:59
prometheanfiremy api_db has two entries for cell_mappings17:59
prometheanfireselect * from cell_mappings;17:59
mriedemwhich is the TypeError17:59
prometheanfireya, one of them has no name17:59
mriedemok that's a bug clearly17:59
mriedemyou want to report it or shall i?18:00
*** alexchadin has quit IRC18:00
*** helenafm has joined #openstack-nova18:01
prometheanfireI'm still figuring stuff out18:01
prometheanfireI think a lot of my bugs in this area are because I moved to cells before it was ready18:01
prometheanfirelist_hosts now has the cell name :D18:02
prometheanfiremriedem: can I delete cell0 or is that still used?18:02
mriedemcell0 is required18:03
mriedemit's where instances records are created for things that fail to schedule18:03
prometheanfirek18:03
mriedem"for cell in sorted(cell_mappings, key=lambda _cell: _cell.name):"18:04
mriedemoops18:04
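(One possible defensive fix for that sort call, just a sketch -- the actual upstream patch for the bug filed below may differ.)
    # sorting None against str raises TypeError on python 3; treat a missing cell name as ''
    sorted(cell_mappings, key=lambda _cell: _cell.name or '')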
mriedem"because I moved to cells before it was ready"18:04
mriedemas in you moved in ocata? or rocky?18:04
mriedemor...18:04
mriedemand i'm assuming list_cells works now b/c you set a name in the DB?18:04
prometheanfireit does18:04
mriedemi just want to make sure we have a nova bug to track that, i'll open it18:04
prometheanfireonline data migrations fail though18:05
prometheanfireit keeps on trying to look up the projects table in the nova database, not the nova_api database18:06
*** r-daneel has quit IRC18:06
mriedemprojects table?18:06
mriedemgot a paste?18:06
melwittmust be the placement db18:06
melwitt(projects table)18:07
*** r-daneel has joined #openstack-nova18:07
prometheanfiremriedem: http://paste.openstack.org/show/729438/18:07
mriedemok so the create_incomplete_consumers online data migration18:07
mriedemprometheanfire: is the placement db defined in nova.conf separately? in [placement_database]?18:07
prometheanfiremriedem: atm, no18:08
mriedemdid you run nova-manage api_db sync before running online_data_migrations?18:08
mriedemyou've gotta sync the api db schema before running the online data migrations18:08
prometheanfireyep18:08
mriedemhttps://bugs.launchpad.net/nova/+bug/1790695 btw18:08
openstackLaunchpad bug 1790695 in OpenStack Compute (nova) "TypeError in nova-manage cell_v2 list_cells if a cell does not have a name" [High,Triaged]18:08
*** helenafm has quit IRC18:09
prometheanfirewhere does create_incomplete_consumers get the db that it's connecting to?18:09
prometheanfirealso, running nova-manage with --debug doesn't print much debug info :P18:11
*** ykarel is now known as ykarel|away18:13
mriedemcreate_incomplete_consumers is getting an admin RequestContext which won't have any db connection set on it18:13
mriedemthat context should then be changed by @db_api.placement_context_manager.writer18:14
jaypipesprometheanfire, mriedem: create_incomplete_consumers would need to query both the API database (which == placement DB) as well as the nova cell DBs, right?18:14
mriedemwhich should use the api_database if placement_database isn't configured18:14
mriedemjaypipes: there is nothing about that online data migration that is iterating cell dbs18:15
mriedemand no it doesn't need to hit the cell dbs18:16
*** tetsuro has quit IRC18:16
mriedemit creates missing consumer records for existing allocations records18:16
*** tetsuro has joined #openstack-nova18:16
mriedemwhat i don't know is if this is trying to connect to the placement database or the api db18:16
mriedemcdent wrote the placement db stuff so he'd likely need to look18:17
jaypipesmriedem: those are the same thing, no?18:17
mriedemno18:17
mriedemnot if you configure the placement db18:17
jaypipesbut isn't that a deliberate thing? has prometheanfire deliberately configured a placement DB?18:17
cdentmriedem:  what's up?18:17
prometheanfirethey are the same for me18:17
jaypipesright...18:17
mriedemjaypipes: no he hasn't,18:17
mriedembut i don't know that the code is correctly looking at this18:17
cdentwhich code?18:17
mriedemcdent: http://paste.openstack.org/show/729438/18:18
openstackgerritmelanie witt proposed openstack/nova stable/rocky: Add functional test for affinity with multiple cells  https://review.openstack.org/59973118:18
prometheanfirethat placement_database stuff only went in for rocky18:18
openstackgerritmelanie witt proposed openstack/nova stable/rocky: Make scheduler.utils.setup_instance_group query all cells  https://review.openstack.org/59973218:18
cdentthanks18:18
mriedemonline_data_migrations trying to hit the projects table,18:18
mriedembut i don't know which db it's trying to hit18:18
mriedemprometheanfire: you're testing here is on rocky yes?18:18
mriedem*your18:18
prometheanfireyes18:18
mriedemi believe devstack in rocky is configuring the placement_database, but this should still work without placement_database, otherwise grenade wouldn't work18:19
cdentmriedem: devstack will only use placement_database if a flag var is set, which we set in nova-next, but not elsewhere (unless someone else has changed it)18:20
cdent#PLACEMENT_DB_ENABLED=True18:20
mriedemok18:20
*** tetsuro has quit IRC18:20
*** dtantsur is now known as dtantsur|afk18:21
prometheanfireno way I can get more debug code?18:21
prometheanfirewasn't there a sqlalchemy setting for more debug?18:22
mriedemyes connection_debug or something18:22
mriedemhttps://docs.openstack.org/nova/latest/configuration/config.html#api_database.connection_debug18:22
prometheanfireconnection_trace = False (Boolean) Add Python stack traces to SQL as comment strings.18:23
prometheanfirethat one??18:23
mriedemthere is also connection_debug18:23
*** alexchadin has joined #openstack-nova18:23
mriedemi don't know which is better18:23
prometheanfireah, that one18:23
mriedemthe banner hides everything18:23
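For reference, the oslo.db debug knobs being discussed would be set in nova.conf roughly like this (values are illustrative, not taken from prometheanfire's config):

    [api_database]
    # 0 disables SQL logging, ~50 logs the SQL statements, 100 logs everything
    connection_debug = 50
    # add Python stack traces to SQL as comment strings
    connection_trace = True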
mriedemcdent: i don't see what calls this https://github.com/openstack/nova/blob/master/nova/api/openstack/placement/db_api.py#L27 except for the placement wsgi code18:24
cdentmriedem: there's a similar thing in nova-manage for db_sync https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L861-L866 . If that got missed elsewhere, could be a problem18:25
prometheanfireI'm fine re-creating the cell, but I think my instances would be dead at that point18:25
mriedemlooks like it used to happen down in the db api code https://review.openstack.org/#/c/541435/18:25
mriedembefore ^18:25
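A simplified sketch of the fallback being described (paraphrasing nova/api/openstack/placement/db_api.py as of Rocky; not the verbatim source):

    from oslo_db.sqlalchemy import enginefacade

    placement_context_manager = enginefacade.transaction_context()

    def configure(conf):
        # fall back to the nova_api database unless a separate
        # [placement_database]/connection is set
        db_group = (conf.placement_database
                    if conf.placement_database.connection
                    else conf.api_database)
        placement_context_manager.configure(connection=db_group.connection)

The problem under discussion is that nothing in the nova-manage online_data_migrations path calls configure(), so queries decorated with @placement_context_manager.writer end up on the default [database] (cell) connection.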
mriedemprometheanfire: this doesn't have anything to do with the cell db18:26
cdentWhat was the original command that started this investigation?18:27
prometheanfireok18:27
jaypipesmriedem: are we able to get all the instance info from the nova API db for create_incomplete_consumers() then?18:27
mriedemcdent: nova-manage db online_data_migrations18:27
prometheanfirecdent: top of paste18:27
mriedemjaypipes: no18:27
cdentthanks18:27
*** cfriesen has joined #openstack-nova18:27
*** alexchadin has quit IRC18:27
mriedemjaypipes: you need the instance.user_id right?18:27
jaypipesmriedem: and instance.project_id.18:28
jaypipesmriedem: that's why I presumed we needed to hit the cell DB.18:28
mriedemwe have the project_id in the instance_mappings table in the API DB18:28
mriedembut not the user_id18:28
*** luksky has joined #openstack-nova18:28
mriedemthe online data migration, which you wrote i might add :) - relies on the consumer information in the allocations table, which is populated via running nova18:28
mriedemmy guess is https://review.openstack.org/#/c/541435/ regressed something, but i have no idea why prometheanfire would hit this when we don't in the gate18:29
jaypipesmriedem: we're talking about populating placement with missing *nova* information.18:29
prometheanfiremriedem: old and janky install18:29
prometheanfireis there a way to re-init the cell db?18:29
prometheanfireI think my cell db for cell1 is the same as the general nova db18:30
prometheanfirenot sure that's a good thing :|18:30
mriedemjaypipes: i don't know what you're talking about. looking at the online data migration, clearly it doesn't care about nova instance information from the cell dbs18:30
mriedemprometheanfire: yes that's expected18:30
prometheanfireok, good18:30
mriedemjaypipes: this https://github.com/openstack/nova/blob/master/nova/api/openstack/placement/objects/consumer.py#L4418:31
prometheanfirecell0 has its own db18:31
mriedemprometheanfire: yes18:31
prometheanfireand nova_api18:31
cdentmriedem: that seems like a good guess. is there a chance we don't have tests that exercise the online migrations?18:31
mriedemcdent: devstack runs them18:31
mriedemhttps://github.com/openstack-dev/devstack/blob/5da7e4a22ede5f3049e7607a54a0f5ca2b413a29/lib/nova#L78718:31
cdentis there a paste of prometheanfire's nova.conf somewhere?18:31
prometheanfireno18:32
jaypipesmriedem: sorry, I was referring to https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L178318:32
mriedemjaypipes: yeah totally different thing18:32
jaypipesmriedem: sorry18:32
mriedemnp18:32
prometheanfiregimme a sec18:32
*** moshele has quit IRC18:33
cdentmriedem: but those migrations aren't checked for count and if a migration method fails all it does is print?18:33
cdentI might be reading this wrong: https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L677-L68218:34
*** tetsuro has joined #openstack-nova18:35
prometheanfirecdent: http://paste.openstack.org/show/729439/18:36
*** eharney has quit IRC18:36
cdentthanks18:37
mriedemwell if it failed like it is in prometheanfire's paste, we'd notice18:38
mriedembut lemme check the logs18:38
mriedemmaybe that Exception is masking it18:38
prometheanfiremriedem: one reason why I think it's an artifact of the old install18:38
mriedemF ME http://logs.openstack.org/08/599208/2/check/tempest-full/608d60a/controller/logs/devstacklog.txt.gz#_2018-09-02_15_04_31_94918:39
mriedemyup, broken since rocky18:39
mriedemmelwitt: time for RC1018:39
cdentugh18:39
mriedemwell, added to and broken in rocky18:39
* melwitt dies18:39
cdentdidn't papa python always teach us never to catch exception :(18:39
mriedemprometheanfire: you want to create the bug this time?18:40
mriedemit's nearly 2pm and i haven't had lunch yet18:40
prometheanfiremriedem: ya, I'm about to go get lunch I think18:40
prometheanfiremriedem: just the output of my migration in the bug?18:41
cdentif nobody else is aching to fix this tonight, I can do it tomorrow morning18:41
prometheanfirehttps://bugs.launchpad.net/nova/+bug/179070118:43
openstackLaunchpad bug 1790701 in OpenStack Compute (nova) "online_data_migrations fail in rocky+" [Undecided,New]18:43
prometheanfirefor now18:43
prometheanfireecho $?18:44
prometheanfire018:44
prometheanfireLOL, exits 018:44
prometheanfireI'll add that to the bug18:44
cdentshould we: a) make a devstack bug to make it be more unhappy when doing the migrations, or b) simply fix the exit code in the script?18:45
prometheanfireimo fixing the exit code would make devstack unhappy when it should be unhappy18:46
cdentyes18:46
cdentUntil you mentioned the exit code my b was going to be something else that has now passed out of my mind...18:47
prometheanfire:D18:47
cdentso two fixes in nova: exit code handling on that command, and initializing the placement db properly18:47
prometheanfirefixing the error is still nice though18:47
cdentthat's backportable then18:47
prometheanfireis that what's happening (api_db sync not init'ing the placement stuff)?18:48
mriedemi'll hack some stuff up18:48
mriedemthis isn't api_db sync18:48
cdentthen on top of that we need to address that an online placement db migration probably shouldn't be in nova's db migrations (even if placement wasn't being extracted)18:48
mriedembut yes the online_data_migrations command isn't configuring a global properly18:48
prometheanfireok18:49
prometheanfireI'll be around if you want me to test, I should be able to take a snapshot of my master for rollback type testing too if you want18:49
*** med_ has quit IRC18:49
prometheanfirezfs <318:49
cdentmriedem: please ping me if you can't get around to it, or if you do add me to the review please18:49
mriedemsure18:50
*** tetsuro has quit IRC18:53
*** moshele has joined #openstack-nova18:54
openstackgerritMatt Riedemann proposed openstack/nova master: WIP: Swallow fewer exceptions in _run_migration  https://review.openstack.org/59974418:55
*** moshele has quit IRC18:56
mriedemfwiw that blanket try/except has been around since the command was added https://review.openstack.org/#/c/278078/18:56
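The shape of the loop being changed is roughly this (illustrative only; see the linked reviews for the real diff):

    def _run_online_migrations(ctxt, online_migrations, count):
        # mirrors the shape of nova-manage's migration loop
        for migration_meth in online_migrations:
            try:
                found, done = migration_meth(ctxt, count)
            except Exception:
                # old behaviour: print and keep going, so a migration
                # that blows up still lets the command exit 0
                print('Error attempting to run %s' % migration_meth)
                found = done = 0
            print('%s: found %s, migrated %s' % (
                migration_meth.__name__, found, done))

mriedem's patch narrows that except so unexpected errors (like the missing-table ProgrammingError in this case) propagate and the command exits non-zero.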
prometheanfiremriedem: want a paste with my error with that patch18:58
prometheanfireit's longer18:58
mriedemsure18:58
prometheanfiremriedem: seems to fix the exit code problem too18:58
mriedemyeah b/c you're not getting a nova exception18:58
*** itlinux has quit IRC18:58
mriedemso it kills the command18:59
mriedemi expect my patch to make devstack fail18:59
prometheanfireI don't think it's that helpful (my paste)18:59
prometheanfirehttp://paste.openstack.org/show/729440/18:59
prometheanfirebut it's there anyway18:59
mriedemyeah it's what i'd expect18:59
mriedemi'll push the real fix in a follow up and then squash them18:59
mriedemso we can see the failure in devstack and the fix in the follow up19:00
prometheanfirewfm, going to lunch19:01
prometheanfireyou should too19:01
mriedemeating while typing19:04
mriedemthe healthy way19:04
prometheanfire:D19:06
openstackgerritJay Pipes proposed openstack/nova-specs master: Standardize CPU resource tracking  https://review.openstack.org/55508119:08
openstackgerritJay Pipes proposed openstack/nova-specs master: allow transferring ownership of instance  https://review.openstack.org/59959819:19
*** sridharg has quit IRC19:21
*** moshele has joined #openstack-nova19:22
openstackgerritJay Pipes proposed openstack/nova-specs master: allow transferring ownership of instance  https://review.openstack.org/59959819:33
openstackgerritmelanie witt proposed openstack/nova stable/queens: Fix the request context in ServiceFixture  https://review.openstack.org/59976219:41
openstackgerritmelanie witt proposed openstack/nova stable/queens: Honor availability_zone hint via placement  https://review.openstack.org/59976319:41
openstackgerritmelanie witt proposed openstack/nova stable/queens: Improve NeutronFixture and remove unncessary stubbing  https://review.openstack.org/59976419:41
openstackgerritmelanie witt proposed openstack/nova stable/queens: Add functional test for affinity with multiple cells  https://review.openstack.org/59976519:41
openstackgerritmelanie witt proposed openstack/nova stable/queens: Make scheduler.utils.setup_instance_group query all cells  https://review.openstack.org/59976619:41
*** tbachman has quit IRC19:42
*** moshele has quit IRC19:47
mriedemmelwitt: rather than https://review.openstack.org/#/c/599763/ i'd probably just add that small 5 LOC to whatever you need that uses it and mention it in the commit message19:56
melwittmriedem: ok, can do19:57
melwittmriedem: it's a similar deal with the NeutronFixture changes. I could alternatively add a stub_network_* method call to my test. I didn't need it on master because of the NeutronFixture changes19:57
openstackgerritMatt Riedemann proposed openstack/nova master: WIP: Configure placement DB context manager for online_data_migrations  https://review.openstack.org/59982219:59
mriedemhow big would the stub be on the patch in queens?20:00
mriedembut https://review.openstack.org/#/c/599764/ is a bit gross yeah20:01
mriedemprometheanfire: https://dal05.objectstorage.softlayer.net/v1/AUTH_3d8e6ecb-f597-448c-8ec2-164e9f710dd6/pkvmci/nova/44/599744/1/check/tempest-dsvm-full-xenial/9691036/devstacklog.txt.gz shows that failure as expected20:02
mriedem2018-09-04 19:18:56.723 | ProgrammingError: (pymysql.err.ProgrammingError) (1146, u"Table 'nova_cell0.projects' doesn't exist") [SQL: u'SELECT projects.id \nFROM projects \nWHERE projects.external_id = %(external_id_1)s'] [parameters: {u'external_id_1': '00000000-0000-0000-0000-000000000000'}] (Background on this error at: http://sqlalche.me/e/f405)20:02
prometheanfire:D20:03
cdentmriedem: is it normal for there to be NovaExceptions that we would want to only print?20:05
mriedemi'd say no20:05
openstackgerritMatt Riedemann proposed openstack/nova master: Configure placement DB context manager for online_data_migrations  https://review.openstack.org/59974420:05
mriedemi've removed it20:05
cdenthuzzah20:05
mriedem^ is the old squasharoo20:05
mriedemoops guess i missed that in the squash20:06
melwittmriedem: I mean I could add a fake_network.set_stub_network_methods(self) call to my test instead of backporting the changes that removed all of those calls20:08
mriedemmelwitt: you mean like *add* to your backport test what was removed from here right? https://review.openstack.org/#/c/599764/1/nova/tests/functional/db/test_archive.py20:12
mriedemmelwitt: if so, then yes just do that20:12
mriedemthe neutron fixture backport is very weird otherwise20:12
*** med_ has joined #openstack-nova20:13
melwittmriedem: yes, add fake_network.set_stub_network_methods(self) to my backport test20:13
melwittk20:13
openstackgerritMatt Riedemann proposed openstack/nova master: Configure placement DB context manager for online_data_migrations  https://review.openstack.org/59974420:13
mriedemprometheanfire: ^ should make your dreams finally come true20:14
prometheanfiremriedem: mostly, 'nova-status upgrade check' still shows a warning20:15
mriedempaste me20:15
prometheanfirebut online migrations work (with nothing migrated)20:15
prometheanfiremriedem: http://paste.openstack.org/show/729448/20:15
*** alexchadin has joined #openstack-nova20:16
prometheanfiremriedem: I thought sean-k-mooney mentioned that being a false positive or something though20:17
mriedemhmm, how many nodes are shown with nova hypervisor-list?20:19
prometheanfire220:22
mriedemand how many in openstack resource provider list?20:24
prometheanfire220:26
*** holser_ has joined #openstack-nova20:26
mriedemhmm, looks like we have a bug then20:28
mriedemyou want to write that up?20:28
prometheanfiresure20:29
prometheanfireI think this one is not pike only, I think I saw this in queens too20:29
prometheanfiremaybe more, not sure20:29
prometheanfires/pike/rocky20:29
prometheanfireI keep on calling rocky pike20:29
cdentcriminey20:32
mriedemprometheanfire: and list_cells shows cell0 and cell1 right?20:34
prometheanfirehttps://bugs.launchpad.net/nova/+bug/179072120:34
openstackLaunchpad bug 1790721 in OpenStack Compute (nova) "nova-status upgrade check shows warnings when it shouldn't" [Undecided,New]20:34
prometheanfireyep, shows both cells20:34
mriedemi think it's the same bug20:35
mriedemwe're hitting the api db using this placement context manager, but it's not configured for the api db20:35
mriedemso it's hitting cell0 looking for resource providers20:35
cdentblargh. I thought nova-status used the api for placement checks?20:36
prometheanfireok, so partial fix so far then (I think)20:36
mriedemcdent: there is a TODO from me in that same code20:37
cdentah20:37
prometheanfiremriedem: ya, read that :P20:37
mriedemwe do hit the API to check that we can *talk* to placement20:37
prometheanfirein _count_compute_resource_providers20:37
mriedemyeah this one, "Check: Placement API"20:37
mriedemthat makes sure we can at least get to placement20:37
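The provider count that triggers the warning is taken straight from the database through the same placement context manager, along these lines (illustrative sketch; the helper name is an assumption, the real code lives in nova/cmd/status.py):

    import sqlalchemy as sa

    from nova.api.openstack.placement import db_api as placement_db

    def _count_compute_resource_providers():
        # Bound to whatever engine the placement context manager was
        # configured with. If configure() never ran, that's not the API DB,
        # so the check goes looking for resource providers in the wrong
        # database (cell0 in the gate case).
        engine = placement_db.get_placement_engine()  # assumed helper name
        meta = sa.MetaData(bind=engine)
        rps = sa.Table('resource_providers', meta, autoload=True)
        return sa.select([sa.func.count()]).select_from(rps).scalar()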
*** r-daneel_ has joined #openstack-nova20:38
*** r-daneel has quit IRC20:38
*** r-daneel_ is now known as r-daneel20:38
*** itlinux has joined #openstack-nova20:41
openstackgerritmelanie witt proposed openstack/nova stable/queens: Add functional test for affinity with multiple cells  https://review.openstack.org/59976520:42
openstackgerritmelanie witt proposed openstack/nova stable/queens: Make scheduler.utils.setup_instance_group query all cells  https://review.openstack.org/59976620:42
mriedemi should probably add nova-status upgrade check to devstack first, but i think that has to be run *after* the subnodes, if any, are up, which means we need to call back into devstack from d-g20:43
mriedemlike we do with discover_hosts20:43
prometheanfireiirc it's run right before online migrations20:44
prometheanfireat least the upgrade doc makes me think that20:44
mriedemyou can run it on base install too20:44
mriedemto verify the deployment20:44
prometheanfireah20:45
openstackgerritmelanie witt proposed openstack/nova stable/pike: Fix the request context in ServiceFixture  https://review.openstack.org/59983920:45
openstackgerritmelanie witt proposed openstack/nova stable/pike: Add functional test for affinity with multiple cells  https://review.openstack.org/59984020:45
openstackgerritmelanie witt proposed openstack/nova stable/pike: Make scheduler.utils.setup_instance_group query all cells  https://review.openstack.org/59984120:45
openstackgerritmelanie witt proposed openstack/nova stable/pike: Fix the request context in ServiceFixture  https://review.openstack.org/59983920:46
openstackgerritmelanie witt proposed openstack/nova stable/pike: Add functional test for affinity with multiple cells  https://review.openstack.org/59984020:46
openstackgerritmelanie witt proposed openstack/nova stable/pike: Make scheduler.utils.setup_instance_group query all cells  https://review.openstack.org/59984120:46
*** cdent has quit IRC20:47
*** holser_ has quit IRC20:49
*** alexchadin has quit IRC20:51
*** Sundar has joined #openstack-nova20:51
*** eharney has joined #openstack-nova20:51
Sundarmelwitt: Please ping me when you have a moment.20:55
melwittSundar: hi20:56
mriedemprometheanfire: well we'll see if this notices it https://review.openstack.org/59984720:56
Sundarmelwitt: You had asked for a Nova spec for accelerator-related things. The only open question AFAICS is how Cyborg will interact with placement: through virt drivers or by calling placement directly.20:56
SundarMost other aspects are already addressed in the Cyborg/Nova scheduling spec (https://review.openstack.org/#/c/554717/) or in the ongoing os-acc spec (https://review.openstack.org/#/c/577438/).20:56
SundarSo, can the new spec just point to the older specs for those aspects?20:57
prometheanfiremriedem: just had to be sure, but at least that exit code works (got a 1)20:57
SundarMaybe we need a bit more detail on how exactly the virt drivers will invoke os-acc. I can add that to the os-acc spec.20:58
melwittSundar: you can and should add links to the other specs as references in the nova spec, but the nova spec should describe the proposed changes to nova as part of the interaction. the references can be for background reading and then the new spec will detail the nova changes that will be needed and those are what we will review (after reading the referenced other specs that you should add to the References section of the spec)21:00
melwittwe just want to be able to review the proposal for nova changes as a nova spec21:00
* prometheanfire needs to start packaging earlier in the cycle so these bugs are hit in rc still21:01
*** imacdonn has quit IRC21:02
*** imacdonn has joined #openstack-nova21:02
Sundarmelwitt: OK. I'll take a stab. We can iterate from there as needed. Thank you.21:03
melwittcool, thanks21:03
melwittSundar: btw, what day/time are you having the nova/placement interaction session at the cyborg room?21:03
melwittSundar: and would sometime between 11:10am and 12:30pm on thursday work for you for Cyborg/Nova session at the nova room?21:05
*** erlon has quit IRC21:05
*** awaugama has quit IRC21:11
Sundarmelwitt: It doesn't look like the Cyborg times are decided yet! https://etherpad.openstack.org/p/cyborg-ptg-stein21:17
SundarI'll ask Cyborg PTL and get this resolved21:17
SundarI am fine with your proposed time on Thursday21:17
melwittok, just let me know so I can make a note on our etherpad so folks know when to show up at the cyborg room21:18
SundarYes, sure. Thanks!21:18
*** luksky has quit IRC21:28
*** itlinux is now known as itlinux-away21:30
*** itlinux-away is now known as itlinux21:30
*** itlinux is now known as itlinux-away21:30
*** itlinux-away is now known as itlinux21:34
*** itlinux is now known as itlinux-away21:35
*** itlinux-away is now known as itlinux21:35
*** itlinux is now known as itlinux-away21:35
mriedemlbragstad: we probably need your keystone eyeballs on this https://review.openstack.org/#/c/599598/3/specs/stein/approved/transfer-instance-ownership.rst@14021:40
mriedemtl;dr should nova be responsible for checking that a given user is in a given project21:40
lbragstadmriedem looking21:40
mriedemi guess GET /v3/users/{user_id}/projects would be pretty easy though21:41
*** munimeha1 has quit IRC21:47
*** itlinux-away is now known as itlinux22:01
*** itlinux has quit IRC22:02
*** Sundar has quit IRC22:06
*** r-daneel has quit IRC22:08
lbragstadmriedem yeah - we have another API like that, too22:11
lbragstadhttps://developer.openstack.org/api-ref/identity/v3/index.html#list-role-assignments-for-user-on-project22:11
lbragstadso long as there is a role returned, then that might be enough for nova http://paste.openstack.org/raw/729458/22:18
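A minimal sketch of that check using keystoneauth1 (the session wiring and the unversioned identity endpoint are assumptions, not pulled from nova):

    from keystoneauth1 import adapter

    def user_has_role_on_project(session, user_id, project_id):
        # session is assumed to be a keystoneauth1 Session authenticated
        # as something allowed to list role assignments (e.g. the nova
        # service user)
        ident = adapter.Adapter(session, service_type='identity',
                                interface='public')
        resp = ident.get(
            '/v3/role_assignments?user.id=%s&scopes.project.id=%s'
            % (user_id, project_id))
        # any role assignment on the project is treated as "user is in
        # the project"
        return bool(resp.json().get('role_assignments'))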
*** priteau has quit IRC22:21
mriedemlbragstad: ok and if nova is configured with admin / service user creds to keystone, can nova get the user information even if the current token in the request context is not for that user?22:21
mriedemi guess it probably depends on what auth nova's keystone creds are configured with22:22
*** mchlumsky has quit IRC22:22
mriedembut we use that today to verify a provided tenant id exists22:22
mriedemfor certain apis that take a tenant id22:22
lbragstadyeah the token used to call that API is going to have to belong to the nova service user22:23
lbragstadright now that policy is protected by rule:admin_required22:24
lbragstadbut - if that's too strict a default for nova, we do have work staged for stein to rework the authorization of that API https://bugs.launchpad.net/keystone/+bug/175066922:25
openstackLaunchpad bug 1750669 in OpenStack Identity (keystone) "The v3 grant API should account for different scopes" [High,Triaged]22:25
lbragstadnova should have access to the token used to make the request via the context object22:27
lbragstadif you're expecting only system administrators to call this API, then you might be able to reuse that token to call the role API in keystone22:28
melwittthat reminds me of a wishlist bug we have open where people would like us to validate the user_id when updating quota limits, same as we validate the project_id currently22:29
openstackgerritMatt Riedemann proposed openstack/nova master: Fix TypeError in nova-manage cell_v2 list_cells  https://review.openstack.org/59986122:30
lbragstadwe have an api for that but it too would require administrator access22:32
lbragstador it would require the nova service user to be an administrator22:32
lbragstadbut, hopefully that's going to be changing soon with https://bugs.launchpad.net/keystone/+bug/174802722:33
openstackLaunchpad bug 1748027 in OpenStack Identity (keystone) "The v3 users API should account for different scopes" [High,Triaged] - Assigned to sonu (sonu-bhumca11)22:33
melwittI see. but the validation of project_id is a non-admin thing?22:33
lbragstadi guess it depends on how you're validating the project id22:33
melwittI have to check. I don't remember how we're doing it22:34
lbragstadare you calling the GET /v3/projects/{project_id}22:34
melwittlemme see22:34
melwittyup, looks like it22:35
lbragstadnice - in that case we do rule:admin_required or project_id:%(target.project.id)s22:35
lbragstadmaking it accessible to admins and users with a role assignment on the project in the path22:35
melwitthttps://github.com/openstack/nova/blob/master/nova/api/openstack/identity.py#L4122:37
lbragstadif that ksa session is built with the nova service user, then it looks like you're already handling the case where nova doesn't have the necessary permissions https://github.com/openstack/nova/blob/master/nova/api/openstack/identity.py#L61-L6822:38
lbragstadso that's good22:38
lbragstadunless you're hoping to switch that case to false eventually22:39
melwitttrying to see if it's built with the nova service user (sorry, this is new to me)22:39
lbragstadhttps://github.com/openstack/nova/blob/master/nova/utils.py#L1189-L1191 ?22:40
melwittyeah, and we're passing ksa_auth=context.get_auth_plugin()22:41
lbragstadlooks like it supports being passed a ksa session and building one from config22:41
mriedemyes https://docs.openstack.org/nova/latest/configuration/config.html#keystone22:41
mriedemit's whatever you configure nova with22:41
mriedemif you don't and we can't verify, we would default to our old behavior which is, meh - hope you know what you're doing admin person!22:42
melwittah, ok22:42
*** erlon has joined #openstack-nova22:42
lbragstadhuh - ok22:43
melwitt        # we don't have enough permission to verify this, so default22:44
melwitt        # to "it's ok".22:44
melwitthaha22:44
lbragstadso - the same creds used by ksm?22:44
lbragstad"move along citizen.. nothing to see here"22:44
melwitthaha22:45
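The pattern being quoted boils down to something like this (paraphrased from nova/api/openstack/identity.py, not the verbatim source):

    import webob.exc

    from nova import utils

    def verify_project_id(context, project_id):
        # talk to keystone using the context's auth plugin; service-user
        # creds may come from the [keystone] section of nova.conf
        adap = utils.get_ksa_adapter(
            'identity', ksa_auth=context.get_auth_plugin(),
            min_version=(3, 0), max_version=(3, 'latest'))
        resp = adap.get('/projects/%s' % project_id, raise_exc=False)
        if resp.status_code == 404:
            raise webob.exc.HTTPBadRequest(
                explanation='Project ID %s is not a valid project.'
                            % project_id)
        if resp.status_code == 403:
            # we don't have enough permission to verify this, so default
            # to "it's ok".
            return True
        return True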
*** spartakos has joined #openstack-nova22:48
melwittshould be whatever is in the [keystone] section of the nova.conf, though I don't see that configured in the gate nova.conf22:52
melwittI see only [keystone_authtoken] for example, in here http://logs.openstack.org/99/584999/5/check/tempest-full/29581fd/controller/logs/etc/nova/nova_conf.txt.gz22:53
lbragstadyeah - that's what we look for in ksm for sure22:54
melwittoh, ok22:54
lbragstadi'm wondering if that gets re-used for nova requests22:54
lbragstadlooks like it - it should be available to nova via the config object22:55
*** rcernin has joined #openstack-nova22:55
melwittmaybe this is how it gets re-used? https://github.com/openstack/nova/blob/master/nova/context.py#L59-L6422:57
melwittwe do that as part of the context.get_auth_plugin() call, if there's no self.user_auth_plugin set https://github.com/openstack/nova/blob/master/nova/context.py#L16022:58
*** itlinux has joined #openstack-nova23:04
*** cfriesen has quit IRC23:06
*** priteau has joined #openstack-nova23:23
*** mlavalle has quit IRC23:33
*** itlinux is now known as itlinux-away23:34
*** brinzhang has joined #openstack-nova23:42
lbragstadaha - interesting23:48
*** itlinux-away is now known as itlinux23:50
