*** edmondsw has joined #openstack-powervm | 01:21 | |
*** edmondsw has quit IRC | 01:26 | |
*** edmondsw has joined #openstack-powervm | 03:09 | |
*** edmondsw has quit IRC | 03:14 | |
*** chhavi has joined #openstack-powervm | 04:08 | |
*** chhavi has quit IRC | 04:39 | |
*** edmondsw has joined #openstack-powervm | 04:58 | |
*** edmondsw has quit IRC | 05:02 | |
*** efried has quit IRC | 06:08 | |
*** efried has joined #openstack-powervm | 06:18 | |
*** chhavi has joined #openstack-powervm | 06:33 | |
*** edmondsw has joined #openstack-powervm | 06:46 | |
*** edmondsw has quit IRC | 06:50 | |
*** edmondsw has joined #openstack-powervm | 08:34 | |
*** edmondsw has quit IRC | 08:39 | |
*** edmondsw has joined #openstack-powervm | 10:22 | |
-openstackstatus- NOTICE: zuul has been restarted, all queues have been reset. please recheck your patches when appropriate | 10:25 | |
*** edmondsw has quit IRC | 10:27 | |
*** chhavi has quit IRC | 11:38 | |
*** chhavi has joined #openstack-powervm | 11:39 | |
*** edmondsw has joined #openstack-powervm | 12:10 | |
*** edmondsw has quit IRC | 12:15 | |
*** chhavi has quit IRC | 13:08 | |
*** chhavi has joined #openstack-powervm | 13:10 | |
*** chhavi has quit IRC | 13:22 | |
*** chhavi has joined #openstack-powervm | 13:23 | |
*** edmondsw has joined #openstack-powervm | 13:51 | |
*** esberglu has joined #openstack-powervm | 14:03 | |
*** esberglu has quit IRC | 14:17 | |
*** esberglu has joined #openstack-powervm | 14:17 | |
*** esberglu has quit IRC | 14:22 | |
*** esberglu has joined #openstack-powervm | 14:32 | |
*** esberglu_ has joined #openstack-powervm | 14:33 | |
*** esberglu_ is now known as esberglu__ | 14:34 | |
*** esberglu has quit IRC | 14:37 | |
*** AlexeyAbashkin has joined #openstack-powervm | 14:53 | |
edmondsw | esberglu__ having connection issues? | 14:54 |
*** esberglu__ is now known as esberglu | 14:54 | |
esberglu | edmondsw: I was, good now though | 14:55 |
*** AlexeyAbashkin has quit IRC | 15:13 | |
esberglu | edmondsw: efried: I'm looking at attaching a vscsi volume on spawn, I'm pretty sure the OOT driver is broken | 15:17 |
efried | esberglu Do you have a test bed? | 15:17 |
esberglu | I'm just messing around with the CLI spawns right now | 15:18 |
esberglu | https://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/tasks/storage.py#L474-L491 | 15:18 |
esberglu | But that doesn't work | 15:18 |
esberglu | Well the IT equivalent doesn't work | 15:19 |
efried | What's the error? | 15:19 |
esberglu | So the bdm parameter is a dictionary roughly in the form in the comment here | 15:20 |
esberglu | https://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/driver.py#L1708-L1731 | 15:20 |
esberglu | So 1st of all self.bdm.volume_id isn't a thing | 15:20 |
*** gman-tx has joined #openstack-powervm | 15:21 | |
esberglu | It has to be self.bdm['connection_info']['data']['volume_id'] | 15:21 |
esberglu | And calling save on a dict doesn't work | 15:21 |
efried | "has to be"? | 15:21 |
efried | Are you sure it's a dict? | 15:21 |
esberglu | efried: Yeah | 15:21 |
esberglu | logged it out | 15:22 |
efried | Nova has lots of objects that are dict-ish but also support attribute indexing. | 15:22 |
efried | IIRC, the class behind bdm is actually a subclass of dict. So it would print out looking as if it's just a dict. | 15:22 |
efried | but may actually have .volume_id | 15:23 |
efried | nova.objects.block_device.BlockDeviceMapping | 15:23 |
esberglu | efried: I got an attribute does not exist error trying to use .volume_id | 15:23 |
efried | okay. | 15:23 |
esberglu | And an error about not being able to call save() on a dict after I changed the code to get the vol id as above | 15:24 |
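The distinction efried raises above (an object that prints like a plain dict but still supports attribute access) can be sketched as follows. This is a minimal illustration only, not Nova's actual `nova.objects.block_device.BlockDeviceMapping` implementation:

```python
# Minimal sketch of a dict subclass that also supports attribute
# access, mimicking the "dict-ish" Nova objects efried describes.
# NOT the real BlockDeviceMapping class -- illustration only.
class AttrDict(dict):
    def __getattr__(self, name):
        # Only called when normal attribute lookup fails.
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

bdm = AttrDict({'volume_id': 'vol-123'})
print(bdm)            # prints like a plain dict, e.g. {'volume_id': 'vol-123'}
print(bdm.volume_id)  # attribute access still works

# A genuine plain dict has no such attribute, so dict(bdm).volume_id
# raises AttributeError -- matching the error esberglu saw, which
# suggests the bdm he received really was a plain dict.
```

The point of the sketch: repr output alone cannot distinguish the two cases, but an `AttributeError` on `.volume_id` can.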
edmondsw | I asked gman-tx to join since an OOT issue should affect PowerVC | 15:24 |
esberglu | efried: edmondsw: Maybe we aren't extracting the bdm correctly? From what I'm looking at in nova.objects.block_device.BlockDeviceMapping we should be able to use .volume_id | 15:47 |
edmondsw | esberglu where do we extract it? | 15:48 |
esberglu | edmondsw: https://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/driver.py#L1699-L1735 | 15:48 |
edmondsw | the bdm you're working with isn't [] is it? | 15:51 |
esberglu | No, it's roughly the dict in the comment | 15:51
edmondsw | I'm not seeing an issue there... pretty straightforward | 15:52 |
edmondsw | esberglu there are a whole bunch of different classes and forms for bdm... seems like we're mixing them inappropriately | 15:56 |
edmondsw | esberglu you'll enjoy what jaypipes says about them here: https://github.com/jaypipes/articles/blob/master/openstack/walkthrough-launch-instance-request.md#prepare-block-devices | 15:58 |
*** AlexeyAbashkin has joined #openstack-powervm | 16:02 | |
*** AlexeyAbashkin has quit IRC | 16:06 | |
esberglu | clear | 16:19 |
esberglu | clear | 16:19 |
esberglu | oops | 16:20 |
esberglu | edmondsw: OK, now I'm more confused than before | 16:20
edmondsw | sorry | 16:20 |
edmondsw | bdms are always confusing | 16:22 |
*** chhavi has quit IRC | 16:38 | |
esberglu | edmondsw: efried: Why do we SaveBDM when attaching volumes on spawn, but not when attaching them after? | 16:43 |
efried | esberglu Haven't a clue. | 16:44 |
efried | I can't even spell BDM | 16:44 |
efried | If you can find thorst, he might be able to shed some light. | 16:44 |
efried | You could also tap gfm, he might know (or know who knows). | 16:44 |
esberglu | Here's the block_device_mapping getting passed into SaveBDM, still trying to piece together where it's coming from | 16:45 |
esberglu | http://paste.openstack.org/show/640973/ | 16:45 |
efried | esberglu What makes you say we don't save BDMs except in spawn? | 16:56 |
efried | esberglu Isn't everything funneled through _add_volume_connection_tasks? | 16:57 |
esberglu | https://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/driver.py#L443 | 16:57 |
esberglu | https://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/driver.py#L508 | 16:57 |
esberglu | spawn calls _add_volume_connection_tasks | 16:58 |
esberglu | https://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/driver.py#L746 | 16:58 |
efried | Yup, got it. | 16:58 |
esberglu | attach_volume adds the ConnectVolume task directly | 16:58 |
efried | You should check with those people I mentioned, but I suspect that is indeed an oversight. | 16:58 |
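The asymmetry being discussed (spawn goes through `_add_volume_connection_tasks`, which saves the bdm, while `attach_volume` adds only the connect task) can be sketched roughly like this. The task names mirror the driver code linked above, but this is a simplified shape of the flows, not the actual nova-powervm taskflow code:

```python
# Simplified sketch of the suspected oversight -- illustration only,
# not the real nova-powervm driver code.
def spawn_flow_tasks(bdms):
    """Spawn path: _add_volume_connection_tasks adds a SaveBDM per volume."""
    tasks = ['CreateLPAR', 'PlugVifs']
    for _bdm in bdms:
        tasks += ['ConnectVolume', 'SaveBDM']
    return tasks

def attach_volume_flow_tasks():
    """Post-spawn attach path: ConnectVolume only, no SaveBDM."""
    return ['ConnectVolume']
```

Under this sketch, any bdm update made by `ConnectVolume` would be persisted on the spawn path but not on the post-spawn attach path, which is why an oversight is suspected.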
gman-tx | For BDMs we need to also pull in Ken | 17:19 |
*** AlexeyAbashkin has joined #openstack-powervm | 17:54 | |
*** AlexeyAbashkin has quit IRC | 17:58 | |
*** burgerk has joined #openstack-powervm | 18:00 | |
edmondsw | esberglu gman-tx we have Ken now | 18:07 |
edmondsw | burgerk any insights here? | 18:07 |
esberglu | burgerk: http://eavesdrop.openstack.org/irclogs/%23openstack-powervm/%23openstack-powervm.2018-01-08.log.html#t2018-01-08T15:17:02 | 18:09 |
esberglu | ^ conversation starts there if you don't have context | 18:10 |
burgerk | so is the question about the BDM itself or if the right calls are being made on it ? | 18:15 |
esberglu | 1st question: is there a reason the bdm is only being saved when attaching volumes on spawn? | 18:16
esberglu | And not being saved when attaching a volume after the spawn | 18:16 |
esberglu | Or is that an oversight? | 18:16 |
burgerk | I would think the BDM should be updated whenever it is changed. | 18:19 |
burgerk | so my guess would be oversight | 18:20 |
burgerk | unless it is being done elsewhere that isn't obvious here | 18:21 |
*** openstackgerrit has joined #openstack-powervm | 18:26 | |
openstackgerrit | Matthew Edmonds proposed openstack/nova-powervm master: cleanup private bdm methods https://review.openstack.org/531874 | 18:26 |
esberglu | burgerk: Okay the other issue. During my testing of the in-tree nova driver, the SaveBDM storage task is failing | 18:28 |
esberglu | https://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/tasks/storage.py#L474 | 18:28 |
esberglu | I haven't had a chance to confirm, but I suspect that nova-powervm also is failing there | 18:29 |
esberglu | The nova block device mapping is pretty convoluted so I haven't been able to figure out where we're going wrong | 18:29 |
esberglu | but I am getting attribute errors trying to access self.bdm.volume_id and call self.bdm.save() | 18:30 |
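One way to tolerate the bdm arriving as either a BlockDeviceMapping-style object or the plain nested dict seen in the paste is a small helper like the one below. This is a hypothetical workaround sketch, not code from the driver; the nested-key path (`connection_info` -> `data` -> `volume_id`) follows the dict form described earlier in the conversation:

```python
def get_volume_id(bdm):
    """Return the volume id from either an object or a plain dict.

    Hypothetical helper for illustration only. Tries attribute access
    first (object form), then falls back to the nested dict layout:
    bdm['connection_info']['data']['volume_id'].
    """
    vol_id = getattr(bdm, 'volume_id', None)
    if vol_id is not None:
        return vol_id
    try:
        return bdm['connection_info']['data']['volume_id']
    except (KeyError, TypeError):
        return None

sample = {'connection_info': {'data': {'volume_id': 'vol-abc'}}}
print(get_volume_id(sample))  # vol-abc
```

A guard like this only papers over the symptom, of course; the underlying question of why a plain dict reaches the task at all still stands.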
edmondsw | esberglu my patch above shouldn't really help here, but I noticed it looking at the code and the messiness annoyed me... | 18:33
esberglu | edmondsw: Sounds good, I will port to the IT implementation | 18:34 |
esberglu | burgerk: http://paste.openstack.org/show/640973/ | 18:35 |
esberglu | ^ that's an example of the bdm parameter | 18:35 |
esberglu | I've been looking through this info to try to see where that is coming from | 18:35 |
esberglu | https://github.com/jaypipes/articles/blob/master/openstack/walkthrough-launch-instance-request.md#prepare-block-devices | 18:35 |
edmondsw | esberglu see also https://docs.openstack.org/nova/latest/user/block-device-mapping.html | 18:42 |
esberglu | edmondsw: I was using the openstack cli to spawn, not the nova cli. Which only has the legacy version documented. I'll try bdm v2 with the nova cli, see if that makes a difference | 18:48 |
esberglu | Same result | 19:04 |
esberglu | burgerk: Do you know what the bdm is used for after the attach? Do we even need to save it? | 19:05 |
esberglu | Without the SaveBDM method I haven't been having any issues with the in-tree nova driver, but that just does basic spawns/deletes | 19:06 |
esberglu | Not sure if it's needed in the more complex cases | 19:06 |
burgerk | if you detach, the BDM would contain the information to find the disk to detach | 19:09 |
esberglu | burgerk: I was attaching and detaching volumes after spawning with no problems though. And the bdm doesn't get saved in that case | 19:10 |
esberglu | At least not by our driver, maybe nova is doing the save? | 19:10 |
burgerk | when you say it doesn't get saved, have you looked in the DB to see there is no entry? | 19:11 |
esberglu | burgerk: No I haven't, just meant that the SaveBDM task isn't added | 19:13 |
burgerk | I would guess it is being done.... somewhere | 19:13
esberglu | burgerk: Yeah it's in the database after the volume attach | 19:21 |
esberglu | So perhaps we don't actually need the SaveBDM task at all? | 19:21 |
burgerk | probably not as long as what is in the DB is correct for your attach | 19:25 |
esberglu | edmondsw: burgerk: For now I'm going to remove the SaveBDM task from the WIP IT driver implementation and do some further testing | 19:29 |
esberglu | Once I get a chance I can get a stack set up with the OOT driver and confirm that | 19:29 |
esberglu | a) SaveBDM is broken there as well | 19:29 |
esberglu | b) We can remove it from nova-powervm | 19:29 |
edmondsw | esberglu sounds good. From what I've looked at I also suspect that is broken / not needed | 19:30 |
esberglu | edmondsw: Idk if you saw, but the pypowervm u-c is W+1 now. As soon as that is in I'm going to do one more manual run with the IT SEA CI patch | 19:32 |
edmondsw | esberglu just test both a) attach during spawn and then detach and b) attach after spawn and then detach | 19:32 |
esberglu | edmondsw: Yep | 19:32 |
edmondsw | esberglu I did see that. Thanks | 19:32 |
esberglu | If that manual run looks good, we can merge and recheck OVS and SEA, then bug the cores for reviews | 19:33 |
edmondsw | yep | 19:33 |
edmondsw | esberglu I think I finally tracked down what actually changes about the BDM during the ConnectVolume task... https://docs.openstack.org/nova/latest/user/block-device-mapping.html | 19:37 |
edmondsw | sorry, wrong link... | 19:37 |
esberglu | edmondsw: You had a mistake in the bdms patch, easy fix | 19:37 |
edmondsw | https://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/volume/vscsi.py#L238 | 19:37 |
edmondsw | so if a save doesn't happen, that is what would be missed. Maybe you can check that in the db | 19:38 |
esberglu | edmondsw: The target_UDID is set in the db | 19:39 |
edmondsw | then something else must be saving it | 19:39 |
openstackgerrit | Matthew Edmonds proposed openstack/nova-powervm master: cleanup private bdm methods https://review.openstack.org/531874 | 19:41 |
edmondsw | esberglu fyi nova feature freeze is 1/18 | 19:54 |
edmondsw | esberglu and mriedem is on vacation that week, so if we can get things in this week it would be better | 19:56 |
*** tjakobs has joined #openstack-powervm | 20:00 | |
esberglu | edmondsw: 1/25 no? | 20:05 |
esberglu | edmondsw: https://releases.openstack.org/queens/schedule.html#q-3 | 20:06 |
edmondsw | esberglu 1/25 for novaclient but 1/18 for non-client per mriedem | 20:06 |
edmondsw | esberglu http://lists.openstack.org/pipermail/openstack-dev/2018-January/125953.html | 20:07 |
edmondsw | esberglu fair to say "Should have 2 of the 3 remaining patches ready for review in the next day or two, and the last later in the week." ? | 20:09 |
edmondsw | I'm replying to that note | 20:09 |
edmondsw | reasoning that SEA and OVS will be ready as soon as the global req and CI changes merge today/tomorrow, and then vSCSI is close | 20:10 |
esberglu | edmondsw: Yep I'm good with that | 20:10 |
edmondsw | esberglu they're big patches, so I went ahead and told mriedem and sdague they could start looking at OVS and SEA | 20:19 |
esberglu | edmondsw: Yep good call | 20:20 |
edmondsw | and I put status in the etherpad from mriedem's note... one of us should remember to update that etherpad when the global req change has merged and the CI is working | 20:21 |
esberglu | edmondsw: Added it to my list | 20:25 |
*** edmondsw has quit IRC | 20:26 | |
*** openstack has quit IRC | 20:38 | |
*** openstack has joined #openstack-powervm | 20:40 | |
*** ChanServ sets mode: +o openstack | 20:40 | |
*** openstackgerrit has quit IRC | 21:03 | |
*** burgerk has quit IRC | 21:28 | |
*** esberglu has quit IRC | 21:53 | |
*** esberglu has joined #openstack-powervm | 22:04 | |
*** chhavi has joined #openstack-powervm | 22:34 | |
*** chhavi has quit IRC | 22:38 | |
*** tjakobs has quit IRC | 22:42 | |
*** apearson__ has joined #openstack-powervm | 23:44 | |
*** apearson_ has quit IRC | 23:46 | |
*** gman-tx has quit IRC | 23:52 |