*** openstack has joined #openstack-powervm | 05:59 | |
*** thorst_ has joined #openstack-powervm | 06:44 | |
*** thorst_ has quit IRC | 06:52 | |
*** thorst_ has joined #openstack-powervm | 07:49 | |
*** thorst_ has quit IRC | 07:56 | |
*** k0da has joined #openstack-powervm | 08:09 | |
*** thorst_ has joined #openstack-powervm | 08:54 | |
*** thorst_ has quit IRC | 09:01 | |
*** thorst_ has joined #openstack-powervm | 09:59 | |
*** thorst_ has quit IRC | 10:06 | |
*** smatzek has joined #openstack-powervm | 10:36 | |
*** thorst_ has joined #openstack-powervm | 11:03 | |
*** thorst_ has quit IRC | 11:11 | |
*** thorst_ has joined #openstack-powervm | 11:53 | |
*** thorst__ has joined #openstack-powervm | 11:55 | |
*** thorst_ has quit IRC | 11:58 | |
*** svenkat has joined #openstack-powervm | 12:12 | |
*** smatzek has quit IRC | 12:13 | |
*** smatzek has joined #openstack-powervm | 12:34 | |
*** kylek3h has joined #openstack-powervm | 12:41 | |
*** edmondsw has joined #openstack-powervm | 12:45 | |
*** kylek3h has quit IRC | 12:47 | |
thorst__ | svenkat: Do you know if you can increase the size of an SSP volume? | 12:58 |
*** thorst__ is now known as thorst_ | 12:58 | |
svenkat | you can resize a volume.. | 13:00 |
svenkat | let me see if there are restrictions on ssp volume.. one moment | 13:00 |
thorst_ | svenkat: I was looking at this: https://www.ibm.com/support/knowledgecenter/POWER7/p7hcg/chbdsp.htm | 13:02 |
thorst_ | but it didn't seem to work for ssp's | 13:02 |
*** apearson has joined #openstack-powervm | 13:08 | |
*** mdrabe has joined #openstack-powervm | 13:10 | |
thorst_ | adreznec: there? | 13:28 |
adreznec | thorst_: Sup | 13:28 |
thorst_ | looking at Ashana's container. It just looks like nothing is in the keystone container at all | 13:28 |
*** Ashana has joined #openstack-powervm | 13:28 | |
adreznec | thorst_: That's... odd. Did it run the keystone role steps during container setup? | 13:30 |
thorst_ | Ashana: do you know? | 13:31 |
Ashana | let me check | 13:31 |
Ashana | in the lxc-container-create.yml it doesn't have that step. But when I ran the openstack-ansible setup-openstack.yml it does run the os-keystone-install.yml | 13:34 |
thorst_ | yeah. So the setup-hosts I think just creates the blank containers. But then the setup-openstack is supposed to put the content in them | 13:35 |
thorst_ | the setup-openstack is what failed...right? | 13:35 |
adreznec | thorst_: Yeah, you're right there. setup-hosts only creates the container shells and configures their networking, etc. It doesn't install any of the OS services into them | 13:36 |
Ashana | yep, that's the setup-openstack that fails | 13:37 |
thorst_ | Ashana: can you run that in the VNC and I'll monitor | 13:37 |
Ashana | yep | 13:39 |
*** tjakobs has joined #openstack-powervm | 13:39 | |
thorst_ | Ashana: well, that fails fast | 13:40 |
adreznec | Ashana: thorst_ How fast is fast? | 13:40 |
thorst_ | adreznec: 3 seconds | 13:40 |
thorst_ | I think I see the problem though | 13:40 |
adreznec | Wow yep, fast | 13:40 |
thorst_ | Ashana: https://github.com/openstack/openstack-ansible/blob/master/playbooks/setup-everything.yml | 13:41 |
thorst_ | did you run 'setup-infrastructure' before 'setup-openstack'? | 13:41 |
Ashana | yes i did | 13:42 |
Ashana | I just ran it again in VNC | 13:43 |
adreznec | Ashana: Not connected, but I assume it succeeded? | 13:44 |
adreznec | When you ran it the first time that is | 13:44 |
Ashana | yes it did | 13:44 |
thorst_ | I see it running now...lets let that finish and see what happens | 13:45 |
thorst_ | it looks like this one could take a while | 13:45 |
*** tjakobs has quit IRC | 13:48 | |
thorst_ | Ashana: do you know roughly how much time setup-infrastructure took the first time? It looks like it's at that 'requirement wheels' step right now...which is where it'll build the x86 ones (I think) and we need it to also include the ppc64le ones | 13:59 |
thorst_ | esberglu: lets for now shut off the ssh test? | 14:03 |
thorst_ | and handle that one in the staging env? | 14:03 |
esberglu | I have a change up already for that. | 14:03 |
esberglu | 3431 | 14:04 |
openstackgerrit | Drew Thorstensen proposed openstack/nova-powervm: Initial LB VIF Type https://review.openstack.org/302447 | 14:04 |
thorst_ | ahh, let me go +2 that one | 14:04 |
openstackgerrit | Merged openstack/nova-powervm: Support override migration flags https://review.openstack.org/329592 | 14:05 |
svenkat | thorst: ssp volume resize is supported. | 14:08 |
thorst_ | svenkat: know the vios command for it? | 14:08 |
svenkat | i will get it … | 14:08 |
svenkat | thorst: looking into ssp cinder driver… | 14:09 |
svenkat | thorst: i see the code called _extend_lu . it is done via k2. | 14:12 |
thorst_ | svenkat: thx. I'll find the vios command. | 14:13 |
*** smatzek has quit IRC | 14:14 | |
*** tjakobs has joined #openstack-powervm | 14:15 | |
thorst_ | svenkat: for future reference: http://www.ibmsystemsmag.com/aix/administrator/virtualization/grow-lu-ssp/ | 14:15 |
svenkat | thorst: thanks. | 14:16 |
thorst_ | Ashana: so I saw you highlight a mariadb install error. | 14:21 |
*** burgerk has joined #openstack-powervm | 14:21 | |
Ashana | Yea I was wondering why that happened and why it was ignored | 14:22 |
thorst_ | maybe because it's already there or something | 14:22 |
thorst_ | but that is odd | 14:22 |
thorst_ | Ashana: so I just kicked off the setup-openstack | 14:25 |
thorst_ | and it is definitely not failing as fast...so we may be through it yet? | 14:26 |
*** tblakeslee has joined #openstack-powervm | 14:26 | |
Ashana | yea it looks like it's going well | 14:26 |
*** erlarese has quit IRC | 14:28 | |
*** arnoldje has joined #openstack-powervm | 14:36 | |
thorst_ | Ashana: so I do eventually expect this to fail | 14:38 |
thorst_ | cause it SHOULD fail on the compute node | 14:38 |
*** kotra03 has joined #openstack-powervm | 14:39 | |
Ashana | yep it just failed on the infra01_glance_container-8587e81d | 14:39 |
*** efried has joined #openstack-powervm | 14:41 | |
thorst_ | I'm not super sure what is going on there | 14:43 |
thorst_ | but I can say...I'm a bit worried cause the disk we made was only 40 GB.... | 14:43 |
adreznec | thorst_: The controller disk? | 14:43 |
thorst_ | adreznec: yeah... | 14:43 |
thorst_ | that's not THIS error | 14:43 |
adreznec | Yeah... that might be too small | 14:43 |
adreznec | I know minimum AIO is 60GB | 14:43 |
adreznec | And this is relatively close to AIO | 14:44 |
thorst_ | hmm...well we're OK for now | 14:44 |
thorst_ | lets work through this then figure out what we may need to do there... | 14:44 |
thorst_ | Ashana: I think you can work through this by lxc-attach to the glance container | 14:45 |
adreznec | Fair enough | 14:45 |
thorst_ | and run that apt-get install command | 14:45 |
adreznec | What failed? | 14:45 |
thorst_ | but be sure to use the --allow-unauthenticated option | 14:45 |
Ashana | alrighty, what am I installing? | 14:46 |
thorst_ | the glance container tried to install a bunch of stuff | 14:46 |
thorst_ | I'll ping you the VNC to look at | 14:46 |
adreznec | Ok | 14:47 |
adreznec | Ah yeah, I bet it's because they're coming from that external repo | 14:49 |
adreznec | I thought they were adding a key for that repo though, similar to what gets done for the novalink repo for example | 14:49 |
adreznec | Ashana: We should double check that the apt keys are getting added properly | 14:50 |
adreznec | That should have taken care of this package auth issue | 14:50 |
*** kylek3h has joined #openstack-powervm | 14:52 | |
*** efried has quit IRC | 14:53 | |
thorst_ | adreznec: where does that auth key get added? | 14:54 |
thorst_ | is that something we should propose | 14:54 |
*** efried has joined #openstack-powervm | 14:54 | |
*** tblakeslee has quit IRC | 15:01 | |
*** tblakeslee has joined #openstack-powervm | 15:04 | |
adreznec | thorst_: Sorry, stepped away to talk to Taylor quick | 15:08 |
adreznec | It should be getting done as part of https://github.com/openstack-ansible/ansible-role-mariadb/blob/35d34b6bfb5e3aeda34053437d7a017542388d4d/tasks/install.yml#L13 I believe | 15:08 |
*** tjakobs has quit IRC | 15:09 | |
thorst_ | adreznec: but does that run on every container? Remember this was the glance container that failed | 15:09 |
adreznec | That's just the general role, so it should run in any container that's calling into the mariadb install | 15:09 |
adreznec | Unless they're not using the generic role in the glance container for some reason | 15:09 |
adreznec | Wait, ignore my previous link. I forgot that they're using Galera to provide mariadb with OSA iirc | 15:11 |
adreznec | I need to look at the glance role quick... | 15:11 |
*** kotra03 has quit IRC | 15:16 | |
adreznec | Yeah, so the repo comes out of https://github.com/openstack/openstack-ansible-galera_client/blob/d394277b2194554c057cf3e429e4d1672b4ed04a/vars/ubuntu-16.04.yml#L20 by default, but I'm not seeing any apt-key setup within that role | 15:16 |
adreznec | I'm wondering if that's supposed to be handled as part of the galera_client_gpg_keys in there | 15:19 |
adreznec | But it's not happening for some reason | 15:19 |
thorst_ | adreznec: wonder if we need to propose a change up for that? | 15:27 |
adreznec | thorst_: Yeah idk... I mean this is obviously working in the gate, so I'm not sure if this is just something that's broken in our environment or what | 15:28 |
adreznec | Or if the gate image somehow gets that key another way | 15:28 |
Ashana | So I think I installed all the unauthenticated packages; should I try running it again? The packages were mysql-common, libmysqlclient18, mariadb-common, libmariadbclient18, libmariadbclient-dev, mariadb-client-core-10.0, mariadb-client-10.0, and mariadb-client | 15:31 |
*** openstackgerrit has quit IRC | 15:34 | |
*** openstackgerrit has joined #openstack-powervm | 15:34 | |
thorst_ | Ashana: yea, try it again | 15:41 |
*** mdrabe has quit IRC | 15:49 | |
Ashana | thorst_: I have to install those packages on compute | 15:51 |
thorst_ | Ashana: really? where does it say that? | 15:52 |
Ashana | the compute node just failed in the VNC saying those packages are unauthenticated | 15:53 |
thorst_ | ahh, yeah | 15:54 |
thorst_ | but compute won't have containers (at least it SHOULDN'T) | 15:54 |
Ashana | that's true. And this task is still hanging on | 15:54 |
*** smatzek has joined #openstack-powervm | 16:00 | |
*** mdrabe has joined #openstack-powervm | 16:11 | |
*** seroyer has joined #openstack-powervm | 16:12 | |
*** k0da has quit IRC | 16:19 | |
*** tblakeslee has quit IRC | 16:22 | |
*** tblakeslee has joined #openstack-powervm | 16:23 | |
thorst_ | Ashana: sorry .... had to step away a bit. Should I be looking at anything for the environment now? | 16:35 |
*** tblakeslee has quit IRC | 16:37 | |
*** kotra03 has joined #openstack-powervm | 16:43 | |
efried | thorst_, svenkat: Care to talk through some of the design details of vNIC creation methods in pypowervm? | 17:02 |
svenkat | sure.. i was able to create a couple of vnic ports on an active lpar using the hmc on our dev novalink. | 17:08 |
efried | wohoo! | 17:09 |
Ashana | No, it was just the compute node that failed, so should I install those packages on there, thorst_? | 17:09 |
efried | svenkat, Can you get an XML dump | 17:09 |
efried | ? | 17:09 |
svenkat | xml dump using pvmctl lpar list --xml ? | 17:10 |
Ashana | thorst_: No it was just that the compute node fail so I will install those packets on there or should I do that? | 17:10 |
svenkat | in that, the vnic is not showing up | 17:10 |
svenkat | was talking to Nilam about that and he said there is no code yet to pull vnic details in pvmctl | 17:12 |
efried | svenkat, right - we'll have to do it via the K2 REST API. | 17:13 |
svenkat | ok... | 17:13 |
efried | (Can I say "K2"? The HMC REST API.) | 17:14 |
thorst_ | Ashana: Yeah, go ahead and do those installs on the compute node | 17:15 |
thorst_ | sorry, was AFK | 17:15 |
*** seroyer has quit IRC | 17:16 | |
efried | thorst_, svenkat: I've been walking through some of the models for vNIC management in pypowervm. Some interesting questions come out of this. | 17:18 |
svenkat | ok… | 17:19 |
efried | The data model informs a lot of what we can do | 17:19 |
efried | It's like this: | 17:19 |
efried | You create a vNIC via PUT /rest/api/uom/LogicalPartition/{uuid}/VirtualNICDedicated | 17:19 |
efried | The payload, a <VirtualNICDedicated/>, has certain 1:1 properties, and then a list of backing devices. | 17:20 |
efried | The assignable 1:1 properties include: slot designation (specific, or "use next"); PVID & PVID priority; Allowed VLAN IDs; MAC address | 17:21 |
svenkat | ok… looking at vnic gui on hmc simultaneously | 17:22 |
efried | A backing device comprises 1) pointers to (VIOS, adapter, phys port), and 2) capacity %. | 17:22 |
svenkat | ok… the list is complete now.. matches what we have in the hmc ui | 17:23 |
efried | So there's several ways we could allow the user to set up a vNIC in pypowervm. | 17:23 |
efried | Generally all of our wrappers have a .bld() method that takes required and optional params according to all the things you are required/able to set *only* on creation. | 17:24 |
svenkat | ok.. agreed. | 17:25 |
efried | In this case, the slot designation would be the obvious one | 17:25 |
svenkat | should it be driven by last byte of mac as we do now? and range of 32-64, etc? | 17:26 |
svenkat | and pick next for subsequent ports | 17:26 |
thorst_ | svenkat: no, we shouldn't do any of that mac to slot stuff | 17:26 |
thorst_ | that was for P6 way back when | 17:26 |
efried | I'm not worried about that aspect of things. | 17:27 |
thorst_ | we don't do any of that anymore... | 17:27 |
svenkat | ok... | 17:27 |
efried | Right now I'm just trying to figure out how to allow/require the user to set up which parts of the vNIC. | 17:27 |
svenkat | ok… | 17:27 |
efried | I believe the other (non-slot) stuff - PVID/VLAN/MAC stuff - is all settable after the fact; so would not be necessary in .bld(). | 17:28 |
efried | Question is, do we want to allow/require backing devs to be assigned via bld()? | 17:28 |
svenkat | sure. agree. Capacity is needed at build? | 17:28 |
efried | Capacity, as far as I can tell from the schema, is a property on the *backing* devs, not on the outer vNIC thingy. | 17:28 |
efried | I see DesiredCapacityPercentage on the vNIC, but it's designated as ROO (read-only optional). | 17:29 |
efried | This means it can't be set - at least if the schema designation is correct. | 17:29 |
thorst_ | efried: Just because its settable after the fact, doesn't mean that we don't require it on the create | 17:29 |
thorst_ | PVID we should require. Mac, VLANs, Slot should all be optional? | 17:29 |
efried | thorst_, fair enough. We can hash out those details later. I'm going broader strokes right now. | 17:30 |
svenkat | ok… i think a backing device must be mandatory to create a vnic. how would you do it otherwise? | 17:30 |
thorst_ | same with capacity...but figuring out how much capacity you have left is kinda a big deal | 17:30 |
efried | To create, yes - you can't PUT until you've got that stuff set up. But I need to be able to set up the backing devs after bld(). The reasons will become apparent in a bit... | 17:30 |
svenkat | ok... | 17:31 |
efried | Back to capacity: On the backing device, I see CurrentCapacityPercentage, which is designated COD (Create-Only Defaulting). | 17:31 |
thorst_ | back in a sec. | 17:32 |
efried | So - again assuming the schema is correct - if you want to designate capacities, it must be done on a per-backing-dev basis (which makes sense). If not specified, it defaults (to the MinimumEthernetCapacityGranularity on the physical port). | 17:32 |
efried | Which, to me, means that it doesn't make a whole lot of sense to have a 'capacity' param in VNIC.bld(). | 17:33 |
svenkat | sure. | 17:33 |
efried | The only way that could possibly work is if you *also* specified the backing devs to bld() (whereupon we would use that capacity for all of 'em) - otherwise it would be ignored, which would be confusing. So I say we do *not* have a capacity param in bld. | 17:34 |
svenkat | ok… | 17:34 |
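To make the payload shape efried just walked through concrete, here is a minimal, self-contained Python model of it. These dataclasses and field names are illustrative placeholders for the design being discussed, not pypowervm wrappers or the eventual bld() signature; per the discussion, PVID is kept required and capacity lives on each backing device rather than on the outer vNIC.

```python
# Illustrative model only: placeholders for the vNIC design under discussion,
# not pypowervm wrappers or the final bld() signature.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class BackingDev:
    vios_uuid: str
    sriov_adap_id: int
    pport_id: int
    # Capacity belongs to the backing device, not the outer vNIC; None means
    # the platform defaults it (to the physical port's minimum capacity
    # granularity).
    capacity_pct: Optional[float] = None


@dataclass
class VNICSpec:
    pvid: int                        # required up front
    slot_num: Optional[int] = None   # None == "use next available"
    mac_addr: Optional[str] = None
    allowed_vlans: List[int] = field(default_factory=list)
    back_devs: List[BackingDev] = field(default_factory=list)


# Example: one vNIC, redundant across two backing devices on different VIOSes.
spec = VNICSpec(pvid=1, back_devs=[
    BackingDev('vios1-uuid', sriov_adap_id=1, pport_id=0, capacity_pct=4.0),
    BackingDev('vios2-uuid', sriov_adap_id=2, pport_id=1),
])
```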
efried | So before we decide whether to *allow* backing devices to be specified to bld() - let's walk through the reasons you must at least have the *ability* to specify them after the fact. | 17:35 |
svenkat | sure | 17:35 |
efried | Sums up easily as: automatic/dynamic anti-affinity. | 17:35 |
svenkat | ok.. so setup vnic using bld() and then pick which adapter/pp it's attached to… | 17:36 |
svenkat | but all of these are supposed to happen in plug itself, isn't it? | 17:36 |
efried | plug is not the only usage scenario. | 17:36 |
svenkat | ok… | 17:37 |
efried | I believe plug will want to use the automatic/dynamic anti-affinity every time. | 17:37 |
svenkat | yes. | 17:37 |
efried | I believe that algorithm will accept (vnic_wrapper, pports, vioses, min_redundancy, max_redundancy) | 17:38 |
efried | it will update the vnic_wrapper's backing devices. | 17:38 |
efried | If it can't find enough VF-ness to satisfy min_redundancy, error. | 17:38 |
svenkat | ok… but to be clear, what other scenario other than plug is involved here to create a vnic? | 17:39 |
efried | And it will assign at most min(max_redundancy, len(pports)) backing devs. | 17:39 |
efried | pvmctl | 17:39 |
efried | So the anti-affinity algorithm will try to figure out the optimal distribution of pports across cards & VIOSes, whereupon it will create the backing dev element wrappers and stuff 'em onto the vnic_wrapper. | 17:40 |
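As a sanity check on the algorithm efried is describing, here is a simplified, self-contained sketch of the anti-affinity selection: it spreads picks across cards and VIOSes, caps the result at min(max_redundancy, len(pports)), and errors if min_redundancy cannot be met. PPort and its fields are stand-ins, not pypowervm types, and the real implementation may weigh free capacity differently.

```python
# Simplified illustration of the anti-affinity selection described above;
# not the actual pypowervm algorithm.  PPort stands in for whatever wrapper
# carries (VIOS, card, port, free capacity) information.
from collections import namedtuple

PPort = namedtuple('PPort', ['vios_uuid', 'card_id', 'port_id', 'free_pct'])


def pick_backing_pports(pports, min_redundancy, max_redundancy,
                        needed_pct=None):
    """Pick at most min(max_redundancy, len(pports)) ports, spreading the
    picks across cards and VIOSes; raise if min_redundancy can't be met."""
    target = min(max_redundancy, len(pports))
    candidates = [p for p in pports
                  if needed_pct is None or p.free_pct >= needed_pct]
    chosen, used_cards, used_vioses = [], set(), set()
    # Greedy passes: prefer a port on an unused VIOS *and* an unused card,
    # then an unused VIOS, then anything that's left.
    rules = (lambda p: p.vios_uuid not in used_vioses and
                       p.card_id not in used_cards,
             lambda p: p.vios_uuid not in used_vioses,
             lambda p: True)
    for rule in rules:
        for port in sorted(candidates, key=lambda p: -p.free_pct):
            if len(chosen) >= target:
                break
            if port not in chosen and rule(port):
                chosen.append(port)
                used_cards.add(port.card_id)
                used_vioses.add(port.vios_uuid)
    if len(chosen) < min_redundancy:
        raise ValueError("not enough physical port capacity to satisfy "
                         "the minimum redundancy requested")
    return chosen


# e.g. two ports on the same card/VIOS plus one elsewhere: the picks land
# on different cards and different VIOSes.
ports = [PPort('vios1', 1, 0, 80.0), PPort('vios1', 1, 1, 60.0),
         PPort('vios2', 2, 0, 50.0)]
print(pick_backing_pports(ports, min_redundancy=2, max_redundancy=2))
```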
svenkat | why would pvmctl create a vnic without a backing device? (i guess there will be a pvmctl command to attach a backing device to a vnic). but.. | 17:40 |
svenkat | ok… so the algorithm will expect vnics to be created already. | 17:41 |
efried | no, pvmctl will need to be able to create the whole vnic in one whack. | 17:41 |
efried | No | 17:41 |
svenkat | ok.. | 17:41 |
efried | You can't create a vnic on the server without specifying all of this stuff. | 17:41 |
efried | Though I'm actually not sure whether you can modify the list of backing devices on the fly - can you see any indication of that one way or another in the HMC GUI? | 17:41 |
svenkat | let me look to see if there is any edit facility | 17:42 |
efried | I'm asking Nilam too. | 17:42 |
svenkat | vnic modify lets me update only the port vlan id | 17:42 |
svenkat | everything else is readonly | 17:43 |
efried | According to seroyer, it is possible. | 17:44 |
efried | Which makes for some interesting possibilities. | 17:45 |
efried | Like: creating with one backing device, then adding others after the fact. | 17:45 |
svenkat | ok.. adding will result in redundancy scenario only (like moving from 1 backing device to multiple) | 17:46 |
efried | svenkat, thorst_: So back to the pypowervm usage model: we certainly need the ability to modify the backing dev list on an existing vnic wrapper. | 17:54 |
Ashana | thorst_: so I installed the packages on the compute node, but I get "No package matching 'libmariadbclient-dev' is available" on 16.04 ubuntu, so the compute node keeps failing since that package isn't available | 17:55 |
efried | I don't see any harm in allowing a list of backing devs to be passed to VNIC.bld(). | 17:55 |
svenkat | ok… this is mainly due to the need for redundancy support, is that right? | 17:55 |
efried | svenkat, for sure. | 17:55 |
svenkat | ok… | 17:55 |
efried | But I've been saying that I'm not planning to treat redundancy as a separate usage model in pypowervm. | 17:55 |
efried | You have a list of backing devices. | 17:55 |
efried | If it's a list of one, no redundancy. | 17:56 |
svenkat | yes.. i agree. | 17:56 |
efried | More than one, redundancy. | 17:56 |
efried | If you create the list yourself, you get whatever anti-affinity you can come up with. | 17:56 |
*** seroyer has joined #openstack-powervm | 17:56 | |
svenkat | so you start with no backing device, then add one . so far this is not redundancy.. | 17:57 |
efried | If you let our special algorithm get after it, we figure out the optimal anti-affinity and set up the backing devs for you. | 17:57 |
svenkat | then you add one more, it becomes redundant vnic | 17:57 |
svenkat | did i say it right | 17:57 |
efried | svenkat, that's for the (potential, brainstorm-level, proposed) pvmctl model. | 17:57 |
efried | I don't necessarily think that's the right model for the community code. | 17:57 |
svenkat | ok.. | 17:57 |
efried | Because we generally want to avoid making more REST calls than we have to. | 17:57 |
efried | BUT | 17:57 |
seroyer | efried: I missed part of this discussion, but you can’t start with no backing device. You must start with one backing device. There is no concept of vNIC without a backing device. | 17:58 |
efried | we have to consider out-of-band changes, e.g. saturation of a pport we wanted to use as a backing device. | 17:58 |
efried | (seroyer, understood) | 17:58 |
efried | That would cause the whole PUT op to fail. | 17:58 |
efried | So we would have to redrive from the start. | 17:58 |
efried | Which may be fine | 17:58 |
efried | Or we may prefer to add one at a time, absorbing failures and moving on to the next candidate, until we've reached the desired redundancy level. | 17:59 |
efried | It's going to be a matter of whether we want to try to assess the port saturation stuff on the client side (would still need to handle failures from the server), or just abdicate all of the validation to the server. | 17:59 |
thorst_ | efried: I think we do have to know about other usage...but we don't have to deal with an admin adding a pport to one of our VMs' vnics? | 18:00 |
*** apearson has quit IRC | 18:01 | |
efried | thorst_, I'm not so much concerned about that as about multiple nova threads going on at once. | 18:02 |
efried | My thread looks at capacities and thinks we've got enough to create a vnic with a particular set of ports. | 18:03 |
efried | But meanwhile, seventeen other VMs got created and saturated that port. | 18:03 |
efried | So our PUT will fail because we have stale info. | 18:03 |
efried | But this isn't a situation where etags work for us. | 18:03 |
efried | Because we're not PUTting on the pport. | 18:03 |
efried | So no matter how recently we did our GET, we could still have out-of-date information. | 18:03 |
efried | So need to handle server-side errors in some consistent and reliable way. | 18:04 |
efried | We basically have two choices: all-or-nothing and one-at-a-time. | 18:04 |
thorst_ | efried: Agree... | 18:05 |
thorst_ | but I think we can put that against REST somehow | 18:05 |
efried | In all-or-nothing, we put together the whole package of backing devs at once, and if we get back a server failure (aside: need to be able to identify that failure *specifically* as "you tried to exceed the capacity of a pport") then we need to rebuild the whole package and try again. | 18:05 |
efried | One-at-a-time is simpler: We create with the first one and then add one at a time; and on each iteration, if we get a failure (aside: requires same error identification as above), we move on to the next possible candidate; iterate until we've satisfied our redundancy requirement. | 18:06 |
efried | Strike "is simpler". There's some edge cases that could be pretty hairy there. Like if we run out of pport candidates, do we go back and try to iterate over the list again? After all, they could have freed up some capacity while we've been diddling around. | 18:07 |
thorst_ | efried: KISS | 18:08 |
efried | I feel like all-or-nothing could allow us to take advantage of our @retry decorator (though not, alas, our FeedTask infra). | 18:08 |
thorst_ | fail the whole op | 18:08 |
thorst_ | lets see how often this fails | 18:08 |
*** tblakeslee has joined #openstack-powervm | 18:08 | |
thorst_ | I doubt it'll be often... | 18:08 |
thorst_ | this seems 'edge' | 18:08 |
efried | thorst_, but at least make an attempt to use unsaturated ports, presumably. | 18:09 |
efried | But I agree, definitely KISS to begin with. We can screw with the fine details later. | 18:10 |
efried | Ergo I suppose we don't need the deterministically-identifiable REST failure, at least at the start - it'll just blow through as a generic 400/500. | 18:11 |
thorst_ | yeah | 18:11 |
efried | Though it wouldn't be a bad idea to ask for that up front in anticipation, so we don't end up behind. | 18:11 |
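For reference, a bare-bones sketch of the all-or-nothing flow being settled on here. The callables are hypothetical parameters (this is not pypowervm's @retry decorator or FeedTask infrastructure), and the failure is handled generically since no specific "pport saturated" error code is being assumed yet.

```python
# Bare-bones illustration of all-or-nothing: build the full backing-device
# package, PUT once, and on failure refresh state and rebuild the package.
# The callables passed in are hypothetical helpers, not pypowervm APIs.

def create_vnic_all_or_nothing(refresh_pports, pick_backing, create_vnic,
                               min_red, max_red, attempts=3):
    last_exc = None
    for _ in range(attempts):
        # Re-read physical port state on every attempt: another thread may
        # have saturated a port since our last GET, and etags don't help
        # because we never PUT on the pport itself.
        pports = refresh_pports()
        backing = pick_backing(pports, min_red, max_red)
        try:
            # One PUT carrying the whole backing-device list.
            return create_vnic(backing)
        except Exception as exc:  # ideally a specific saturation error
            last_exc = exc
    raise last_exc
```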
*** apearson has joined #openstack-powervm | 18:13 | |
thorst_ | chongshi could probably add that | 18:14 |
efried | thorst_, not sure that's his bailiwick - might be nvcastet's. | 18:18 |
thorst_ | ah | 18:19 |
efried | Or mbuttard. | 18:19 |
svenkat | efried : on another topic, for pci_passthrough_whitelist, we decided to go with a new field physloc. | 18:24 |
efried | correct. | 18:24 |
svenkat | will it be only the location code? | 18:24 |
svenkat | what will the format of it be? adapter id - location code, comma separated? | 18:25 |
efried | I think just the physloc | 18:25 |
efried | Simple | 18:25 |
svenkat | so comma-separated physlocs in one value? | 18:25 |
efried | No | 18:25 |
efried | One whitelist entry per port | 18:25 |
svenkat | in nova.conf, separate physloc=<loc code> entries ? | 18:26 |
efried | pci_passthrough_whitelist = [{"physloc": "xxxx", "physical_network": "foo_net"}, {"physloc": "yyyy", "physical_network": "foo_net"}, | 18:27 |
efried | {"physloc": "aaaa", "physical_network": "bar_net"}, {"physloc": "bbbb", "physical_network": "bar_net"}, ...] | 18:27 |
svenkat | ok… | 18:27 |
svenkat | and about the vif types for vf direct and vnic, last night you pinged us about there being no free-form fields in glance metadata | 18:28 |
efried | Right, which doesn't matter for now because not supporting direct VF in community for Newton. | 18:28 |
svenkat | ok.. so vnic is the only path - as PowerVC. | 18:29 |
efried | What's PowerVC? | 18:29 |
efried | Anyway... | 18:29 |
efried | I don't think we need to diddle with the glance metadata at all for the community code. | 18:29 |
svenkat | ok.. | 18:29 |
efried | Because I *think* there will only be the one possibility for a given setup. | 18:30 |
efried | thorst_ may want to weigh in here. | 18:30 |
efried | It would be technically possible for us to have the same network set up to allow either SEA or SRIOV at the same time. | 18:30 |
efried | Whereupon we would conceivably make the decision on the fly based on the glance metadata. | 18:31 |
efried | But not sure we want to support all of that out of the gate. | 18:32 |
efried | Something to put in the blueprint, though, svenkat - assuming thorst_ agrees it is technically possible. | 18:36 |
svenkat | sure. | 18:38 |
efried | Actually, SRIOV-backed SEA support is a definite topic for the blueprint. This should definitely be supported. But as with existing SEA-backed support, you would need to set it up beforehand. | 18:38 |
svenkat | yes. agree. | 18:38 |
efried | The setup would look like: Configure VF promisc ports (via pvmctl) and assign to VIOS(es); create SEAs using those VFs. And from that point on, it's the same as today. | 18:39 |
svenkat | yes. | 18:39 |
efried | Should be no code changes to support that. | 18:40 |
svenkat | yes. agree. | 18:40 |
efried | (no code changes in community - obviously pvmctl will have to support VF create-and-assign-to-VIOS.) | 18:40 |
efried | (But we knew that anyway - for installer etc.) | 18:41 |
thorst_ | we should tolerate SEA being backed by SR-IOV...but not provision it from OpenStack | 18:47 |
openstackgerrit | Eric Berglund proposed openstack/nova-powervm: DNM: CI Check2 https://review.openstack.org/328317 | 18:52 |
*** kotra03 has quit IRC | 19:02 | |
*** apearson has quit IRC | 19:04 | |
*** seroyer has quit IRC | 19:06 | |
*** apearson has joined #openstack-powervm | 19:06 | |
*** seroyer has joined #openstack-powervm | 19:06 | |
thorst_ | adreznec: I think efried wants you to be the +2 here - https://review.openstack.org/#/c/302447/ | 19:10 |
efried | thorst_, adreznec: for sure. | 19:11 |
efried | I +1ed. | 19:12 |
thorst_ | and I'm definitely looking for a +2 today if possible :-) | 19:12 |
efried | thorst_, tested? | 19:12 |
thorst_ | efried: yep | 19:12 |
efried | coo | 19:13 |
thorst_ | more iterations later...but it's become a dependency for other things | 19:13 |
*** tblakeslee has quit IRC | 19:47 | |
*** apearson has quit IRC | 20:01 | |
*** apearson has joined #openstack-powervm | 20:04 | |
*** apearson has quit IRC | 20:11 | |
*** apearson has joined #openstack-powervm | 20:11 | |
*** tblakeslee has joined #openstack-powervm | 20:14 | |
*** openstackstatus has joined #openstack-powervm | 20:19 | |
*** ChanServ sets mode: +v openstackstatus | 20:19 | |
*** k0da has joined #openstack-powervm | 20:42 | |
*** k0da has quit IRC | 20:52 | |
*** svenkat has quit IRC | 20:53 | |
*** k0da has joined #openstack-powervm | 20:56 | |
tpeoples | thorst_: https://github.com/openstack/ceilometer-powervm/blob/master/ceilometer_powervm/compute/virt/powervm/inspector.py#L39 "This code requires that it is run on the PowerVM Compute Host directly." for the ceilometer_powervm inspector. Would you be opposed to a patch that either allows the powervm inspector to take a session object or allow it to use the | 21:02 |
tpeoples | CONF settings instead of assuming localhost for the session? | 21:02 |
tpeoples | (FWIW, I have it running outside of the compute host directly and it works fine) | 21:02 |
thorst_ | tpeoples: hmm...we have not done that elsewhere | 21:10 |
thorst_ | let me think it through a little bit...we've been against that generally because you should be on the node itself | 21:10 |
thorst_ | but that may be more nova/neutron? | 21:10 |
thorst_ | I need to run...will get back to you | 21:11 |
*** thorst_ has quit IRC | 21:11 | |
*** thorst_ has joined #openstack-powervm | 21:11 | |
*** smatzek has quit IRC | 21:14 | |
*** thorst_ has quit IRC | 21:16 | |
*** thorst_ has joined #openstack-powervm | 21:23 | |
*** mdrabe has quit IRC | 21:25 | |
*** thorst_ has quit IRC | 21:28 | |
*** Ashana has quit IRC | 21:31 | |
*** tblakeslee has quit IRC | 21:32 | |
*** tblakeslee has joined #openstack-powervm | 21:35 | |
*** Ashana has joined #openstack-powervm | 21:38 | |
openstackgerrit | Eric Berglund proposed openstack/nova-powervm: DNM1 https://review.openstack.org/300231 | 21:40 |
openstackgerrit | Eric Berglund proposed openstack/nova-powervm: DNM: Test Change Set 2 https://review.openstack.org/300232 | 21:40 |
*** Ashana has quit IRC | 21:42 | |
*** thorst_ has joined #openstack-powervm | 21:44 | |
*** Ashana has joined #openstack-powervm | 21:44 | |
*** thorst__ has joined #openstack-powervm | 21:46 | |
*** burgerk has quit IRC | 21:46 | |
*** Ashana has quit IRC | 21:48 | |
*** thorst_ has quit IRC | 21:49 | |
*** Ashana has joined #openstack-powervm | 21:50 | |
*** thorst__ has quit IRC | 21:50 | |
*** Ashana has quit IRC | 21:55 | |
*** Ashana has joined #openstack-powervm | 21:55 | |
*** Ashana has quit IRC | 22:00 | |
*** Ashana has joined #openstack-powervm | 22:07 | |
*** Ashana has quit IRC | 22:12 | |
*** Ashana has joined #openstack-powervm | 22:13 | |
*** kylek3h has quit IRC | 22:16 | |
*** seroyer has quit IRC | 22:18 | |
*** Ashana has quit IRC | 22:18 | |
*** Ashana has joined #openstack-powervm | 22:19 | |
*** Ashana has quit IRC | 22:23 | |
*** Ashana has joined #openstack-powervm | 22:25 | |
*** Ashana has quit IRC | 22:30 | |
*** Ashana has joined #openstack-powervm | 22:31 | |
*** Ashana has quit IRC | 22:35 | |
*** Ashana has joined #openstack-powervm | 22:37 | |
*** arnoldje has quit IRC | 22:38 | |
*** Ashana has quit IRC | 22:42 | |
*** Ashana has joined #openstack-powervm | 22:43 | |
*** Ashana has quit IRC | 22:47 | |
*** Ashana has joined #openstack-powervm | 22:49 | |
*** k0da has quit IRC | 22:49 | |
*** Ashana has quit IRC | 22:53 | |
*** Ashana has joined #openstack-powervm | 22:55 | |
*** Ashana has quit IRC | 22:59 | |
*** Ashana has joined #openstack-powervm | 23:01 | |
*** Ashana has quit IRC | 23:05 | |
*** Ashana has joined #openstack-powervm | 23:07 | |
*** edmondsw has quit IRC | 23:07 | |
*** Ashana has quit IRC | 23:11 | |
*** Ashana has joined #openstack-powervm | 23:12 | |
*** Ashana has quit IRC | 23:17 | |
*** Ashana has joined #openstack-powervm | 23:18 | |
*** Ashana has quit IRC | 23:23 | |
*** Ashana has joined #openstack-powervm | 23:24 | |
*** Ashana has quit IRC | 23:29 | |
*** Ashana has joined #openstack-powervm | 23:30 | |
*** Ashana has quit IRC | 23:34 | |
*** Ashana has joined #openstack-powervm | 23:38 | |
*** Ashana has quit IRC | 23:42 | |
*** Ashana has joined #openstack-powervm | 23:44 | |
*** Ashana has quit IRC | 23:48 | |
*** Ashana has joined #openstack-powervm | 23:49 | |
*** tblakeslee has quit IRC | 23:52 | |
*** Ashana has quit IRC | 23:53 | |
*** Ashana has joined #openstack-powervm | 23:55 |