*** piet has joined #openstack-lbaas | 00:01 | |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Add L7 jinja template updates https://review.openstack.org/278223 | 00:03 |
sbalukoff | Great... and now gerrit is hanging... | 00:04
sbalukoff | Oh there it goes! | 00:04 |
*** yamamoto_ has joined #openstack-lbaas | 00:05 | |
johnsom | Yeah, I had gerrit give me a 500 too | 00:06 |
rm_work | ok, i'm going to take a break and come back in a couple of hours -- probably by then we'll have things through checks? | 00:07 |
rm_work | which patch was the "new fix" in? sbalukoff | 00:07 |
rm_work | i'll take a look at that now | 00:07 |
rm_work | and are you *fully* rebased yet? I guess missing a couple at the top | 00:08 |
rm_work | err, end | 00:08 |
rm_work | top/bottom so ambiguous | 00:08 |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Add L7 documentation https://review.openstack.org/278830 | 00:09 |
johnsom | rm_work https://review.openstack.org/#/c/265430/34..36/octavia/common/data_models.py | 00:09 |
rm_work | kk | 00:09 |
sbalukoff | rm_work: The very first one in the chain. | 00:09 |
*** manishg_wfh has quit IRC | 00:09 | |
sbalukoff | I have one more to rebase. | 00:09 |
rm_work | kk | 00:10 |
rm_work | my only comment is that I usually do "pool=None" above the if, and don't use an else | 00:11 |
rm_work | but it's just style | 00:11 |
rm_work | :P | 00:11 |
rm_work | I think yours actually does one less operation, technically | 00:11 |
rm_work | so might be better | 00:11 |
rm_work | but this looks sane to me | 00:12 |
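For context, a minimal sketch of the two equivalent guard styles being discussed, using hypothetical `listener`, `pool_repo`, and `session` names (this is an illustration, not code from the patch under review):

```python
def get_default_pool_a(listener, pool_repo, session):
    # Style rm_work describes: initialize first, single if, no else.
    pool = None
    if listener.default_pool_id:
        pool = pool_repo.get(session, id=listener.default_pool_id)
    return pool


def get_default_pool_b(listener, pool_repo, session):
    # Equivalent if/else form (one fewer assignment when the id is set).
    if listener.default_pool_id:
        pool = pool_repo.get(session, id=listener.default_pool_id)
    else:
        pool = None
    return pool
```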
*** ducttape_ has quit IRC | 00:12 | |
sbalukoff | Ok. | 00:12 |
rm_work | i'm tired and babbling | 00:12 |
sbalukoff | I think we are still waiting on a verdict from the gate on whether it fixes the issue we saw. | 00:12 |
rm_work | yep | 00:12 |
rm_work | which is why i'm going to go do stuff and be back in a couple of hours | 00:13 |
sbalukoff | Ok, cool. | 00:13 |
rm_work | i'll join from my desktop in case you need to ping me | 00:14 |
*** manishg_wfh has joined #openstack-lbaas | 00:14 | |
*** yamamoto_ has quit IRC | 00:14 | |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Add listener stats API https://review.openstack.org/281603 | 00:15 |
*** rm_you has joined #openstack-lbaas | 00:15 | |
rm_you | o/ | 00:15 |
sbalukoff | Ok! The chain should now be rebased. | 00:15 |
rm_you | ah k | 00:15 |
rm_you | will go back and start a devstack build then | 00:15 |
sbalukoff | And most of the patches didn't lose their +2's because this was a rebase. | 00:15 |
openstackgerrit | Michael Johnson proposed openstack/octavia: Change HMAC compare to use constant_time_compare https://review.openstack.org/283330 | 00:16 |
rm_work | hmm when i go to https://review.openstack.org/#/c/280478/8 the related changes section looks much smaller | 00:16 |
rm_work | sbalukoff: ^^ | 00:16 |
rm_work | did some lose their topic or something? | 00:16 |
sbalukoff | Er.... | 00:16 |
rm_work | though they should still be listed there if they're in the chain i thought | 00:16 |
johnsom | That is strange | 00:17 |
sbalukoff | WTF? | 00:17 |
sbalukoff | That is really bizarre. | 00:17 |
rm_work | trying to trace the chain now | 00:17 |
johnsom | If I go the other direction, down from mine https://review.openstack.org/#/c/283330/, it looks ok | 00:18 |
rm_work | i'm looking from 278830 | 00:18
sbalukoff | The chain is visible here: https://review.openstack.org/#/c/282288/2 | 00:18 |
sbalukoff | And those look to be the right patch sets. | 00:18 |
sbalukoff | It's only the one at the bottom that seems confused... | 00:19 |
sbalukoff | I wonder if that was related to the gerrit hang I saw earlier. | 00:19 |
*** manishg_wfh has quit IRC | 00:19 | |
rm_work | yeah looks ok | 00:19 |
rm_work | alright | 00:20 |
johnsom | No love | 00:20 |
rm_work | did anything change today client-side or n-lbaas side? | 00:20 |
johnsom | https://jenkins06.openstack.org/job/gate-neutron-lbaasv2-dsvm-listener/338/ | 00:21 |
rm_work | fff | 00:21 |
sbalukoff | Yeah... | 00:21 |
johnsom | Logs are here: http://logs.openstack.org/78/280478/8/check/gate-neutron-lbaasv2-dsvm-listener/219b44d/ | 00:22 |
johnsom | http://logs.openstack.org/78/280478/8/check/gate-neutron-lbaasv2-dsvm-listener/219b44d/logs/screen-o-cw.txt.gz#_2016-02-24_00_09_33_048 | 00:23 |
sbalukoff | Ok, I'm going to restack to try to figure out what's wrong with that tempest test locally. | 00:26 |
rm_you | OH | 00:27 |
sbalukoff | Oh? | 00:27 |
rm_you | that isn't ... | 00:28 |
rm_you | wat | 00:28 |
rm_you | that's the second patch, not the first | 00:28 |
rm_you | the first is 280478 | 00:28 |
rm_you | which is not the one you linked me for the fix | 00:28 |
rm_you | this is a totally different issue | 00:29 |
rm_you | this makes sense | 00:29 |
sbalukoff | Ok, I'm confused. | 00:29 |
rm_you | THAT error is from https://review.openstack.org/#/c/280478/8/octavia/common/data_models.py | 00:30 |
rm_you | on line 278 | 00:30 |
rm_you | which, obviously pool can be None | 00:30 |
rm_you | should have caught that | 00:30 |
sbalukoff | Argh! | 00:31 |
sbalukoff | Ok.. | 00:31 |
sbalukoff | Well... | 00:31 |
sbalukoff | Heh! At least the failing test makes sense now. | 00:32 |
rm_you | yes | 00:32 |
sbalukoff | Let me get on fixing *that* | 00:32 |
rm_you | i went from the end downward when reviewing | 00:32 |
sbalukoff | I really need to get more sleep at night. :P | 00:32 |
rm_you | so my guess is that my eyes had completely glazed over by the time i got here | 00:32 |
rm_you | same >_< | 00:32 |
rm_you | what is amazing though is: *tests caught a bug* :P | 00:33 |
rm_you | that's almost more surprising than the bug being there | 00:33 |
sbalukoff | HAHA! | 00:35 |
sbalukoff | Also part of the reason I want native tempest tests for Octavia. | 00:35 |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Fix model update flows https://review.openstack.org/280478 | 00:36 |
sbalukoff | And here goes the rebase chain again. | 00:36 |
sbalukoff | I'm pretty sure I killed the bug this time. | 00:36 |
sbalukoff | But... again, we shall see. | 00:36 |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Assign peer_port on listener creation https://review.openstack.org/282288 | 00:40 |
rm_you | well, it was a different bug | 00:41 |
rm_you | :P | 00:41 |
*** diogogmt has joined #openstack-lbaas | 00:42 | |
rm_you | ok cool and it fixed the dumbness with the side-panel | 00:43 |
sbalukoff | Err? | 00:43 |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Add L7 database structures https://review.openstack.org/265430 | 00:43 |
rm_you | the "related changes" include everything on the first patch now | 00:44 |
sbalukoff | Oh! Ok. | 00:46 |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Update repos for L7 policy / methods https://review.openstack.org/265529 | 00:46 |
*** crc32 has quit IRC | 00:46 | |
johnsom | sbalukoff I think we need to have the chat about the data model comments tomorrow. I really need to do some internal work I planned to do today. | 00:49 |
sbalukoff | johnsom: Ok, no time to chat now? | 00:49 |
sbalukoff | I can pause this rebasing stuff. | 00:49 |
rm_you | what was the issue you saw johnsom? | 00:49 |
johnsom | I figured you were back on the rebase train | 00:49 |
sbalukoff | I can hold off. I'd like to make sure that fix at the start really fixes tihng. | 00:50 |
sbalukoff | things. | 00:50 |
sbalukoff | Otherwise, I'm back on the train again. | 00:50 |
johnsom | Ok. rm_work it's the comments German and I put on https://review.openstack.org/#/c/265430/34 | 00:50 |
rm_you | ah that | 00:51 |
sbalukoff | johnsom: Ok, I'm assuming you read through my responses. What are your thoughts? | 00:51 |
johnsom | So, starting with line 219 where we put the policy in reject when the pool is removed. | 00:51 |
sbalukoff | Ok. | 00:52 |
johnsom | The user experience is a bit odd here, we are changing the behavior in a maybe hidden way. Would it be better to reject the pool delete if it is in use by a policy? | 00:52 |
sbalukoff | We don't presently reject pool deletes if they're in use by a listener. | 00:53 |
*** Aish has quit IRC | 00:53 | |
sbalukoff | I see this as being similar to that. | 00:53 |
johnsom | But we do reject pool deletes if there is a member in the pool | 00:54 |
rm_you | wait, we do? i guess i didn't test that :P | 00:54 |
rm_you | ah right because nothing cascades yet | 00:54 |
johnsom | I think this is more like deleting the listener when there is a pool attached | 00:54 |
rm_you | I would expect delete pool -> delete members | 00:54 |
sbalukoff | rm_you: Same here. :/ | 00:55 |
sbalukoff | Aah-- we do: | 00:55 |
sbalukoff | pool = orm.relationship("Pool", backref=orm.backref("members", | 00:56 |
sbalukoff | uselist=True, | 00:56 |
sbalukoff | cascade="delete")) | 00:56 |
sbalukoff | That's from models.py | 00:56 |
sbalukoff | In the Member class. | 00:56 |
johnsom | Right, but the api blocks it | 00:56 |
rm_you | yeah so funnily there can never be members when that happens | 00:56 |
rm_you | i remember this from part of the discussion of what xgerman is working on, from the midcycle | 00:57 |
sbalukoff | Aah. | 00:57 |
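For readers following along, here is a self-contained sketch of how that kind of cascade is wired up in SQLAlchemy, with simplified stand-in models rather than the actual Octavia `models.py` classes; only the quoted `relationship`/`backref`/`cascade="delete"` pattern comes from the log above:

```python
import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Pool(Base):
    __tablename__ = 'pool'
    id = sa.Column(sa.String(36), primary_key=True)


class Member(Base):
    __tablename__ = 'member'
    id = sa.Column(sa.String(36), primary_key=True)
    pool_id = sa.Column(sa.String(36), sa.ForeignKey('pool.id'))
    # Deleting a Pool via the ORM also deletes its Members, mirroring the
    # backref/cascade="delete" snippet quoted from models.py above.
    pool = orm.relationship("Pool",
                            backref=orm.backref("members",
                                                uselist=True,
                                                cascade="delete"))
```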
johnsom | What concerns me is deleting the pool can cause requests previously serviced by the policy to go from working to "REJECT" with no indication to the operator deleting the pool that they just caused that. | 00:57 |
rm_you | yeah i think MAYBE we should just block deleting a pool in use? | 00:57 |
rm_you | force them to either update the listener to use a different pool first, or delete it | 00:57 |
sbalukoff | johnsom: It could be an equally bad situation to route requests to the default pool. :/ | 00:57 |
rm_you | sbalukoff: ^^ | 00:58 |
rm_you | so literally just deny a delete on a pool in use by l7 | 00:58 |
sbalukoff | Ok. | 00:58 |
johnsom | Agreed. That was my first idea, but yeah, I think sending the traffic back to the default is not good either. | 00:58 |
johnsom | Ok, so it sounds like we agree on that one. Do you want me to open a bug for that? | 00:59 |
*** ducttape_ has joined #openstack-lbaas | 00:59 | |
sbalukoff | Ok, I think i agree that's probably the best user experience. It's not what Evgeny and I agreed on months ago though, so we'll want the neutron-lbaas stuff to behave the same way. | 00:59 |
sbalukoff | Can we file this as a bug for now? | 00:59 |
rm_you | I would say yes | 00:59 |
sbalukoff | HAHA | 00:59 |
sbalukoff | Wow, were we just on the same wavelength there, johnsom? | 00:59 |
johnsom | Don't freak me out man | 00:59 |
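A rough sketch of what the agreed-on behavior (reject deleting a pool that an L7 policy still references) might look like in the pool delete path; the repository method, exception class, and field names here are hypothetical, not the eventual fix:

```python
class PoolInUseByL7Policy(Exception):
    """Hypothetical stand-in for a proper 'pool in use' API error."""


def check_pool_delete_allowed(session, pool_id, l7policy_repo):
    # Reject the delete instead of silently flipping the policies to REJECT
    # or silently re-routing their traffic to the default pool.
    policies = l7policy_repo.get_all(session, redirect_pool_id=pool_id)
    if policies:
        raise PoolInUseByL7Policy(
            "Pool %s is referenced by L7 policies %s and cannot be deleted" %
            (pool_id, [p.id for p in policies]))
```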
johnsom | Ok, next up is line 281 where a listener update is deleting the pool from the model (not sqlalchemy or db) | 01:00 |
sbalukoff | Right. | 01:01 |
sbalukoff | This is a little more complicated to understand. | 01:01 |
johnsom | This seems like a side effect of doing the listener update. If the model is used in later steps it would not see the pool in pools, right? | 01:01 |
sbalukoff | It's really important to understand that the data model graph being operated on is transient: it exists only as long as the operation doing the update / delete command is executing, and it gets discarded afterward in the flow regardless of whether the operation we were working on succeeded. | 01:02
johnsom | Tell me more about that. You are copying the model somewhere upstream or doing a reload task right after? | 01:03 |
sbalukoff | So, if the model *graph* is used later on, the pool will still be there (linked under loadbalancer and probably other places within the graph), but it won't be listed under the listener.pools list (which is actually what we want-- we just simulated updating that relationship.) | 01:03 |
*** paco20151113 has joined #openstack-lbaas | 01:03 | |
sbalukoff | Ok, let me find exactly where the Listener (data model) update method gets called: | 01:04 |
*** minwang2 has quit IRC | 01:04 | |
sbalukoff | Ok, it's called in model_tasks.UpdateAttributes | 01:05 |
johnsom | Right you changed the way I was doing that. | 01:06 |
*** minwang2 has joined #openstack-lbaas | 01:06 | |
johnsom | It's used to update the model just before pushing a new haproxy.cfg out | 01:06 |
sbalukoff | Right, because the old way assumed the attributes in the model were static attributes. | 01:06 |
sbalukoff | Yes, and that's the *only* place in the code tree where data model update() methods get called. | 01:06 |
sbalukoff | By design, I think-- any other time we manipulate stuff we want the changes to be saved in the database. | 01:07 |
sbalukoff | The problem with the old update method in that task was that, by assuming the attributes were simple and static, we break things when they're actually not. | 01:08
johnsom | Well, let's not argue that mess again. Let me ask some more questions and see if I can understand the point of this remove(old_pool) | 01:08 |
sbalukoff | Ok, that's to keep the data model graph accurately reflecting what would happen to the data model if we were doing this update on the repository. | 01:09 |
sbalukoff | So for example... | 01:09 |
johnsom | So, here is the question. If I make a lister update call, admin down or such, and there is no L7 policy, but a default pool, wouldn't the rendered haproxy be missing it's pool config? | 01:09 |
sbalukoff | Can you point me to the line numbers you're looking at? | 01:10 |
johnsom | 281-287, actually I have to change that slightly, the user is updating the listener to have a different default pool, as opposed to admin down | 01:11
sbalukoff | Right. | 01:11 |
sbalukoff | So, on lines 282-284, I figure out which pools are actually referenced by active l7 policies... | 01:12 |
sbalukoff | 285 I save the old_pool | 01:12 |
sbalukoff | 286 I see whether the old_pool is one that is referenced by an l7 policy, and if it *is*, i *don't* remove it from the list. | 01:13 |
sbalukoff | If the old_pool is not referenced by an l7 policy, then we know it was only ever referenced as the default_pool, so we will want to remove it from the listener.pools list. | 01:13 |
sbalukoff | Plus, right after, we remove the current listener (that is being changed) from the old_pool's listeners list. | 01:14 |
johnsom | I think I see it now, but if the new default is not on an l7 would it not get set? | 01:14 |
*** yamamoto_ has joined #openstack-lbaas | 01:14 | |
sbalukoff | Though, technically I don't think there's any code that dereferences that right now-- but I figured it's better to keep the data model graph completely accurate. | 01:15
johnsom | Ah, ok. I get it now. | 01:15 |
johnsom | I think it is fine | 01:15 |
sbalukoff | Ok. | 01:15 |
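A loose paraphrase of the default-pool update logic described above (patch lines 281-287), written as a standalone function with simplified attribute names; it is a sketch of the described behavior, not the literal patch code:

```python
def update_default_pool(listener, new_default_pool):
    # Pools still referenced by an active L7 policy on this listener must
    # stay in listener.pools even if they stop being the default pool.
    l7_pools = [p.redirect_pool for p in listener.l7policies
                if getattr(p, 'redirect_pool', None) is not None]
    old_pool = listener.default_pool
    if old_pool is not None and old_pool not in l7_pools:
        # Old default was only ever referenced as the default: drop it from
        # the listener's pools list and unlink this listener from it.
        listener.pools.remove(old_pool)
        if listener in old_pool.listeners:
            old_pool.listeners.remove(listener)
    listener.default_pool = new_default_pool
    if new_default_pool is not None and new_default_pool not in listener.pools:
        listener.pools.append(new_default_pool)
```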
johnsom | Ok. Last one, 427 | 01:15 |
sbalukoff | Ok. | 01:16 |
sbalukoff | That's a lot more convoluted. | 01:16 |
sbalukoff | But it's the same basic idea. | 01:16 |
johnsom | Great for the end of the day huh? | 01:16 |
sbalukoff | Haha! | 01:16 |
sbalukoff | Totally. | 01:16 |
sbalukoff | So here's what's up with that: | 01:16 |
sbalukoff | If we are deleting an l7rule, there is a chance that it's the last rule in an l7policy. | 01:16 |
johnsom | Right, following that | 01:17 |
sbalukoff | If that's the case, then the l7policy is going to become "inactive" on the listener, and any pool it references could disappear from the listener.pools list. | 01:17 |
sbalukoff | There's a *lot* of logic to handle that case in the L7Policy update and delete methods... | 01:18 |
johnsom | I guess this doesn't really matter as the haproxy would not have an acl to match, so dropping it from the render is the right thing | 01:18 |
sbalukoff | Yes--- | 01:18 |
sbalukoff | Technically, doing a l7policy.delete here means that the listener.l7policies list doesn't have it anymore... | 01:18 |
sbalukoff | But, that's OK because it wouldn't be rendered in the haproxy config anyway. | 01:18 |
johnsom | Ok, I think I have it | 01:18 |
sbalukoff | So... this is really just a shortcut not to have to copy-paste code for managing the l7policy relationships. | 01:19 |
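The gist of the rule-delete case being discussed (patch line 427), again as a loose sketch rather than the actual code; `l7policy`, `l7rules`, and the reused `delete()` call are simplified stand-ins:

```python
def delete_l7rule(rule):
    policy = rule.l7policy
    policy.l7rules.remove(rule)
    if not policy.l7rules:
        # Last rule removed: the policy can no longer match anything, so it
        # is effectively inactive. Reusing the policy-level delete handling
        # keeps listener.l7policies and listener.pools consistent without
        # copy-pasting that relationship-management logic.
        policy.delete()
```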
*** yamamoto_ has quit IRC | 01:19 | |
*** yamamoto_ has joined #openstack-lbaas | 01:20 | |
johnsom | Ok, I think I am good to merge the beast. Those were really the last concerns (other than the gates) | 01:20 |
*** manishg_wfh has joined #openstack-lbaas | 01:20 | |
sbalukoff | Ok. Thank you for letting me fix the minor problems we've uncovered in the last week in follow-up bugs. :) | 01:20 |
sbalukoff | (And hopefully the gate clears now.) | 01:21 |
johnsom | Sure, NP. Sorry you had a heart attack over the -1's | 01:21 |
sbalukoff | I'm still baffled as to how it could have passed before. | 01:21 |
johnsom | I'm not quite the hard-a$$ you were.... | 01:21 |
*** armax has joined #openstack-lbaas | 01:21 | |
sbalukoff | Haha! | 01:21 |
sbalukoff | johnsom: Oh, you're definitely a hard-ass. But for the right reasons: You don't want to introduce stuff that's poorly written or likely to break later on in unusual ways. | 01:22 |
sbalukoff | I think that's a good thing. | 01:22 |
johnsom | It looks like it passed that time | 01:22 |
sbalukoff | Awesome! | 01:22 |
sbalukoff | Ok, thanks for taking time at the end of your day to talk through this stuff. | 01:22 |
sbalukoff | Any other questions before I go heads down on rebasing the rest of this chain after that fix at the start of the chain? | 01:23 |
johnsom | Sure. I'll be here for another hour or two working on the internal stuff. If you need +2's ping me | 01:23 |
johnsom | No, I am good | 01:23 |
sbalukoff | johnsom: Will do! | 01:23 |
johnsom | Tomorrow you can fix the bug in lbaas and I'll +2 that too | 01:23 |
sbalukoff | Heh! | 01:23 |
rm_you | so wait, +2 time on the first patches again? | 01:24 |
sbalukoff | Yeah, I think I will probably be able to sleep better tonight. :) I'll tackle that when I'm fresh in the morning. :) | 01:24 |
rm_you | +2/+A? | 01:24 |
johnsom | rm_you it passed the gate that failed before | 01:24 |
johnsom | scenario is still running | 01:24 |
*** minwang2 has quit IRC | 01:24 | |
sbalukoff | rm_you: I think so, if they're rebased off that first fix in the chain. | 01:24 |
rm_you | ah k | 01:24 |
sbalukoff | rm_you: You might want to wait until things pass tests the first time again. | 01:25 |
rm_you | yeah | 01:26 |
sbalukoff | Ok, I'll be around, rebasing stuff for a bit... | 01:26 |
madhu_ak | fnaval, commented on your scenario test patch | 01:27 |
*** madhu_ak is now known as madhu_ak|home | 01:27 | |
sbalukoff | johnsom: Once the Octavia patches land, and the neutron-lbaas stuff is taken care of so it can also land by next Monday, could you give me some direction on which of the outstanding bugs assigned to me you'd like to see fixed first? | 01:29 |
johnsom | Sure | 01:31 |
johnsom | I have assigned severities | 01:31 |
johnsom | https://bugs.launchpad.net/octavia/+bugs?field.tag=target-mitaka&field.assignee=sbalukoff | 01:34 |
sbalukoff | Ok, cool. | 01:35 |
sbalukoff | Thanks! | 01:35 |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Update repos for L7 rules / validations https://review.openstack.org/276643 | 01:39 |
*** ducttape_ has quit IRC | 01:42 | |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Add L7 api - policies https://review.openstack.org/265690 | 01:43 |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Add L7 api - rules https://review.openstack.org/277718 | 01:45 |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Add L7 controller worker flows and tasks https://review.openstack.org/277768 | 01:46 |
*** Purandar has quit IRC | 01:51 | |
*** piet has quit IRC | 01:53 | |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Add L7 jinja template updates https://review.openstack.org/278223 | 01:57 |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Add L7 documentation https://review.openstack.org/278830 | 01:59 |
*** yamamoto_ has quit IRC | 02:00 | |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Add listener stats API https://review.openstack.org/281603 | 02:05 |
sbalukoff | Whew! | 02:06 |
sbalukoff | Man, I am not going to miss having to rebase that chain. | 02:06 |
sbalukoff | I *think* we're all good there. | 02:06 |
openstackgerrit | Michael Johnson proposed openstack/octavia: Change HMAC compare to use constant_time_compare https://review.openstack.org/283330 | 02:07 |
johnsom | You could review that one as it depends on your patches and already has a +2 | 02:07 |
*** pcaruana has quit IRC | 02:07 | |
sbalukoff | johnsom: Sure! | 02:08 |
johnsom | Of course it has to go through the gate again | 02:08 |
sbalukoff | Ok. | 02:08 |
sbalukoff | rm_work / rm_you: If you want to +2 / +A this (now working) head of the chain... well, we can get the merge train rolling, eh! https://review.openstack.org/#/c/280478/ | 02:09 |
rm_you | just did | 02:09 |
sbalukoff | Nice! | 02:09 |
rm_you | should have minimum of 3 merging | 02:09 |
rm_you | 7 | 02:09 |
rm_you | 7 of them in a row with +A | 02:09 |
rm_you | should all co-gate | 02:09 |
sbalukoff | Wow! | 02:11 |
*** H3y has quit IRC | 02:14 | |
johnsom | I think our project just changed. My last patch has a requirements gate | 02:15 |
rm_you | yep there they go | 02:15 |
rm_you | johnsom: uhhh | 02:15 |
rm_you | johnsom: i did +1 the requirements patch recently | 02:15 |
rm_you | i wonder if it merged | 02:15 |
johnsom | I suspect it will break as our requirements file is out-of-date | 02:16 |
rm_you | it should be ok | 02:16 |
rm_you | it just starts the requirements-update job i think | 02:16
sbalukoff | Well... | 02:17 |
rm_you | where do you see that johnsom | 02:17 |
johnsom | zuul 283330 | 02:17 |
johnsom | there is a releasenotes gate there too | 02:18 |
johnsom | Which I asked about a long time ago, but got no traction, so ignored | 02:18 |
rm_you | hmm | 02:19 |
*** pcaruana has joined #openstack-lbaas | 02:19 | |
johnsom | Well, it passed, so that is good | 02:21 |
sbalukoff | I'm going to re-stack locally here and see if I can figure out how many rules in an l7policy break the haproxy config... | 02:22
sbalukoff | (Should make for a nice diversion so I don't have to think about the gate for a while.) | 02:22 |
rm_you | heh | 02:23 |
rm_you | i was going to ask bedis | 02:23 |
rm_you | i figure he'd just ... know | 02:24 |
sbalukoff | I suspect it's going to be some multiple of 256 bytes on a line. | 02:24 |
sbalukoff | (Probably 1024) | 02:25 |
rm_you | i have a set of stuff i'm waiting to +A too lol | 02:30 |
rm_you | just sitting here watching this | 02:32 |
sbalukoff | It is pretty fascinating. | 02:34 |
*** Purandar has joined #openstack-lbaas | 02:34 | |
johnsom | Wow, really? | 02:35 |
*** kevo has quit IRC | 02:35 | |
johnsom | I just get depressed that the gates are so slow. Granted, we are lucky at the moment | 02:35 |
sbalukoff | It's kind of like watching a rube-goldberg machine. | 02:36 |
johnsom | So true | 02:36 |
sbalukoff | Kind of a "man, we were so lucky to witness that actually working" vibe to it. | 02:36 |
johnsom | hahaha | 02:36 |
johnsom | "I was there for L7" stickers | 02:37 |
sbalukoff | HAHA! | 02:37 |
sbalukoff | Totally! | 02:37 |
openstackgerrit | Merged openstack/octavia: Fix model update flows https://review.openstack.org/280478 | 02:42 |
openstackgerrit | Merged openstack/octavia: Assign peer_port on listener creation https://review.openstack.org/282288 | 02:43 |
sbalukoff | Yay! | 02:43 |
rm_you | IT BEGINS | 02:44 |
*** Purandar has quit IRC | 02:47 | |
*** yamamoto_ has joined #openstack-lbaas | 02:47 | |
openstackgerrit | Merged openstack/octavia: Add L7 database structures https://review.openstack.org/265430 | 02:50 |
openstackgerrit | Merged openstack/octavia: Update repos for L7 policy / methods https://review.openstack.org/265529 | 02:51 |
openstackgerrit | Merged openstack/octavia: Update repos for L7 rules / validations https://review.openstack.org/276643 | 02:51 |
openstackgerrit | Merged openstack/octavia: Add L7 api - policies https://review.openstack.org/265690 | 02:51 |
sbalukoff | It continues! | 02:51 |
*** Aish has joined #openstack-lbaas | 02:51 | |
*** Aish has left #openstack-lbaas | 02:51 | |
*** manishg_wfh has quit IRC | 03:00 | |
*** manishg_wfh has joined #openstack-lbaas | 03:00 | |
*** pcaruana has quit IRC | 03:01 | |
rm_you | LOL | 03:02 |
rm_you | johnsom: jinx | 03:02 |
rm_you | https://review.openstack.org/#/c/278223/ | 03:02 |
rm_you | within like | 03:02 |
rm_you | 5 seconds we both +A'd | 03:02 |
johnsom | Yeah, ha | 03:03 |
openstackgerrit | Franklin Naval proposed openstack/octavia: Octavia: Basic LoadBalancer Scenario Test https://review.openstack.org/172199 | 03:03 |
rm_you | wish i could +A 281603 too but i haven't tested | 03:05 |
johnsom | Yeah, I have the console open watching the test | 03:05 |
openstackgerrit | Merged openstack/octavia: Add L7 api - rules https://review.openstack.org/277718 | 03:05 |
johnsom | rm_work it hit session persistence | 03:06 |
sbalukoff | 281603 is a relatively minor test-- adding functionality that's been in the docs but broken for as long as Octavia has been around. | 03:08
sbalukoff | I put it at the end of the L7 chain because it touches the listener API (as does L7) and didn't want to futz with L7. | 03:09 |
sbalukoff | johnsom: If you want to rebase your patch for doing the time comparison to go before that one, it shouldn't break anything, eh. | 03:09 |
sbalukoff | (And then that can get merged tonight as well.) | 03:10 |
johnsom | Nah, I have faith in the stats stuff testing out ok | 03:10 |
sbalukoff | Ok. | 03:10 |
sbalukoff | Once the last of the L7 stuff merges, I will probably rebase any of the smallish bugfixes I have in the queue that end up in merge conflict. | 03:12 |
rm_you | i might just +A this | 03:13 |
rm_you | it looks fine on review | 03:13 |
*** pcaruana has joined #openstack-lbaas | 03:16 | |
sbalukoff | It's pretty simple. | 03:17 |
*** TrevorV has joined #openstack-lbaas | 03:17 | |
*** TrevorV2 has joined #openstack-lbaas | 03:18 | |
rm_you | sure, why not | 03:20 |
rm_you | it's a gate party | 03:20 |
sbalukoff | Am I smoking crack in the comment I made here? https://review.openstack.org/#/c/283802/ | 03:24 |
rm_you | i think that makes sense | 03:25 |
sbalukoff | Ok. | 03:25 |
rm_you | only one could have "null" if it was unique :P | 03:25 |
rm_you | (and nullable) | 03:25 |
sbalukoff | Yeah, that's one that I apparently missed in the neutron-lbaas shared-pools patch. | 03:26 |
rm_you | i did that before by accident once | 03:26 |
*** TrevorV has quit IRC | 03:27 | |
*** TrevorV2 has quit IRC | 03:27 | |
*** TrevorV has joined #openstack-lbaas | 03:28 | |
TrevorV | rm_work, you online on here homie? | 03:30 |
rm_you | yes | 03:30 |
TrevorV | can you go active in TS for a minute or two | 03:30 |
TrevorV | ? | 03:30 |
rm_you | sec also in another ts | 03:31 |
TrevorV | kk | 03:31 |
johnsom | Ok, I will be away for a bit. If you happen to think about it, can you click rebase on this after the stack is merged? https://review.openstack.org/#/c/283255 | 03:34 |
openstackgerrit | Merged openstack/octavia: Add L7 controller worker flows and tasks https://review.openstack.org/277768 | 03:36 |
sbalukoff | johnsom: Sure! | 03:36 |
*** links has joined #openstack-lbaas | 03:46 | |
openstackgerrit | Merged openstack/octavia: Add L7 jinja template updates https://review.openstack.org/278223 | 03:50 |
openstackgerrit | Merged openstack/octavia: Add L7 documentation https://review.openstack.org/278830 | 03:50 |
*** ducttape_ has joined #openstack-lbaas | 03:58 | |
sbalukoff | Sweet! | 03:59 |
sbalukoff | Almost done with the merges for the evening... then it's rebase time, eh! | 03:59 |
sbalukoff | (for the small bugfixes already in the queue.) | 03:59 |
rm_you | yep | 03:59 |
sbalukoff | Also: Finished running my test: I was able to add 53 rules to an l7policy before haproxy refused to parse the config anymore. | 03:59 |
TrevorV | sbalukoff, your stuff all merged just now | 04:00 |
sbalukoff | I would be extremely surprised if anyone actually does that. | 04:00 |
TrevorV | Now *I* get the focus :D | 04:00 |
sbalukoff | TrevorV: Woo-hoo! | 04:00 |
sbalukoff | TrevorV: Yep! | 04:00 |
sbalukoff | TrevorV: I'm hoping to see the single-create merge this week as well, eh. | 04:00 |
TrevorV | scary being under the spotline, but I think I got this single-create thing licked for Octavia at least | 04:00 |
TrevorV | spotlight*** | 04:00 |
*** pcaruana has quit IRC | 04:01 | |
sbalukoff | Heh! I know the feeling. I'm glad I won't be holding others up anymore with the L7 stuff. Now I'll just be holding them up with bugfixes. :) | 04:01 |
TrevorV | Good plan actually! | 04:01 |
sbalukoff | Though hopefully most of those will be minor. I feel like L7 got a good workout. | 04:01
rm_you | afk for 30m | 04:02 |
TrevorV | If I get the single-create stuff done this week I'll try to put some time in on rounding out those SQLAlchemy issues we've seen | 04:02 |
sbalukoff | TrevorV: Sounds good. | 04:02 |
*** manishg_wfh has quit IRC | 04:03 | |
*** neelashah has joined #openstack-lbaas | 04:04 | |
openstackgerrit | Merged openstack/octavia: Add listener stats API https://review.openstack.org/281603 | 04:05 |
openstackgerrit | Franklin Naval proposed openstack/octavia: Octavia: Basic LoadBalancer Scenario Test https://review.openstack.org/172199 | 04:06 |
openstackgerrit | Merged openstack/octavia: Change HMAC compare to use constant_time_compare https://review.openstack.org/283330 | 04:06 |
sbalukoff | Rad! | 04:07 |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Add a request timeout to the REST API driver https://review.openstack.org/283255 | 04:07 |
sbalukoff | (rebased that per johnsom's request) | 04:07 |
*** allan_h has quit IRC | 04:08 | |
blogan | L7 merge in octavia! | 04:10 |
blogan | sorry I wasn't able to provide much testing/feedback on that stuff, too much going on internally for me, but thanks for all the hardwork you guys | 04:11 |
blogan | and all the scrollback to read :) | 04:11 |
openstackgerrit | Trevor Vardeman proposed openstack/octavia: WIP - Get me a Load Balancer API https://review.openstack.org/256974 | 04:13 |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Fix health monitor URL in API documentation https://review.openstack.org/281126 | 04:15 |
*** pcaruana has joined #openstack-lbaas | 04:15 | |
sbalukoff | blogan: Haha! you're welcome. XD | 04:15 |
*** ducttape_ has quit IRC | 04:15 | |
blogan | sbalukoff: i feel like so useless now :( | 04:16 |
johnsom | Back | 04:16 |
*** fnaval has quit IRC | 04:16 | |
sbalukoff | blogan: Naw-- there are lots of bugs to fix, most of which are small-ish. :) | 04:16 |
johnsom | blogan If it feels better, we still pick on you | 04:17 |
sbalukoff | Haha! | 04:17 |
johnsom | sbalukoff Thanks for the rebase | 04:17 |
sbalukoff | No problem, eh! | 04:17 |
blogan | sbalukoff: oh i don't doubt there are bugs to fix, but at least those can be done after | 04:17 |
blogan | and backported too | 04:17 |
sbalukoff | Yep. | 04:17 |
blogan | johnsom: at least that will never change, that is my legacy | 04:18 |
blogan | that and data models | 04:18 |
sbalukoff | Once Monday's deadline is passed, I plan on going on a smashing spree with great alacrity. | 04:18 |
sbalukoff | Octavia must not suck in Mitaka! | 04:18 |
johnsom | Cool, I think a bunch of them will go quickly | 04:18 |
blogan | well i hope some of the internal stuff gets settled and dies down so i can help out on that, but with the way things have been going i doubt that'll happen soon | 04:20 |
sbalukoff | johnsom: Don't know if you saw, but I found the limit for l7rules in a single l7policy: 53. haproxy stops parsing a config line after 2048 characters. | 04:20 |
sbalukoff | johnsom: That is a ridiculously high number of rules in one policy, so I think maybe we just set the limit at 50 and call it good. | 04:21 |
blogan | sbalukoff: meant to ask you a question, will a pool that is shared between multiple listeners and/or multiple l7policies have different statuses? | 04:21
sbalukoff | If you want I can add a validation for that. | 04:21 |
sbalukoff | blogan: It shouldn't. | 04:21 |
sbalukoff | It's the same DB record. | 04:21 |
blogan | actually i dont think this is a big problem with pools, but will be for listeners, but they'd probably need different statuses | 04:22 |
johnsom | sbalukoff Yes, a limit at 50 sounds like a good and reasonable idea | 04:22 |
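A minimal sketch of the kind of validation sbalukoff offers to add for the 50-rule-per-policy limit; the constant and exception names are hypothetical and the real check may live in the API validation layer:

```python
MAX_L7RULES_PER_POLICY = 50  # hypothetical constant name


class TooManyL7Rules(Exception):
    """Stand-in for a proper API validation error."""


def validate_l7rule_count(l7policy):
    # haproxy was observed to stop parsing the config line past 53 rules
    # (~2048 characters), so cap well below that.
    if len(l7policy.l7rules) >= MAX_L7RULES_PER_POLICY:
        raise TooManyL7Rules(
            "An L7 policy may have at most %d rules" % MAX_L7RULES_PER_POLICY)
```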
sbalukoff | blogan: Ok, we actually haven't done a whole lot of design around status updates and whatnot should work from what I can tell. Maybe we should dig into that deeply and figure out what makes sense? | 04:23 |
sbalukoff | I'm thinking this is something we want to spec out and have discussion about. | 04:23 |
openstackgerrit | Paco Peng proposed openstack/octavia: Fixed invalid IP value get by awk in sample local.sh file Closes-bug: #1549091 https://review.openstack.org/283929 | 04:24 |
openstack | bug 1549091 in octavia "samples/local.sh get IP1 IP2 value not correct with $5" [Undecided,New] https://launchpad.net/bugs/1549091 - Assigned to Paco Peng (pzx-hero-19841002) | 04:24 |
blogan | sbalukoff: yeah probably at some point, it was just something that we had issues in the beginning of the v2 circus | 04:24 |
sbalukoff | Because I think it's always been lightly passed over in other design discussions because the energy is used up on other things when we talk about it. | 04:24
sbalukoff | blogan: yeah, that's what I'm talking about. | 04:24 |
sbalukoff | i remember those discussions, and it was hard enough to get the structure we have... nobody had any big ideas around status... or statistics for that matter. | 04:25 |
blogan | if we had one haproxy process for all listeners it'd be easy to wave away the possible problem, its not a big problem really though | 04:25 |
sbalukoff | I'm happy to revisit that discussion: I'm less vehemently opposed to the idea of one haproxy per LB. | 04:26 |
sbalukoff | However... | 04:26 |
blogan | lol yeah it was, i remember almost going with a DETACHED status | 04:26 |
sbalukoff | we're also not out of the woods if we go there. | 04:26 |
sbalukoff | We already have active-standby (ie. two processes on different hosts) | 04:26 |
sbalukoff | And we should be thinking hard about active-active. | 04:26 |
blogan | good point | 04:26 |
blogan | well | 04:26 |
sbalukoff | Should probably revisit the log shipping discussion at some point, though thankfully we don't have clients begging for that yet. | 04:27 |
blogan | ha yeah | 04:27 |
sbalukoff | (That's another one to ignore until someone needs it enough to write a spec.) | 04:27 |
blogan | we'll have to deal with that towards the last quarter of the year i bet | 04:27 |
sbalukoff | Probably. | 04:28 |
blogan | we as in rackspace | 04:28 |
blogan | but we as in octavia works too | 04:28 |
blogan | bc that'll be Newton | 04:28 |
sbalukoff | Yep. | 04:28 |
blogan | so after L7 and single create call, there's active active right for octavia right? what else? because polishing and stabilizing would be great | 04:29 |
sbalukoff | johnsom: On rule limit: Want me to add a bug for that and assign it to myself? | 04:29 |
johnsom | https://bugs.launchpad.net/octavia/+bug/1549100 | 04:29 |
openstack | Launchpad bug 1549100 in octavia "L7 - HAproxy fails with more than 53 L7 rules" [High,New] - Assigned to Stephen Balukoff (sbalukoff) | 04:29 |
sbalukoff | johnsom: HAHA! | 04:29 |
sbalukoff | Ok! | 04:29 |
sbalukoff | blogan: The nice part about polishing and stabilizing is that that mostly consists of finding bugs, fixing bugs, and writing tests. | 04:30 |
johnsom | blogan Polish, stabilizing, and we should consider HA control plane (i.e. job board) | 04:30 |
*** fnaval has joined #openstack-lbaas | 04:30 | |
sbalukoff | Again, I'm hoping tempest tests land soon, which puts us in a good position to start fleshing out stuff that will ensure things stay polished... | 04:30
blogan | sbalukoff: yeah but also includes stuff like refactoring the whole data model stuff, possibly revisiting the one haproxy process per listener | 04:30 |
sbalukoff | johnsom: +1 | 04:31 |
sbalukoff | blogan: Yeah, that's a larger discussion we need to have. | 04:31 |
sbalukoff | Possibly a good one for the summit, or do you not want to wait that long to have it? | 04:31 |
blogan | summit will be good | 04:31 |
sbalukoff | (summit is ~2 months away.) | 04:31 |
blogan | i just happen to be on right now and yall are listening | 04:32 |
sbalukoff | Haha! | 04:32 |
sbalukoff | And I'm on a dopamine high from not having much less to worry about with next Monday's deadline. XD | 04:32 |
sbalukoff | er.. from having much less... | 04:32 |
sbalukoff | Bah. | 04:32 |
sbalukoff | I'm babbling. | 04:32 |
blogan | at the summit ill bring up my obligatory "should we move away from taskflow" topic :), just for johnsom | 04:33 |
johnsom | Ah, it wouldn't be right if you didn't... | 04:33 |
sbalukoff | johnsom: Gonna re-stack with your REST API timeout patch and let you know how it goes. | 04:33 |
johnsom | I'll try to bring in the big guns again too | 04:33 |
johnsom | Ok, thanks | 04:33 |
blogan | ill be sure to have a sign out front that says "no taskflow cores allowed" | 04:34 |
johnsom | FYI, after next week, it's March 14th for RC1 | 04:34 |
sbalukoff | So, about 2 weeks to get a bunch of bugs fixed. | 04:34 |
TrevorV | Ugh... rebasing changes because I had to tip-toe for sbalukoff .... :P | 04:34 |
TrevorV | Its goin well actually, just taking longer than I wanted ha ha ha ha | 04:35 |
sbalukoff | TrevorV: Sorry! | 04:35 |
johnsom | blogan You know that once you spend time with it, you'll love it. | 04:35 |
johnsom | blogan I bet you will be evangelizing through the halls of castle.... | 04:35 |
TrevorV | johnsom, no he won't... he doesn't even do that about the things he does like. | 04:35 |
TrevorV | Currently I mean | 04:36 |
blogan | johnsom: i've spent enough time with it! | 04:36 |
sbalukoff | Haha! | 04:37 |
blogan | i would be a terrible evangelizer | 04:37 |
sbalukoff | blogan: If you really want to defeat it, you need to come up with a tangible alternative. :) | 04:37 |
blogan | haha chuck e cheese brawl, what has this world come too | 04:37 |
blogan | to | 04:37 |
blogan | sbalukoff: its called normal code structure :) | 04:38 |
openstackgerrit | Trevor Vardeman proposed openstack/octavia: WIP - Get Me A Load Balancer Controller https://review.openstack.org/257013 | 04:38 |
TrevorV | Alright, now to rebuild devstack with new changes so I can manually test by tomorrow morning | 04:43 |
TrevorV | woot | 04:43 |
TrevorV | brb, playing CS:GO | 04:43 |
*** TrevorV has quit IRC | 04:43 | |
sbalukoff | Heh! | 04:43 |
sbalukoff | I'll BBIAB. Just realized the only thing I've eaten today is a bowl of cereal. | 04:44 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/octavia: Updated from global requirements https://review.openstack.org/283935 | 04:44 |
*** piet has joined #openstack-lbaas | 04:52 | |
*** neelashah has quit IRC | 04:55 | |
*** Purandar has joined #openstack-lbaas | 05:00 | |
*** amotoki has joined #openstack-lbaas | 05:05 | |
*** minwang2 has joined #openstack-lbaas | 05:09 | |
*** manishg_wfh has joined #openstack-lbaas | 05:12 | |
rm_you | woo and requirements updates are back! | 05:14 |
*** minwang2 has quit IRC | 05:28 | |
*** manishg_wfh has quit IRC | 05:37 | |
*** pcaruana has quit IRC | 05:38 | |
*** pcaruana has joined #openstack-lbaas | 05:53 | |
openstackgerrit | Merged openstack/octavia: Updated from global requirements https://review.openstack.org/283935 | 06:00 |
*** piet has quit IRC | 06:00 | |
*** kobis has joined #openstack-lbaas | 06:00 | |
*** allan_h has joined #openstack-lbaas | 06:06 | |
*** allan_h has quit IRC | 06:07 | |
*** kevo has joined #openstack-lbaas | 06:09 | |
*** minwang2 has joined #openstack-lbaas | 06:14 | |
*** bana_k has joined #openstack-lbaas | 06:14 | |
*** numans has joined #openstack-lbaas | 06:19 | |
*** openstack has joined #openstack-lbaas | 13:23 | |
*** localloop127 has joined #openstack-lbaas | 13:31 | |
*** 32NAAD4T7 has quit IRC | 13:31 | |
*** rtheis has quit IRC | 13:43 | |
*** rtheis has joined #openstack-lbaas | 13:43 | |
*** amotoki has joined #openstack-lbaas | 13:44 | |
*** rtheis has quit IRC | 13:48 | |
*** rtheis has joined #openstack-lbaas | 13:51 | |
*** rtheis has quit IRC | 13:53 | |
*** rtheis has joined #openstack-lbaas | 13:53 | |
*** rtheis has quit IRC | 13:57 | |
*** rtheis has joined #openstack-lbaas | 13:57 | |
*** nmagnezi has quit IRC | 13:58 | |
*** rtheis has quit IRC | 13:58 | |
*** rtheis has joined #openstack-lbaas | 13:59 | |
ihrachys | dougwig: around? | 14:00 |
*** piet has joined #openstack-lbaas | 14:04 | |
*** neelashah has joined #openstack-lbaas | 14:06 | |
*** aryklein has joined #openstack-lbaas | 14:10 | |
*** nmagnezi has joined #openstack-lbaas | 14:11 | |
aryklein | is there any guide about installing and configuring lbaas v2 (using octavia) from scratch? I don't use Devstack or any of this scripts to deploy openstack. | 14:12 |
*** piet has quit IRC | 14:15 | |
*** piet has joined #openstack-lbaas | 14:16 | |
*** woodster_ has joined #openstack-lbaas | 14:20 | |
*** evgenyf has quit IRC | 14:26 | |
*** Purandar has joined #openstack-lbaas | 14:30 | |
*** rtheis has quit IRC | 14:40 | |
*** rtheis has joined #openstack-lbaas | 14:41 | |
*** diogogmt has quit IRC | 14:42 | |
*** rtheis has quit IRC | 14:42 | |
*** paco20151113 has quit IRC | 14:42 | |
*** rtheis has joined #openstack-lbaas | 14:43 | |
*** diogogmt has joined #openstack-lbaas | 14:44 | |
*** armax has quit IRC | 14:47 | |
*** localloop127 has quit IRC | 14:49 | |
*** Purandar has quit IRC | 14:50 | |
*** TrevorV has joined #openstack-lbaas | 14:56 | |
*** rtheis has quit IRC | 15:00 | |
*** ducttape_ has joined #openstack-lbaas | 15:01 | |
*** rtheis has joined #openstack-lbaas | 15:01 | |
*** armax has joined #openstack-lbaas | 15:01 | |
*** diogogmt has quit IRC | 15:01 | |
*** rtheis has quit IRC | 15:03 | |
*** rtheis has joined #openstack-lbaas | 15:03 | |
dougwig | ihrachys: ack | 15:05 |
ihrachys | dougwig: hey! was writing an email on how octavia manages amphora image updates, but since you are here, would like to run it with you first. | 15:06 |
ihrachys | dougwig: so the goal is: being able to update amphora image without octavia service reconfiguration/restart | 15:07 |
ihrachys | dougwig: you know, currently we hardcode the image ID, so a new image requires all that | 15:07 |
ihrachys | dougwig: my idea is to use glance image tags for that. we would have octavia know the tag used to mark the latest amphora image | 15:07 |
ihrachys | dougwig: and then we'll talk to glance to extract its ID, then pass it into nova | 15:08 |
ihrachys | dougwig: now, the complexity here is that we would then couple octavia with glance (thru glanceclient) | 15:08 |
ihrachys | dougwig: alternative would be having nova do the extraction for us | 15:08 |
dougwig | i think having it use glance would be a good idea. | 15:09 |
ihrachys | dougwig: but I talked it thru with sdague and he is not convinced it's a good idea, at least until glance provides some unique labeling to us | 15:09 |
dougwig | i can only speak for myself, but that seems a logical use of glance for a nova-enabled service. | 15:09 |
dougwig | we can't do our own by doing octavia-{uuid} or somesuch? | 15:09 |
ihrachys | dougwig: actually, similar needs are there in other nova-enabled services like trove | 15:10 |
ihrachys | dougwig: and that's why it could make sense to offload it to nova. | 15:10 |
dougwig | i agree that it's a logical fit. it is an image management service. and we have images. :) | 15:10 |
ihrachys | but since glance API is currently not very resilient, it may take some time and effort working with glance folks | 15:10 |
dougwig | isn't glance just image management, removed from nova in the first place? | 15:10 |
*** manishg_wfh has joined #openstack-lbaas | 15:10 | |
ihrachys | dougwig: right. but for nova that would be a logical step to make sense of tag names, as it currently does for image names | 15:11 |
dougwig | can't you already set meta-data on images? or is that purely for hypervisor matching? | 15:12 |
ihrachys | dougwig: the concern that sdague expressed is not about nova extracting the ID, but about the fact that glance does not guarantee uniqueness of the tag, so then nova would need to handle that somehow | 15:12 |
ihrachys | dougwig: I think there are two things in glance - one is metadata tags, and another just generic image tags | 15:12 |
ihrachys | the former is used by hypervisor | 15:12 |
ihrachys | the latter seems like a pure API thing to mark images with random strings | 15:13 |
ihrachys | now, if glance would provide unique labeling, nova would be able to boot a server using 'latest RHEL base image' | 15:13 |
ihrachys | instead of 'rhel-7.0' name or even worse, its ID | 15:13 |
nmagnezi | ihrachys, another downside of Octavia using the image id is that you need to restart services after you change that id in octavia.conf | 15:13
ihrachys | nmagnezi: right. | 15:15 |
ihrachys | that's a more important one I think | 15:15 |
*** numans has quit IRC | 15:16 | |
ihrachys | if that would be just a matter of extracting ID for the service, a script could do it for the operator | 15:16 |
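A rough sketch of what such an operator-side lookup script could look like, assuming python-glanceclient v2, keystoneauth1, and a tag such as 'octavia-amphora'; the credentials, endpoint, and tag name are placeholders and this is not the eventual Octavia implementation:

```python
from keystoneauth1 import loading
from keystoneauth1 import session as ks_session
import glanceclient

TAG = 'octavia-amphora'  # hypothetical tag name

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(auth_url='http://keystone:5000/v3',
                                username='octavia', password='secret',
                                project_name='service',
                                user_domain_name='Default',
                                project_domain_name='Default')
sess = ks_session.Session(auth=auth)
glance = glanceclient.Client('2', session=sess)

# Newest active image carrying the tag wins; its ID is what gets passed
# to nova when booting an amphora.
images = [i for i in glance.images.list(filters={'tag': [TAG]})
          if i.status == 'active']
images.sort(key=lambda i: i.created_at, reverse=True)
print(images[0].id if images else 'no tagged amphora image found')
```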
ihrachys | dougwig: so back to my original question - is glance coupling something that does not make you scared? :) I guess you replied 'no' before, but still would enjoy clarificaiton. | 15:17 |
nmagnezi | xgerman, ping re: Today's octavia meeting | 15:18 |
xgerman | sure | 15:18
nmagnezi | xgerman, hey :) i might not make it for today's meeting | 15:20 |
xgerman | ok, I will channel my inner nmagnezi | 15:20 |
nmagnezi | xgerman, could you please add some bug ajo submitted? | 15:20 |
dougwig | ihrachys: no, neither glance nor nova coupling scares me, as its a nova based service. | 15:20 |
nmagnezi | xgerman, mmm, what? :) | 15:21 |
xgerman | was joking… will add that bug… you have the # | 15:21 |
nmagnezi | xgerman, https://bugs.launchpad.net/octavia/+bug/1549297 :-) | 15:21 |
openstack | Launchpad bug 1549297 in octavia "octavia-health-manager requires a host-wise plugged interface to the lb-mgmt-net" [Undecided,New] | 15:21 |
ihrachys | dougwig: ok one more procedural thing. I was looking at getting something baked for Mitaka. is it still realistic? | 15:21 |
ihrachys | dougwig: assuming I get the code in a week. | 15:21 |
dougwig | ihrachys: code complete is the 29th, so if you have it in gerrit by then, it's possible, though tight. any chance you can introduce it to folks at the octavia meeting in 4 hours? or have some kind of wip up that i can point folks at during that meeting? | 15:22 |
ihrachys | dougwig: I won't have wip in 4h, I only made preliminary code investigation. | 15:23 |
xgerman | ihrachys I think it;s a good idea.. | 15:23 |
xgerman | but as dougwig points out things are tight | 15:23 |
ihrachys | dougwig: assuming that is well isolated (requiring a new option to be set to trigger the new code), I believe that could be fine as an exception? but I will try to get something in by the 29th | 15:24
ihrachys | we need the option anyway, for the image tag | 15:25 |
*** rtheis has quit IRC | 15:25 | |
*** rtheis has joined #openstack-lbaas | 15:26 | |
*** nmagnezi_ has joined #openstack-lbaas | 15:26 | |
*** rtheis has quit IRC | 15:26 | |
*** rtheis has joined #openstack-lbaas | 15:27 | |
dougwig | ihrachys: i'll bring it up at the meeting, but i'll support it. | 15:27 |
xgerman | +1 for support | 15:28 |
TrevorV | Hey dougwig I just did a single create with everything except a health monitor. (not sure if HM is bugged but it wasn't accepting rise/fall threshold, gonna look into that) | 15:28 |
TrevorV | On Octavia | 15:28 |
TrevorV | Sorry, clarification | 15:28 |
ihrachys | dougwig: xgerman: cool | 15:28 |
ihrachys | nmagnezi: are you going to the meeting? | 15:28 |
xgerman | cascading delete needs to be rebased in L7… would be good if some of the dependent stuff merges | 15:28 |
TrevorV | xgerman l7 all merged last night | 15:29 |
xgerman | yep, and that needs to be deleted as well | 15:29 |
*** nmagnezi has quit IRC | 15:29 | |
*** nmagnezi_ is now known as nmagnezi | 15:29 | |
dougwig | TrevorV: it's all about testing the failure cases. what's our plan for that? | 15:30 |
TrevorV | Honestly, not entirely sure. Should we talk about some of the failure cases we're concerned about? | 15:31 |
nmagnezi | ihrachys, i sometimes do, but today i'm not sure I'll be at home at that time (10pm my local time) | 15:33 |
ihrachys | ok. I will see what I can do, will try to join but no guarantees. | 15:33 |
*** rtheis has quit IRC | 15:34 | |
*** rtheis has joined #openstack-lbaas | 15:35 | |
TrevorV | Alright, got me a healthmonitor in the mix. Just had the body configured wrong for the request. | 15:38 |
*** manishg_wfh has quit IRC | 15:54 | |
*** jschwarz has quit IRC | 16:00 | |
*** Purandar has joined #openstack-lbaas | 16:03 | |
TrevorV | xgerman did sbalukoff also do changes in neutron lbaas for L7? | 16:11 |
TrevorV | More importantly, I guess, I can look myself, but was curious if they also merged | 16:12 |
johnsom | TrevorV The L7 patch for lbaas is up for review, but has some bugs. | 16:13 |
xgerman | I think he did — not sure if it merged | 16:13 |
TrevorV | johnsom https://review.openstack.org/#/c/148232/ | 16:13 |
TrevorV | is that one it? | 16:13 |
sbalukoff | It's not merged yet. | 16:13 |
johnsom | Yeah, that is the one | 16:13 |
sbalukoff | What on earth am I doing up so early? | 16:13 |
TrevorV | sbalukoff the real question is, did you sleep? | 16:13 |
johnsom | I was just wondering the same thing.... | 16:13 |
sbalukoff | TrevorV: I did, actually! Very well, in fact. | 16:13 |
johnsom | (thinking of both of us having a long night) | 16:14 |
*** diogogmt has joined #openstack-lbaas | 16:14 | |
sbalukoff | I think it helped that my internet cut out last night in mid-conversation with rm_work. | 16:14 |
sbalukoff | So Comcast decided I was done for the night. :P | 16:14 |
*** Purandar has quit IRC | 16:14 | |
TrevorV | I'm just happy I've now done a manual test of the single create | 16:14 |
TrevorV | With a default pool on the listener, and a redirect pool also on the l7 policy. | 16:14 |
TrevorV | Worked just fine | 16:14 |
sbalukoff | TrevorV: Is you patch out of WIP now? | 16:14 |
TrevorV | sbalukoff no, because unit tests | 16:15 |
TrevorV | I don't have many | 16:15 |
sbalukoff | Ok. | 16:15 |
TrevorV | In fact, I only added one. | 16:15 |
sbalukoff | But you're close. | 16:15 |
sbalukoff | And that's great! | 16:15 |
TrevorV | I honestly wasn't sure what else to actually test... | 16:15 |
sbalukoff | Still a few days to get it in before the deadline. | 16:15 |
TrevorV | Oh, shit, I haven't tried SNI stuff.... | 16:15 |
sbalukoff | TrevorV: You can look at sonar coverage reports for code you've added. | 16:15 |
TrevorV | Is that what tells me what I've missed? Where do I look at that? | 16:16 |
sbalukoff | Beyond that, if you can think of permutations of the single create that might hit edge cases based on the code you wrote, those are good candidates for tests. | 16:16 |
sbalukoff | So, that's the 'HP Octavia Sonar CI Check' that gets generated shortly after you upload a new patchset... | 16:17 |
TrevorV | Ooooh the " | 16:17 |
sbalukoff | Click the link that gets generated and you're at the web root of the report you want to look at. | 16:17 |
TrevorV | RACKSPACE-Octavia-Sonar-CI | 16:17 |
TrevorV | lulz | 16:17 |
sbalukoff | Navigate to 'coverage' and then the files you've modified, and look for lines showing they weren't tested. | 16:17 |
TrevorV | Remember, "HP Octavia Sonar CI Check - hosted by Rackspace" :P | 16:17 |
sbalukoff | Haha! | 16:18 |
sbalukoff | Yeah, that. | 16:18 |
johnsom | Right.... | 16:18 |
sbalukoff | Name is off now, but nobody seems bothered enough to fix it yet. | 16:18 |
sbalukoff | I've found that sonar is a very good tool for this. | 16:18 |
sbalukoff | Also, check out the 'issues' report for your patch. | 16:18 |
sbalukoff | The tuning on that isn't always great: We have naming conventions that sonar doesn't like. | 16:19
sbalukoff | But it can often point out some boneheadedness in your code. | 16:19 |
TrevorV | Hey now, I ain't bone-headed. | 16:19 |
sbalukoff | You don't have bones in your head? | 16:19 |
sbalukoff | I'm so sorry.. | 16:20 |
TrevorV | Nah, see, I have bones in my head, but my head is not bones. | 16:21 |
TrevorV | That's the point. | 16:21 |
TrevorV | English is such a shitty language | 16:21 |
TrevorV | ha ha ha | 16:21 |
sbalukoff | You have bones in the point of your head? | 16:22 |
sbalukoff | So you're saying you're pointy-headed? | 16:22 |
xgerman | so a policy contains rules? | 16:23 |
sbalukoff | xgerman: Yes. | 16:23 |
johnsom | Yes | 16:23 |
xgerman | can rules be shared between policies? | 16:23 |
johnsom | Rules are AND'd, policies are OR'd | 16:23 |
sbalukoff | No | 16:23 |
xgerman | so I delete all the rules, then the policy? | 16:23 |
xgerman | for cascading delete... | 16:24 |
sbalukoff | Yes. | 16:24 |
xgerman | k, gotcha | 16:24 |
sbalukoff | Yay! | 16:24 |
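A toy illustration of the matching semantics johnsom describes ("rules are AND'd, policies are OR'd"), with made-up rule/policy objects and attribute names; it only shows the evaluation order, not Octavia's real implementation:

```python
def policy_matches(policy, request):
    # Rules within a policy are ANDed: all of them must match.
    return all(rule.matches(request) for rule in policy.l7rules)


def pick_l7_action(listener, request):
    # Policies are effectively ORed: the first policy (by position) whose
    # rules all match decides the action; otherwise use the default pool.
    for policy in sorted(listener.l7policies, key=lambda p: p.position):
        if policy.l7rules and policy_matches(policy, request):
            return policy.action
    return 'USE_DEFAULT_POOL'
```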
TrevorV | sbalukoff sooo sonar says I'm good... Each file I've touched has places that aren't tested, but they're not where my code touched or was added. | 16:26 |
TrevorV | Like error trapping in delete/update on controllers and such | 16:26 |
sbalukoff | Ok, well... if you really can't think of more tests to add, you can get others here to review your code, and we'll probably be able to point some out to you. | 16:27 |
TrevorV | Oops, found one. | 16:28 |
TrevorV | Gotta update repository tests | 16:28 |
sbalukoff | Yeah. | 16:28 |
TrevorV | sbalukoff tsk tsk looks like you missed some coverage as well my friend ;) | 16:29 |
*** kobis has quit IRC | 16:29 | |
sbalukoff | Yep, I did. | 16:29 |
*** ajmiller_ has joined #openstack-lbaas | 16:29 | |
TrevorV | :P all good, thanks for drawing my attention to this job and how I should look at it. | 16:29 |
TrevorV | I'll use this to tinker around. | 16:29 |
sbalukoff | johnsom opened bugs for me to back-fill that. I'll be doing that after the Feb. 29 deadline. | 16:29 |
*** ajmiller has quit IRC | 16:29 | |
TrevorV | I did think of a few cases that I should check for functional tests at least. | 16:30 |
TrevorV | awesome. We made some good progress these past couple weeks | 16:30 |
sbalukoff | Yep! | 16:30 |
xgerman | sbalukoff, you have | 16:30 |
xgerman | https://www.irccloud.com/pastebin/8WKgv2mc/ | 16:31 |
xgerman | so if I blow the policy away it should delete the stuff in the DB? | 16:31 |
sbalukoff | xgerman: Oh yes-- I modeled that after the way members get deleted when you blow away a pool. So yes, that should work. | 16:31 |
xgerman | cool | 16:31 |
*** ihrachys has quit IRC | 16:33 | |
*** nmagnezi has quit IRC | 16:34 | |
johnsom | TrevorV Is your patch ready for review? I will put it on the "priority patch review" list on the agenda... | 16:37 |
*** fawadkhaliq has joined #openstack-lbaas | 16:38 | |
sbalukoff | Hmmm.... | 16:42 |
*** ducttape_ has quit IRC | 16:42 | |
sbalukoff | Aren't neutron-lbaas patch set supposed be spammed to this channel? | 16:42 |
johnsom | Yeah, but I think it has been broken recently | 16:43 |
*** ducttape_ has joined #openstack-lbaas | 16:43 | |
sbalukoff | Anyway, for anyone looking, I could use a +1 or two on this (trying to fix PostgreSQL gate): https://review.openstack.org/#/c/283802/ | 16:43 |
fawadkhaliq | hi octavia folks, question: does Octavia support Barbican in implementation or is it still in design phase? | 16:44 |
johnsom | fawadkhaliq It is in Octavia. | 16:45 |
*** ducttape_ has quit IRC | 16:45 | |
*** ducttape_ has joined #openstack-lbaas | 16:45 | |
johnsom | It went in for Liberty | 16:45 |
fawadkhaliq | johnsom: thanks, is there a document I can look at to see the exact functionality use-case Octavia has with Barbican | 16:45 |
fawadkhaliq | johnsom: I understand there are Nova instances running HAproxy etc and after that I am lost :-) | 16:46 |
xgerman | https://wiki.openstack.org/wiki/Network/LBaaS/docs/how-to-create-tls-loadbalancer | 16:46 |
johnsom | Yeah, just a second | 16:46 |
johnsom | fawadkhaliq http://www.octavia.io/review/master/specs/version0.5/tls-data-security.html | 16:46 |
fawadkhaliq | johnsom: thanks | 16:48 |
johnsom | Some of that might be a bit dated as it was done for liberty and barbican has evolved over time | 16:49 |
fawadkhaliq | johnsom: xgerman another question, might be a dumb one.. how does Octavia ensure that the communication between Octavia modules running the management plane and modules running inside the Nova instances is secure? | 16:49 |
xgerman | we have a two way SSL connection | 16:49 |
johnsom | We use two-way SSL with certs generated uniquely for each amphora (service vm in this case) | 16:49 |
xgerman | amphora runs an HTTPS server and we have certs on both sides | 16:49 |
sbalukoff | The use case there is still essentially correct. | 16:49 |
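As a rough illustration of the two-way SSL setup described above (not the actual driver code; the address and cert paths are placeholders), the controller side of such a call could look like:

```python
# Rough illustration of a mutual-TLS request: the client presents its own
# cert/key and verifies the amphora's server cert against a CA, so both
# sides authenticate each other. Address and paths are placeholders.
import requests

AMPHORA_URL = 'https://192.0.2.10:9443/'               # placeholder address
CLIENT_CERT = ('/etc/octavia/certs/client.pem',        # hypothetical paths
               '/etc/octavia/certs/client_key.pem')
SERVER_CA = '/etc/octavia/certs/server_ca.pem'

resp = requests.get(AMPHORA_URL, cert=CLIENT_CERT, verify=SERVER_CA)
resp.raise_for_status()
```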
fawadkhaliq | xgerman: johnsom, ah I see. Thanks a lot. In case you are wondering, I am from the Kuryr team and we have a similar use case and are trying to see how security can be introduced. I remembered Octavia does something similar. Thanks! | 16:51 |
*** piet has quit IRC | 16:51 | |
johnsom | Sure, NP. | 16:52 |
xgerman | y.w. | 16:52 |
johnsom | So are you going to enable neutron port hot plugging for containers? grin | 16:52 |
*** piet has joined #openstack-lbaas | 16:52 | |
sbalukoff | It's a bit freaky to consider that people are looking to potentially emulate some of what we do. ;) | 16:52 |
sbalukoff | Ooh! | 16:52 |
sbalukoff | Yeah, we would *love* to have that! | 16:53 |
*** localloop127 has joined #openstack-lbaas | 16:53 | |
*** madhu_ak|home is now known as madhu_ak | 16:54 | |
*** localloo1 has joined #openstack-lbaas | 16:54 | |
fawadkhaliq | lol | 16:54 |
dougwig | can i get a look at this gate_hook change, so we can enable a namespace driver job? https://review.openstack.org/#/c/282900/ | 16:54 |
fawadkhaliq | johnsom: on its way, you will see it soon ; | 16:54 |
fawadkhaliq | ;-) | 16:54 |
sbalukoff | fawadkhaliq: In time for Mitaka? :D | 16:55 |
johnsom | Awesome. We would really like to have that. We had to make some big changes to our workflow to work around that issue | 16:55 |
fawadkhaliq | sbalukoff: Hopefully the final design + a POC :D | 16:56 |
johnsom | dougwig I already +2'd that one. | 16:56 |
sbalukoff | Yay! | 16:56 |
johnsom | fawadkhaliq Cool, do you have a link we can read the design? | 16:56 |
fawadkhaliq | johnsom: absolutely, here you go: https://review.openstack.org/#/c/269039/ | 16:57 |
fawadkhaliq | johnsom: feel free to -1 ;-) | 16:57 |
johnsom | thanks! | 16:57 |
fawadkhaliq | you're welcome and thanks. | 16:57 |
TrevorV | Hey johnsom I have some testing to write still for the most part, but sonar didn't complain too much when I looked. | 16:57 |
*** localloop127 has quit IRC | 16:58 | |
*** amotoki has quit IRC | 17:00 | |
xgerman | sbalukoff can you have a look at https://review.openstack.org/#/c/282113/2 | 17:02 |
sbalukoff | Sure. | 17:02 |
xgerman | thanks, that’s a real simple change for some low priority bug | 17:02 |
*** manishg_wfh has joined #openstack-lbaas | 17:04 | |
*** fnaval has quit IRC | 17:08 | |
*** fnaval has joined #openstack-lbaas | 17:09 | |
sbalukoff | xgerman: Looks good. will wait for jenkins to finish before I +2, but I think the code you have there is correct. | 17:12 |
xgerman | thanks | 17:12 |
sbalukoff | (Please feel free to poke me again if it finishes and I don't notice for a bit.) | 17:13 |
*** fnaval_ has joined #openstack-lbaas | 17:15 | |
*** fnaval has quit IRC | 17:15 | |
rm_work | ha sbalukoff i wondered where you went :P | 17:15 |
rm_work | i failed and stayed up till 7am again <_< | 17:16 |
sbalukoff | D'oh! | 17:16 |
*** Purandar has joined #openstack-lbaas | 17:17 | |
*** kobis has joined #openstack-lbaas | 17:17 | |
*** manishg_wfh is now known as manishg | 17:23 | |
*** ihrachys has joined #openstack-lbaas | 17:26 | |
*** ihrachys has quit IRC | 17:29 | |
*** rtheis has quit IRC | 17:32 | |
*** prabampm has quit IRC | 17:32 | |
*** rtheis has joined #openstack-lbaas | 17:33 | |
openstackgerrit | Merged openstack/neutron-lbaas: Updated from global requirements https://review.openstack.org/283993 | 17:33 |
madhu_ak | fnaval_ I tested your patch, it's working using the tempest plugin. | 17:35 |
madhu_ak | fnaval_, I will push up a patch which will just correct the plugin part in the same patch | 17:35 |
openstackgerrit | Adam Harwell proposed openstack/octavia: Remove old SSH specific config options from sample config https://review.openstack.org/284099 | 17:36 |
*** kobis has quit IRC | 17:38 | |
johnsom | sbalukoff This is marked WIP in the commit message, but not in gerrit: https://review.openstack.org/#/c/284008 | 17:38 |
rm_work | ha | 17:39 |
rm_work | i JUST commented | 17:39 |
rm_work | to that effect | 17:39 |
*** rtheis has quit IRC | 17:39 | |
rm_work | I was avoiding it since it said WIP but you +2'd so i was curious | 17:39 |
rm_work | johnsom: https://review.openstack.org/#/c/283929/ | 17:39 |
sbalukoff | johnsom: I think I want to add some functional checks to that. Will mark it WIP in gerrit. | 17:39 |
*** localloo1 has quit IRC | 17:39 | |
johnsom | Ok. It looks good so far. I think the scenario fail may be a bad test check. | 17:40 |
rm_work | relevant to you for comment ^^ | 17:40 |
rm_work | and https://review.openstack.org/#/c/280766/ i think is good now | 17:40 |
sbalukoff | I also want to dig through the scenario test logs to make sure I see the delay after Member create that I'm expecting. | 17:40 |
*** fnaval_ has quit IRC | 17:40 | |
sbalukoff | Gonna do that now. (Was waiting for it to complete last night when my home internet comcasted out.) | 17:41 |
sbalukoff | Ok! That was quick: I am seeing the expected delay! | 17:42 |
sbalukoff | So, good! I just need to add a functional test to that and I think I'm good. | 17:42 |
sbalukoff | Or update the ones in place. | 17:42 |
sbalukoff | Let me see how fast I can do that. | 17:42 |
johnsom | I am thinking the test needs to be updated to check for interrupted returns? http://logs.openstack.org/08/284008/1/check/gate-neutron-lbaasv2-dsvm-scenario/9fcf95d/console.html#_2016-02-24_09_55_39_630 | 17:43 |
sbalukoff | The tempest test? | 17:45 |
johnsom | That is what I am thinking. I'm poking around that code now | 17:45 |
sbalukoff | Ok. I think you're right, though obviously I've not looked closely into it yet. From what I can tell, load balancer status updates are working as they should now with my patch. Just need to add a couple tests to make sure it stays that way after this is committed. | 17:47 |
sbalukoff | (Ever notice that unit tests are pretty useless for troubleshooting code you're writing at the moment... but they can help prevent regressions?) | 17:47 |
*** yamamoto has quit IRC | 17:49 | |
*** rtheis has joined #openstack-lbaas | 17:50 | |
madhu_ak | Just wondering, is there any reason that we need to use the six.moves.urllib module rather than the urllib package? | 17:51 |
*** fnaval has joined #openstack-lbaas | 17:54 | |
*** rtheis has quit IRC | 17:56 | |
*** rtheis has joined #openstack-lbaas | 17:56 | |
sbalukoff | No idea. I suggest recursive grep. :) | 17:57 |
fnaval | madhu_ak: awesome thank you sir | 17:58 |
johnsom | madhu_ak: The urllib, urllib2 and urlparse modules of Python 2 were reorganized | 17:59 |
johnsom | into a new urllib namespace on Python 3. Replace urllib, urllib2 and | 17:59 |
johnsom | urlparse imports with six.moves.urllib to make the modified code | 17:59 |
johnsom | compatible with Python 2 and Python 3. | 17:59 |
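A small example of the compatible import johnsom describes (the URL is a placeholder, just to show the shape):

```python
# Works unchanged on Python 2 and 3: six.moves.urllib maps to
# urllib/urllib2/urlparse on py2 and to the urllib.* namespace on py3.
from six.moves.urllib import error, parse, request

url = 'http://198.51.100.5/' + parse.quote('health check')  # placeholder host
try:
    request.urlopen(url, timeout=5)
except error.HTTPError as exc:        # urllib2.HTTPError on py2
    print('server returned %s' % exc.code)
except error.URLError as exc:         # connection-level failure
    print('request failed: %s' % exc.reason)
```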
*** rtheis has quit IRC | 18:01 | |
madhu_ak | johnsom, I see. However, in regards to the error mentioned in http://logs.openstack.org/08/284008/1/check/gate-neutron-lbaasv2-dsvm-scenario/9fcf95d/console.html#_2016-02-24_09_55_39_630, I thought we could catch the exception and pass it in the event of the same error in the future | 18:01 |
*** evgenyf has joined #openstack-lbaas | 18:03 | |
madhu_ak | johnsom, but I still thought error.HTTPError should catch the above exception. Not sure why it didn't | 18:03 |
madhu_ak | something similar to like this: http://stackoverflow.com/questions/27619258/httplib-badstatusline | 18:04 |
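If the gate failure really is an httplib BadStatusLine, as the Stack Overflow link suggests, one plausible explanation is that BadStatusLine comes from httplib/http.client and is not an HTTPError subclass, so a bare error.HTTPError handler lets it through. A hedged sketch (illustrative helper, not the gate code) of catching both:

```python
# Sketch only: BadStatusLine is raised when the server returns a malformed
# or empty status line; it is not an HTTPError, so it needs its own handler.
from six.moves import http_client
from six.moves.urllib import error, request

def probe(url):
    try:
        return request.urlopen(url, timeout=5).getcode()
    except error.HTTPError as exc:
        return exc.code                        # well-formed HTTP error status
    except (http_client.BadStatusLine, error.URLError):
        return None                            # interrupted/dropped response
```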
johnsom | madhu_ak That is the bug I am working on right now | 18:04 |
madhu_ak | johnsom, oh okay. good to know | 18:04 |
*** kevo has joined #openstack-lbaas | 18:05 | |
*** ajmiller_ is now known as ajmiller | 18:11 | |
*** Purandar has quit IRC | 18:23 | |
*** neelashah has quit IRC | 18:33 | |
openstackgerrit | German Eichberger proposed openstack/neutron-lbaas: Adds Cascade Delete for LoadBalancers to Octavia Driver https://review.openstack.org/284340 | 18:35 |
openstackgerrit | German Eichberger proposed openstack/neutron-lbaas: Adds Cascade Delete for LoadBalancers to Octavia Driver https://review.openstack.org/284340 | 18:37 |
openstackgerrit | min wang proposed openstack/octavia: Implements: blueprint anti-affinity server group https://review.openstack.org/272344 | 18:42 |
*** minwang2 has joined #openstack-lbaas | 18:42 | |
minwang2 | sbalukoff can you please review anti-affinity patch—https://review.openstack.org/#/c/272344/ | 18:43 |
*** yamamoto has joined #openstack-lbaas | 18:50 | |
*** Aish has joined #openstack-lbaas | 18:51 | |
openstackgerrit | Madhusudhan Kandadai proposed openstack/octavia: Octavia: Basic LoadBalancer Scenario Test https://review.openstack.org/172199 | 18:51 |
sbalukoff | minwang2: Sure! Is later this afternoon OK? | 18:52 |
*** rtheis has joined #openstack-lbaas | 18:53 | |
sbalukoff | (Finishing unit tests on this critical patch and I have some internal stuff to deal with in the early afternoon.) | 18:53 |
minwang2 | sbalukoff, sure, i just remembered that you mentioned in the comment that it is better to get this merged after L7 is merged first, so i think now is a good time to review it | 18:53 |
*** yamamoto has quit IRC | 18:56 | |
*** aryklein has quit IRC | 19:02 | |
*** evgenyf has quit IRC | 19:03 | |
sbalukoff | minwang2: I agree! | 19:04 |
*** neelashah has joined #openstack-lbaas | 19:04 | |
minwang2 | cool, thank you sbalukoff | 19:04 |
openstackgerrit | Bharath M proposed openstack/neutron-lbaas: Adds Cascade Delete for LoadBalancers to Octavia Driver https://review.openstack.org/284340 | 19:06 |
TrevorV | Ugh.... I accidentally fixed some of my API bugs in the CW review... | 19:09 |
*** neelashah has quit IRC | 19:10 | |
*** Purandar has joined #openstack-lbaas | 19:12 | |
*** neelashah has joined #openstack-lbaas | 19:14 | |
openstackgerrit | Madhusudhan Kandadai proposed openstack/octavia: Octavia: Basic LoadBalancer Scenario Test https://review.openstack.org/172199 | 19:20 |
TrevorV | johnsom I'm writing some tests up right now, I think I'll knock off the WIP shortly though. | 19:21 |
TrevorV | Like, possibly before the meeting, or during | 19:21 |
johnsom | Ok | 19:21 |
*** evgenyf has joined #openstack-lbaas | 19:25 | |
openstackgerrit | Trevor Vardeman proposed openstack/octavia: WIP - Get me a Load Balancer API https://review.openstack.org/256974 | 19:25 |
TrevorV | Oops | 19:26 |
TrevorV | premature... forgot to fill in the tests... ha ha | 19:26 |
*** bana_k has joined #openstack-lbaas | 19:29 | |
*** ducttape_ has quit IRC | 19:38 | |
*** fawadkhaliq has quit IRC | 19:40 | |
*** manishg has quit IRC | 19:42 | |
*** rtheis has quit IRC | 19:42 | |
*** rtheis has joined #openstack-lbaas | 19:43 | |
*** rtheis has quit IRC | 19:47 | |
*** ducttape_ has joined #openstack-lbaas | 19:49 | |
*** neelashah1 has joined #openstack-lbaas | 19:51 | |
*** rtheis has joined #openstack-lbaas | 19:51 | |
*** neelashah has quit IRC | 19:53 | |
*** manishg has joined #openstack-lbaas | 19:55 | |
*** rtheis has quit IRC | 19:56 | |
*** rtheis has joined #openstack-lbaas | 19:56 | |
*** neelashah has joined #openstack-lbaas | 19:57 | |
*** neelashah1 has quit IRC | 19:58 | |
johnsom | Octavia meeting starting soon on #openstack-meeting-alt | 19:59 |
*** jschwarz has joined #openstack-lbaas | 19:59 | |
*** longstaff has joined #openstack-lbaas | 20:00 | |
*** neelashah1 has joined #openstack-lbaas | 20:00 | |
*** rtheis has quit IRC | 20:01 | |
*** neelashah has quit IRC | 20:01 | |
*** longstaff has quit IRC | 20:03 | |
*** bana_k has quit IRC | 20:09 | |
*** bana_k has joined #openstack-lbaas | 20:10 | |
*** bana_k has quit IRC | 20:10 | |
*** manishg has quit IRC | 20:17 | |
*** manishg has joined #openstack-lbaas | 20:25 | |
*** Bjoern_ has joined #openstack-lbaas | 20:27 | |
*** Purandar has quit IRC | 20:44 | |
*** ihrachys has joined #openstack-lbaas | 20:47 | |
*** localloo1 has joined #openstack-lbaas | 20:47 | |
*** jschwarz has quit IRC | 20:49 | |
*** localloop127 has joined #openstack-lbaas | 20:50 | |
*** localloo1 has quit IRC | 20:53 | |
ihrachys | sbalukoff: johnsom: reading logs of the octavia meeting on adopting glance images for easier image update. | 21:00 |
ihrachys | I guess the current stand of the team is we don't want that | 21:00 |
*** piet has quit IRC | 21:00 | |
ihrachys | and instead we want octavia to react to a signal to reload the conf option | 21:01 |
sbalukoff | ihrachys: No, just that "it's probably premature right now." | 21:01 |
johnsom | Ah, sorry we missed you at the meeting | 21:01 |
*** crc32 has joined #openstack-lbaas | 21:01 | |
sbalukoff | A signal reload would be useful to us in other ways as well. | 21:01 |
ihrachys | johnsom: np, that's my fault, I counted timezone incorrectly | 21:01 |
johnsom | We want the signal for other reasons as well, so that will probably move forward. | 21:01 |
markvan | blogan: would like to know a bit more about that issue you referenced in the meeting, with the wsgi and pools. Is this a bug or just how the namespaces collide somehow? | 21:01 |
sbalukoff | ihrachys: If you've got other reasons why we should do it, we're all ears, eh! | 21:02 |
johnsom | I don't think we are opposed to the glance tagging, if it is stable | 21:02 |
ihrachys | sbalukoff: I need to say that orchestration thru a signal is probably not as easy as via glance tags | 21:02 |
blogan | markvan: for the agent or the pools? | 21:02 |
sbalukoff | ihrachys: That may be, eh. :/ | 21:02 |
johnsom | ihrachys Agreed. It will be some work. | 21:03 |
blogan | ihrachys, sbalukoff, johnsom: if we did glance tags, i'd just put that in the compute driver, and not make an "image" interface | 21:03 |
markvan | blogan: the agent side of this | 21:03 |
sbalukoff | blogan: That sounds reasonable. | 21:03 |
ihrachys | if you go the glance way, it's just a matter of some admin API calls; if it's a signal, it would be 1) glance API call 2) puppet 3) signal propagation into all nodes. | 21:03 |
openstackgerrit | Franklin Naval proposed openstack/octavia: Octavia: Basic LoadBalancer Scenario Test https://review.openstack.org/172199 | 21:03 |
johnsom | Ideally yes, it would be through nova like our image id is today | 21:03 |
ihrachys | blogan: yeah, it would be just a matter of some glanceclient code inside compute driver. that was my plan. | 21:04 |
sbalukoff | Right. | 21:04 |
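A rough sketch of the sort of glanceclient lookup being proposed for the compute driver — the helper name, option handling, and exact client arguments are assumptions here, not the eventual implementation:

```python
# Assumed sketch: resolve the newest active image carrying an operator-chosen
# tag, falling back to the configured image id when no tag is set.
from glanceclient import client as glance_client

def lookup_amphora_image(session, image_tag=None, image_id=None):
    if not image_tag:
        return image_id                        # legacy behaviour: fixed id
    glance = glance_client.Client('2', session=session)
    images = glance.images.list(filters={'tag': [image_tag],
                                         'status': 'active'},
                                sort='created_at:desc')
    try:
        return next(iter(images))['id']        # newest matching image
    except StopIteration:
        raise RuntimeError('no active image tagged %r' % image_tag)
```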
*** ducttape_ has quit IRC | 21:04 | |
sbalukoff | Is anyone using nova without glance? | 21:04 |
*** ducttape_ has joined #openstack-lbaas | 21:04 | |
blogan | markvan: it's been a long time since I've tested it out, so I don't remember specifics, and it's possible this may not be a problem now, but basically if I created a v2 load balancer, the v1 agent would attempt to pick that up and obviously fail | 21:05 |
*** evgenyf has quit IRC | 21:05 | |
sbalukoff | I'm not familiar enough with the image-storage game in OpenStack. | 21:05 |
johnsom | ihrachys Hmmm, I think I would like to see more details before I would be a go on that | 21:05 |
sbalukoff | johnsom: +1 | 21:05 |
johnsom | We don't import glance at all right now | 21:05 |
markvan | blogan: ah, that helps. yeah, looks like it's time to try some scenarios out. thx. | 21:05 |
ihrachys | sbalukoff: johnsom: blogan: let me also clarify that I look at it ideally in Mitaka timeframe | 21:05 |
blogan | markvan: np, i'm 99% sure the pools colliding in the wsgi framework is still a problem | 21:06 |
johnsom | ihrachys Why don't you put in an RFE and/or spec? | 21:06 |
ihrachys | johnsom: absolutely. I can give some description in the devref and try to pull code for that in the next few days | 21:06 |
blogan | markvan: the agent i'm less sure | 21:06 |
*** rtheis has joined #openstack-lbaas | 21:06 | |
sbalukoff | ihrachys: Really? You've got less than a week on that... | 21:06 |
ihrachys | johnsom: I am new to octavia, I am fine to write a spec or whatnot, as long as that's how it works here | 21:06 |
markvan | blogan: is that wsgi a code bug? or just how it works? | 21:06 |
ihrachys | sbalukoff: yes. though I don't expect the code to be invasive or huge | 21:06 |
sbalukoff | ihrachys: Well, get started then, eh! ;) | 21:07 |
neelashah1 | blogan: why would there be a collision if v1 and v2 are on two different nodes? | 21:07 |
ihrachys | sbalukoff: I would target ~100 lines including messing with keystone catalog | 21:07 |
blogan | ihrachys, johnsom, sbalukoff: as long as we continue to allow the old way to work too, of storing the image_id in the config, this sounds fine to me | 21:07 |
sbalukoff | (with the spec) | 21:07 |
ihrachys | sbalukoff: that's my plan for the week :) | 21:07 |
johnsom | blogan +1 | 21:07 |
blogan | markvan: just how it works really, "fixing" it would almost break all attribute extensions | 21:08 |
sbalukoff | Well... if we're going to do it, I want us to do it *right* so we don't have to support broken interfaces, eh. | 21:08 |
ihrachys | blogan: that would be the default, yes, and the new code would be well isolated by a new option (to store the tag) | 21:08 |
sbalukoff | So please, get the spec right! | 21:08 |
blogan | neelashah1: for the pools you mean? | 21:08 |
johnsom | ihrachys Ok, that sounds much more possible for mitaka | 21:08 |
ihrachys | sbalukoff: ok, I will start with the spec tomorrow morning, and will proceed with the code afterwards. | 21:08 |
johnsom | Sounds good | 21:08 |
neelashah1 | blogan yes | 21:09 |
sbalukoff | ihrachys: Please ping us when the spec is ready, eh! | 21:09 |
blogan | neelashah1: so you wouldn't be running v1 and v2 on the same api node? | 21:09 |
dougwig | i'd suggest not waiting on spec approval for code, though. | 21:09 |
sbalukoff | Right. | 21:09 |
ihrachys | sbalukoff: I will try to keep you informed. | 21:09 |
neelashah1 | blogan : correct, because v1 is on kilo and v2 will be M so they are different odes | 21:09 |
ihrachys | dougwig: obviously, I will code right away | 21:09 |
neelashah1 | nodes blogan | 21:09 |
openstackgerrit | Madhusudhan Kandadai proposed openstack/octavia: Octavia: Basic LoadBalancer Scenario Test https://review.openstack.org/172199 | 21:09 |
sbalukoff | Just realize that the spec might need changes, which could mean some of the code needs changing too. | 21:09 |
ihrachys | dougwig: and you then decide whether it's good | 21:09 |
fnaval | thanks madhu_ak | 21:10 |
sbalukoff | ihrachys: Cool beans. :) | 21:10 |
blogan | neelashah1: oh well if they're not being run by the same process then that's fine, i thought this was all about having v1 and v2 running inside the same neutron-server process | 21:10 |
ihrachys | sbalukoff: absolutely. I am ready to be turned back, I understand I am really late in the game. | 21:10 |
*** pcaruana has quit IRC | 21:10 | |
madhu_ak | no prob fnaval | 21:10 |
sbalukoff | ihrachys: It sounds like a useful feature, eh. | 21:10 |
blogan | neelashah1: separate api nodes for v1 and v2, then i think the only other problem you might run into would be the possible agent issue mentioned before | 21:10 |
dougwig | ihrachys: octavia specs are approved by the octavia cores. | 21:11 |
dougwig | several have already agreed in theory to the gist of things here. | 21:11 |
ihrachys | thanks folks for considering it at all. I will go have some sleep now. will keep you updated. cu. | 21:14 |
blogan | ihrachys: g'night | 21:14 |
blogan | ihrachys: or g'day | 21:14 |
*** ihrachys has quit IRC | 21:17 | |
neelashah1 | blogan : ok, thanks for your insights and help | 21:22 |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Fix LB/Listener status updates for HM/Member https://review.openstack.org/284008 | 21:23 |
sbalukoff | rm_work and johnsom: ^^^ should no longer be WIP. | 21:23 |
sbalukoff | Note that I made some significant code changes in the health monitor and member API stuff: the previous code wrongly assumed that pools would have only one listener. Don't know how I missed that in the shared-pools patch. :P | 21:24 |
sbalukoff | (I have yet to look at the sonar coverage report on those code changes obviously-- but if I'm not covering something in there, I want to do one more revision before it's merged.) | 21:25 |
sbalukoff | Ok! I need to go AFK for about 2 hours real quick here. Will be back in a couple! | 21:25 |
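For the curious, the shape of the fix sbalukoff describes is roughly: where the old code assumed a single listener per pool, the status update now has to fan out over every listener attached to the pool. The names below are illustrative only, not the actual handler code:

```python
# Hedged illustration: with shared pools, a pool can belong to several
# listeners, so a member/health-monitor change must touch all of them.
def mark_parent_listeners_pending_update(listener_repo, session, pool):
    for listener in pool.listeners:            # possibly more than one
        listener_repo.update(session, listener.id,
                             provisioning_status='PENDING_UPDATE')
```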
*** localloop127 has quit IRC | 21:26 | |
TrevorV | johnsom found a bug... | 21:29 |
TrevorV | That I can't seem to fix. | 21:29 |
TrevorV | I added sni stuffs, because I had forgotten it. | 21:29 |
sbalukoff | D'oh! | 21:30 |
TrevorV | Sometime between storing the 2 tls_container_ids and returning them via the "to_data_model" method, the list becomes 2 of the same SNI container, and the other unique one sent in is lost... except it's still accessible in the DB. | 21:30 |
TrevorV | Sooooo.... | 21:30 |
sbalukoff | Ok, I'm still here for a little bit. Coverage report on the above patch looks pretty good... I think I'm still missing a duplice health monitor create coverage test, though... | 21:30 |
TrevorV | sbalukoff it has to do with this graphy thingy | 21:30 |
sbalukoff | TrevorV: Oh boy. | 21:30 |
sbalukoff | TrevorV: I don't have time to troubleshoot this with you right now. I'm sorry. | 21:31 |
sbalukoff | (At this moment, I mean.) | 21:31 |
TrevorV | Nah you're good :) | 21:31 |
TrevorV | I was just saying I found a new one that I have to look into | 21:31 |
TrevorV | :D | 21:31 |
TrevorV | brb | 21:31 |
johnsom | TrevorV That is neat! | 21:32 |
*** localloop127 has joined #openstack-lbaas | 21:34 | |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Fix LB/Listener status updates for HM/Member https://review.openstack.org/284008 | 21:38 |
sbalukoff | Ok, I think this one is ready for merge (if jenkins agrees) ^^^ | 21:39 |
sbalukoff | And now I've really got to run. BBIAB. | 21:39 |
*** _ducttape_ has joined #openstack-lbaas | 21:50 | |
*** ducttape_ has quit IRC | 21:54 | |
*** localloop127 has quit IRC | 21:58 | |
TrevorV | Wellp, I've confirmed that the "to_data_model" graph method is what "duplicates" my sni_containers.... the problem now is "why", or at least "how" | 22:03 |
*** Purandar has joined #openstack-lbaas | 22:11 | |
*** _ducttape_ has quit IRC | 22:14 | |
*** ducttape_ has joined #openstack-lbaas | 22:15 | |
*** neelashah1 has quit IRC | 22:25 | |
*** _ducttape_ has joined #openstack-lbaas | 22:25 | |
*** ducttape_ has quit IRC | 22:28 | |
openstackgerrit | Michael Johnson proposed openstack/neutron-lbaas: fix session persistence gate issue https://review.openstack.org/278874 | 22:38 |
*** rtheis has quit IRC | 22:42 | |
openstackgerrit | Michael Johnson proposed openstack/neutron-lbaas: fix session persistence gate issue https://review.openstack.org/278874 | 22:43 |
*** rtheis has joined #openstack-lbaas | 22:45 | |
*** yamamoto_ has joined #openstack-lbaas | 23:08 | |
*** _ducttape_ has quit IRC | 23:15 | |
*** rtheis has quit IRC | 23:16 | |
*** rtheis has joined #openstack-lbaas | 23:16 | |
*** rtheis has quit IRC | 23:17 | |
TrevorV | johnsom I have a bug fix about to go up in review, high priority | 23:17 |
*** rtheis has joined #openstack-lbaas | 23:17 | |
*** rtheis has quit IRC | 23:17 | |
TrevorV | Its pretty simple, I have a test written for it too, mind taking a look when I drop the review? | 23:17 |
johnsom | Sure. I have been staring at the scenario tests too long. | 23:20 |
TrevorV | Kk :D | 23:20 |
TrevorV | Just a second on that one | 23:20 |
openstackgerrit | Trevor Vardeman proposed openstack/octavia: WIP - Get me a Load Balancer API https://review.openstack.org/256974 | 23:20 |
TrevorV | NOT that one | 23:20 |
TrevorV | sorry lulz | 23:20 |
*** armax has quit IRC | 23:27 | |
*** armax has joined #openstack-lbaas | 23:28 | |
openstackgerrit | Michael Johnson proposed openstack/neutron-lbaas: fix session persistence gate issue https://review.openstack.org/278874 | 23:28 |
openstackgerrit | Trevor Vardeman proposed openstack/octavia: Use unique SNI identifier when building data model https://review.openstack.org/284464 | 23:28 |
*** armax has quit IRC | 23:28 | |
TrevorV | johnsom THAT review | 23:28 |
johnsom | TrevorV Yeah, that was a bug | 23:30 |
TrevorV | Yes, yes it was :) | 23:30 |
TrevorV | Credit to blogan for finding it, honestly | 23:30 |
*** mestery has quit IRC | 23:30 | |
openstackgerrit | Trevor Vardeman proposed openstack/octavia: Get me a Load Balancer API https://review.openstack.org/256974 | 23:33 |
blogan | blame to blogan for data models | 23:33 |
blogan | and sqlalchemy | 23:33 |
johnsom | I'm pretty sure that is code that was changed for L7 | 23:34 |
xgerman | blogan make sure to put some blame on those reviewers giving +2s | 23:35 |
xgerman | they should have known better | 23:35 |
TrevorV | Hey now, that'd be rm_work and johnsom as far as I remember, especially if its L7 stuffs :P | 23:35 |
* johnsom bows his head in shame | 23:35 | |
xgerman | yep, I purposefully stayed out of that like blogan — we know sbalukoff | 23:35 |
*** mestery has joined #openstack-lbaas | 23:35 | |
sbalukoff | Heh! | 23:36 |
sbalukoff | Just got back and my ears were burning.... | 23:36 |
johnsom | sbalukoff https://review.openstack.org/#/c/284464 | 23:36 |
sbalukoff | Looking now... | 23:36 |
TrevorV | Frankly I was more surprised we didn't have a test that added 2 sni containers... | 23:37 |
johnsom | TLS has been a huge gap in testing | 23:37 |
xgerman | +1 | 23:37 |
openstackgerrit | Trevor Vardeman proposed openstack/octavia: Get Me A Load Balancer Controller https://review.openstack.org/257013 | 23:38 |
TrevorV | Good news, though, I'm ready for people to consume the single-create for Octavia at least :D | 23:38 |
*** yamamoto_ has quit IRC | 23:38 | |
TrevorV | I'll honestly probably have missed stuff, but please, let me know what's wrong with it. | 23:38 |
johnsom | sbalukoff On another topic, I think between your https://review.openstack.org/#/c/284008 and https://review.openstack.org/278874 we might get the scenario gate going again | 23:40 |
sbalukoff | Wow! nice! | 23:40 |
TrevorV | johnsom I added the single-create links to the etherpad | 23:40 |
sbalukoff | TrevorV: There's something missing from you patch-- I've commented. | 23:40 |
sbalukoff | But good catch, in any case. | 23:40 |
johnsom | TrevorV which eitherpad? The L7 one? | 23:41 |
TrevorV | johnsom https://etherpad.openstack.org/p/lbaas-l7-todo-list | 23:41 |
TrevorV | Suppose it might not go here? | 23:41 |
TrevorV | Is there a better place? | 23:41 |
johnsom | Not really, I don't have another etherpad going for Mitaka | 23:42 |
TrevorV | Then there 'tis :D | 23:42 |
xgerman | ok, will add my stuff there as well | 23:43 |
TrevorV | sbalukoff you mean to add a multiple-sni-containers-test right? | 23:43 |
sbalukoff | Yes.... | 23:43 |
madhu_ak | fnaval, you around? | 23:43 |
TrevorV | I didn't actually look where your comment was hosted, just read the comment on the review | 23:43 |
sbalukoff | It sounds like this is urgent, so if you just want to get it added to the get unique id method in there that would be great-- we should probably add more SNI-stuff to the model tests in any case at some point. | 23:44 |
sbalukoff | But I gather this is a show-stopper for you right now. | 23:44 |
TrevorV | sbalukoff I'm adding right now :D | 23:45 |
sbalukoff | TrevorV: around line 823 in octavia/tests/functional/db/test_models.py | 23:45 |
TrevorV | Yep, gots it | 23:45 |
*** Bjoern_ has quit IRC | 23:46 | |
johnsom | sbalukoff https://review.openstack.org/#/c/284008 passed, is that ready to review? | 23:46 |
sbalukoff | johnsom: Yep! | 23:46 |
sbalukoff | Huh. Scenario test passed. | 23:47 |
sbalukoff | Great! | 23:47 |
TrevorV | sbalukoff I'm not sure exactly how to add a second sni object to get these tests updated.... | 23:47 |
johnsom | Yeah. I still think we need the scenario test changes in my lbaas patch, but we are in the right direction | 23:47 |
xgerman | madhu_ak was asking what is more urgent: the SSL scenario or the API... | 23:48 |
xgerman | madhu_ak | 23:48 |
madhu_ak | yep | 23:48 |
sbalukoff | TrevorV: So, in that test file, in the _get_unique_key function, you should do the same thing there that you're doing in octavia/common/data_models.py in the _get_unique_key function. | 23:49 |
TrevorV | oh yeah I already added that, but should I not also update the tests to account for that? | 23:50 |
sbalukoff | TrevorV: I was saying earlier, if the tests pass as is, you're probably OK for now, but that probably indicates that we're not doing enough SNI testing in that file. | 23:51 |
johnsom | It seems like API is getting covered for the most part by the neutron-lbaas tests right? I think TLS is a problem area when it comes to testing, so I guess I would vote for TLS | 23:51 |
TrevorV | Ah I see what you mean. Alright, well they pass with that change, but it definitely needs to be updated to include multiple containers. | 23:51 |
sbalukoff | We should back-fill that at some point, but I wouldn't require it of you right now because this is urgent, right? | 23:51 |
TrevorV | Yeah, but I rebased on top of it | 23:51 |
TrevorV | Oh well. | 23:51 |
sbalukoff | Um... | 23:51 |
sbalukoff | Let me have a closer look at the SNI class... | 23:52 |
TrevorV | Sure thing | 23:52 |
sbalukoff | Is SNI.tls_container_id a unique identifier for the SNI object? | 23:52 |
sbalukoff | As in, can there be more than one 'SNI' object with the same tls_container_id? | 23:53 |
sbalukoff | Oh! Is it the combination of listener_id and tls_container_id that makes it unique? | 23:53 |
madhu_ak | johnsom, Will that be TLS scenario or API tests? | 23:54 |
sbalukoff | (I guess I could just check the DB schema...) | 23:54 |
johnsom | madhu_ak TLS scenario | 23:54 |
sbalukoff | Ok, so yes, the primary key is both listener_id + tls_container_id. | 23:54 |
madhu_ak | okay, will work on it and I hope it is not coinciding with fnaval's work on the TLS scenario | 23:55 |
sbalukoff | TrevorV: Make the unique identifier line look like this, then: return obj.__class__.__name__ + obj.listener_id + obj.tls_container_id | 23:55 |
sbalukoff | TrevorV: Make sense? | 23:55 |
openstackgerrit | Trevor Vardeman proposed openstack/octavia: Use unique SNI identifier when building data model https://review.openstack.org/284464 | 23:57 |
TrevorV | Oh dammit... why sbalukoff ? | 23:58 |
sbalukoff | Haha! Sorry-- I finally realized what the real nature of the problem was. | 23:58 |
blogan | omg scenario tests passed! | 23:58 |
rm_work | yeah i mentioned that but he pointed out why it didn't matter | 23:58 |
TrevorV | sbalukoff no no, the tls_container_id is unique | 23:58 |
rm_work | (sni_container_id is unique) | 23:58 |
sbalukoff | Is it? | 23:58 |
sbalukoff | Ok. | 23:58 |
sbalukoff | Then that's fine. | 23:58 |
TrevorV | That should be a negative test for it. | 23:59 |
TrevorV | Honestly | 23:59 |
rm_work | yeah, since we pull the host data for the SNI from the cert... there is no reason to have the same cert twice | 23:59 |
blogan | sbalukoff: could you have just used id(SqlAlchemyObjectInstance)? | 23:59 |
sbalukoff | blogan: I'm not sure we actually want to do that the way we're building the data model. | 23:59 |
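To summarize the fix that emerged from this thread: SNI rows have no single id column, so keying graph nodes on the class name alone collapses distinct containers, while tls_container_id keeps them apart (since, as rm_work notes, the same cert is never attached twice to a listener). A simplified sketch of the kind of _get_unique_key helper discussed — assumed shape, not the exact repository code:

```python
# Simplified sketch (assumed, not the exact patch) of a _get_unique_key-style
# helper used while walking the SQLAlchemy graph in to_data_model: every
# visited object needs a genuinely unique key.
def _get_unique_key(obj):
    """Return a hashable identity for an object in the data-model graph."""
    name = obj.__class__.__name__
    if name == 'SNI':
        # SNI has no plain 'id' column; tls_container_id is unique per
        # listener's SNI list, so it serves as the identifier.
        return name + obj.tls_container_id
    return name + obj.id
```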