Friday, 2014-12-05

sbalukoffMaybe have a pros and cons list for each alternative?00:00
sbalukoffI would *hope* that quells too much unnecessary back-and-forth by people who won't be 100% satisfied with this solution, but would choose it as the lesser of the evils.00:00
sbalukoff(Because there certainly isn't a panacea for this problem.)00:01
*** vivek-ebay has quit IRC00:01
dougwigpros/cons would get repetitive, let me try some wordy words and see what comes out.00:02
sbalukoffSounds good.00:03
sbalukoffI just try to be pragmatic about this kind of thing:  Yes, it isn't a perfect solution, but STFU unless you can come up with something better.00:03
dougwigi have yet to see the overall openstack community adopt that attitude.  :)00:03
sbalukoffHeh! True enough. Maybe we need a few more assholes who are willing to ram-rod stuff through when we need practical solutions.00:05
sbalukoff(ie. tyrants)00:06
blogansbalukoff's new alias RamRod00:07
dougwiguhh..  not touching that.00:09
sbalukoffHaha!00:14
*** openstackgerrit has quit IRC00:18
*** openstackgerrit has joined #openstack-lbaas00:19
sbalukoffSpeaking of ramrodding something through:  dougwig or xgerman, have time to take a look at this? https://review.openstack.org/#/c/121233/00:20
xgermanI need to do some more research00:22
xgermanhow microversions and the other projects do it00:22
xgermanI have seen both decimal and non-decimal versioning in OpenStack, but the plan was to do the "right" thing00:22
xgermanreading now: https://wiki.openstack.org/wiki/Nova/ProposalForAPIMicroVersions00:23
*** crc32 has quit IRC00:31
*** mlavalle has quit IRC00:35
*** mlavalle has joined #openstack-lbaas00:35
dougwigsbalukoff: how about this?00:41
dougwighttps://www.irccloud.com/pastebin/OIZccGJu00:41
sbalukoffare not being addressed by the OpenStack dev community yet00:42
sbalukoffOtherwise, the wording looks good.00:42
*** mlavalle has quit IRC00:45
dougwigsbalukoff: new rev pushed00:47
sbalukoffOk, looking.00:48
xgermanalready -1'd - and I dare you to rebase while I comment00:59
*** xgerman has quit IRC01:10
*** fnaval has quit IRC01:58
*** fnaval has joined #openstack-lbaas02:14
*** xgerman has joined #openstack-lbaas03:28
*** xgerman has quit IRC03:30
*** xgerman_ has joined #openstack-lbaas03:30
*** rm_you has joined #openstack-lbaas03:33
*** rm_you has joined #openstack-lbaas03:33
*** ptoohill_ has joined #openstack-lbaas03:35
*** xgerman_ has quit IRC03:53
rm_yousbalukoff: replied again04:30
sbalukoffUh-oh.04:31
*** vivek-ebay has joined #openstack-lbaas04:33
sbalukoffYo dawg, I hear you like comments. So I put a comment on your comment and commented on it.04:38
rm_youi sometimes wonder if LBaaS is the most sarcastic/memetastic OS group :P04:41
*** vivek-ebay has quit IRC04:49
*** fnaval has quit IRC05:18
*** fnaval has joined #openstack-lbaas05:58
*** fnaval has quit IRC06:21
rm_yousbalukoff: did you read that ML email about FLIPs?06:24
*** ptoohill_ has quit IRC06:37
*** ptoohill_ has joined #openstack-lbaas06:39
*** ptoohill_ has quit IRC06:44
*** openstackgerrit has quit IRC06:49
*** openstackgerrit has joined #openstack-lbaas06:49
*** amotoki is now known as amotoki__away06:55
*** enikanorov_ has joined #openstack-lbaas07:04
*** enikanorov__ has quit IRC07:07
*** cipcosma has joined #openstack-lbaas07:35
*** vivek-ebay has joined #openstack-lbaas07:49
*** vivek-ebay has quit IRC07:54
rm_youblogan: BOOM. Comments.08:00
rm_youoh actually I guess that's on johnsom's review <_<08:00
*** woodster_ has quit IRC08:30
*** Despruk has joined #openstack-lbaas09:15
DesprukHello. I have a bug in the lbaas haproxy driver when using session_persistence type HTTP_COOKIE. In the haproxy conf, the line for each pool member is generated as "server 62d96c09-0316-43ae-a831-8dd2b22aa983 10.0.0.10:12345 weight 1 cookie 0", adding "cookie <number>" for each pool member. However, when adding a new server to the members pool, the new haproxy conf entry is added as09:30
Despruk"cookie 0" and the previous "cookie 0" server is now "cookie 1". This causes old HTTP sessions with cookie 0 to go to the new pool member and sessions are broken.09:30
Desprukis this a known bug?09:32
Desprukit appears to be coming from "neutron/services/loadbalancer/drivers/haproxy/cfg.py" line 145.09:35
Desprukserver += ' cookie %d' % config['members'].index(member)09:35
Desprukis there some reason why a pool index is used here instead of something unique, like member id?09:36
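For context, here is a minimal sketch of the fix Despruk is suggesting: key the persistence cookie on the member's UUID instead of its position in the members list, so existing cookies stay valid when members are added or removed. This is not the actual Neutron cfg.py; the member fields and the use_http_cookie_persistence flag are assumptions for illustration.

```python
def build_member_line(member, use_http_cookie_persistence):
    # member is assumed to be a dict like the ones cfg.py iterates over
    server = 'server %s %s:%s weight %s' % (
        member['id'], member['address'],
        member['protocol_port'], member['weight'])
    if use_http_cookie_persistence:
        # buggy original (index shifts whenever the member list changes):
        #   server += ' cookie %d' % config['members'].index(member)
        # stable alternative keyed on the member's UUID:
        server += ' cookie %s' % member['id']
    return server
```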
*** amotoki__away is now known as amotoki10:22
*** SumitNaiksatam has quit IRC10:25
*** f13o has joined #openstack-lbaas10:31
*** SumitNaiksatam has joined #openstack-lbaas10:34
*** amotoki is now known as amotoki__away11:54
*** mikedillion has joined #openstack-lbaas12:55
*** woodster_ has joined #openstack-lbaas13:00
*** mikedillion has quit IRC13:19
*** kbyrne has quit IRC14:19
*** kbyrne has joined #openstack-lbaas14:21
*** vivek-ebay has joined #openstack-lbaas14:40
*** mikedillion has joined #openstack-lbaas14:45
*** fnaval has joined #openstack-lbaas14:45
*** TrevorV_ has joined #openstack-lbaas15:04
*** mestery has quit IRC15:42
*** ptoohill_ has joined #openstack-lbaas15:54
*** ptoohill_ has quit IRC15:56
*** fnaval has quit IRC16:18
*** xgerman has joined #openstack-lbaas16:21
*** mestery has joined #openstack-lbaas16:21
*** mestery has quit IRC16:28
*** mestery has joined #openstack-lbaas16:28
dougwigDespruk: can you file that bug here https://bugs.launchpad.net/neutron/+filebug, and then send me a link to it?16:29
*** fnaval has joined #openstack-lbaas16:45
blogandougwig: does the neutron meetup start on monday or tuesday?16:54
dougwigmonday - wednesday.16:54
*** barclaac has joined #openstack-lbaas16:55
*** mlavalle has joined #openstack-lbaas16:59
*** sbfox has joined #openstack-lbaas17:10
*** sbfox has quit IRC17:12
*** sbfox has joined #openstack-lbaas17:12
*** sbfox has quit IRC17:16
*** sbfox has joined #openstack-lbaas17:17
*** vivek-ebay has quit IRC17:19
*** ajmiller has quit IRC17:19
*** jschwarz has joined #openstack-lbaas17:22
*** mikedillion has quit IRC17:24
*** mikedillion has joined #openstack-lbaas17:25
*** jschwarz has quit IRC17:28
*** TrevorV_ has quit IRC17:30
*** mestery has quit IRC17:35
*** vivek-ebay has joined #openstack-lbaas17:38
*** mikedillion has quit IRC17:40
*** TrevorV_ has joined #openstack-lbaas17:41
*** ptoohill_ has joined #openstack-lbaas17:41
*** ptoohill_ has quit IRC17:41
*** ptoohill_ has joined #openstack-lbaas17:42
dougwigsbalukoff, blogan, xgerman: does haproxy support M:N as suggested in that thread?18:02
xgermanhaproxy can share pools among listeners (if you install it all in one process - not like sbalukoff suggested)18:04
xgermanwe just can't share listeners outside the same LB18:05
sbalukoffExcept that nobody does that between listeners in reality. ;)18:05
xgermanwell, the legitimate use case is to cut down on health monitoring traffic18:05
xgermanbut we are in the cloud; they can just buy more boxes18:06
sbalukoffSure, but doing that sharing buys you essentially nothing because nobody does that "legitimate use case"18:06
sbalukoffAnd it means that you don't have fault separation between listeners18:06
sbalukoffAnyway, we've already debated this, and these aren't new arguments. :P18:07
dougwigmight be interesting to see if *any* of the backends actually support sam's model, since the "lots of events" problem gets even worse as "lots of events AND lots of dup" if the model isn't native.18:07
sbalukoffdougwig: What's your question related to?18:07
dougwigmy original objection was around implementation complexity, which i'm not sure has gone away, even moving operational status into the root.18:08
sbalukoffWell, one of the feature requests we have going for the haproxy guys (and I'll be poking Baptiste about at the hack-a-thon) is the ability to take health-check information from an outside source.18:08
xgermandougwig, that's my worry, too18:08
sbalukoffThis is going to be important once we're doing ACTIVE-ACTIVE18:08
dougwiga10 supports more of this model than haproxy, and i still don't relish the failure cases.18:08
xgermanyeah, it doesn't strike me as something we should support18:09
sbalukoffFor what it's worth, I think we still have complexity there, as you've suspected...  not sharing means that more of that complexity is exposed to the user.18:09
sbalukoffHaving said this...18:09
*** mestery has joined #openstack-lbaas18:09
sbalukoffI think the case where entities are shared is actually the exception18:09
xgerman+118:09
sbalukoffExcept listeners sharing a single load balancer18:09
sbalukoffOh-- and pools being shared within a single listener (via L7 rules) is also fairly common.18:10
sbalukoffBut, again, I don't think we're really saying we can't do that... :/18:10
sbalukoffBut pools being shared outside a single listener-- nobody does that.18:11
sbalukoffMembers being shared between pools-- nobody does that18:11
sbalukoffListeners being shared between loadbalancers-- again, nobody does that.18:11
xgermanAnd to be honest we'd like to have this thing in production sooner rather than later, so i am inclined to veto anything which causes more work/delays18:12
sbalukoffxgerman: +118:12
dougwigvip -> vport at 1:M is obviously a minimum (LB -> listener, in our parlance.)18:13
sbalukoff(And by "nobody" I mean "almost nobody--  less than 1% of deployments")18:13
sbalukoffdougwig: Agreed.18:13
sbalukoffAnd again, when L7 rules actually happen, then having a pool used multiple times within a single listener is also "perfectly legitimate"18:14
sbalukoffAnd will reduce health monitoring traffic.18:14
dougwiglooks like a10 is 1:M on pool -> listener, and M:N on l7 <-> listener (attached separately), so i'll be dup'ing pools in that scenario.18:18
xgermanyeah, that feels more and more like a whiteboarding exercise18:19
xgermandougwig, something completely different: where do we file bugs against LBaaS V218:19
xgerman?18:19
sbalukoff"Premature optimization"-- solving problems that don't actually come up enough to worry about them in the real world.18:19
dougwigxgerman: neutron18:20
xgermanlink?18:20
dougwighttps://bugs.launchpad.net/neutron/+filebug18:20
dougwigdirect assign them to someone on our team, or they might languish.18:20
xgermanthanks --18:20
dougwig(feel free to assign them to me,)18:20
dougwig((for triage))18:21
xgermanok, we (=HP) can also fix it18:21
xgermanbut will assign to you18:21
xgermanhttps://bugs.launchpad.net/neutron/+bug/139974918:26
xgermandougwig - I can't assign to you (the system doesn't let me)18:26
dougwigWe are but peons18:35
*** vivek-ebay has quit IRC18:39
*** SumitNaiksatam has quit IRC18:39
xgermanok, they let me assign to myself which I did18:41
xgermanand subscribe you...18:41
dougwigOk18:41
dougwigTy18:41
xgermanDoug Wigley Jr --> that's you?18:42
xgermanthey had another doug wigley Jr -- which made it confusing18:42
*** vivek-ebay has joined #openstack-lbaas18:56
dougwigreally?  wow.  no, i'm "Doug Wiegley" (or Douglas)19:03
*** SumitNaiksatam has joined #openstack-lbaas19:09
*** mwang2 has quit IRC19:13
*** mikedillion has joined #openstack-lbaas19:17
*** sbfox has quit IRC19:27
*** vivek-ebay has quit IRC19:29
*** Youcef has quit IRC19:57
openstackgerritTrevor Vardeman proposed stackforge/octavia: Creation of Octavia API Documentation  https://review.openstack.org/13649919:59
*** xgerman has quit IRC20:09
bloganso just read through the backlog here20:13
bloganis the suggestion that we support sharing of pools within a single listener context vs no sharing at all?20:14
dougwigi think the suggestion is still "do what we have now, let's ship before we shoot ourselves in the head again".20:14
sbalukoffdougwig: +120:15
bloganwell the only problem i can foresee with that is that changing the statuses from what they are now to what they might be would be a backwards incompatible change20:15
sbalukoffblogan: I agree that sharing pools within the context of a single listener makes sense when L7 rules are introduced. But I think even more strongly that we need to get something out.20:16
sballedougwig +120:16
sballesbalukoff: +120:16
blogani dont dispute that, but my concern again is releasing this API and then immediately having to introduce another version that supports the status changes20:16
bloganwhich may or may not include sharing, bc the status changes are something that will need to be addressed with or without sharing20:17
sbalukoffblogan: Does the API we're introducing include L7?20:17
sbalukoffEr... that we're releasing, I mean.20:17
bloganit will20:17
blogannot immediately it is a separate blueprint20:18
sbalukoffOk, then we should figure this out.20:18
bloganbut im pretty sure we would like it to be in kilo20:18
sbalukoffWhen L7 is introduced that will be a backward incompatible change in many ways, IMO...20:18
sbalukoffI agree.20:18
sbalukoffAnd I'm saying, I guess, that when L7 is introduced we should allow sharing of pools in the context of a single listener.20:19
sbalukoff*shrug*20:19
*** ajmiller has joined #openstack-lbaas20:19
bloganwell we can punt on sharing of pools even then, and add it in as another feature when it's ready, that wouldn't be backwards incompatible20:19
sbalukoffBut I'd like to see LBaaSv2 out with or without L7.20:19
blogani just think the status changes will be backwards incompatible20:19
sbalukoffI'm not sure that's true.20:19
bloganespecially if loadbalancer and listener have two status fields20:20
sbalukoffThe examples you gave on the ML discussion didn't include L720:20
sbalukoffThat is to say....20:20
sbalukoffWith L7, we will need to support 1:N for listeners to pools.20:20
*** jorgem has joined #openstack-lbaas20:21
sbalukoffThe question is does that "N" include pools which are shared by multiple L7 policies or not.20:21
sbalukoffI don't see a reason not to allow them to be shared in this context.20:21
dougwigDuke Nukem Forever, LBaaS pack20:21
sbalukoffThe status information will essentially relay the same information.20:21
sbalukoffdougwig: +120:21
sbalukoffYeah, I'd rather see L7 delayed if we can't resolve this than LBaaSv2 delayed for any reason at all.20:22
dougwigi think the status tree is additive, not incompatible, unless i'm missing something.20:22
sbalukoffdougwig: That's how I'm seeing it, too.20:23
bloganwell every entity currently has a status, if we did the status tree we'd have to remove that20:28
bloganbc now /pools are logical20:28
bloganstatus directly on the pool entity wouldn't make sense20:29
dougwigis there a reason those entities couldn't continue to have a status?  i mean, would a member be up for one pool and down for another?20:29
sbalukoffdougwig: That would be strange. It's not beyond the realm of possibility, but it would be strange.20:30
sbalukoffdougwig: That could happen if the pools have different health checks and it succeed for one and not the other.20:30
bloganwell the reason would be the status of a pool shared by listeners could be different between the listeners, thus the one /pools/{pool_id} entity having a status looks like a global status20:30
sbalukoffblogan: I'm not advocating sharing pools between listeners.20:31
sbalukoffPartially for the reason you've just elucidated.20:31
dougwigwe have to deal with possible breaking enhancements while moving forward.  and is the status tree big enough that it'd force a protocol rev, or do we do it non-breaking and say, hey, use these status trees now, the objects will still have a status, but it may not be the *most* correct status it could be in the context of *your* LB.  i can see that being20:32
dougwigtechnically feasible, and not too horrible.  which, to me, means that it's not worth revving the protocol for.  which means it's not worth holding for.20:32
dougwigthat was just short of a ramble, but i think it conveyed my thinking.20:33
sbalukoffdougwig: I'm all for not doing anything to create more delays.20:33
dougwigi'm just trying to get at whether there's a feasible backwards compatible answer that lets blogan sleep at night.20:35
dougwigwhich is a pretty high bar, actually.20:35
sbalukoffHaha20:35
sbalukoffI personally don't understand how blogan sleeps.20:36
dougwigi'm not sure that he does.20:36
sbalukoffI suspect he doesn't, given how late I've seen him online here.20:36
blogani don't know what yall are talking about, i see you guys on here later than me20:37
bloganand dougwig gets on earlier!20:37
blogananyway, if we do sharing of pools, and one backend doesn't support sharing of pools, then it's creating a different pool each time it is shared and thus each will have a different status, which can't be represented by the /pools/{pool_id}20:38
blogani fear i am not making sense, so i am sorry20:38
sbalukoffIt sounds like you're worried about implementation details slipping in there.20:39
sbalukoffAs much as I despise that phrase.20:39
sbalukoffIn that case, the driver writer needs to do what she thinks is best in this case:20:39
sbalukoffIf it were me writing it, I would probably just return the status of the first one of these groups / pools that is "shared" with the assumption that the rest of them probably have the same status.20:40
sbalukoffWhich is almost always going to be correct.20:40
bloganokay so you'd be fine with keeping the status on the root pool object?20:40
sbalukoff(Because they'd have the same connectivity and same health checks.)20:40
sbalukoffblogan: Yes.20:40
sbalukoffWe're going to have to make that choice as well when we do active-active on Octavia.20:41
sbalukoff*shrug*20:41
sbalukoffIf it turns out to cause problems to make that kind of assumption, then we will have to revisit the design.20:41
sbalukoffBut I don't think it'll cause problems at this point.20:41
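A rough illustration of the behaviour sbalukoff describes (not Octavia or Neutron code): when a backend that can't share pools makes a copy per listener, the driver reports the status of the first copy for /pools/{pool_id}, on the assumption the copies converge because they share connectivity and health checks.

```python
def pool_operating_status(backend_copies):
    """backend_copies: statuses of each per-listener copy of one logical pool."""
    if not backend_copies:
        return 'OFFLINE'
    # Copies are assumed to converge to the same state; report the first one.
    return backend_copies[0]
```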
dougwigfyi, neutron-lbaas split is scheduled for 10am MDT, monday morning.  expect the repo to be in a funky state for a day or two.20:42
sbalukoffIn fact-- if we can get haproxy to take health check information from an external source (to reduce the number of health checks hitting back-end servers in big active-active deployments)...20:42
sbalukoffI'm kind of counting on that working.20:42
sbalukoffdougwig: Nice!20:42
sbalukoffPlease keep us posted!20:42
blogansbalukoff: I'm not objecting to moving forward with what we have, im just voicing concerns I have about possible issues down the immediate road20:43
sbalukoffThat's fine.20:43
sbalukoffAnd I think it's worthwhile to have this kind of discussion.20:43
bloganso I think if we plan on just allowing pools to be shared within the scope of a single listener, that will alleviate many of my concerns20:44
*** crc32 has joined #openstack-lbaas20:44
bloganthough we do still plan on having separate statuses, provisioning and operating20:44
sbalukoffAnd I think it'll alleviate many of Samuel's concerns, too.20:44
sbalukoffYep, that's a good idea.20:44
bloganso I assume the pool's status field will be its operational status20:47
sbalukoffDoes provisional status make sense for a pool?20:48
sbalukoff(Unless it's connected, ultimately, to a loadbalancer?)20:48
bloganin some backends yes20:48
bloganbut implementation detail!20:49
sbalukoffHaha!20:49
sbalukoffI don't think it'll break anything to have different back-ends report different things for provisional status depending on what they have to do.20:49
blogani think we can say pools only have operational status, and the status field on the pool is the operational status20:49
sbalukoffFor all practical purposes, we only care when 1) it's provisioned 2) whether it's operationally available or not.20:50
sbalukoffYeah, in my mind provisional status has less meaning for a pool.20:50
sbalukoffBut then, I am coming from an haproxy-centric perspective.20:50
bloganwell i would prefer if the pool's provisioning status was reflected in the listener's provisioning status20:50
bloganso if i change a pool, its parent listener's provisioning status changes to PENDING_UPDATE20:51
sbalukoffIf other drivers will make meaningful use of the provisional status on a pool, I don't think that's something that hurts anything else.20:51
bloganor if i add a pool as well20:51
sbalukoffMy thought is that provisional status should always cascade from parent objects to child objects.20:51
sbalukoffBut again, that is something that apparently will vary from implementation to implementation.20:52
bloganso in my scenario the listener's status remains unchanged but the pool's status goes into PENDING_UPDATE (and if members had a provisional status, those would go into PENDING_UPDATE)?20:53
*** vivek-ebay has joined #openstack-lbaas20:53
sbalukoff(What's frustrating about this is that that PENDING_UPDATE state should be about the shortest lived state of all.)20:53
sbalukoffblogan: What if we leave it up to the driver to modify provisional status as they see fit?20:54
sbalukoffI mean... what's the point of provisional status anyway? To block other updates from happening while a change is underway, right?20:55
bloganthat leads to inconsistent behavior20:55
sbalukoffIs there any other use for this field?20:55
bloganother than user feedback, no20:55
sbalukoffSo...  when a change happens, drivers should change any entities which can't tolerate parallel changes to 'PENDING_UPDATE' until the update is complete.20:56
sbalukoffAnd yes, different back-ends will have their quirks.20:57
sbalukoffI don't know that there's a good way around that.20:57
sbalukoffBut again... the hope is that most of these kinds of changes happen quickly, so unless you're firing update requests out *really fast* you probably won't notice the differences.20:57
*** jorgem has quit IRC20:57
sbalukoffAnd if you are firing out updates that quickly, then it's a *good thing* to block where it's appropriate so that we don't mess up the back-end.20:58
bloganyes and honestly update requests are rare with load balancers (relatively speaking)20:58
bloganat least from our experience20:59
sbalukoff(It's good to have this discussion of handling status, by the way-- we've avoided it for months, and we need to have it.)20:59
sbalukoffOur experience too.20:59
bloganso my preference is if any child of a load balancer is updated, it should put the load balancer in PENDING_UPDATE20:59
sbalukoffProbably less than 10% of deployments ever see a change after they're deployed. And those that do, it's usually something really minor (adding and removing members)20:59
bloganwhich is the lowest common denominator20:59
sbalukoffI'm OK with that.21:00
bloganomg21:00
sbalukoffI'd probably target the listener, but again, that's haproxy-specific thinking there.21:00
sbalukoffHAHA!21:00
bloganwell id target both, but that would just be for user feedback21:00
bloganand i know why you'd do it on the listener, but doing it on the load balancer should work for all backends21:01
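A hedged sketch of the cascade blogan prefers: any write to a child object flags the root load balancer (and optionally the parent listener) as PENDING_UPDATE until the driver reports completion. The attribute and status names are assumed from the discussion, not taken from shipped code.

```python
def mark_pending_update(load_balancer, listener=None):
    """Flag parent objects while a child (pool, member, ...) is being changed."""
    load_balancer.provisioning_status = 'PENDING_UPDATE'
    if listener is not None:
        # Optionally flag the parent listener too, which is closer to how an
        # haproxy-per-listener backend actually applies the change.
        listener.provisioning_status = 'PENDING_UPDATE'
```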
sbalukoffWe probably need to write up all these musings into a formal recommendation, as far as how to handle the various statuses.21:01
sbalukoffAnd get feedback from the larger community on it.21:01
bloganumm this is in that reproposed lbaas v2 spec, which you +1'ed21:02
bloganlol21:02
blogannot any of the explanations of course though21:03
dougwigweren't we splitting the status field?21:03
*** mikedillion has quit IRC21:03
bloganwell thats another thing to discuss21:03
bloganif load balancer and listener should have provisioning status and operational status, how do we do that21:06
bloganjust add the fields21:06
blogan?21:06
sbalukoffSure.21:06
sbalukoffAny reason not to?21:06
bloganadding immediately into lbaas v2, i dont see a reason21:07
bloganif we waited until later, i would21:07
sbalukoffDo eet.21:07
blogani guess it could be a more complex migration from v1 to v221:07
dougwiglet's just spec it as two and charge ahead.21:07
bloganbut that would have to happen anyway if we waited21:07
blogansounds good to me21:08
blogani'm going to add to the spec21:08
bloganyall look at it closely21:08
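Illustrative only: what splitting the single status field into two might look like in an LBaaS v2 response body. The field names follow the split being proposed for the spec (provisioning vs. operating status); the ids and values are made up.

```python
load_balancer = {
    'id': 'a36c20d0-18e9-42ce-88fd-82a35977ee8c',
    'vip_address': '203.0.113.10',
    'provisioning_status': 'PENDING_UPDATE',  # lifecycle of configuration pushes
    'operating_status': 'ONLINE',             # health reported by monitoring
}

listener = {
    'id': '023f2e34-7806-443b-bfae-16c324569a3d',
    'protocol': 'HTTP',
    'protocol_port': 80,
    'provisioning_status': 'ACTIVE',
    'operating_status': 'ONLINE',
}
```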
*** jorgem has joined #openstack-lbaas21:08
sbalukoffOk.21:08
blogansbalukoff: since you're here, would you agree with my comments that how the vip is created and whether the user owns the vip or octavia does is up to the network driver implementation?21:09
bloganand by that i mean, implementation detail?21:09
sbalukoffWell...  I think German's concern that if a user loses a specific VIP that causes them unnecessary grief is completely valid.21:10
bloganoh i dont dispute that either21:10
sbalukoffSo, whatever we do, and whoever owns the VIP, we need to make sure the user doesn't lose control of it.21:10
sbalukoffIt's easy to "guarantee" that if the user owns the VIP21:10
bloganbut if we have to create our own custom network driver and we don't want to allow that, our driver won't allow that21:10
sbalukoffBut that makes implementing things on Neutron a lot harder, I seem to recall.21:11
sbalukoffRight.21:11
sbalukoffOk, if you put it that way (and explain it that way), then yes-- who owns the VIP is an implementation detail.21:11
sbalukoffAgain, I would love it if we had an actual IPAM where users could "own" public IPs in a more meaningful way.21:12
bloganok good, and that has a caveat that the code outside the network driver can't make assumptions about who owns the vip21:12
sbalukoffSure.21:12
bloganthough it sounds like in German's case, octavia will still own the VIP, but they will allow users to point a floating ip to that vip, or octavia can do that for them giving them ownership of that flip21:13
sbalukoffI think you're correct.21:13
bloganthough im not sure if users can point a flip they own to a port they do not own21:13
sbalukoffUsers probably can't. But a service account ought to be able to.21:14
rm_workYeah, that was what I explained in my comment21:14
bloganyeah so it would then fall to octavia to do that21:14
sbalukoff(I think it's essentially got the same as admin privileges there.)21:14
bloganin which case if the user ever disassociated that flip from the port, they wouldn't be able to reassign it to that same port again21:14
sbalukoffrm_work: You're assuming I read what you write. ;)21:14
rm_workwhatever the "scaling IP" is in your deployment, needs to be handled by octavia21:14
sbalukoffActually, I do, but I am a little behind.21:14
bloganno one should ever read what he writes21:15
sbalukoffAnd I was rather brain-fried after all the reviewing yesterday.21:15
rm_workin our case, it just happens that the scaling IP *is* the FLIP21:15
rm_workbut that's an implementation detail21:15
sbalukoffrm_work: Are the scaling IP and the public IP one and the same in all deployments?21:16
rm_worksballe: no21:16
rm_workerr21:16
rm_worksbalukoff: no21:16
rm_workT_T21:16
sbalukoffThat's what I thought.21:16
sbalukoffAnd, it's sounding like Octavia needs to own both of them in any case to do what it'll have to do.21:16
rm_workbut it doesn't matter, since WHATEVER the scaling IP is, it needs to be done by Octavia21:16
sbalukoffRight.21:17
rm_workOctavia doesn't need to own the public IP21:17
rm_workjust the scaling IP21:17
blogandamnit rm_work sbalukoff and i were in agreement and then you come in21:17
sbalukoffHow will the user point  / route the public IP at the scaling IP if he doesn't own the scaling IP?21:17
blogango back to your hobbit hole21:17
sbalukoffHaha21:17
rm_worklol21:17
sbalukoffblogan: We were agreeing about status information.21:17
bloganand that creating the vip and ownership is an implementation detail21:18
sbalukoffrm_work decided to regurgitate discussion about network topology.21:18
sbalukoffAah... true.21:18
rm_workcopy/pasting from my comment:21:18
rm_workRS Impl: Octavia assigns a FLIP to the Amphorae, returns IP. User uses that IP as the LB's public IP.21:18
rm_workHP Impl: Octavia assigns whatever their scaling backend IP thing is, returns the IP. User assigns their own FLIP to that IP. User uses their FLIP's IP as the LB's public IP.21:18
rm_workEither way, Octavia has to do something similar -- attach some sort of scaling IP to all of the Amphorae. It's just that in our case, that IS the public IP, and in HP's case, it's an internal only IP that needs a FLIP created and routed to it.21:18
bloganim pretty sure rm_work and I are saying the same thing21:18
rm_workyes, since we agreed in the comments there :P21:19
sbalukoffCan a user point a FLIP at an IP they don't own?21:19
rm_work... at least I think we did21:19
sbalukoff(And isn't on a network they own?)21:19
rm_worksbalukoff: we should make it so they can, I think21:19
sbalukoff(I'm speaking of the HP impl view)21:19
rm_workit may require their implementation to do some permission granting21:20
rm_worknot sure21:20
sbalukoffSo there's a serious security problem there...21:20
rm_workthat's kinda an implementation detail21:20
rm_work<_<21:20
rm_workI mean, not *any* IP they don't own21:20
rm_workbut the one that was created for them, yes21:20
sbalukoffThe security problem being that if I can point a FLIP at any IP on any network I want, I can now expose other tenants' "private" servers if I know anything about their internal networks.21:20
rm_work^^21:21
sbalukoffOk, we're going to have to be more explicit in describing the restrictions there.21:21
rm_workwould require them to grant permission on specific objects to specific users, which might be hard21:21
rm_workwe're getting there with Barbican, but I don't think Neutron has plans for that kind of granular access control21:21
sbalukoffBut... correct me if I'm wrong, I think you're providing that option because HP has a hard and fast need to have the user own the FLIP?21:21
rm_workso they might need an "attach" command where the user passes in a FLIP ID and Octavia/Neutron-LBaaS does the attachment of the FLIP as an admin21:22
sbalukoffAah... that would work.21:22
bloganso why dont we just say the neutron driver HP uses will have a config option to turn on automatic creation of the FLIP to point to the scaling VIP, on behalf of the user21:22
rm_workyeah because HP is weird :P21:22
sbalukoffSo Octavia, using its service-account-level privilege, would be the thing altering the FLIP.21:22
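A speculative sketch (not real Octavia code) of how VIP ownership could stay an implementation detail behind a network driver interface, as blogan and rm_work describe: an RS-style driver hands back a floating IP directly, while an HP-style driver returns an internal scaling IP and optionally attaches a user-supplied FLIP to it using the service account's privileges. The class names and the underscore-prefixed helpers are placeholders, not existing APIs.

```python
import abc


class VipNetworkDriver(abc.ABC):
    """Hides who owns the VIP/FLIP; callers only get back an address to use."""

    @abc.abstractmethod
    def plug_vip(self, amphorae, user_flip_id=None):
        """Attach the scaling IP to the amphorae and return the usable IP."""


class FloatingIpVipDriver(VipNetworkDriver):
    # RS-style: the scaling IP *is* the floating IP users will hit.
    def plug_vip(self, amphorae, user_flip_id=None):
        flip = self._allocate_floating_ip()       # hypothetical helper
        self._bind(flip, amphorae)                # hypothetical helper
        return flip


class InternalVipDriver(VipNetworkDriver):
    # HP-style: an internal scaling IP, with the user's FLIP optionally
    # associated on their behalf using admin/service-account credentials.
    def plug_vip(self, amphorae, user_flip_id=None):
        scaling_ip = self._allocate_internal_ip()  # hypothetical helper
        self._bind(scaling_ip, amphorae)
        if user_flip_id is not None:
            self._associate_flip(user_flip_id, scaling_ip)  # admin-level call
        return scaling_ip
```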
rm_workbah where is xgerman I wanted to prod him with that comment T_T21:22
sbalukoffIt's Friday. Nobody at HP works on Fridays. ;)21:23
blogankeep the prodding to a minimum, you don't want to anger him21:23
rm_worksbalukoff: could be, if Neutron doesn't support fine-grained access control that we'd need to have to allow the user to point a FLIP at "specific" IPs21:23
sbalukoff(I am, of course, joking.)21:23
rm_workdamn, i was just sending me resume off to HP21:23
* rm_work puts resume back in folder21:23
sbalukoffrm_work: It's something to look into.21:24
sbalukoffHaha!21:24
rm_workmy old job let us off at 3:00 on fridays :P21:24
rm_workof course, they also enforced 8am start times the rest of the week >_> so bleh21:24
sbalukoffAll you had to do was show up at 6:00am?21:24
sbalukoff...21:25
sbalukoffHAHA21:25
sbalukoffYeah... I like where I'm at. I'm literally still in my pyjamas, working from the bedroom.21:25
rm_worksbalukoff: oh man, pajamas?21:25
rm_workthat's a step up from what I am usually wearing <_<21:25
rm_work>_>21:25
rm_work<_<21:25
sbalukoffAnd thus we enter the TMI territory21:26
rm_workso anyway, THOSE FLIPS, am i right? :P21:26
sbalukoffEr... sure/21:26
sbalukoff?21:26
* rm_work is trying to change the subject21:27
sbalukoffI need to respond to your comments in any case. And the ML discussion started about FLIPS.21:27
rm_workYES21:28
rm_workrespond to that ML post21:28
rm_workI hope your response is something like "Great idea, we'll start working with you on that immediately, and plan to go forward with that plan for our own network infrastructure ASAP."21:28
rm_workWould make this Octavia discussion much simpler :P21:29
openstackgerritTrevor Vardeman proposed stackforge/octavia: Nova driver implementation  https://review.openstack.org/13310821:30
sbalukoffHaha!21:30
sbalukoffOh shit! Trevor's actually getting useful work done today.21:30
rm_workwat21:30
rm_worknah21:30
sbalukoffI better get back at it. Makin' me look bad.21:30
bloganyou do that well enough yourself21:31
bloganBOOM!21:31
* blogan drops the mic21:32
sbalukoffHAha!21:32
*** jorgem has quit IRC21:52
*** jorgem has joined #openstack-lbaas21:55
*** jorgem has quit IRC21:56
openstackgerritTrevor Vardeman proposed stackforge/octavia: Creation of Octavia API Documentation  https://review.openstack.org/13649921:57
TrevorV_Alright guys!  I'm done for the day and will hear from/see you guys Monday!  Have a good weekend!21:58
*** TrevorV_ has quit IRC21:58
*** xgerman has joined #openstack-lbaas22:19
*** jorgem has joined #openstack-lbaas22:24
rm_workptoohill: when are you partying tonight?22:26
ptoohillI may have rsvp'd to the wrong person, so possibly not at all :/22:26
rm_workT_T22:26
rm_workI would assume you RSVP'd and just go22:27
rm_workI doubt they'll be like "NO, GO HOME"22:27
rm_work:P22:27
rm_workyou're the one that convinced me to go T_T22:27
ptoohillhope not, and was planning on that, so im heading that way sometime after 622:27
ptoohill:P22:27
rm_workwhen does it officially start?22:27
ptoohill6:3022:27
xgerman6:30 - is that a party you can bring kids to?22:29
ptoohillheh22:29
xgerman;-)22:29
ptoohillits more of a dinner thing22:29
rm_workgotta start drinking early to get my money's worth :P22:29
xgermanbetter head there at 5 then :-)22:29
ptoohill'Jingle and Mingle'22:30
xgermananyhow, rm_work sadly we work full days Friday at HP22:30
rm_workheh22:30
rm_workso you did see our conversation above :P22:30
rm_workdidn't think you were here22:30
xgermanyeah, I was at the shop and got a new battery for my car22:31
xgerman(those things only last 11 years!!)22:31
rm_workholy shit you got 11 years? O_o22:31
rm_workdown here it's more like 4-522:31
rm_workheat related, I think22:31
xgermanwell, the shop asked me to replace it like 3 years ago but after I couldn't start it the other day I caved in22:32
rm_workheh22:32
johnsomsbalukoff: I saw that comment about not working on Friday....22:32
rm_workah, johnsom tattled22:32
sbalukoffHehe!22:32
sbalukoffAin't I a stinker?22:33
xgermanrm_work we don't have a hard and fast rule about who owns the VIP -- I just was on the receiving end of customers complaining too often when we misplaced their VIP so I am reluctant to own it any longer...22:33
rm_workyeah, I get it22:33
xgermancool, yeah and I am not sure if the special RAX FLIPs will gain traction in the community. We might just get a set of ips and be told "good luck"22:34
xgermanfrom our Neutron people22:35
xgermanso it's hard to state requirements -- so I am stating preferences22:35
xgermanhope that makes sense22:35
*** vivek-eb_ has joined #openstack-lbaas22:37
*** vivek-ebay has quit IRC22:37
*** vivek-ebay has joined #openstack-lbaas22:38
*** vivek-eb_ has quit IRC22:38
openstackgerritMichael Johnson proposed stackforge/octavia:  Add Amphora base image creation scripts for Octavia  https://review.openstack.org/13290422:41
*** BhargavBhikkaji has joined #openstack-lbaas22:41
johnsomsbalukoff: I dropped the comment about the template file.  You are correct about the config rendering.22:42
BhargavBhikkajiHello.22:42
sbalukoffSounds good. I'll check it out later today.22:42
sbalukoffThanks!22:42
sbalukoffHello!22:42
johnsomNo, thank you for reviewing!22:42
BhargavBhikkajiI am trying to integrate HAProxy LB to my openstack network. Is there any link that walks me thro' the procedure ?22:43
rm_workhttps://review.openstack.org/#/c/132904/9/elements/haproxy-octavia/os-refresh-config/configure.d/20-haproxy-tune-kernel22:43
rm_workI thought sbalukoff was saying it was better to disable conntrack altogether?22:44
rm_workthough I heard that secondhand so maybe that is not right?22:44
johnsomYeah, but he sent settings for it anyway...  Grin22:44
sbalukoffThat's correct.22:44
rm_work<_<22:44
rm_work>_>22:44
rm_work<_<22:44
bloganBhargavBhikkaji: are you using the neutron lbaas extension with the haproxy driver?22:44
sbalukoffAnd yes, the settings are there for tuning it anyway.22:44
sbalukoff:)22:44
rm_workk, I guess that's in case it isn't off, and has no effect if it is?22:45
sbalukoffWell, I'm not sure we're going to be able to get away from having conntrack loaded, running as we are on a compute node, and given the machinations that need to happen there for VM connectivity.22:45
sbalukoffIt will still have an effect if it's loaded-- a negative one.22:46
sbalukoffBut, those settings in that file are FAR better than the default for a load balancer.22:46
dougwigBhargavBhikkaji: http://docs.openstack.org/admin-guide-cloud/content/install_neutron-lbaas-agent.html22:46
rm_worksbalukoff: i mean, if it ISN'T loaded, then setting that conntrack_max will do nothing22:47
rm_workright?22:47
johnsomright22:47
rm_workso either it fixes the broken default settings, or it does nothing22:47
sbalukoffYour are correct22:48
rm_workk22:48
*** cipcosma has quit IRC22:48
sbalukoffYou are, even.22:48
rm_workheh22:48
rm_workI refrained from asking "my what are correct?" :P22:48
rm_workI really was going to let that slide.22:48
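For reference, a hedged sketch of the logic being discussed: apply the conntrack tuning only when the nf_conntrack module is actually loaded; if it isn't, the sysctl key doesn't exist and there is nothing to fix (the preferred situation). The value shown is illustrative, not the one from the review under discussion.

```python
import os


def tune_conntrack_max(max_entries=262144):
    """Raise nf_conntrack_max only when the conntrack module is loaded."""
    key = '/proc/sys/net/netfilter/nf_conntrack_max'
    if not os.path.exists(key):
        # Module not loaded: the broken defaults can't bite, so do nothing.
        return False
    with open(key, 'w') as f:
        f.write('%d\n' % max_entries)
    return True
```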
BhargavBhikkajiblogan: I installed mirantis and am trying to integrate LB now. I am not sure if i am using the neutron lbaas extension with the haproxy driver22:49
BhargavBhikkaji<blogan>: Can you explain more ?22:49
bloganBhargavBhikkaji: I think the link dougwig posted is a good starting point22:49
BhargavBhikkajidougwig: Thanks. I am going thro' the link22:50
*** barclaac has quit IRC23:01
*** barclaac has joined #openstack-lbaas23:09
johnsomrm_work blogan: Looking at the comments on the controller spec.  Just to come up to speed, the current plan is the api process will have access to the Octavia DB (tm) directly?23:10
bloganjohnsom: yes, but i wouldn't consider it part of the controller23:11
bloganbut i guess its a gray area23:12
johnsomNo, that is fine that it is *not* part of the controller (actually better).  I just thought we were limiting the components that had direct access to the DB.23:12
bloganwell the API needs to have access to the DB23:12
rm_workyes23:13
BhargavBhikkajidougwig: does "apt-get install neutron-lbaas-agent" download haproxy?23:13
johnsomOk.  I thought someone was pushing to have that access proxy through something.  I will update the spec with this in mind soon.23:13
bloganwell i think the original plan was to just have things make requests to a queue, and the controller pulls off that queue23:14
johnsomProbably back when the API code was just the neutron plug in.23:14
johnsomRight23:14
johnsomOk, thanks for the catch and review.23:15
bloganbut having an API in front lets the controller only worry about creates, updates and deletes, while the API process handles the higher frequency reads23:15
bloganand we immediately store the user's request, instead of having it travel through the queue to the controller, and then the controller has to store that23:16
bloganand also the api process is the gateway for whether a load balancer can be updated or not, based on its provisioning status23:16
bloganso we don't get a bunch of updates to the same load balancer and they get processed in an odd order23:17
johnsomI think the controller will need to manage that as well.23:17
johnsomHealth events and such23:18
bloganyeah definitely the health events23:19
bloganbut that shouldn't change the provisioning status, just the operating status23:19
johnsomI think we need to go into a "pending update" like state during fail over due to health event situations23:20
johnsomwe don't want to try to push out a config change during deploy23:21
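A rough illustration (all names assumed) of the gating blogan and johnsom describe: the API process checks the load balancer's provisioning status before accepting a write, so updates can't pile up or be applied out of order, and the controller can use the same mechanism to hold off config pushes while a health-triggered failover is in progress.

```python
IMMUTABLE_STATES = ('PENDING_CREATE', 'PENDING_UPDATE', 'PENDING_DELETE')


class ImmutableLoadBalancer(Exception):
    """Surfaced by the API process as an HTTP 409."""


def accept_update(load_balancer):
    """Gate a write in the API process before anything is queued for the controller."""
    if load_balancer.provisioning_status in IMMUTABLE_STATES:
        raise ImmutableLoadBalancer(load_balancer.id)
    # Flag the object so later updates wait until this one completes.
    load_balancer.provisioning_status = 'PENDING_UPDATE'
    return load_balancer
```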
*** vivek-ebay has quit IRC23:22
dougwigBhargavBhikkaji: i'm not sure.  check your package list (dpkg)23:26
*** mlavalle has quit IRC23:27
openstackgerritJorge Miramontes proposed stackforge/octavia: Added versioning and migration mandates  https://review.openstack.org/13975523:37
jorgemyeah buddy :)23:38
bloganupdated the lbaas v2 spec https://review.openstack.org/#/c/13820523:42
*** jorgem has quit IRC23:52
dougwigsbalukoff: the reason for entry_point instead of driver was that flavors can also be applied to plugins23:57
