Monday, 2015-08-31

*** chlong has joined #openstack-lbaas  00:55
*** chlong has quit IRC  01:49
*** mlavalle has quit IRC  02:11
*** bana_k has joined #openstack-lbaas  02:44
*** clev-away has quit IRC  03:26
*** clev has joined #openstack-lbaas  03:31
*** I has joined #openstack-lbaas  03:35
*** I is now known as Guest25059  03:36
*** Guest25059 has quit IRC  03:49
*** haigang has joined #openstack-lbaas  03:49
*** bharathm has joined #openstack-lbaas  04:12
<crc32> anyone around? I can't rebase anything from CR 202336 on up.  04:21
*** bana_k has quit IRC  04:37
<openstackgerrit> Carlos Garza proposed openstack/octavia: Adding amphora failover flows  https://review.openstack.org/202336  04:45
<openstackgerrit> Carlos Garza proposed openstack/octavia: health manager service  https://review.openstack.org/160061  04:47
*** ig0r__ has joined #openstack-lbaas  04:59
*** ig0r_ has quit IRC  04:59
<openstackgerrit> Carlos Garza proposed openstack/octavia: Implement UDP heartbeat sender and receiver  https://review.openstack.org/201882  05:00
*** haigang has quit IRC  05:23
*** numans has joined #openstack-lbaas  05:39
*** haigang has joined #openstack-lbaas  05:47
<openstackgerrit> Carlos Garza proposed openstack/octavia: Implementing EventStreamer  https://review.openstack.org/218735  06:01
<openstackgerrit> Stephen Balukoff proposed openstack/neutron-lbaas: Make pools independent of listeners  https://review.openstack.org/218560  06:09
*** Alex_Stef has joined #openstack-lbaas  06:23
*** haigang has quit IRC  06:27
*** haigang has joined #openstack-lbaas  06:28
*** haigang has quit IRC  06:29
*** haigang has joined #openstack-lbaas  06:36
*** apuimedo_ has joined #openstack-lbaas  07:27
*** jschwarz has joined #openstack-lbaas  07:32
<openstackgerrit> Brandon Logan proposed openstack/octavia: Do not remove egress sec group rules on plug vip  https://review.openstack.org/218754  07:53
*** apuimedo_ has quit IRC  08:05
*** Jianlee has joined #openstack-lbaas  08:11
*** kiran-r has joined #openstack-lbaas  08:15
<openstackgerrit> Stephen Balukoff proposed openstack/neutron-lbaas: Make pools independent of listeners  https://review.openstack.org/218560  08:20
*** mixos has quit IRC  08:28
*** Jianlee is now known as Jian0612  08:31
*** nmagnezi has joined #openstack-lbaas  08:34
*** amotoki has joined #openstack-lbaas  08:41
*** haigang has quit IRC  08:44
*** haigang has joined #openstack-lbaas  08:44
*** haigang has quit IRC  08:50
*** haigang has joined #openstack-lbaas  08:52
*** crc32 has quit IRC  08:55
*** bharathm has quit IRC  09:12
*** haigang has quit IRC  09:20
*** haigang has joined #openstack-lbaas  09:33
*** numans has quit IRC  09:36
*** ig0r_ has joined #openstack-lbaas  09:37
*** ig0r__ has quit IRC  09:39
*** bharathm has joined #openstack-lbaas  10:01
*** numans has joined #openstack-lbaas  10:05
*** bharathm has quit IRC  10:06
*** haigang has quit IRC  10:14
*** haigang has joined #openstack-lbaas  10:15
*** Jian0612 has quit IRC  10:30
*** nmagnezi has quit IRC  10:53
*** bharathm has joined #openstack-lbaas  11:02
*** bharathm has quit IRC  11:07
*** nmagnezi has joined #openstack-lbaas  11:22
*** nmagnezi has quit IRC  11:24
*** nmagnezi has joined #openstack-lbaas  11:29
*** amotoki has quit IRC  11:39
*** amotoki has joined #openstack-lbaas  11:39
*** openstackgerrit has quit IRC  11:46
*** openstackgerrit has joined #openstack-lbaas  11:47
*** bharathm has joined #openstack-lbaas  12:51
*** bharathm has quit IRC  12:56
*** kbyrne has quit IRC  13:19
*** kbyrne has joined #openstack-lbaas  13:32
*** kiran-r has quit IRC  13:48
*** numans has quit IRC  14:36
*** bharathm has joined #openstack-lbaas  14:40
*** apuimedo has joined #openstack-lbaas  14:42
*** mixos has joined #openstack-lbaas  14:42
*** bharathm has quit IRC  14:45
*** mixos has quit IRC  14:46
*** Alex_Stef has quit IRC  14:56
*** apuimedo is now known as apuimedo|away  15:13
*** woodster_ has joined #openstack-lbaas  15:18
*** sbalukoff has quit IRC  15:29
<openstackgerrit> OpenStack Proposal Bot proposed openstack/octavia: Updated from global requirements  https://review.openstack.org/218911  15:35
<xgerman> blogan, johnsom, rm_work: Please don't approve any patches until the smoke clears: http://lists.openstack.org/pipermail/openstack-dev/2015-August/073270.html  15:41
<johnsom> Joy  15:42
<johnsom> Actually that global update above seems to address this issue  15:43
*** nmagnezi has quit IRC  15:48
<xgerman> oh  15:48
<xgerman> confusing times … gave it a +A  15:49
*** amotoki has quit IRC  15:50
*** mlavalle has joined #openstack-lbaas  15:54
*** apuimedo|away has quit IRC  16:05
*** jschwarz has quit IRC  16:06
<openstackgerrit> Michael Johnson proposed openstack/neutron-lbaas: Set up LBaas V2 tempest test gate against Octavia.  https://review.openstack.org/209675  16:06
*** apuimedo|away has joined #openstack-lbaas  16:09
*** bharathm has joined #openstack-lbaas  16:12
*** kiran-r has joined #openstack-lbaas  16:13
*** bharathm has quit IRC  16:16
*** clev is now known as clev-away  16:34
*** bana_k has joined #openstack-lbaas  16:44
*** kiran-r has quit IRC  17:10
*** madhu_ak has joined #openstack-lbaas  17:20
*** TrevorV has joined #openstack-lbaas  17:28
<TrevorV> johnsom, xgerman: hey guys, you around?  17:34
<johnsom> Hi  17:34
<xgerman> Hi  17:34
<TrevorV> I didn't actually test the failover with the REST stuff.  Have either of you?  17:35
<johnsom> Not yet.  I was going to give it a go today.  I'm addressing comments on the UDP patch at the moment.  After that I will give it a whirl  17:36
*** bharathm has joined #openstack-lbaas  17:41
<xgerman> I had a horrible time getting my devstack to work… but will try today  17:41
*** clev-away is now known as clev  17:41
<TrevorV> I'm in no real hurry, was only curious.  I'm working on some diagrams for internal stuff, but since it's not "merged" it popped into my head that I hadn't looked at that side.  17:43
*** ig0r__ has joined #openstack-lbaas  17:43
<blogan> looks like the gate is clear  17:45
<blogan> we can +A things again, I believe  17:45
<TrevorV> No... no I can't :(  17:46
*** ig0r_ has quit IRC  17:46
<johnsom> It's still very slow though.  I have a job in zuul that has been waiting 1:40  17:47
<openstackgerrit> Merged openstack/octavia: Do not remove egress sec group rules on plug vip  https://review.openstack.org/218754  17:54
<blogan> that merged fast!  17:54
<openstackgerrit> Merged openstack/octavia: Updated from global requirements  https://review.openstack.org/218911  18:00
*** abdelwas has joined #openstack-lbaas  18:05
<blogan> btw the docs merged for the TLS and SNI attributes  18:10
<blogan> http://developer.openstack.org/api-ref-networking-v2-ext.html#createListener  18:11
<johnsom> Excellent  18:11
*** sbalukoff has joined #openstack-lbaas  18:14
<johnsom> TrevorV: Is it ok if I do the rebase on the failover flow?  18:31
<sbalukoff> Aah, it's been a while since I've argued technical design with other people. It's good to get back into it, eh.  18:34
<sbalukoff> FWIW, I'd love to see others' opinions on this, too: https://review.openstack.org/#/c/218560/  18:34
<openstackgerrit> Trevor Vardeman proposed openstack/octavia: Adding amphora failover flows  https://review.openstack.org/202336  18:36
<johnsom> TrevorV - I guess you were ahead of me on that rebase....  18:36
<TrevorV> Just finished it, johnsom  18:36
<TrevorV> :D  18:36
<TrevorV> Was on laptop, chat on desktop, didn't see it until just now :D  18:36
<johnsom> Thanks  18:36
<openstackgerrit> Michael Johnson proposed openstack/octavia: health manager service  https://review.openstack.org/160061  18:37
<openstackgerrit> Michael Johnson proposed openstack/octavia: Implement UDP heartbeat sender and receiver  https://review.openstack.org/201882  18:38
<openstackgerrit> Sherif Abdelwahab proposed openstack/octavia: Amphora Flows and Service Drivers for Active Standby  https://review.openstack.org/206252  18:45
*** ig0r__ has quit IRC  18:48
<openstackgerrit> Michael Johnson proposed openstack/octavia: Implement UDP heartbeat sender and receiver  https://review.openstack.org/201882  19:07
<rm_work> I'm ... "off" today, but looking at a couple of reviews really quick to see if we can get stuff moving ahead  19:17
<rm_work> need the failover flows to merge before we can merge HMService  19:17
<rm_work> are we going to try to merge those today, or can I wait to do a full review until tomorrow?  19:18
<TrevorV> rm_work, probably wait, since failover hasn't actually been tested for the REST impl  19:22
<TrevorV> just SSH  19:22
<rm_work> kk  19:28
*** TrevorV has quit IRC  19:52
<openstackgerrit> Bertrand Lallau proposed openstack/octavia: Fix doctrings typo in delete_member  https://review.openstack.org/219008  19:56
<xgerman> TrevorV: found an issue with failover  20:12
<xgerman> will do some more tests  20:13
<xgerman> ok, second test failed as well  20:18
*** diogogmt has joined #openstack-lbaas  20:36
<blogan> fyi, don't +A anything again  21:06
<rm_work> lol  21:09
<rm_work> gotta love the gate right before code freeze :P  21:09
<rm_work> everyone wants to submit stuff and something external always borks everything T_T  21:09
<blogan> indeed  21:10
<blogan> sbalukoff: when you get a chance to talk about the independent pool stuff, I'd like to talk about it here  21:12
<sbalukoff> blogan: Okeedokee!  21:47
<sbalukoff> blogan: I'm actually free right now. Do you have time to chat now?  21:48
<blogan> sbalukoff: in 30 mins?  21:52
<sbalukoff> Works for me!  21:52
<rm_work> dougwig: can you review https://review.openstack.org/#/c/217330/ ? I think it is good, and the change that relies on it on the project side depends-on it  22:07
<rm_work> dougwig: https://review.openstack.org/#/c/167885/ <-- passes with the experimental queue job (the one that I made to test the change above)  22:07
*** mlavalle has quit IRC  22:12
<dougwig> rm_work: commented  22:13
<rm_work> dougwig: well, it's *replacing* the existing gate  22:14
<rm_work> nothing can pass both the existing gate and this one  22:14
<dougwig> rm_work: sure, but you could keep it separate and make it non-voting in check for a few days, and *then* cut it over.  22:14
<dougwig> that's what I'd do. but if you're confident enough to just switch, hey, go nuts. :)  22:14
<rm_work> err, check what? that it fails on everything but a single CR?  22:14
<johnsom> dougwig: While you are here... Min and I are running into some interesting issues booting the amp VM in the gate.  22:14
<rm_work> dougwig: want to make sure I'm not missing something here  22:15
<dougwig> johnsom: shoot  22:15
<rm_work> it would just fail on everything but the single CR that changes the gate code on the project side (https://review.openstack.org/#/c/167885/)  22:15
<johnsom> The nova scheduler is rejecting the amp build, claiming there is only 1024 of storage available. (we had a similar issue with RAM, but I created a smaller flavor)  22:15
<rm_work> and that review couldn't merge until the gate is replaced  22:15
<johnsom> Any thoughts about how to get devstack to use more of the 30GB of free disk for nova hosts?  22:16
<blogan> sbalukoff: ya there?  22:17
<sbalukoff> I am indeed!  22:17
<dougwig> rm_work: ok, so you have two jobs.  the passing one, and a failing one that doesn't vote.  then you merge the barbican change, and the non-voting one starts to pass.  then you get a few days of data that it's stable, so you make it voting (and kill the old one/rename it).  22:17
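[Note: the staged cutover dougwig describes would live in openstack-infra/project-config. A minimal sketch of what the transition period might look like in the Zuul layout, assuming the zuul v2 layout.yaml format of the time; the job names below are hypothetical and the real names and file paths may differ:]

    # zuul/layout.yaml (sketch only; job names are hypothetical)
    jobs:
      # New job runs in check but does not block merges yet.
      - name: gate-neutron-lbaasv2-dsvm-api-new
        voting: false

    projects:
      - name: openstack/neutron-lbaas
        check:
          - gate-neutron-lbaasv2-dsvm-api       # existing voting job
          - gate-neutron-lbaasv2-dsvm-api-new   # new job, non-voting for now
        gate:
          - gate-neutron-lbaasv2-dsvm-api

Once the project-side change (https://review.openstack.org/#/c/167885/) merges and the new job has a few days of stable passes, the job entries are swapped and the new job is made voting.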
<dougwig> johnsom: it's an evil thought, but you could set the disk overcommit in nova.conf via local.conf.  22:17
<rm_work> dougwig: if we left it in that state for a few days, nothing else could merge, because it would all be failing the old gate check?  22:17
<johnsom> dougwig: Hahaha, I like evil...  22:18
<rm_work> dougwig: the important part being, this is not backwards compatible with the old gate job (which will fail everything)  22:18
<johnsom> Interesting thought.  22:18
<dougwig> rm_work: is the old devstack service disappearing or something? aren't those separate code paths? I must be missing why you need/want a big bang in the middle of the process.  22:19
<rm_work> dougwig: the old process is phased out with the new one  22:19
<dougwig> rm_work: but you're in control of that, right? you don't have to do that at the same time.  devstack won't melt.  :)  22:20
<blogan> sbalukoff: so you decided to leave loadbalancer_id off of the pool?  22:20
<rm_work> I guess it might be possible to do it in multiple steps <_<  22:20
<sbalukoff> blogan: No, actually, I decided to keep it, and leave listener_id off of it.  22:20
<rm_work> though we are a bit short on time because of code freeze being in two days  22:20
<blogan> sbalukoff: but loadbalancer_id is required now?  22:20
<rm_work> so this is a little bit accelerated  22:20
<sbalukoff> blogan: In a roundabout way, yes. When using the API or the CLI you can specify just the listener_id...  but all the code does is look for which loadbalancer the listener is using and associates the pool with that (on creation only--  can't switch with an update, just like listeners)  22:21
<sbalukoff> I've also updated the CLI and API such that when creating a listener, you can just specify the pool it should be associated with, and it'll automatically get assigned to the loadbalancer.  22:22
<sbalukoff> (I figured, if we allow that pattern for pool creation, why not allow it for listener creation?)  22:22
<blogan> but what your code is doing is exposing a loadbalancer_id attribute off the pool, which seems odd to me  22:23
<johnsom> dougwig: Just as we were typing, my latest job finished.  This worked: VOLUME_BACKING_FILE_SIZE=24G export VOLUME_BACKING_FILE_SIZE  22:23
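[Note: for anyone hitting the same scheduler rejection, a minimal devstack local.conf sketch covering both approaches discussed above. The VOLUME_BACKING_FILE_SIZE setting is the one johnsom reported working; the disk_allocation_ratio override is only an assumption about how dougwig's overcommit suggestion might be implemented, not a tested recipe:]

    # local.conf (sketch; values are illustrative, not recommendations)
    [[local|localrc]]
    # Larger backing file so the host reports more usable disk
    # (the workaround johnsom reported working above).
    VOLUME_BACKING_FILE_SIZE=24G

    # dougwig's "evil" alternative: overcommit disk in nova.conf so the
    # scheduler stops rejecting the amphora flavor for lack of space.
    [[post-config|$NOVA_CONF]]
    [DEFAULT]
    disk_allocation_ratio = 2.0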
<xgerman> blogan: +1  22:23
<sbalukoff> blogan: Why?  22:23
<xgerman> can't we have a many-to-many on listener_id?  22:23
<sbalukoff> Pools exist in the context of a loadbalancer.  22:23
<xgerman> do they?  22:23
<blogan> sbalukoff: because pools are directly (or indirectly via L7) tied to listeners, which are then tied to load balancers  22:23
<sbalukoff> xgerman: You mean the pool <-> listener relationship?  Yes.  22:23
<xgerman> yep, that's all I think we need  22:24
<blogan> sbalukoff: tying to an LB should just be as simple as tying to a listener right now, it's just an indirect coupling, I'm sure I'm missing something though  22:24
<xgerman> well, we had people lobby for sharing pools between LBs  22:24
<sbalukoff> blogan: It is that simple, from the user's perspective.  22:24
<xgerman> but I am pretty sure we decided against that  22:25
<sbalukoff> xgerman: We did. I used to be a supporter of it. I would still be if we limit the sharing to providers....  22:25
<sbalukoff> i.e. ...  you can share pools and listeners between loadbalancers so long as those loadbalancers are all using the same driver.  22:25
<sbalukoff> That's not what I've written right now.  22:25
<sbalukoff> But I would be in favor of that.  22:25
<blogan> sharing pools across LBs can be something we solve for, if needed, after doing it within an LB  22:25
<sbalukoff> I am not in favor of sharing listeners / pools across drivers / vendors.  22:25
<xgerman> agreed. But hanging the pool on the LB would make that unnatural  22:26
<sbalukoff> The one use case where it makes sense is having the same pool for an IPv4 and IPv6 loadbalancer.  22:26
<sbalukoff> xgerman: It does. It's the same thing for listeners, actually.  22:26
<sbalukoff> So...  22:26
<sbalukoff> When we introduce support for IPv6 load balancers, then it might be time to revisit whether we want to decouple listeners and pools from loadbalancers.  22:27
<sbalukoff> I don't really see much of a case for doing that before then.  22:27
<dougwig> johnsom: sweet.  22:27
<blogan> me either  22:27
<sbalukoff> IMO, anyway.  22:27
<xgerman> I thought we support IPv6  22:27
<blogan> digression completed!  22:27
<sbalukoff> That change will involve a massive API change...  22:27
<blogan> xgerman: I think he means IPv4 and IPv6  22:27
<sbalukoff> xgerman: I thought support was removed with a recent commit?  22:27
<xgerman> we have a ton of test cases for IPv4 and IPv6  22:28
<blogan> sbalukoff: it should just be a matter of choosing an IPv6 subnet  22:28
<sbalukoff> blogan: Ok, I've not done that on my side yet. But if y'all say it works, then I'll believe you.  22:28
<sbalukoff> Do you think it's worthwhile decoupling both listeners and pools from loadbalancers at this time?  22:29
<blogan> sbalukoff: well I honestly haven't tested it out yet, so I can't say that it works  22:29
<blogan> other than what people have said  22:29
<blogan> no I don't  22:29
<blogan> it's not  22:29
<sbalukoff> Ok.  22:29
<blogan> I want to solve having pools being shared within the context of an LB  22:29
<blogan> but also solve it in a way that feels natural  22:29
<xgerman> agreed  22:29
<blogan> and loadbalancer_id off the pool doesn't seem natural  22:29
<sbalukoff> Ok, my code essentially does that. I just need to finish getting the last few broken tests fixed.  22:29
<sbalukoff> I'm still not sure I understand what you mean when you say it doesn't seem natural.  22:30
<blogan> sbalukoff: with your code can I create a pool without being tied to a listener or load balancer?  22:30
<sbalukoff> blogan: It will be tied to a loadbalancer, ultimately.  22:30
<blogan> sbalukoff: natural in the model relationship hierarchy  22:30
<sbalukoff> It's not tied to a listener.  22:30
<sbalukoff> Not really.  22:30
<blogan> sbalukoff: lb -> listener -> pool or lb -> listener -> l7 stuff -> pool  22:30
<blogan> sbalukoff: but it's tied to an LB, no?  22:31
<sbalukoff> blogan: Yes, it is.  22:31
<sbalukoff> Ultimately, I know that for haproxy a 'pool' without a listener doesn't make any sense. A pool just ends up becoming a backend config in a haproxy config file....  22:31
<sbalukoff> But other vendors do need to create "pool" entities on their devices.  22:32
<blogan> sbalukoff: so what if a user can either 1) create a pool and have it tied to a listener, 2) create a pool and have it tied to an L7 policy/rule (can't remember which one has the pool) or 3) create a pool not tied to either  22:32
<sbalukoff> And these exist on their devices independently from the listener...  until they're bound to the listener in some way.  22:32
<blogan> sbalukoff: yeah, if a pool doesn't have a parent, then it wouldn't go to the driver at all, neutron-lbaas would store it in the DB  22:32
<blogan> well, scratch that, it can go to the driver  22:33
<blogan> and it should go to the driver  22:33
<blogan> but in an haproxy case, nothing would happen  22:33
<sbalukoff> Right.  22:33
<sbalukoff> So creating a pool tied to an L7 policy is like tying it to the listener, since L7 policies cannot exist independent of listeners.  22:33
<sbalukoff> Or, shouldn't be able to exist.  22:34
<blogan> yeah  22:34
<blogan> I agree on that  22:34
<sbalukoff> I'll need to check Evgeny's code there again.  22:34
<sbalukoff> So...  if you think I ought to decouple the pool from the loadbalancer...  that means the pool could potentially be shared between load balancers.  22:35
<blogan> so it might be easiest to just allow the pool to be created without a parent or attached to a listener immediately, and leave the L7 immediate attachment to another patch  22:35
<sbalukoff> The problem with treating it like just an entry in the database is that it's not just that for some vendors... and pools have statuses.  22:35
<blogan> sbalukoff: no, because once it's associated with a listener, we can set up a check to see whether it's being associated with an entity on another LB  22:36
<sbalukoff> blogan: Again, if you're asking me to decouple the pool from the loadbalancer, then we might as well do this for the listener too, at the same time. It'll be about as much work (which is to say, a metric crap-ton.)  22:36
<blogan> sbalukoff: which is why I corrected myself and said we would send it to the driver, and the driver decides whether it's useful or not  22:36
<sbalukoff> After all, a listener (i.e. TCP port) independent of a listener makes about as much sense as a pool independent of a listener.  22:37
<sbalukoff> er.. pool independent of a loadbalancer.  22:37
<blogan> sbalukoff: I think we have a misunderstanding, I'm not saying decouple from the LB, just allow it to be parentless until it has been given a parent, and then it's tightly coupled to the LB indirectly  22:37
<sbalukoff> If you want to go down that road, then we need to come up with a different model for metrics and status. Because the current one isn't going to cut it when we're spanning providers.  22:38
<sbalukoff> blogan: I think it's a better idea to have it associated with a listener or loadbalancer on creation. After all, people don't usually create pools without intending them to be used somewhere.  22:38
<sbalukoff> So, I don't think it breaks any workflows to require a listener or loadbalancer to be specified when you create the pool.  22:39
<blogan> sbalukoff: okay, that'd be fine, but we'd have to add an L7_policy/rule attribute to the pool then  22:39
<sbalukoff> (Certainly not with current users of Neutron LBaaS v2--  since right now a pool is tightly coupled to a listener.)  22:39
<sbalukoff> blogan: No, we don't.  22:39
<sbalukoff> That attribute is an attribute of the L7 policy.  22:40
<blogan> okay, so if there wasn't a loadbalancer_id on the pool, just listener_id, then how would a user put a pool on the L7 policy?  22:41
<sbalukoff> pools should not care whether they're being accessed as a 'default_pool' (attribute of the listener) or as the result of an L7 policy (attribute of the policy)  22:41
<blogan> okay, okay, so duh, yeah, I see the code: you're allowing default_pool_id to be updateable now  22:41
<sbalukoff> Ok, so I think you missed it earlier: when you create a pool you must specify either loadbalancer_id or listener_id.  What really happens in the code is that if you specify the listener_id, we look up which loadbalancer_id it is using and use that for the pool.  22:42
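[Note: a minimal Python sketch of the create-pool behavior sbalukoff describes here. This is not the actual patch code from review 218560; the function name and the db_api calls are placeholders used only for illustration:]

    def resolve_pool_loadbalancer(context, pool_data, db_api):
        """Return the loadbalancer_id a new pool should be bound to.

        Callers may pass either loadbalancer_id or listener_id; if only
        listener_id is given, the pool inherits that listener's loadbalancer.
        """
        lb_id = pool_data.get('loadbalancer_id')
        listener_id = pool_data.get('listener_id')

        if lb_id is None and listener_id is None:
            # One of the two must be supplied on creation.
            raise ValueError('loadbalancer_id or listener_id is required')

        if lb_id is None:
            # Legacy-style workflow: only the listener was given, so use the
            # loadbalancer that listener already belongs to.
            listener = db_api.get_listener(context, listener_id)
            lb_id = listener.loadbalancer_id

        # Only loadbalancer_id is stored on the pool; listener_id is never
        # persisted there (default_pool_id lives on the listener instead).
        return lb_id

In other words, a pool created with just a listener ends up with the same loadbalancer_id it would have if the loadbalancer had been specified directly.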
<blogan> I got that  22:42
<sbalukoff> That workflow is allowed so we don't break people's existing workflows, eh.  22:42
<blogan> yeah  22:42
<sbalukoff> So a created pool ALWAYS has a loadbalancer_id.  22:42
<sbalukoff> And it never has a listener_id.  22:42
<blogan> can you do this over Hangouts?  22:42
<sbalukoff> (It never did--  default_pool_id was always an attribute of the listener.)  22:42
<sbalukoff> blogan: Potentially.  22:43
<blogan> it was, but not updateable; I made that decision early so we could update it later and it's still backwards compatible  22:43
<sbalukoff> Right.  22:43
<blogan> sbalukoff: so hmmm  22:43
<sbalukoff> Would you rather talk about this on a hangout?  22:44
<blogan> sbalukoff: if it's not too much trouble  22:44
<blogan> you won't see my pretty face  22:44
<sbalukoff> Okeedokee!  22:44
<blogan> no webcam support  22:44
<sbalukoff> Well, then I'm in!  22:44
<sbalukoff> ;)  22:44
<blogan> but voice is all I need  22:44
<blogan> lol  22:44
<blogan> I just need to hear your soothing voice  22:44
<sbalukoff> Ok, anyone else want to be part of this hangout?  22:44
<blogan> I mean loud piercing voice  22:44
<sbalukoff> HAHA!  22:45
<xgerman> I am always game for hangouts  22:45
<sbalukoff> So shrew-like.  22:45
<johnsom> Sure  22:45
<blogan> can we use the one we do for testing?  22:45
<sbalukoff> Ok, let me remember how to do this....  (Gotta use hangouts while I can, eh...  IBM will eventually move us off gmail...)  22:45
<sbalukoff> You have a hangout you use for testing?  22:45
<johnsom> Yes  22:45
<sbalukoff> Ok, how do I access it?  22:45
<xgerman> http://bit.ly/LBaaS_HP_Hangout  22:45
<xgerman> follow the link  22:46
*** apuimedo|away has quit IRC  23:07

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!