Monday, 2014-08-18

*** sbfox has joined #openstack-lbaas00:37
*** sbfox has quit IRC00:39
*** vjay3 has joined #openstack-lbaas00:42
vjay3hi blogan01:52
vjay3i am trying to rebase the netscaler driver with the latest v2 patches. i get the following error01:54
vjay3 ! [remote rejected] HEAD -> refs/publish/master/bp/netscaler-lbass-v2-driver (squash commits first)01:54
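
A minimal sketch of one common way to clear that "(squash commits first)" rejection, which Gerrit sends when the pushed branch carries more than one commit for what it expects to be a single change. The remote name assumes git-review's default ("gerrit"); the branch name is taken from the error above:

    git checkout bp/netscaler-lbass-v2-driver
    git log --oneline gerrit/master..HEAD    # see how many commits are stacked on the branch
    git rebase -i gerrit/master              # mark the extra commits as "squash" or "fixup"
    # keep the original Change-Id footer in the surviving commit message, then:
    git review
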
*** woodster_ has joined #openstack-lbaas01:57
*** vivek-ebay has joined #openstack-lbaas03:13
*** vjay3 has quit IRC03:25
bloganits been over 24 hours since I logged into irc!03:27
rm_youoh god03:27
rm_youvjay3 is about to clobber you, i feel03:27
bloganhe's not in channel03:27
rm_youi know03:27
rm_youbut03:27
rm_you<vjay3> i am trying to rebase the netscaler driver with the latest v2 patches. i get the following error03:27
rm_you<vjay3>  ! [remote rejected] HEAD -> refs/publish/master/bp/netscaler-lbass-v2-driver (squash commits first)03:27
blogani got it03:27
rm_youyeah03:27
bloganhe sent email too on ML03:27
rm_youheh03:28
dougwigblogan: jinx03:28
bloganim just going to have to update the GerritWorkflow page with explicit details on how to properly do rebases03:28
bloganand ive been lazy about it because i hate wiki markup03:28
dougwigi sent a safe example to the ML (using a new clone.)03:29
blogandougwig: why'd you do git review -d first?03:30
bloganoh nvm03:30
dougwignew clone, you have to re-setup the dependencies.03:30
blogansorry i thought the git checkout after was pulling from gerrit03:30
bloganthat is the best way to do it03:31
bloganit's just obvious because gerrit provides no help03:31
bloganjust not obvious03:31
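
The flow dougwig describes (fresh clone, download the parent change, re-stack the dependent one on top) looks roughly like the sketch below; the change numbers are made up and the exact commands may differ from his ML post:

    git clone https://git.openstack.org/openstack/neutron   # repo URL shown only as an example
    cd neutron
    git review -s            # sets up the "gerrit" remote
    git review -d 112233     # download the parent review (fake change number)
    git review -x 445566     # cherry-pick the dependent review on top (fake change number)
    # resolve any conflicts, leave both Change-Id lines intact, then push the re-stacked child:
    git review
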
dougwigdid my email come through with a bunch of weird characters?  i told outlook to use plain text.03:32
bloganŒrebase¹03:32
dougwigfuck.03:32
dougwigi hate outlook.  i hate exchange.03:32
bloganlol yeah they are sprinkled throughout03:32
dougwigfuck, i'll probably have to use a different client for ML emails.  that's just crappy.03:33
bloganwhat have you been using before?03:33
dougwigit's always been outlook.  but i've noticed a few of those characters before, it's just less obvious when it's text.03:34
blogani was just about to use outlook myself but then realized i can't figure out how to do in-line comments correctly with outlook03:38
rm_youhaha yes03:38
rm_youcomments in red03:39
rm_youthat's how03:39
rm_youYOLO03:39
dougwigyou set it to compose in text, and then tell it to use '>'.  but it clearly is encoding more than ascii, or mine wouldn't be getting corrupted.03:39
*** vjay3 has joined #openstack-lbaas03:39
rm_youdougwig: is that on windows or OSX?03:39
dougwigos x03:39
rm_youhmm03:39
rm_youI looked and could not find that option in OSX for the life of me03:39
rm_youalso, does practically every dev these days use OSX? <_<<03:40
rm_youseems like 90% of people I talk to are on OSX03:40
dougwigoutlook -> preferences -> compose, unclick html by default, then in the settings pane click 'text'.03:40
rm_youhmm k03:40
dougwigi just died a little inside, being able to type that from memory.03:40
rm_youi'll try not to forget that before tomorrow afternoon03:40
dougwigthis channel is archived.03:40
blogandon't worry dougwig can die a little more inside tomorrow03:40
bloganill be sure to ask him again tomorrow as well.  at some point dougwig will just be an empty husk03:41
dougwigthat's alright, i've been sitting with a -2 all weekend, which is killing much more of my soul.03:41
dougwigblogan: thanks for the +1 re-up, btw.03:42
blogannp, you've given me a gajillion re-ups03:42
bloganand evgeny too03:43
bloganand avishay03:43
bloganand ptoohill03:43
bloganand yourself03:43
bloganand soon to be vjay303:43
bloganoh i have a list03:43
dougwigthe blogan naughty/nice list.03:44
bloganall naughty03:44
bloganhey you never added your thoughts on the object model discussion03:44
dougwigsorry, i've worked over 40 hours since friday morning, trying to get the new CI processes in place, since they morphed again (and look to be in flux AGAIN on monday.)  after fpf/juno, i expect to refocus.03:45
bloganno dougwig, you need to put in 60 hour weekends03:46
blogansomehow squeeze out 6 extra hours on both days03:47
dougwigfriday was a 17 hour day, so in comparison, i took it easy on the weekend.03:47
dougwighere, jump through this hoop.  now 40 more.  you're on 39?  oh wait, we painted them all red.  have you been through the red ones yet?03:48
bloganthose are just flames03:48
dougwigmy family went camping for the week this morning.  so for the first time in years, i watched a gory violent war movie on full volume in the middle of the day.  i guess i could've been working then.03:50
rm_youheh03:50
rm_youlike03:50
bloganno you gotta take advantage of that03:50
rm_youoldschool War Movie03:50
bloganwhat movie?03:50
rm_youor Inglourious Basterds? :P03:50
bloganrm_you: what do you consider oldschool?03:50
rm_youblogan: anything serious about war where the moral is "war is bloody and everyone dies"03:51
bloganplease don't say "saving private ryan" please don't say "saving private ryan"03:51
dougwigit was lone survivor03:51
bloganoh ive been meaning to watch that03:51
rm_youhmm03:51
rm_youalternatively, Enemy at the Gates was good03:51
*** amotoki_ has joined #openstack-lbaas03:51
rm_younot really "oldschool" either :P03:51
bloganthough it will be hard not to do mark wahlberg impressions while watching it03:51
*** amotoki_ has quit IRC03:52
bloganyeah i like enemy at the gates03:52
dougwigi never said old school.03:52
dougwignot that i'm knocking old school war movies.03:52
*** Krast has joined #openstack-lbaas03:56
bloganalright im going to play a few games and then go to sleep04:09
bloganyall have a good night04:09
dougwigNight.  I'm going to go find another violent movie.04:15
rm_younight04:22
*** vivek-eb_ has joined #openstack-lbaas04:25
*** vivek-ebay has quit IRC04:28
*** vivek-ebay has joined #openstack-lbaas04:50
*** vivek-eb_ has quit IRC04:53
*** vivek-ebay has quit IRC04:54
*** amotoki_ has joined #openstack-lbaas05:12
*** enikanorov has quit IRC05:32
*** enikanorov has joined #openstack-lbaas05:32
*** vjay3 has quit IRC05:34
*** sbalukoff has joined #openstack-lbaas05:41
*** vjay3 has joined #openstack-lbaas05:43
*** vjay3 has quit IRC05:54
*** orion_ has joined #openstack-lbaas05:59
*** orion_ has quit IRC06:04
*** jschwarz has joined #openstack-lbaas07:00
*** mestery_afk has quit IRC07:05
*** mestery_afk has joined #openstack-lbaas07:07
*** sbfox has joined #openstack-lbaas07:50
*** sbfox has quit IRC08:03
*** vjay3 has joined #openstack-lbaas09:06
*** vjay3 has quit IRC09:44
*** vjay3 has joined #openstack-lbaas09:46
*** vivek-ebay has joined #openstack-lbaas10:26
*** vivek-eb_ has joined #openstack-lbaas10:30
*** vivek-ebay has quit IRC10:31
*** vjay3 has quit IRC11:25
*** vjay3 has joined #openstack-lbaas11:26
*** vivek-ebay has joined #openstack-lbaas11:58
*** vivek-eb_ has quit IRC12:01
*** vivek-ebay has quit IRC12:03
*** jschwarz has quit IRC12:25
*** amotoki_ has quit IRC12:31
*** vjay3 has quit IRC12:40
*** rohara has joined #openstack-lbaas13:02
*** enikanorov__ has joined #openstack-lbaas13:08
*** enikanorov_ has quit IRC13:11
*** jschwarz has joined #openstack-lbaas13:46
*** orion__ has joined #openstack-lbaas14:01
*** enikanorov_ has joined #openstack-lbaas14:03
*** Krast_ has joined #openstack-lbaas14:04
*** enikanorov__ has quit IRC14:11
*** Krast has quit IRC14:11
*** sbfox has joined #openstack-lbaas14:38
*** jschwarz has quit IRC14:41
*** markmcclain has joined #openstack-lbaas14:49
*** fnaval has joined #openstack-lbaas14:49
*** fnaval_ has joined #openstack-lbaas14:51
*** sbfox has quit IRC14:53
*** fnaval has quit IRC14:54
*** xgerman has joined #openstack-lbaas15:10
*** sbalukoff has quit IRC15:12
*** TrevorV_ has joined #openstack-lbaas15:16
*** TrevorV_ has quit IRC15:17
*** TrevorV_ has joined #openstack-lbaas15:18
*** markmcclain has quit IRC15:35
blogangood morning!15:53
dougwigmorning!15:59
*** sbfox has joined #openstack-lbaas16:00
*** sbfox has quit IRC16:07
*** sbfox has joined #openstack-lbaas16:07
blogandougwig i think you're a zombie16:09
dougwigblogan: most days i feel like one.16:09
*** vivek-ebay has joined #openstack-lbaas16:33
*** VijayB has joined #openstack-lbaas16:41
*** crc32 has joined #openstack-lbaas16:55
*** sbfox1 has joined #openstack-lbaas16:57
*** sbfox1 has quit IRC16:57
*** sbfox1 has joined #openstack-lbaas16:58
*** sbfox has quit IRC16:59
*** sbfox has joined #openstack-lbaas17:01
*** sbfox1 has quit IRC17:02
*** vivek-ebay has quit IRC17:07
*** vivek-ebay has joined #openstack-lbaas17:08
*** VijayB has quit IRC17:08
*** vivek-ebay has quit IRC17:10
*** markmcclain has joined #openstack-lbaas17:18
*** markmcclain has quit IRC17:18
*** markmcclain has joined #openstack-lbaas17:19
*** VijayB_ has joined #openstack-lbaas17:27
*** fnaval_ has quit IRC17:30
*** jschwarz has joined #openstack-lbaas17:37
*** TrevorV_ has quit IRC17:38
*** sbfox1 has joined #openstack-lbaas17:44
*** sbfox has quit IRC17:47
*** VijayB_ has quit IRC17:56
*** sbalukoff has joined #openstack-lbaas17:57
*** magesh has joined #openstack-lbaas17:59
*** vjay3 has joined #openstack-lbaas18:01
*** VijayB_ has joined #openstack-lbaas18:21
*** VijayB_ has quit IRC18:25
*** sbfox1 has quit IRC18:29
*** sbfox has joined #openstack-lbaas18:29
*** vjay3 has quit IRC18:31
*** TrevorV_ has joined #openstack-lbaas18:35
*** jorgem has joined #openstack-lbaas18:35
*** VijayB has joined #openstack-lbaas18:54
*** VijayB has quit IRC18:59
bloganno more -2 dougwig19:21
dougwigwoohoo, now i need to work on some + symbols.19:23
*** TrevorV_ has quit IRC19:26
*** TrevorV_ has joined #openstack-lbaas19:27
*** VijayB_ has joined #openstack-lbaas19:32
*** VijayB_ has quit IRC19:32
*** fnaval has joined #openstack-lbaas19:35
*** fnaval has quit IRC19:39
*** jschwarz has quit IRC19:45
bloganhttps://wiki.openstack.org/wiki/Gerrit_Workflow19:51
bloganedited with dependency chain maintenance at the bottom19:51
*** VijayB_ has joined #openstack-lbaas19:52
dougwiglooks good, but i think each scenario needs a concrete example.19:54
*** VijayB_ has quit IRC19:58
*** rohara has quit IRC19:58
*** rohara has joined #openstack-lbaas19:59
*** VijayB has joined #openstack-lbaas20:10
bloganthat might require images20:10
bloganand im too lazy to do that20:10
blogani guess you mean just fake commit hashes and Change-Ids20:11
*** sbfox has quit IRC20:14
*** VijayB has quit IRC20:15
*** enikanorov has quit IRC20:15
*** enikanorov has joined #openstack-lbaas20:15
*** VijayB has joined #openstack-lbaas20:29
sbalukoffHeh!20:31
*** VijayB has quit IRC20:33
*** TrevorV_ has quit IRC20:38
blogansbalukoff ping20:43
dougwigno route to host20:49
sbalukoffHaha!20:52
sbalukoffI'm here.20:52
sbalukoffMore or less.20:52
sbalukoff(I'm planning on at least lurking in the Neutron meeting coming up in 7 minutes.)20:52
sbalukoffblogan: pong20:53
*** VijayB_ has joined #openstack-lbaas20:53
blogansbalukoff: question about single listener, wouldn't the https redirect scenario require two listeners in an haproxy config?20:55
bloganor is it just redirecting to the machine listening on 44320:56
bloganand if haproxy is listening on that its fine20:56
*** sbfox has joined #openstack-lbaas20:56
bloganim probably not making sense20:57
sbalukoffblogan: No. It requires two listeners in separate configs and processes.20:58
sbalukoffSo you can redirect from one listening service to any other listening service on any other machine or port.20:58
bloganok makes sense20:58
sbalukoffThis is just a specialized case of that.20:58
sbalukoff:)20:58
sbalukoffOk, cool.20:58
*** sbfox1 has joined #openstack-lbaas21:00
*** sbfox has quit IRC21:00
*** orion_ has joined #openstack-lbaas21:29
*** sbfox1 has quit IRC21:30
*** sbfox has joined #openstack-lbaas21:30
*** orion__ has quit IRC21:32
*** orion_ has quit IRC21:33
*** fnaval has joined #openstack-lbaas21:34
*** crc32 has quit IRC21:46
johnsom_I think we will need one vip per instance, many listeners per instance.21:49
johnsom_Running multiple HAProxy instances per Octavia VM gets messy with log file locations, start/stop scripts, api ports, etc.21:50
sbalukoffActually, it's reeeeally simple.21:50
sbalukoffAnything that needs a custom path, use the listener's UUID.21:51
xgermanok, so that we are on the same page loadbalancer = VM21:51
xgermanlistener = haproxy on that VM?21:51
sbalukoffEr... anything that needs a unique path.21:51
sbalukoffNo21:51
sbalukoffListener should be a listening TCP port.21:51
johnsom_So, instead of having canned Octavia VM images, you propose "building" HAProxy directories/instances?21:52
sbalukoffloadbalancer is loadbalancer as defined in Neutron LBaaS (ie. basically a virtual IP address with a couple extra attributes.)21:52
sbalukoffjohnsom_: Yes.21:52
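
Roughly what that "build a directory per listener" idea might look like on the VM, with every path keyed off the listener's UUID; all names, addresses and config values below are invented for illustration:

    LISTENER=1f2e3d4c-0000-4000-8000-000000000001   # placeholder listener UUID
    mkdir -p /var/lib/octavia/$LISTENER
    cat > /var/lib/octavia/$LISTENER/haproxy.cfg <<EOF
    global
        daemon
        log /dev/log local0
        pidfile /var/lib/octavia/$LISTENER/haproxy.pid
    defaults
        mode http
        timeout connect 5s
        timeout client 50s
        timeout server 50s
    frontend listener-$LISTENER
        bind 10.0.0.10:80          # this listener's VIP address and port
        default_backend pool-1
    backend pool-1
        server member-1 192.168.1.11:80 weight 1
    EOF
    haproxy -f /var/lib/octavia/$LISTENER/haproxy.cfg    # one process per listener
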
bloganit'll have colocation/apolocation hints as well21:52
xgermanunderstood, I was more talking about what real objects (aka nova VMs) map to what21:52
sbalukoffIt's a specialized appliance. why not?21:52
bloganor will that be on the listener?21:52
sbalukoffblogan: That will be on the loadbalancer.21:53
xgermanso Neutron LBaaS object maps to "load balancing appliance"21:53
sbalukoffxgerman: Nova VMs map to an Octavia VM21:53
sbalukoffAn Octavia VM may have many load balancers21:53
sbalukoffA load balancer may have many listeners21:53
bloganokay so if a loadbalancer has two listeners, can those two listeners be on separate nova VMs?21:54
sbalukoffA listener may have many pools21:54
sbalukoffA Pool may have many members.21:54
johnsom_That seems like it makes the "shared fate" decisions a lot uglier as we won't be able to let nova manage it21:54
sbalukoffblogan: No.21:54
dougwigmight be a good time to revisit ipv4+ipv6 on a single LB (we can map it to two in neutron lbaas)21:54
sbalukoffdougwig: I'm indifferent on that.21:55
sbalukoffdougwig: On the one hand, the most common use case by far for IPv6 + IPv4 is to expose the same service over different IP protocols.21:55
sbalukoffBut, allowing them to be separate allows them to scale separately.21:55
sbalukoffSo, I could see arguments for either way.21:55
dougwighmm, that only matters if you want more ipv6 than 4.  which should be true in a few decades.21:56
dougwig:)21:56
sbalukoffdougwig: It might not actually matter at all.21:56
sbalukoffdougwig: The other argument that holds weight here is that it's a simpler model to have a loadbalancer have just one vip_address.21:57
xgermanyeah, I like that21:57
sbalukoff(And not care whether that address is IPv4 or IPv6)21:57
xgermansimple = good21:57
xgerman(for me at least)21:57
bloganor we could make it totally awesome21:57
bloganand redefine loadbalancer to have many VIPs and each VIP have many Listeners!21:58
johnsom_sbalukoff: With your terminology, does load balancer == HAProxy instance in the Octavia case?21:58
xgermanI love awesome - you know the Hulu show The Awesomes?21:58
sbalukoffjohnsom_: No. Listener = HAproxy instance.21:59
xgermanok, that makes sense now21:59
bloganmany loadbalancers can be on a VM though right?21:59
xgermanyep, he said that earlier21:59
sbalukoffblogan: Yes.21:59
xgermanthough I would love to restrict it to 1 LB per VM21:59
blogani think it should be configurable21:59
sbalukoffxgerman: I don't see a reason we couldn't make that configurable.22:00
sbalukoff1 LB per VM is just a specialized case of many LBs per VM. :)22:00
xgermantrue.22:00
bloganso is it the plan to have an API in front of Octavia for 0.5? i can't remember22:01
johnsom_I would like to see more of load balancer == HAProxy instance, where HAProxy instance can have one or more listeners.22:01
sbalukoffblogan: I think so. We'll want an Operator API in any case.22:01
sbalukoffjohnsom_: Why?22:01
sbalukoff(Seriously, I'm just looking to understand your reasoning for wanting that. A pros and cons list would be great.)22:03
bloganso is everyone okay with the object model and API only having the loadbalancer as the root object?22:03
johnsom_It lets the customer manage their shared fate better.  It saves memory and simplifies SSL termination.  It is a common use case to have many ports on the same vip.  I want to use additional listeners for proof-of-life tests.22:03
sbalukoffblogan: I certainly am.22:03
xgermanblogan: +122:04
sbalukoffjohnsom_: Can you elaborate on what you mean by 'shared fate'?  I don't think I understand that.22:04
sbalukoffThe memory saved is negligible. I don't agree that SSL termination is simplified. The common use case is satisfied with many haproxy processes on the same vip as well.22:05
sbalukoffCase for keeping separate haproxy instance per listener:22:05
sbalukoff1. Simpler template22:05
sbalukoff2. Good separation of resources / accounting at the haproxy level.22:05
sbalukoff3. If you mess up listener on port A, you don't also take out listener on port B.22:06
johnsom_Customers see the load balancers in horizon as a "unit" and every listener that is a member of it has a shared fate with the others on the same load balancer unit.22:06
sbalukoff4. Restarting HTTP listener doesn't cause you to lose your SSL session cache for the HTTPS listener.22:06
sbalukoff5. Troubleshooting is simpler if you can watch just the listener having problems.22:06
sbalukoffjohnsom_: Is that not the case with separate haproxy instances? They would all be listening on the same Octavia VM.22:07
bloganjohnsom_: do you mean if you "restart" a loadbaalncer then it should restart all of the listeners22:08
blogan?22:08
sbalukoff6. Restarting an haproxy instance is probably the "most risky" part of any kind of regular maintenance. It's better to leave the instances listening on different ports alone.22:08
johnsom_I don't think it makes the template much more complicated really.  The memory does add up.  #3 is a feature. #5 is harder as you have more logs to download and look through.22:10
johnsom_blogan: yes22:10
sbalukoffI disagree entirely on #5. When you're troubleshooting the listener on port 443, you don't want to have to filter out log lines for the listener on port 80.22:11
johnsom_Customers tend to align "services" to load balancers.  I.e. they have an "acme application" service and its load balancer.22:11
sbalukoff#3 is a *really good* feature.22:11
blogani think #5 is harder or simpler based on the case22:11
sbalukoffblogan: You're too nice. ;)22:12
bloganno I'm just saying if you don't know which port its on, it would be more difficult to have to pull down all the logs rather than looking through one big log22:12
johnsom_#3 is a feature in that they are tied together: if one is wrong, restarted, or shut down, they all should be22:12
sbalukoffOh--  I would consider that an anti-feature.22:13
xgermanblogan +1, also if you are debugging redirects or mixed pages (e.g. images on 80 and secure stuff on 443)22:13
sbalukoffNot quite a bug, but not desirable behavior.22:13
dougwig1 LB per VM + dpdk == lots of wasted vm potential.22:13
dougwig(i'm way behind on this chat.)22:13
johnsom_Typically you are debugging the service vs. the port.  I.e. you need to see the hand offs or associated communication flows.22:13
sbalukoffInterestingly enough: you don't *have* to use separate log files if you're using separate haproxy instances.22:14
sbalukoffBut if you are using a single haproxy instance, you can only use a unified log file, IIRC.22:14
sbalukoffSeparate instances therefore gives you more flexibility.22:14
johnsom_dougwig: doesn't dpdk work at the hypervisor level?  I'm all for stacking up the VMs on the hardware22:15
sbalukoffAlso, if something like syslog-ng is used (as we use it here), then the whole "what gets logged where" question mostly becomes moot-- as you can do whatever you want with a utility like that.22:15
dougwigthe notion of userland networking is not specific to the hypervisor.22:16
xgermananyhow, the majority of our customers run one listener per ip22:16
sbalukoffxgerman: By design, or is that because "that's how we wrote it originally"22:16
sbalukoff?22:16
xgermanI think both22:17
sbalukoffI'm getting the impression that you're advocating this behavior not because it's actually better, but because it's what you're used to.22:17
johnsom_How can you isolate load when stacking instances on one VM?22:18
xgermanin our implementation the listeners on port 80 and port 443 are presented as two separate load balancers22:18
xgerman(internally they are the same haproxy)22:18
johnsom_sbalukoff: I think what he meant is what we see our customers doing is one listener per ip.22:18
xgermanso either way would work with our customers22:18
bloganxgerman: thats probably because you went off of the atlas API which only allowed one port per load balancer22:19
xgermanyep, we did...22:19
blogando you have the concept of shared vips in libra?22:19
sbalukoffjohnsom_: Are your customers configuring the haproxy service(s) directly, manually?  If so then I'm not surprised: this is what the haproxy docs would have you do. But it doesn't make for the best design for a service controlled through an API.22:19
xgermanblogan: yes, we can share vips22:20
bloganxgerman: so in that case don't you have the ability to have one vip with many listeners?22:20
xgermansbalukoff: the main argument is that we need to predict load to pick a specific vm size22:20
johnsom_sbalukoff: No, they can't access the haproxy config.  It is either through horizon or the API, typically automated with tools like puppet22:20
sbalukoffxgerman: Yes, that will always be a problem... at least until we have some good auto-scaling capability (ie. Octavia v2)22:21
xgermanyep, so having restrictions/configurations allows us to kind of rein that in22:21
xgermanso we can run on say xsmall with minimum memory...22:21
sbalukoffAlso interestingly, if you have to re-balance load, it helps to have separate haproxy instances because it becomes much more obvious which ones are the ones consuming the most resources.22:22
sbalukoff(Again, this goes back to the "good separation of resources" argument above)22:22
johnsom_Sorry to ask again, but how do you separate load with multiple instances on one vm?22:22
sbalukoffOh, you were asking me? I thought you were talking to dougwig.22:23
johnsom_With instance per vm nova and neutron provision/schedule for a performance floor.22:23
xgermanblogan: yes we have one vip and many listeners... just somehow convoluted through Atlas :-)22:23
sbalukoffSo, you can use the standard linux process measures to separate load (eg. things like resident memory, % CPU usage, etc.)  But to counter your question: How do you separate load if it's all one haproxy instance?22:24
bloganxgerman: yes you can blame Jorge22:24
sbalukoffjohnsom_: Can you be more explicit in what you mean by 'instance' in this case?22:25
johnsom_See my above statement.  Linux process tools can't see outside the VM, so they have no concept of the hardware load beyond the "stolen cpu time"22:25
sbalukoffjohnsom_: I'm still not quite following you: How is this any different if we're using multiple haproxy instances?22:26
bloganyou mean a single haproxy instance?22:27
johnsom_With one HAProxy instance running per VM nova and neutron provision/schedule the VM to have a performance floor.  I.e. minimum guaranteed level of performance.22:27
sbalukoffblogan: Who is "you"?22:27
bloganlol you22:27
xgerman+ we know the memory footprint in advance22:27
sbalukoffjohnsom_: Right... again... so, that one haproxy instance is servicing, say, requests on port 80 and port 443 for a single VIP, right?22:28
xgermanin our current world, yes22:28
sbalukoffjohnsom_: So how would this be different if a separate haproxy instance were used for port 80 and one for port 443?22:28
sbalukoffblogan: I think I used the words I intended there.22:29
johnsom_With multiple instances per VM you can't guarantee that the load balancer the customer allocated will get any level of service if instance B eats the performance allocated to the VM.22:29
blogansbalukoff: yeah im pretty sure you did now too22:29
johnsom_Take the customer perspective of what they are purchasing22:30
sbalukoffjohnsom_: I think you're arguing against having more than one loadbalancer (ie. VIP) per VM. And that's fine. It will be a configurable option that you can use in your environment.22:30
sbalukoffjohnsom_: In your scenario, you're still selling the customer a single service that is delivering service to port 80 and port 443 clients.22:30
sbalukoffWhat I'm proposing doesn't change that. It just does it in two separate processes instead of one for various reasons I've previously alluded to.22:31
johnsom_That is cool and good to know.  Now, can I have multiple listeners per load balancer.  That is the flip side to that statement.  If that is configurable I think we will have the flexibility for all of the topologies.22:31
sbalukoffSo, again, I think you're arguing against multiple loadbalancers (VIPs) per VM, and not really making a case for multiple listeners per HAproxy instance.22:32
sbalukoffjohnsom_: Yes, I mentioned that above.22:32
sbalukoffA VM *may* have multiple loadbalancers (VIPS) (but doesn't have to)22:32
sbalukoffA loadbalancer *may* have multiple listeners (but doesn't have to)22:33
*** vjay3 has joined #openstack-lbaas22:33
sbalukoffA listener may have multiple pools (but doesn't have to)22:33
sbalukoffA pool may have multiple members (but doesn't have to)22:33
blogani think he means multiple listeners in the same haproxy instance22:33
bloganbut i've been wrong before, i should probably just shutup22:33
sbalukoffArgh...22:33
johnsom_blogan: +122:33
xgermanyeah, the last thing worries me. Right now we have special nova flavors (aka the cheapest vm settings to run haproxy) and this makes it unpredictable22:33
sbalukoffDammit, please be very specific in your terminology!22:33
sbalukoffxgerman: How?22:34
sbalukoffHow does it make any lick of a difference for performance if all the listeners on a single loadbalancer (VIP) are serviced by one process versus several processes?22:34
xgermanif one of my users goes crazy and makes like 100 haproxy instances on the same VM to listen on each port22:35
sbalukoffxgerman: Right, he'll have the same problem if it's one haproxy instance.22:35
sbalukoffAgain, NO DIFFERENCE.22:35
xgermanWe said earlier there was a memory footprint difference between the two models22:36
johnsom_sbalukoff: context switching load, memory overhead, and it breaks the customer's "model" of what they are purchasing.22:36
sbalukoffxgerman: Not significantly.22:36
sbalukoffSeriously, a single haproxy instance doesn't take that much RAM.22:36
xgermanwell, we have very small machines :-)22:37
sbalukoffYou can't spare an extra 5-10MB per listener?22:37
xgermanwe can chip in a few but not unlimited22:38
xgermanso at least there should be a configurable limit22:38
bloganso would making this a configurable option be something that makes doing either one drastically harder?22:39
sbalukoffxgerman: What kind of limit are you talking about? Number of listeners per loadbalancer?22:39
johnsom_What would the horizon UI look like.  One load balancer per VIP:PORT pair?22:39
xgermansbalukoff, I just want to make sure we don't run out of RAM22:39
sbalukoffblogan: It's a lot simpler to gather usage data, etc. if you're dealing with an haproxy config that only knows about one listener and its associated backends.22:40
sbalukoffSo practically speaking, it's actually quite a bit more complicated to have multiple listeners per haproxy instance.22:40
sbalukoffAnd makes a lot of other tasks more complicated.22:40
sbalukoffxgerman: So...22:40
sbalukoffOn the RAM thing:22:40
sbalukoffIn our experience, the higher-volume instances end up chewing up more RAM anyway.22:41
sbalukoffThe amount lost to having separate processes is negligible compared to that.22:41
sbalukoffAlso, I don't think we're losing much in terms of context switching either...  a single haproxy instance needs to be doing this internally to service 10's of thousands of connections per second anyway.22:41
sbalukoffSo again, the overhead in terms of resources for doing it this way is small.22:42
xgermanagreed. But I don't like to give the customers the opportunity to go crazy, make a 100 listeners, and tank the VM22:42
johnsom_sbalukoff: is the billing model per VIP:PORT pair or per load balancer?  Maybe that is the disconnect.22:42
sbalukoffAnd is totally worth it for the gains we get with more simple configuration, separation, and troubleshooting.22:42
sbalukoffjohnsom_: Actually, we hadn't discussed the billing model yet.22:42
sbalukoffBut!22:42
bloganjohnsom_: i think thats up to the operator, octavia just needs to provide the stats22:43
bloganmy opinion22:43
sbalukoffIt's actually quite common to bill more for terminated TLS connections.22:43
sbalukoffBecause that alone requires considerably more resources.22:43
johnsom_blogan: +122:43
sbalukoffBut yes, it is up to the operator.22:43
sbalukoffAnd stats are easier to come by with simpler configs. :)22:43
blogansbalukoff: we currently only bill extra for the uptime of a loadbalancer with termination22:43
johnsom_I think by not allowing multiple listeners per HAProxy instance we are limiting the operator's ability to use alternate billing models.22:44
xgermanwe don't offer SSL termination...22:44
bloganbut that doesn't mean it can't change22:44
sbalukoffjohnsom_: I disagree22:44
sbalukoffI think that multiple listeners per HAProxy instance doesn't actually affect the billing problem significantly.22:44
bloganas long as Octavia provides the stats broken down by loadbalancer, and each listener, then I don't think it would change much with billing either22:45
bloganand whatever else people need since the stats requirements hasn't been discussed at length22:46
dougwiggood lord, i was typing up one email, and now there's a novel waiting for me here.22:46
blogandougwig: enjoy22:47
sbalukoffAgain, I'm going to say that it's actually easier to have more flexibility in billing with multiple instances because it's easier to get the stats necessary. :/22:47
sbalukoffdougwig: Sorry!22:47
johnsom_If I bill for bandwidth, per load balancer exposed in horizon, and those load balancers have multiple ports, the operator would have to aggregate that data across all of the instances to report billing in the same model the customer sees in horizon.  Whereas if the operator has the option to have multiple listeners per instance there is no aggregation required.22:47
sbalukoffjohnsom_: The operator is going to have to do something similar anyway when an active-active topology is used.22:48
sbalukoffAlso, when haproxy restarts, it loses its stats. So it's probably best to rely on something like iptables for accounting for raw bandwidth numbers.22:48
bloganshouldn't it be Octavia's job to do the aggregation of stats in a consumable format?22:48
sbalukoffAnd well...  it's simple to aggregate. But how are you going to provide a breakdown, say, if you charge a different bandwidth rate for terminated SSL traffic versus non-terminated SSL traffic?22:49
sbalukoffCombining this data actually removes options the operator might like for billing.22:49
sbalukoffblogan: Yes, I think so, too.22:49
sbalukoffAlso, our customers like seeing graphs which show their HTTP versus HTTPS bandwidth usage.22:50
bloganwe currently track them separately as well22:51
bloganand we also currently have to deal with the stats being reset when an "instance" is restarted22:52
bloganand its a pain in the ass22:52
sbalukoffblogan: That problem is reduced if only the instance servicing a single listener gets restarted, rather than all the listeners for a given loadbalancer (VIP).22:53
bloganreduced maybe, but if we're still using haproxy to get the stats, it's still something that has to be accounted for22:53
xgermanwell, if you send the stats every minute you are just losing a minute or so22:53
sbalukoffblogan: True.22:53
bloganwhich is why I'm all for staying away from that22:53
xgermanso I am not convinced that this is a deciding factor22:54
sbalukoffWell, for raw bandwidth, we use iptables. It's handy, doesn't get reset when instances get reset, and is as reliable as the machine itself.22:54
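
For what that iptables-based accounting might look like, a sketch of the general technique only (chain name, VIP and ports are made up, and this is not necessarily what either shop actually runs):

    # one counting rule per listener; RETURN means "just count, don't filter"
    iptables -N LB-ACCT
    iptables -I INPUT -j LB-ACCT
    iptables -A LB-ACCT -d 10.0.0.10 -p tcp --dport 80  -j RETURN
    iptables -A LB-ACCT -d 10.0.0.10 -p tcp --dport 443 -j RETURN
    # exact packet/byte counters per rule, which survive haproxy restarts
    # (outbound bytes would get a similar rule on OUTPUT):
    iptables -nvxL LB-ACCT
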
bloganxgerman: if they're counters of bandwidth that continue to increase and you have to take the delta of the counter at different time points to determine the bandwidth that was used in that time, thats when resets cause issues22:54
xgermanmy proposal is that the Octavia VM always sends deltas...22:55
sbalukoffAnd frequent resets (which will certainly happen when automated auto-scaling is in use) mean that the data loss there is not insignificant.22:55
johnsom_Next question as I work through this....  Would the Octavia VM be limited to the customer or would multiple customer haproxy instances be stacked on one Octavia VM?22:55
dougwigand if you miss a delta?  counters that reset to zero are the NORM in management circles.22:55
bloganxgerman: i guess the point is that using haproxy to get the stats will require some code to deal with the resets22:55
xgermanyep22:56
sbalukoffjohnsom_: Initially, a VM would be used for just one tenant.22:56
sbalukoffjohnsom_: That's the simplest way to start.22:56
blogansbalukoff: not only that, race conditions have caused some serious overbilling which is the worst problem to have22:56
sbalukoffBut it doesn't mean we can't revisit that later on.22:56
johnsom_I think we would have "security" push back with multiple tenants per VM22:56
xgerman+122:56
sbalukoffjohnsom_: That's why we're saying one tenant per VM for now.22:56
sbalukoffIt's in theory a solvable problem using network namespaces.22:57
* dougwig wonders how many topics are being inter-leaved right now.22:57
sbalukoffBut... since most of us can just pass the cost for inefficiency on to the customer anyway... it's not something we feel highly motivated to solve.22:57
*** orion_ has joined #openstack-lbaas22:57
dougwigsbalukoff: the namespaces have been a performance issue for neutron, haven't they?22:57
sbalukoffdougwig: I think that's mostly OVS being a performance problem.22:57
sbalukoffNamespaces haven't been bad, from our perspective.22:57
sbalukoffBut OVS is a piece of crap.22:58
sbalukoff;)22:58
johnsom_We get namespace management from neutron.  I wouldn't want to duplicate that complexity.22:58
sbalukoffjohnsom_: Agreed!22:58
sbalukoffSo yes, at least until someone feels inclined for a *lot* of pain, we're going to go with "a single Octavia VM can be used with a single tenant"22:58
dougwigthe other thing that sucks about deltas is when your collector is down, btw.22:59
sbalukoffAnd even if that changes, it would be a configurable option.22:59
sbalukoffI don't really see that changing, though.22:59
sbalukoffdougwig: +122:59
blogandougwig: yes indeed22:59
johnsom_sbalukoff: we agree on one tenant per Octavia VM22:59
dougwigi spent far too many years doing network management crap.  the resetting counters might be one form of pain, but it's less than deltas.22:59
dougwigand i'd expect ceilometer or the like to deal with counters natively and hide all that.  if not, i'll be shocked.23:00
xgermanyeah, we probably need to explore that more23:00
sbalukoffdougwig: It's been a part of MRTG and rrdtool for over 15 years... so.. if they don't we can always tell them to please get with the program and join the 1990's.23:00
dougwigit's in those because it's been a part of every SNMP MIB since that protocol was invented.23:01
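
dougwig's point in practice: the counter tooling already absorbs resets. A tiny rrdtool sketch (file name, step and the sample value are all made up); a DERIVE data source with a minimum of 0 turns a reset-to-zero into one unknown sample instead of the giant spike a COUNTER wrap-guess would produce:

    rrdtool create listener-bytes.rrd --step 60 \
        DS:bytes_out:DERIVE:120:0:U \
        RRA:AVERAGE:0.5:1:1440
    # feed it the raw, ever-increasing byte counter once a minute:
    rrdtool update listener-bytes.rrd N:123456789
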
sbalukoffjohnsom_: Yes, we agree on that.23:01
bloganso is the only disagreement still the one listener per instance?23:01
sbalukoffblogan: I think so.23:01
sbalukoffAnd I'm being damned stubborn because I'm not really hearing good technical reasons not to go with this. :)23:02
johnsom_Yeah, I think we need to think about that one more.23:02
dougwigso if i have a web server that listens on port 80 and 8080, and via ipv4 and ipv6, i'd have to have FOUR load balancers?23:02
dougwigsorry, FOUR instances?23:02
sbalukoffdougwig: Again, please be careful with your terminology.23:02
sbalukoffHaha23:02
sbalukofffour haproxy instances. Yes.23:03
blogansbalukoff: i semi-disagree about it being more complex to make it configurable because with the right design it wouldn't matter, though implementing will almost always dig up issues not thought of23:03
sbalukoffAll of which consume not much RAM.23:03
johnsom_sbalukoff: stubborn is ok, that is part of coming to a good answer, but I disagree that there haven't been a few technical reasons in the negative column.23:03
dougwigholy shit, what a colossal waste.23:03
sbalukoffdougwig: Really?23:03
sbalukoffWhy?23:03
dougwigit's probably an extra 1-2GB of ram for that example, right?23:03
dougwigwait.  are these VM instances, or haproxy instances?23:03
sbalukoffYou mean, 5-10MB?23:03
xgermanhaproxy instances23:03
sbalukoffdougwig: haproxy processes.23:03
dougwighahahahaha.23:03
dougwigfuck me.23:03
dougwigi could care less how many processes we run.23:04
xgermanyeah, I am already running on a tight RAM budget23:04
sbalukoffI could!23:04
sbalukoffBecause multiple processes actually simplify a lot of aspects of writing the code, maintenance and troubleshooting. It also makes individual listeners more resilient.23:04
sbalukoffAll of that for a negligible amount of extra RAM. :/23:05
dougwigit also adds context switching waste, which will matter for high-throughput loads.  but not enough to matter for 0.5, 1.0, or even 2.0.23:05
sbalukoffdougwig: Not much waste there...  a single haproxy instance is essentially doing all that heavy lifting internally anyway, servicing tens of thousands of simultaneous connections at once.23:06
sbalukoffThe OS context switching is, again, negligible compared to that.23:06
johnsom_dougwig: it depends on how many haproxy instances we stack on one Octavia VM23:06
dougwignot if it's using epoll correctly; run enough processes, and switching those around might well take up more time than watching sockets.23:06
sbalukoffjohnsom_: Which is going to be a configurable option. ;)23:06
dougwigkernel time, not user time.23:06
johnsom_dougwig: +123:07
xgermanyeah, unless we throw cpus at the problem (and we are tight on those, too)23:07
*** dlundquist has joined #openstack-lbaas23:07
xgermanwe usually only give out one nova cpu23:07
dougwigwhich should be plenty for a non-ssl byte pump.23:08
xgermanyep23:08
dougwigif one listener per haproxy simplifies things, let's do it for 0.5, and then get some data on if it matters?23:08
sbalukoffdougwig: +123:09
dougwigwe're in stackforge, we can try stuff that isn't the end perfect solution.23:09
xgermanyeah, we need to do some research. So far we have been speculating23:09
dlundquistIt might make a significant difference on some embedded platforms like older ARMs, but I don't think it would be noticeable on a modern x8623:09
dougwigraspberry pi octavia demo!23:10
*** sbfox has quit IRC23:10
*** sbfox has joined #openstack-lbaas23:10
sbalukoffxgerman: Our current load balancer product uses this topology. For our part, it isn't just speculation going into this. ;)23:10
sbalukoffdougwig: Haha!23:10
*** orion_ has quit IRC23:10
sbalukoffdougwig: You make nova capable of spinning up instances on a raspberry pi, and I'll run that demo for you. ;)23:11
johnsom_So in horizon the customer would see one load balancer per VIP:PORT pair?23:11
xgermanwell, we are thinking of using Atom chips23:11
dougwigqemu can already emulate it, i'm sure.23:11
sbalukoffjohnsom_: Again, please be careful with terminology here.23:11
*** orion_ has joined #openstack-lbaas23:11
sbalukoffWhat do you mean by 'load balancer' in that statement?23:11
dlundquistUsing one process would allow us to share pools...23:11
xgermansbalukoff, what RAM/CPU are you running?23:11
sbalukoffdlundquist: Except I think sharing pools is off the table.23:11
johnsom_sbalukoff: I was being very specific.  "load balancer" as is listed in horizon.23:12
dougwigatoms, really?  they except at low-power applications.  i'd think LB would benefit greatly in a cost/performance ratio from hyper-threading?23:12
xgermanwell, horizon for v2 isn't developed yet23:12
johnsom_Right23:12
sbalukoffxgerman: Our typical load balancer "appliances" run with 16GB of RAM, and 8 cores. But we'll put dozens of VIPs on these.23:12
xgermanwe run 1 GB/1 VCPU23:12
xgermanas a nova instance23:13
xgermandougwig: http://h17007.www1.hp.com/us/en/enterprise/servers/products/moonshot/index.aspx23:13
dougwig /except/excel/23:13
rm_worklol moonshot23:14
*** vjay3 has quit IRC23:14
xgermanyeah, everything needs to run on moonshot - we don't want people not to buy those servers because they can't run LBaaS23:14
dougwigxgerman: how many novas do you put on one of those blades?23:14
dougwigxgerman: in that case, i think you need to send us all some sample units.  ;-)23:15
xgermanwell, moonshot will be in our future data centers23:15
sbalukoffHeh!23:15
xgermanright now we have different hardware23:15
*** orion_ has quit IRC23:15
sbalukoffxgerman: Again, I remain unconvinced that very slightly less RAM usage and very slightly less CPU usage is going to make or break this product.23:16
sbalukoffAnd I'm certainly unconvinced that it's worth sacrificing a simpler design for such considerations. :/23:17
rm_workI just like the naming23:17
rm_workreminds me of Moonraker23:17
xgermansbalukoff, those are operator concerns: Can we run it the cheapest way possible23:18
rm_workwe're hoping to be able to run on an LXC based solution, I think23:18
rm_workbut no idea how far out that is23:18
xgermanYeah, without load tests we won't know23:18
sbalukoffxgerman: I would argue that quicker to market, simpler troubleshooting, faster feature development, etc. make the simpler design the "cheaper" solution.23:19
xgermanfor sure what the hardware/software does23:19
dlundquistI don't see any reason we couldn't make this a configuration option if there is demand for both.23:19
xgermandlundquist +1 : my problem is I don't really know the performance implications without trying it out23:20
*** sbfox has quit IRC23:20
dlundquistxgerman: we could easily benchmark that outside of octavia23:21
bloganalright im out of here23:21
sbalukoffI think making that particular design feature a configurable option just gets us the worst of both worlds: We don't actually get simpler code or design.23:21
xgermanyep, so maybe that should be the action item for this week :-)23:21
xgermanbenchmark and report back23:21
*** crc32 has joined #openstack-lbaas23:21
blogansbalukoff: i think if designed for both the code would be simpler and the design may not be that much more complex23:22
xgerman+123:22
bloganbut I really don't know until its tried23:22
xgermanI don't see that many code complications23:22
xgermanyeah, the devil is in the details23:22
sbalukoffblogan: Really? You'd have to make the code handle both cases. You don't get the advantage of being able to assume that a single haproxy instance is in charge of one listener (which has repercussions throughout the rest of the tasks that need to be accomplished, from stats gathering to status information...)23:23
bloganwell handling two different possible locations and configurations could definitely complicate it23:23
blogansbalukoff: if we're just mapping loadbalancer listener combinations to some end value, then it would work for both ways23:23
sbalukoffblogan: No, you can make assumptions about paths pretty easily.23:24
sbalukoffEr...23:24
sbalukoffSorry, yes.23:24
bloganbut like I said i know there will be issues that pop up that I'm not thinking about that would complicate it23:24
bloganbut the same goes with the other way23:25
sbalukoffYes, I'm sorry I'm apparently not articulating that very well.23:25
bloganso I still say we just do the single listener per haproxy instance for 0.5 and see how that goes23:26
bloganif it becomes a major problem, then spend the time necessary to allow it to be configurable23:26
sbalukoffRight.23:26
dlundquistIf we are going seriously consider this, we should get a comparison doc together with pros and cons of each and benchmarks.23:26
sbalukoffxgerman: I think doing the benchmarks would be worthwhile.23:26
sbalukoffThough I'm reluctant to sign anyone in particular up for that work. :P23:27
xgermanyeah, agreed.23:27
blogandlundquist: i say we seriously consider it after 0.523:27
sbalukoffblogan: So part of the point of 0.5 is to get the back-end Octavia API mostly baked and in place, with few changes (hopefully)23:28
dlundquistblogan: I agree, but think we should be methodical about it if we consider running only a single haproxy process per instance23:28
sbalukoffThis discussion actually affects that.23:28
xgermandlundquist +123:28
xgermanwe should write that up + our reasoning for the ML23:28
sbalukoffDoes anyone here want to actually run some benchmarks?23:29
*** VijayB_ has quit IRC23:29
blogansbalukoff: thats a good goal for sure, but changes are going to happen regardless23:29
sbalukoffblogan: Yeah. This just seems... more fundamental to me. :/23:29
dlundquistI'm a bit concerned about the lack of fault isolation, and since generally epoll loops are single threaded one could achieve better utilization across multiple thread units.23:29
bloganyou're a fundamentalist!23:29
sbalukoffHaha!23:30
bloganim leaving for reals now23:30
sbalukoffHave a good one!23:30
bloganoh yeah23:30
bloganpeople should raise concerns about the object model discussion on the ML now23:30
sbalukoffYes.23:30
sbalukoffYes, indeed!23:30
*** sbfox has joined #openstack-lbaas23:32
*** VijayB has joined #openstack-lbaas23:35
dougwigif +cpu and +ram will kill this product, it shouldn't be in python.  :)23:36
dougwigkeep it simple.  iterate.23:36
xgermanhaproxy is not in python23:36
xgermanwe have more cpu/RAM budget for control servers :-)23:36
*** vjay3 has joined #openstack-lbaas23:43
*** vjay3 has quit IRC23:48
*** vjay4 has joined #openstack-lbaas23:52

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!