Wednesday, 2014-08-27

*** xgerman has quit IRC00:09
sbalukoffxgerman: Yes, we can discuss it there.00:18
sbalukoffcrc32: Technically it was a tie. But we're going to give the IRC thing a shot for a couple weeks and re-evaluate. Again, my prediction is that some of the people voting for IRC probably don't actually intend to attend. However, it's possible they might. Also, it's possible some people who voted for voice might change their opinions.00:20
*** ptoohill-oo has quit IRC00:31
*** sbfox has joined #openstack-lbaas00:48
*** crc32 has quit IRC00:52
*** sbfox has quit IRC00:56
*** sbfox has joined #openstack-lbaas01:28
*** amotoki has joined #openstack-lbaas01:50
*** woodster_ has quit IRC01:55
*** fnaval has joined #openstack-lbaas02:20
*** fnaval has quit IRC03:29
*** fnaval has joined #openstack-lbaas03:30
*** fnaval has quit IRC04:17
*** dkehnx has quit IRC06:43
*** dkehn has joined #openstack-lbaas06:44
*** dkehn has quit IRC07:32
*** jschwarz has joined #openstack-lbaas07:33
*** rm_you| has joined #openstack-lbaas07:38
*** rm_you has quit IRC07:40
*** johnsom__ has joined #openstack-lbaas08:02
*** electrichead has joined #openstack-lbaas08:02
*** sbfox has quit IRC08:10
*** redrobot has quit IRC08:10
*** johnsom_ has quit IRC08:10
*** dkehn has joined #openstack-lbaas08:42
*** rm_you has joined #openstack-lbaas09:12
*** rm_you has quit IRC09:12
*** rm_you has joined #openstack-lbaas09:12
*** rm_you| has quit IRC09:12
*** rm_you| has joined #openstack-lbaas09:21
*** rm_you has quit IRC09:24
cikoHi, is there any documentation on how graceful server shutdown works with Neutron? I found something in https://wiki.openstack.org/wiki/Neutron/LBaaS/API but it basically says that connections might or might not be terminated and Session Persistence will be lost.09:25
cikoIsn't there a way to just refuse new connections and let all existing sessions end before shutting down a server?09:26
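For context, with the haproxy backend discussed in this channel, haproxy itself supports exactly this kind of drain: sending SIGUSR1 asks the process to stop accepting new connections and exit once existing sessions finish (a "soft stop"), whereas SIGTERM stops it immediately. A minimal sketch of how an agent could trigger the drain; the pid-file path is an assumption, not the actual Neutron layout:

    # Minimal sketch of a graceful haproxy drain: stop accepting new
    # connections and let existing sessions finish (haproxy "soft stop").
    import os
    import signal

    def drain_haproxy(pid_file):
        """Send SIGUSR1 to every haproxy pid recorded in pid_file."""
        with open(pid_file) as f:
            pids = [int(p) for p in f.read().split()]
        for pid in pids:
            os.kill(pid, signal.SIGUSR1)

    # Usage (path is an assumption, not the real agent layout):
    # drain_haproxy("/var/lib/neutron/lbaas/<pool_id>/pid")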
*** rm_you|wtf has joined #openstack-lbaas09:28
*** rm_you| has quit IRC09:30
*** rm_you|wtf has quit IRC09:35
*** woodster_ has joined #openstack-lbaas12:37
*** sballe has joined #openstack-lbaas12:47
*** balles has quit IRC13:25
jschwarzhey guys13:34
*** amotoki has quit IRC13:52
sballejschwarz, morning :)14:16
*** markmcclain has joined #openstack-lbaas14:17
jschwarz:P14:17
jschwarzmarkmcclain, Do you have a minute to talk about one of your old changesets? https://review.openstack.org/#/c/22794/14:19
markmcclainjschwarz: sure what's up?14:19
jschwarzmarkmcclain, specifically, https://review.openstack.org/#/c/22794/4..15/quantum/plugins/services/agent_loadbalancer/drivers/haproxy/cfg.py L5614:20
jschwarzmarkmcclain, the one with the logging configuration on the right side14:21
jschwarzmarkmcclain, the configuration as is, with the default rsyslog configuration on some redhat based systems, causes some broadcast logs to spam in certain cases14:21
jschwarzmarkmcclain, I found a fix for it which involves changing the log configuration, but thought I'd ask you because you wrote this14:22
markmcclainI tested on 12.04 so feel free to change it for the better14:22
jschwarzmarkmcclain, do you remember why are there 2 different facilities (level0 and level1)?14:23
jschwarzas far as I understand, only one is needed to produce logs...14:24
markmcclainjschwarz: honestly no :)14:27
markmcclainit's likely a product of the fact that I wrote it in < a week14:27
jschwarzlol14:27
jschwarz^^14:27
jschwarzmarkmcclain, i'll write the changeset and add you as a reviewer if you could look at it when you get the chance14:27
markmcclainsounds good14:28
jschwarzif someone has any objections he can take it up with the mighty Gerrit God14:28
markmcclainmy guess is I probably lifted that bit from a conf file I pulled from an internal system at the time14:28
markmcclainjschwarz: sounds good14:28
jschwarzmust be it14:28
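For reference, the lines in question are the global log options that the agent writes into every generated haproxy config; the two facilities are syslog facilities (local0 and local1 in the generated file, which is presumably what "level0 and level1" refers to above). A rough sketch of that section in the style of cfg.py; the exact option strings are assumptions taken from the linked review, not verified:

    # Rough sketch of the haproxy "global" options built by cfg.py
    # (option strings assumed from the review linked above).
    def build_global_opts(user_group='nogroup'):
        return [
            'daemon',
            'user nobody',
            'group %s' % user_group,
            'log /dev/log local0',         # primary syslog facility
            'log /dev/log local1 notice',  # second facility -- the one
                                           # jschwarz suspects is unnecessary
        ]

    print('global\n\t' + '\n\t'.join(build_global_opts()))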
jschwarzmarkmcclain, me again :) I can probably make it work so that haproxy sends its logs to the lbaas agent process (instead of syslog->/var/log/messages)14:51
*** HenryG has joined #openstack-lbaas14:51
*** mlavalle has joined #openstack-lbaas14:51
jschwarzmarkmcclain, though it would require me to create a listening socket in the ns driver14:51
jschwarzmarkmcclain, what do you reckon?14:52
markmcclainjschwarz: why do we want to intercept the logs?14:53
jschwarzmarkmcclain, tenant operators could notice the error and then realize their backend members are down, for example14:54
*** openstackgerrit has joined #openstack-lbaas14:55
jschwarzie. they'll probably be of use to tenant operators (but on second thought they don't usually get access to the lbaas process logs, right?)14:55
markmcclainjschwarz: hmmm.. that would be an interesting thing to add14:57
markmcclainvs waiting for the monitor timer to fire to determine a member is gone14:57
jschwarzmarkmcclain, actually the logs are triggered when the monitor discovers that all the members are gone14:58
jschwarzmarkmcclain, on the other hand, the difference between lbaas agent's logs and /var/log/messages is not so big (and a tenant operator is unlikely to look at both imo)14:59
markmcclainright14:59
jschwarzso /var/log/messages it is :)15:00
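The "listening socket" mentioned above would amount to the agent binding a UNIX datagram socket and pointing haproxy's log directive at it (haproxy speaks syslog over such a socket whenever the log target is a filesystem path). A minimal sketch, not what was actually implemented; the socket path is an assumption:

    # Sketch of a syslog-style listener an agent could bind and point
    # haproxy's "log <path> local0" directive at, instead of /dev/log.
    import os
    import socket

    def listen_for_haproxy_logs(path="/var/lib/lbaas/haproxy_log.sock"):
        if os.path.exists(path):
            os.unlink(path)
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
        sock.bind(path)
        while True:
            msg, _addr = sock.recvfrom(4096)
            # Hand the syslog line to the agent's own logging instead.
            print(msg.decode(errors="replace"))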
*** Zebra has quit IRC15:06
*** dkehn has quit IRC15:16
*** xgerman has joined #openstack-lbaas15:18
*** busterswt has joined #openstack-lbaas15:18
*** dkehn__ has joined #openstack-lbaas15:19
*** johnsom__ has quit IRC15:22
*** dkehn__ is now known as dkehnx15:22
*** johnsom__ has joined #openstack-lbaas15:22
*** markmcclain has quit IRC15:24
sballe\o?15:25
*** electrichead is now known as redrobot15:26
*** openstack has joined #openstack-lbaas16:18
*** sbfox has joined #openstack-lbaas16:31
*** barclaac has joined #openstack-lbaas16:34
*** barclaac|2 has quit IRC16:37
*** mageshgv has joined #openstack-lbaas16:51
*** markmcclain has joined #openstack-lbaas16:59
*** johnsom__ has quit IRC17:10
*** TrevorV_ has joined #openstack-lbaas17:14
*** jorgem has joined #openstack-lbaas17:14
*** ajmiller has joined #openstack-lbaas17:33
*** sbfox has quit IRC17:36
*** sbfox has joined #openstack-lbaas17:42
*** barclaac|2 has joined #openstack-lbaas17:54
*** barclaac has quit IRC17:55
*** sbfox has quit IRC17:55
*** rohara has joined #openstack-lbaas17:57
roharasbalukoff: probably going to miss today's meeting17:58
*** TrevorV_ has quit IRC18:00
*** TrevorV_ has joined #openstack-lbaas18:00
sbalukoffrohara: Well, there'll be a transcript for this one. :)  In any case, hope you can make it next week.18:07
roharasbalukoff: cool beans. and yeah, i will be there next week for sure18:08
*** sbfox has joined #openstack-lbaas18:10
TrevorV_sbalukoff: correct me if I'm wrong, but I won't have to spend time writing up meeting minutes tonight right?18:36
sbalukoffTrevorV: Correct.18:40
sbalukoffWell, the minutes we get out of the automated system won't be nearly as nice as the ones you've put together over the last few meetings. So if you really want to do that / annotate them or something, that's fine, eh.18:40
sbalukoffI'm not really expecting to be able to cover as much material as we have in previous weeks.18:41
TrevorV_So the meeting minutes don't capture the communications?18:41
TrevorV_in IRC I mean?18:41
sbalukoffNo, they only capture items we specifically point out (in conversation) to capture. Like action items, topic changes, etc.18:41
sbalukoffCan't really expect an automated system to be able to comprehend what is actually being said to automatically capture points of interest that fall outside those categories. ;/18:42
TrevorV_Nah I thought it kept the log of input/output from users too, guess not18:45
TrevorV_I can write stuff up still if that really does help you guys :)18:45
TrevorV_It will also be easier since I can just do a copy-paste of the text being sent around and pull out some useful informations :D18:49
blogan_ping sbalukoff18:50
sbalukoffblogan_: Pong19:03
sbalukoffTrevorV_: Oh, it does keep a line-by-line transcript of everything. But that's the "full log" not the minutes.19:03
TrevorV_So should I write stuff up or no, sbalukoff ?19:04
* TrevorV_ is confused :D19:04
blogan_sbalukoff: i'm responding to your comments19:04
sbalukoffTrevorV_: For the full "IRC meeting experience" let's not write anything up for the next couple meetings?19:04
sbalukoff(At least)19:04
sbalukoffblogan_: Oh, good!19:05
TrevorV_sbalukoff: I'm down with that :)19:05
blogan_sbalukoff: real quick though, the host_id on the load_balancer is meant to store something such as the ID of the VM that the loadbalancer exists on19:05
sbalukoffblogan_: Ok, so the problem there is that in an active/standby or active/active topology it won't be just one host.19:05
sbalukoffSo either we make that a list, or we come up with some other entity that describes "the thing hosting the loadbalancer" and the specific Nova VM ids just become a list attached to that.19:06
blogan_sbalukoff: yeah and I thought about that and didn't think it mattered, but now i can't remember why I thought that19:06
sbalukoffblogan_: If it doesn't matter, why track the information at all? ;)19:06
blogan_but i dont know why I would think that because it does matter19:06
sbalukoffRight.19:07
blogan_no i mean it didn't matter to keep track of any but one19:07
blogan_for some reason19:07
sbalukoffBecause Octavia's controller needs to actually know the Nova VMs hosting loadbalancers, eh.19:07
sbalukoffSo the question is, are there any additional attributes that would get assigned to that "thing hosting the loadbalancer"?19:07
sbalukoff(The answer to that tells me whether we should just list the host ids, or whether we need that separate object.)19:08
sbalukoffPerhaps a flavor?19:08
blogan_well i would assume the flavor would just be on the loadbalancer19:08
sbalukoffRight, that works.19:08
blogan_since that can determine how many vms are being used19:08
sbalukoffYep.19:08
blogan_well i think we need another table19:09
sbalukoffSo right now, it's sounding like host_id should just be a list.19:09
blogan_a comma delimited list?19:09
sbalukoffNo-- use another table.19:09
blogan_ah ok i was gonna say19:09
sbalukoffHaving to parse a field in the database defeats a lot of the reason to use relational databases in the first place.19:09
blogan_lol i know19:09
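A minimal sketch of what "use another table" could look like here: a child table mapping each load balancer to the VMs hosting it, rather than a delimited host_id column. Table, column, and class names are assumptions for illustration, not the actual Octavia schema:

    # Hypothetical sketch: one load_balancer row maps to many host rows,
    # instead of a comma-delimited host_id field.  Names are illustrative.
    import sqlalchemy as sa
    from sqlalchemy.orm import declarative_base, relationship

    Base = declarative_base()

    class LoadBalancer(Base):
        __tablename__ = 'load_balancer'
        id = sa.Column(sa.String(36), primary_key=True)
        flavor_id = sa.Column(sa.String(36), nullable=True)
        hosts = relationship('LoadBalancerHost', backref='load_balancer')

    class LoadBalancerHost(Base):
        __tablename__ = 'load_balancer_host'
        load_balancer_id = sa.Column(sa.String(36),
                                     sa.ForeignKey('load_balancer.id'),
                                     primary_key=True)
        host_id = sa.Column(sa.String(36), primary_key=True)  # e.g. a Nova VM id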
blogan_this actually brings up another issue then, colocation and apolocation19:10
sbalukoffYep!19:10
*** sbfox has quit IRC19:10
blogan_if someone says they want their loadbalancer colocated with another, does that mean all the haproxies are supposed to be on all the same vms19:10
sbalukoffThose will also effectively be lists.19:10
blogan_and what if they specify a different flavor19:11
sbalukoffblogan_: Yes.19:11
sbalukoffThen we have to return an error.19:11
blogan_if you use colocate you have to have the same flavor19:11
sbalukoffAll colocated loadbalancers must be of the same flavor.19:11
blogan_okay19:11
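A tiny sketch of the rule just agreed on (all colocated load balancers must share a flavor); the object and attribute names are assumptions:

    # Hypothetical check for the colocation rule: every load balancer in a
    # colocation group must use the same flavor, otherwise return an error.
    def validate_colocation(new_lb, colocated_lbs):
        mismatched = [lb for lb in colocated_lbs
                      if lb.flavor_id != new_lb.flavor_id]
        if mismatched:
            raise ValueError(
                "colocated load balancers must use the same flavor; "
                "mismatched: %s" % [lb.id for lb in mismatched])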
blogan_were you planning on apolocation allowing a list of loadbalancer ids not to be colocated with? or just one loadbalancer id19:12
blogan_?19:12
sbalukoffblogan_: A list.19:12
sbalukoffReally, what we're doing here is inventing group logic (again?)19:12
blogan_that will get complicated19:12
xgermanall colocated ones on the same nova flavor?19:12
blogan_doable19:12
xgermanthat sounds wrong19:12
blogan_xgerman: no, octavia flavor19:12
xgermanok19:12
blogan_but actually that would end up causing the same nova flavor19:12
xgermanso I can have SSL and non SSL (using different nova flavors colocated)19:13
sbalukoffIn an indirect way, yes.19:13
xgermanwell, my advice is to allow different flavors since for SSL we would like to use beefier machines..19:13
blogan_is SSL supposed to be a feature defined by flavor?19:13
blogan_oh i see19:13
blogan_what you mean19:13
sbalukoffRight...19:13
*** sbfox has joined #openstack-lbaas19:13
blogan_so with this solution, the user would need to use a beefier flavor for the non-SSL19:14
sbalukoffxgerman: Keep in mind that all listeners on a single loadbalancer end up on the same VM. So, there's implied colocation between listeners when they're on the same loadbalancer.19:14
sbalukoffWhat we're talking about is colocation of loadbalancers19:14
sbalukoff(ie. different vips) on the same machine.19:14
xgermanyep, and I still might want to colocate SSL (more CPU) with non-SSL...19:15
sbalukoffIt's a relatively infrequent requirement, but we've seen it come up with users who want to make sure, for example, all their development environments use the same physical hardware to save costs and separate them from production.19:15
xgermanwell, since we don't offer that I really have no dog in that fight...19:16
sbalukoffThe apolocation stuff is the way to guarantee separation from production.19:16
sbalukoffIt comes up in private clouds, mostly, where the customer is trying to minimize their hardware costs, yet still have a functionally equivalent dev environment.19:17
sbalukoffFunny thing is this requirement comes out of how things get billed. If you don't bill the same way we do, then it probably won't come up for you at all.19:18
xgermanyeah, I just think restricting it to be inside one flavor is limiting...19:18
sbalukoffxgerman: I'm not sure I understand why.19:18
sbalukoffAre you saying that one flavor could supersede another flavor? That flavor A is a fully-contained subset of flavor B, or something?19:18
xgermanwell, if I have flavor A and flavor B of a software load balancer I might want it to be on the same hardware19:19
sbalukoffI'm not sure we intend for flavors to work that way (that sounds really complicated anyway).19:19
sbalukoffxgerman: And by 'hardware' we mean 'Octavia VM', right?19:19
xgermanmaybe that's my confusion I thought you meant physical box where the VMs run19:19
sbalukoffSince the flavor actually defines several key characteristics of the octavia VM (eg. RAM, CPU, HA topology), I don't see how that's possible unless they're the same flavor.19:20
sbalukoffxgerman: Oh! No, I don't think so. Though it's probably worth considering whether apolocation requirements should take that into account.19:20
sbalukoffHmmmm...19:20
sbalukoff(You know, so that apolocated loadbalancers don't share the same fate.)19:20
xgermanyep, I thought you were talking about that19:21
sbalukoffYeah, sorry-- I've been somewhat ambiguous as to whether I was talking about physical hardware or virtual hardware.19:21
*** sbfox has quit IRC19:22
xgermanno worries - I am also easily confused anyway...19:22
xgermanI will grab lunch so I am not hungry during our meeting ;-)19:23
sbalukoffThat sounds like a really good idea.19:23
*** TrevorV_ has quit IRC19:23
blogan_sbalukoff: vip_port_id19:27
blogan_sbalukoff: your comments on that are valid, but also brought up another possible issue19:28
blogan_using vip_subnet_id would not be valid if we were using nova-networks, so we can't assume subnets exist for anything the network driver abstracts19:28
*** barclaac has joined #openstack-lbaas19:29
*** barclaac|2 has quit IRC19:32
sbalukoffHmmm...19:47
sbalukoffnova-networks doesn't know what a subnet is?19:47
sbalukoffHow the heck does it function at all? O.o19:47
blogan_from my quick look at it, nova-networks just has a network entity, but you define a cidr on it19:48
sbalukoffAah.19:48
blogan_its really just a naming issue19:48
sbalukoffWell, that's... mostly what we're after, I guess. Do we know how nova-networks deals with overlapping ranges?19:48
blogan_and we can keep calling it vip_subnet_id, but the network driver will have to know that means a network in nova-network terms19:48
blogan_no idea on that one19:48
sbalukoffWe can invent our own term, or, I dunno... use something more intuitive and "industry standard"19:49
blogan_though, from a very fuzzy memory i think it does validate overlapping cidrs19:49
sbalukoffHuh...  does it disallow them?19:49
blogan_yes19:49
*** crc32 has joined #openstack-lbaas19:49
blogan_by validate i mean it won't allow them19:49
sbalukoffAs in, no two tenants can use the same back-end ip range (even if it's RFC1918 range)?19:49
blogan_no i mean a tenant cannot have two networks with overlapping cidr blocks19:50
sbalukoffAah!19:50
blogan_i dont see why it would limit it across tenants19:50
blogan_but again, I don't know much about nova-networks19:50
sbalukoffYeah.19:50
sbalukoffNeither do I. :/19:50
blogan_back to the host/apolocation talk19:51
sbalukoffWell, it sounds like there probably is a way to abstract it out that makes sense...19:51
blogan_there is19:51
*** dlundquist has joined #openstack-lbaas19:51
blogan_and it will just be a naming issue19:51
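The "naming issue" being discussed here is the sort of thing a network-driver abstraction can absorb: the model keeps vip_subnet_id, and each backend driver maps that id onto whatever its networking service actually calls the object. A sketch with hypothetical class and method names:

    import abc

    class NetworkDriver(abc.ABC):
        """Hypothetical abstraction: 'subnet' is the model's term; each
        driver maps it onto its own networking service's concept."""

        @abc.abstractmethod
        def get_subnet(self, subnet_id):
            """Return addressing info (e.g. CIDR, gateway) for subnet_id."""

    class NeutronNetworkDriver(NetworkDriver):
        def get_subnet(self, subnet_id):
            raise NotImplementedError  # would call Neutron's subnet API

    class NovaNetworkDriver(NetworkDriver):
        def get_subnet(self, subnet_id):
            # nova-network has no separate subnet entity: the id is treated
            # as a network id, whose CIDR is defined directly on the network.
            raise NotImplementedError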
sbalukoffWelcome Dustin!19:51
sbalukoffblogan_: Yep.19:51
dlundquistHi all19:51
blogan_wouldn't it be easier to define clusters/groups of vms, and a loadbalancer is assigned to that cluster?19:52
sbalukoffblogan_: Yes. Yes it would.19:52
*** TrevorV_ has joined #openstack-lbaas19:52
sbalukoffWasn't that part of the original proposal, like months ago?19:52
sbalukoffI seem to recall having a conversation about this months ago.19:52
TrevorV_Hey guys, how are we doing the meeting?  A different channel or this one?19:52
sbalukoffAt the ATL summit, right?19:52
sbalukoffTrevorV: This channel.19:52
TrevorV_kk sweet19:53
sbalukoffStarting in 7 minutes.19:53
blogan_I think so, but wasn't ever written down19:53
blogan_but a problem with that is that it implies that an haproxy instance is installed on all the VMs on a cluster for every loadbalancer on that cluster19:53
blogan_i feel like that could cause a problem with ha topologies19:54
blogan_the differing ones19:54
sbalukoffHmmm...19:54
sbalukoffSo, I'm not sure I follow, mostly because I'm not sure exactly what the model looks like that you're referring to. We should probably define that first, and extrapolate implications from that.19:55
blogan_but I think if the operator can define clusters and how many vms are in them and whether they are active or standby19:55
sbalukoffThat's an interesting idea.19:55
blogan_sbalukoff: yes a definition would be great, because I may be wrong on this and more thought needs to be put into it19:55
sbalukoffYeah.19:56
sbalukoffPerhaps something we can approach after the meeting...  unless this is a blocker for getting the DB model stuff sorted?19:56
blogan_well the DB model can change easily19:56
sbalukoffIf we have to do colocation / apolocation stuff a little later that won't be the end of the world.19:56
blogan_so even if it gets merged, it can be changed quite easily19:56
sbalukoffIt's a fairly advanced feature, anyway.19:56
TrevorV_sbalukoff (he'll make me do it o_0)19:57
*** jamiem has joined #openstack-lbaas19:57
sbalukoffHaha19:57
*** johnsom has joined #openstack-lbaas19:57
blogan_yeah and I was thinking we would just do the basic features first, and once things got more stable and a workflow is in place, then feature iteration would happen much easier19:57
sbalukoff*nod*19:58
blogan_but keeping all these features in mind that we want, because we dont want to paint ourselves in a corner19:58
xgerman+119:58
sballe+119:59
*** tmc3inphilly has joined #openstack-lbaas19:59
tmc3inphillygood day/night all19:59
sbalukoffOk, I think it's about time to start...19:59
crc3230 seconds20:00
sballeblogan_, Can you point me to the bp/review for the data model?20:00
sbalukoff#startmeeting Octavia20:00
crc32never mind I'm lagged by 30 seconds20:00
openstackMeeting started Wed Aug 27 20:00:12 2014 UTC and is due to finish in 60 minutes.  The chair is sbalukoff. Information about MeetBot at http://wiki.debian.org/MeetBot.20:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:00
blogan_its our inaugural Octavia IRC meeting20:00
openstackThe meeting name has been set to 'octavia'20:00
xgermano/20:00
sbalukoffHowdy folks!20:00
dougwigo/20:00
blogan_hello20:00
sballe\o/20:00
*** min has joined #openstack-lbaas20:00
sbalukoffThis is the agenda we're going to be using:20:00
sbalukoff#link https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda#Meeting_2014-08-2720:00
tmc3inphilly+120:00
*** min is now known as Guest6241220:01
sbalukoffSo, let's get started, eh!20:01
sbalukoff#topic Review action items from last week20:01
blogan_sballe: sbalukoff will probably link them in 3..2..1..20:01
sbalukoffHaha!20:01
sballeblogan_, ok20:01
*** juliancash has joined #openstack-lbaas20:01
rm_workI'm here, just distracted by a production issue at the moment :) ping me if you need anything specific20:02
sbalukoffWell the first item on the list here is to go over the benchmarks put together by German20:02
xgerman#link https://etherpad.openstack.org/p/Octavia_LBaaS_Benchmarks20:02
*** ptoohill-oo has joined #openstack-lbaas20:02
sbalukoffxgerman: Can you go ahead and speak to that?20:02
blogan_rm_work: great reason to have this on IRC20:03
xgermanI compared two listeners in one haproxy / and two haproxy processes with one listener each20:03
xgermanthe results are fairly similar so either one seems viable20:03
xgermanthroughput was a tad higher for two listeners/1 haproxy but I think tuning can fix that20:03
xgerman  haproxy - 2 listeners: 545.64 RPS, 430.57 RPS20:04
xgerman  haproxy - 2 processes: 437.51 RPS, 418.93 RPS20:04
xgermanthe two values in each line are for the two ports20:04
sbalukoffxgerman: How many times did you re-run this benchmark?20:04
xgermaneach three times20:04
xgerman+ glided in to the right concurrent requests20:04
sbalukoff(Given it was being done on cloud instances, where performance can sometimes be variable. ;) )20:04
tmc3inphillywhich version of HAProxy was used for testing?20:05
xgerman1.520:05
dougwigthat kind of variation can just be context switching two processes instead of one, too.20:05
sbalukoffYeah, given the only distinction between the two ports' configurations was the ports themselves, it seems like performance for both in one test ought to be really close.20:05
xgermanagreed20:05
sbalukoffSo, this being one test:  haproxy - 2 listeners:  545.64  RPS, 430.57 RPS20:06
*** barclaac|2 has joined #openstack-lbaas20:06
blogan_so the difference on the first listener is not concerning?20:06
blogan_545 vs 437?20:06
tmc3inphillywould it be possible to share your haproxy.conf files?  i am curious if you assigned listeners to cores20:06
sbalukoffMy point is that is wider than I would have expected.20:06
xgermanI can share them20:06
*** jwarendt has joined #openstack-lbaas20:06
tmc3inphillyagreed sbalukoff20:06
tmc3inphillywere the tests run once or several times and averaged20:07
dougwignot unless it reproduces a lot.  at that rate, that's not a lot of variance, given that it's cloud instances.20:07
blogan_i'd like to see benchmarks on much higher RPS20:07
sbalukoffdougwig: Right, but that variance is larger than the variance between running 1 process versus 2.20:07
xgermanI ran them several times to see if there are big variations but they clocked in together20:07
*** ptoohill-oo has quit IRC20:07
*** ptoohill-oo has joined #openstack-lbaas20:08
sbalukoffthat tells me the biggest factor for variable performance here is the fact that it's being done on cloud instances.20:08
xgermanI was worried about the RPS, too -- but I used standard hardware and a fairly small box20:08
dougwigwhat is fast enough?  the quest for fastest can be endless.  does starting simpler mean we can't add in the other?20:08
blogan_sbalukoff: didn't you mention something about getting 10k RPS per instance?20:08
blogan_or did I misunderstand that20:08
sbalukoffblogan_: Yes, but that's on bare meta.20:08
sbalukoffmetal.20:08
blogan_ah okay20:09
*** barclaac has quit IRC20:09
xgermanalso the exercise was to compare the two approaches20:09
sbalukoffblogan_: To get that kind of performance... well, apache bench starts to be the bottleneck.20:09
xgermanand since they are fairly close I was calling it  a tie20:09
sbalukoffSo you have to hit the proxy / load balancer with several load generators, and the proxy / load balancer needs to be pointed at a bunch of back-ends.20:09
blogan_xgerman: I know, I'm just concerned that the gap could widen with higher loads20:09
dougwigwas the generator at 100% cpu?  ab should scale better than that; it's nowhere near line speed.20:10
sbalukoffblogan_: I have that concern too, but probably less so:20:10
sbalukoffThe reason being that if there were a major difference in performance, it would probably show up at these levels.20:10
tmc3inphillyxgerman, is HP using Xen or KVM?20:10
*** ptoohill-oo has quit IRC20:10
xgermanKVM20:10
tmc3inphillydanke20:10
*** ptoohill-oo has joined #openstack-lbaas20:10
sbalukoffAnd, let's be honest, the vast majority of sites aren't pushing that kind of traffic. Those that are, are already going to want to be on the beefiest hardware possible.20:10
*** barclaac has joined #openstack-lbaas20:11
sballesbalukoff, 100% agree20:11
xgermanI also deliberately put the lb on the smallest machine we have to magnify context switching/memory problems20:11
xgerman(if any)20:11
*** ptoohill-oo has quit IRC20:11
blogan_xgerman: what size were the requests?20:11
xgerman177 Bytes20:11
dougwigagree, my point above is that both look fast enough to get started, and not spend too much time here.20:11
sbalukoffdougwig: +120:12
*** ptoohill-oo has joined #openstack-lbaas20:12
blogan_fine by me20:12
sballedougwig, +1 BUT it would be nice if RAX did the benchmark too to get two samples20:12
tmc3inphillydid you directly load test the back end servers to get a baseline?20:12
xgermanyes, I did20:12
tmc3inphillywhat were you seeing direct?20:13
*** barclaac|2 has quit IRC20:13
sbalukoffOn that note, is everyone up to speed on the other points of discussion here (that happened on the mailing list, between me and Michael)?20:13
ptoohillxgerman, do you have the configs/files for your tests so it could be easily duplicated20:13
xgermanRequests per second:    990.45 [#/sec] (mean)20:13
xgermanyes, I will upload the haproxy files20:13
blogan_lets talk more about the benchmarks after the meeting20:13
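For anyone wanting to reproduce numbers like the ones above, a small sketch of driving ApacheBench from Python and pulling out the requests-per-second figure; the URLs, ports, and request counts are assumptions, not xgerman's actual setup (the -k flag is ab's HTTP keep-alive option, suggested later in the meeting):

    # Sketch of reproducing the two-listener benchmark with ApacheBench.
    import re
    import subprocess

    def run_ab(url, requests=10000, concurrency=50, keepalive=False):
        cmd = ["ab", "-n", str(requests), "-c", str(concurrency)]
        if keepalive:
            cmd.append("-k")          # HTTP keep-alive
        cmd.append(url)
        out = subprocess.run(cmd, capture_output=True, text=True,
                             check=True).stdout
        match = re.search(r"Requests per second:\s+([\d.]+)", out)
        return float(match.group(1)) if match else None

    # Compare the two listeners of one haproxy (ports are placeholders).
    for port in (8080, 8081):
        print(port, run_ab("http://10.0.0.5:%d/" % port))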
sbalukoffLet me rephrase that:  Has anyone not read that e-mail exchange, where we discussed pros and cons of each approach?20:13
sballesbalukoff, I haven't but I am still catching up... Will do right after the meeting20:14
dougwigi have not, but nor do i feel strongly either way.20:14
blogan_I've read it and multiple processes is fine, but was waiting on benchmarks too20:14
dougwigbecause last i read, this wasn't a corner painting event.20:14
sbalukoffdougwig: Mostly it affects a couple workflow issues having to do with communication with the Octavia VM. So no, if we have to change it won't be *that* hard.20:15
blogan_dougwig: i think it is20:15
sbalukoffblogan_: Oh?20:15
dougwigplease expand?  because if it is, i think we've got a crap interface to the VMs.20:15
blogan_you mean the process per listener vs process per loadbalancer?20:16
sbalukoffblogan_: That is what we're discussing, I think.20:16
dougwigcorrect.20:16
*** TrevorV_ has quit IRC20:17
blogan_well stats gathering would be different for each approach, provisioning, updating all of that would be affected20:17
*** samuelbercovici has joined #openstack-lbaas20:17
sbalukoffblogan_: That is true.20:18
xgerman+120:18
blogan_but that would fall under sbalukoff's "not that hard"20:18
tmc3inphillyxgerman, could you please rerun your tests with the keepalive flag (-k) enabled?  This will greatly increase your performance20:18
barclaacdougwig, why do you think we have a "crap interface to the VMs"20:18
sbalukoffHaha!20:18
sbalukoffYou're right. I'm probably trivializing it too much.20:18
xgermanand that leads me to wondering if we should abstract the vm <-> controller interface and not just ship haproxy files back and forth20:18
dougwigbarclaac: if it's something other than "not that hard", it implies we're not abstracting the vm implementation enough.20:18
sballexgerman, +1 I agree20:19
xgermandougwig +120:19
barclaacShouldn't the decision on process/listener or process/lb be an implementation detail within the VM? It seems we're making a low level decision and then allowing that to percolate through the rest of Octavia20:19
dougwigbarclaac: yes, that's another way of saying exactly what i'm trying to communicate.20:19
sbalukoffbarclaac: Octavia is an implementation.20:19
sbalukoffSo implementation details are rather important.20:19
barclaacBut within Octavia we want to try to have as loose a coupling as possible between components.20:20
dougwig(although it's vm implementation + vm driver implementation, to be precise.)20:20
sballesbalukoff, But also a framework so yes implementation is important but we need the right level of abstraction20:20
barclaacIf the control plane "knows" that HAProxy is in use we're leaking architectural concerns20:20
sballebarclaac, +120:20
xgerman+120:20
sbalukoffUm...20:20
sbalukoffOctavia knows that haproxy is in use.20:20
sballesbalukoff, We need to keep the interface clean20:20
dougwigdisagree, but if something outside the controller/vm driver knows about haproxy, that's too much leak.20:21
barclaacI think that's the statement that I don't agree with.20:21
sbalukoffIt's sort of a central design component.20:21
blogan_sbalukoff: do we need to store anything haproxy specific in the database?20:21
barclaacI'm not sure haproxy is a central design component. Herding SW load balancers is the central abstraction. HAProxy is an implementation detail.20:22
sballesbalukoff, I always looked at ha-proxy as the first implementation. We could switch ha-proxy with something else in the future20:22
barclaacIf I want to create a VM with nginx would that be possible?20:22
sbalukoffbarclaac: You have no idea how much I hate the phrase "it's an implementation detail" especially when we're discussing an implementation. :P20:22
sballebarclaac, +120:22
dougwigthe skeleton that blogan_ and i briefly discussed had the notion of "vm drivers", which would encapsulate the haproxy or nginx or other implementation details that lived outside the VMs.20:22
barclaacsbalukoff +2 :-)20:22
blogan_from my understanding, we were not going to do anything beside haproxy at first, but abstract everything enough to allow easy pluggability20:23
tmc3inphillywouldn't we need to support flavors if we want to have different backends?20:23
xgermandougwig +1 that would also allow a future UDP load balacning solution to use Octavia20:23
sbalukoffWell, the haproxy implementation will be the reference.20:23
dougwigsballe +1, blogan_ +120:23
sbalukoffAnd implementations have quirks.20:23
dougwigxgerman: bring on the udp.20:23
xgerman:-)20:24
sbalukoffIt's going to be the job of any other impelementation to work around those quirks or otherwise have similarly predictable behavior.20:24
xgermanI want to avoid each implementation having to talk in haproxy config files...20:24
barclaacxgerman, you've just hit the nail on the head!20:24
sballesbalukoff, I thought dougwig might want to switch ha-proxy out with his a10 back-end in the future20:25
barclaacWe should abstract those files20:25
dougwigpull up, pull up.  do we agree that we're going to have an haproxy vm, and a file or module that deals with that vm, and that module is not going to be "octavia", but rather something like "octavia.vm.driver.haproxy", right ?  in which case, i'm not sure this detail of how many processes really matters.20:25
sbalukoffxgerman: Right. It depends on which level you're talking about, and we are sure as hell not being specific enough about that in this discussion.20:25
sbalukoffsballe: Yes, and he should be able to.20:25
sbalukoffdougwig: Thank you. Yes.20:25
sballesbalukoff, ok so we need to make sure we have enough abstraction to allow him to do that and not add too much haproxy stuff into the main framework20:26
* dougwig suspects that we are all violently agreeing.20:27
sbalukoffSo it matters for the reference implementation (which is what the point of the discussion was, I thought).20:27
sbalukoffdougwig: We are.20:27
blogan_I think the only thing that will know about haproxy is a driver on the controller, and then also if the VM is running Octavia code it would need an haproxy driver20:27
rm_workAGREEMENT.20:27
* rm_work is caught up20:27
tmc3inphilly+1 blogan_20:27
xgerman+120:28
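A minimal sketch of the split being agreed on here: only something like an octavia.vm.driver.haproxy module would know about haproxy config files, while the rest of the controller talks to an abstract driver interface. All names below are hypothetical, not settled Octavia code:

    import abc

    class VMDriver(abc.ABC):
        """Hypothetical controller-side interface to a load-balancing VM."""

        @abc.abstractmethod
        def deploy(self, loadbalancer):
            """Push a load balancer's configuration to its VM(s)."""

        @abc.abstractmethod
        def get_stats(self, loadbalancer):
            """Return listener statistics gathered from the VM(s)."""

    class HaproxyVMDriver(VMDriver):
        """Only this class would know how haproxy is configured -- including
        whether it runs one process per listener or per load balancer."""

        def deploy(self, loadbalancer):
            raise NotImplementedError  # render haproxy.cfg(s), ship to the VM

        def get_stats(self, loadbalancer):
            raise NotImplementedError  # e.g. query haproxy's stats socket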
dougwigalright, circling back to 1 vs 2 processes, does it matter?20:28
blogan_it matters in the haproxy driver right?20:28
sbalukoffdougwig: It does for the reference implementation.20:28
blogan_did i just invoke "implementation detail"?20:28
sbalukoffSo, would anyone object to making a simple vote on this and moving on?20:29
tmc3inphillycouldn't it be a configurable option of the haproxy driver?20:29
blogan_tmc3inphilly: two drivers20:29
dougwigright, let whoever writes that driver/VM decide whether to add the complexity now or in a future commit.20:29
sbalukofftmc3inphilly: I don't see a point in that.20:29
blogan_sbalukoff: really it could just be two haproxy drivers20:29
sbalukoffdougwig: The reason we are discussing this is because people from HP have (had?) a strong opinion about it.20:29
dougwigand right now we have 0 drivers, so cart, meet the horse.20:30
sbalukoffThis conflicted with my usual strong opinions about everything.20:30
dougwigso HP can write the driver.20:30
sbalukoffHence the discussion.20:30
dougwig:)20:30
sballeblogan_, we would have to maintain extra code20:30
xgermanwe did our benchmark and it looks like  a tie - so we bow to whatever driver is written for now20:30
blogan_sballe: totally agree, but that's the point, what is the way WE want to go with this? and if we have two different views then two drivers would need to be created?20:30
blogan_so it sounds to me like we can make a consensus decision on what this haproxy driver will do20:32
sbalukoffOk, I want to move on.20:32
dougwiganyone strongly opposed to 1 lb:1 listener to start?20:32
dougwigif not, let's move on.20:32
sbalukoffBut I don't want to do so without a simple decision on this...20:32
sbalukoffSo. let's vote on it.20:32
dougwigabstain20:32
xgermanabstain20:32
sbalukoff#startvote Should the (initial) reference implementation be done using one haproxy process per listener?20:32
openstackBegin voting on: Should the (initial) reference implementation be done using one haproxy process per listener? Valid vote options are Yes, No.20:32
openstackVote using '#vote OPTION'. Only your last vote counts.20:32
blogan_+1 for multiple processes20:32
blogan_oh20:32
sbalukoff#vote Yes20:32
blogan_#vote Yes20:33
sbalukoffNo one else want to vote?20:33
sbalukoffI'll give you one more minute...20:34
xgermanyou got a runaway victory :-)20:34
sbalukoffHaha!20:34
sballelol20:34
dlundquist#vote Yes20:34
dlundquistOn account of failure isolation20:34
sbalukoffOk, voting is about to end...20:35
sbalukoff#endvote20:35
openstackVoted on "Should the (initial) reference implementation be done using one haproxy process per listener?" Results are20:35
jorgem...20:35
blogan_no results!20:35
juliancashGerman gets my vote.  :-)20:35
blogan_no one wins!20:35
tmc3inphilly#drumroll20:35
sbalukoffA mystery apparently.20:35
sbalukoffHAHA20:35
barclaacI'm fine with the single/multiple HAProxy question - my issue is the control plane knowing about HAProxy instead of talking in terms of IPs, backend nodes etc.20:35
dougwigbarclaac: we all agree on that point.20:36
sbalukoffbarclaac: Have you been following the rest of the Octavia discussion at this point?20:36
sbalukoffBecause i think you're arguing something we've agreed upon for... probably months.20:36
sbalukoffAnyway, moving on...20:36
sbalukoff#topic Ready to accept v0.5 component design?20:36
blogan_sbalukoff: I think he's getting at making sure we're abstracting enough, which seemed to be in question a few minutes ago20:37
blogan_but i think we all agree on that now20:37
xgermanblogan_ +1 -20:37
xgermannext topic20:37
sbalukoffHeh! Indeed.20:37
blogan_I'm ready to accept it20:37
barclaacOf course. It was blogans comment above about having an haproxy driver in the controller (sorry for delay, had to read back up the stack)20:37
dougwigsbalukoff: i'm not today, but i will commit to having a + or - by friday.20:38
sbalukoffOk, so I'm not going to +2 the design since I wrote it, but we'll need a couple of the other octavia cores to do so. :)20:38
blogan_i haven't given a +2 because I think this is important enough to have everyone on board20:38
xgermanwe just discovered that we can't have Neutron Floating IPs in private networks20:39
sbalukoffOn that note then: Does anyone have any major issues with the design that are worth discussing here?20:39
dougwigif i have any, i'll put them on gerrit well before friday, and ping you directly.20:39
sballesbalukoff, I am fine with the spec assuming that we are flexible with changing things as we start implementing it.20:39
sbalukoffxgerman: I think if we are working with an abstract networking interface anyway, that shouldn't be a problem.20:39
sbalukoffsballe: Yes, of course. This spec is to determine initial direction.20:40
xgermanyep. agreed20:40
sbalukoffBasically, I look at it as the 10,000 mile overview map of how the components fit together.20:40
sbalukoffIt's definitely open to change as we do the actual implementation and discover where our initial assumptions might have been wrong.20:41
blogan_I totally expect that to happen20:41
sbalukoff(Not that I'm ever wrong about anything, of course. I thought I was wrong once, but I was mistaken.)20:41
blogan_I'd be amazed if everything went according to plan20:41
sballeblogan_, sbalukoff me too20:41
xgerman<put in appropriate A-Team reference>20:42
tmc3inphillyI love it when a plan comes together20:42
xgerman:-)20:42
sbalukoffOk, so: Doug, German, and/or Brandon: Can I get a commitment to do a (final-ish) review of the component design this week and work toward a merge?20:42
dougwigyes20:43
blogan_I can +2 it now and let doug be the final +220:43
blogan_or -120:43
blogan_depending on what he decides20:43
sbalukoff#action v0.5 component design to be reviewed / moved to merge this week.20:43
xgermanyeah, I can +2, too20:43
sbalukoffWell, if there's a -1, we can probably fix that this week, too.20:43
xgermandougwig just ping us and we will +220:43
sbalukoffOk, let's move on to the next topic20:44
sbalukoff#topic Ready to accept Brandon's initial database migrations?20:44
blogan_NO20:44
blogan_lol20:44
dougwiglol20:44
blogan_per your comments, some need to be changed20:44
sbalukoffHaha! Indeed.20:44
xgermanwell, if the creator has no confidence20:44
blogan_actually brings up another topic we can discuss20:44
sbalukoffblogan_: Ok, so, do you have enough info there to make the fixes, or are there details that will need to be discussed in the group?20:45
sbalukoffblogan_: Oh, what's that?20:45
blogan_I think we should try to start with basic load balancing features first, and then iterate on that after we have a more stable workflow and codebase20:45
xgerman+120:45
blogan_so should colocation, apolocation, health monitor settings, etc be saved for that20:46
xgermanyeah, we can put them in with the understanding they might need to be refactored/refined20:46
sbalukoffblogan_: I'm game for that, so long as we don't do anything to paint ourselves into a corner with regard to later planned features.20:46
sbalukoffblogan_: Want to call that v0.1 or something? ;)20:46
sballeI might be naive but I am not sure how we can have a LB without health monitoring20:46
xgerman0.5 Beta20:46
sbalukoff(I don't think we need a component design document for it, per se.)20:46
blogan_sbalukoff: v0.2520:46
sbalukoffHeh! Ok.20:46
blogan_sballe: i mean extra health monitor settings20:47
sbalukoffsballe: I don't think health monitoring is one of the non-basic featured.20:47
sbalukofffeatures.20:47
sbalukoffYes.20:47
sballeoh ok20:47
sbalukoffblogan_ Agreed.20:47
blogan_basically I'm kind of following what neutron lbaas has exposed20:47
sbalukoffv2?20:47
blogan_yes20:47
sbalukoffOk, that works.20:48
sbalukoffNon-shitty core object model and all. :)20:48
sbalukoffAny objections?20:48
dougwigshould be easy enough to do a v1 driver on that first pass, and get a stable cli/ui to test with.20:48
tmc3inphilly#salty20:48
ptoohill#teamsalty20:48
sbalukoffdougwig: +120:48
sbalukoffOk then!20:48
blogan_ill get those changes up today, please look and give it some love and attention20:49
sbalukoff#agreed We will start with a basic (Neutron LBaaS v2) feature set first and iterate more advanced features on that. We shall call it v0.2520:49
xgermandougwig - no v1 driver, we might end up with users20:49
sbalukoff#action Review blogan's changes regarding the v0.25 iteration.20:49
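For reference, the "basic (Neutron LBaaS v2) feature set" corresponds to the v2 object graph: a load balancer with a VIP, listeners on it, each listener with a pool of members, plus health monitors on pools. A toy containment sketch, purely illustrative:

    # Toy sketch of the Neutron LBaaS v2 containment that the v0.25 feature
    # set would mirror (fields trimmed; purely illustrative).
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Member:
        address: str
        protocol_port: int

    @dataclass
    class HealthMonitor:
        type: str = "HTTP"
        delay: int = 5

    @dataclass
    class Pool:
        lb_algorithm: str = "ROUND_ROBIN"
        members: List[Member] = field(default_factory=list)
        healthmonitor: Optional[HealthMonitor] = None

    @dataclass
    class Listener:
        protocol: str
        protocol_port: int
        default_pool: Optional[Pool] = None

    @dataclass
    class LoadBalancer:
        vip_subnet_id: str
        listeners: List[Listener] = field(default_factory=list)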
blogan_next topic!20:50
sbalukoffI want to skip ahead a bit on this since we're almost out of time.20:50
sbalukoff#topic Discuss ideas for increasing project velocity20:50
sbalukoffSo! Right now, it feels like there ought to be more room for many people to be doing things to bring this project forward.20:51
dougwigblueprints, milestones, track deliverables to dates.20:51
sballesbalukoff, +120:51
blogan_should we put all these tasks as blueprints in launchpad and allow people to just take them as they want?20:51
sbalukoffblogan_: I'm thinking that's probably a good idea.20:52
sballeblogan_, sounds like a good idea.20:52
xgerman+120:52
johnsomSome have volunteered for work already20:52
sballeblogan_, We can always coordinate among ourselves20:52
xgermanjohnsom +120:52
sbalukoffjohnsom: Yep! So, we'll want to make sure that's reflected in the blueprints.20:52
xgermansbalukoff, you wanted to start a standup etherpad20:53
blogan_okay I'll put as many blueprints as I can20:53
sbalukoffxgerman: Thanks-- yes. I forgot about that.20:53
johnsomAgreed.  The blueprint process is on my list to learn in the near term20:53
sbalukoff#action sbalukoff to start a standup etherpad for Octavia20:53
sbalukoffjohnsom: I think it's on many of our lists. :)20:53
sballesame here20:54
blogan_ill add as many blueprints as i can, but they're probably not going to be very detailed20:54
xgermanno problem we can detail20:54
sbalukoffI also wanted to ask publicly (and y'all can feel free to respond publicly or private as you see fit), what else can I be doing to both make sure you and your individual teams know what they can be doing to be useful, and what else can I do to help you convince your management it's worthwhile to spend time on Octavia?20:54
sballeblogan_, how are we going to do design review before the code shows up in gerrit? and it takes forever for us to understand the code?20:54
sbalukoffblogan_: I'll also work on adding blueprints and filling in detail.20:55
blogan_sballe: good question, and that should be the spec process, but I really think the spec process will totally slow down velocity in the beginning for trivial tasks20:55
sbalukoffblogan_: +120:55
sballeblogan_, sbalukoff I would like to see details in the bp so we know how it will be implemented with adjustments later if needed20:56
sbalukoffFor major features (eg. TLS or L7) we'll definitely want them specced (probably just copied from work on neutron LBaaS), but yes, trivial things shouldn't require a spec up front right now.20:56
sballemaybe I am talking about specs and not bp20:56
blogan_sballe: I think the blueprints can be created initially to give out a task list, but when someone grabs that blueprint to work on, they should be responsible for detailing it out20:56
dougwigsballe: let's take one example, which is the directory skeleton for octavia.  the best spec for that is going to be the code submission.  and in the early days, that's going to be true for a fair number of commits.20:56
sballeblogan_, we are totally in agreement.20:56
sbalukoffsballe: Cool. After the meeting, can you point out a few specific areas where you'd like to see more detail to me?20:57
blogan_i have a lot20:57
sbalukoffWe've got three minutes left. Anyone have anything dire they want to bring up?20:57
sballedougwig, with the old LBaaS team we ran into an issue that they didn't document anything and we were told the code was the documentation and design document.20:57
xgermanyeah, let's avoid that20:58
xgermanalso we need to bring in QA early20:58
sbalukoffsballe: Yeah, I hate that, too.20:58
samuelbercovicihas anyone spoken to or heard from mestrey?20:58
sbalukoffI loves me some good docs.20:58
blogan_sbalukoff: dont we know it20:58
blogan_samuelbercovici: nope we have not20:58
sbalukoffsamuelbercovici: I have not.20:58
dougwigsamuelbercovici: he was at the neutron meeting monday.20:58
sbalukoffAnd, still no incubator update, IIRC.20:59
blogan_some discussions going on the ML about it20:59
dougwigi asked about that at the meeting, and was told to expect it yesterday.20:59
blogan_but nothing official20:59
blogan_I was told to expect it 2 or 3 weeks ago20:59
barclaacI chatted with him briefly yesterday. I'd expect by the end of the week.20:59
sbalukoffOk, folks! Meeting time is up.  Yay IRC. :P20:59
samuelbercovicii was not able to get from him a reason why lbaas should go to incubation20:59
blogan_I now expect it at the same time Half-Life 3 is released20:59
sbalukoff#endmeeting21:00
openstackMeeting ended Wed Aug 27 21:00:03 2014 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/octavia/2014/octavia.2014-08-27-20.00.html21:00
samuelbercoviciLOL21:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/octavia/2014/octavia.2014-08-27-20.00.txt21:00
openstackLog:            http://eavesdrop.openstack.org/meetings/octavia/2014/octavia.2014-08-27-20.00.log.html21:00
sbalukoffNot that we have to stop discussing things, eh.21:00
dougwigblogan_: as long as it's not a duke nukem forever quality release.21:00
samuelbercovicilet's hope it will not be a new Duke Nukem21:00
rm_workalright now time for me to go to the airport :P21:00
dougwighahaha.21:00
dougwigi now have to go back into the sales training conference.21:00
*** jwarendt has quit IRC21:00
samuelbercovicidougwig: like the way you are thinking...21:01
sbalukoffdougwig: You poor soul.21:01
sballedougwig, have fun21:01
sballerm_work, safe travel21:01
samuelbercovicibye21:01
blogan_I'm taking a break now21:01
*** samuelbercovici has quit IRC21:01
rm_work:)21:02
*** rm_work is now known as rm_work|away21:03
*** tmc3inph_ has joined #openstack-lbaas21:04
*** tmc3inphilly has quit IRC21:05
*** jamiem has quit IRC21:08
*** markmcclain has quit IRC21:12
*** ptoohill-oo has quit IRC21:15
*** tmc3inph_ has quit IRC21:16
*** markmcclain has joined #openstack-lbaas21:18
*** sbfox has joined #openstack-lbaas21:27
blogan_so how did everyone like the irc part?21:34
xgermanI didn't have to shave - so this is a +21:34
blogan_ol21:34
blogan_and lol21:34
*** ajmiller has quit IRC21:47
sbalukoffHeh! I feel like we got about 1/3rd as much accomplished during this meeting as what took us 40 minutes to do last time.21:48
sbalukoffAlso, I see crc32's point about it being difficult to keep track of what is being discussed when there isn't the exclusivity of one person talking at a time.21:48
sbalukoff(ie. the "text orgy" thing happened several times in that meeting.)21:49
sbalukoffAlso, the automated meeting minutes are not as good as the ones Trevor prepared.21:49
sbalukoffBecause our conversation was pretty free-form at times (today), and it's difficult to know when we've reached a solid point that should be noted in the meeting minutes sometimes, without someone going back and reviewing.21:50
sbalukoffAlso, xgerman: I don't shave for the video meetings. You're lucky I'm wearing clothing at all. XD21:51
xgermanin that case to allow for your office nudity we should stay in irc  ;-)21:52
*** rm_work|away is now known as rm_work21:57
sbalukoffHAHA21:58
rm_workwell, I liked it, because i had to be distracted for a bit due to a production issue, but was able to catch back up easily21:58
rm_workif i'd popped back into webex 30 min through, I would have been like "lolwut"21:59
sbalukoffrm_work: Yes, it would have taken more time to get up to speed on the first half of the meeting.21:59
rm_worklike, until after the meeting when notes were posted <_<22:00
rm_workso, not really possible22:00
*** Guest62412 has quit IRC22:05
*** sbfox has quit IRC22:19
*** sbfox has joined #openstack-lbaas22:32
*** barclaac|2 has joined #openstack-lbaas22:33
*** sballe_ has joined #openstack-lbaas22:35
*** barclaac has quit IRC22:36
*** sballe has quit IRC22:37
*** markmcclain has quit IRC22:38
dougwigsbalukoff: i like the text orgy, but i read and type quickly.  i think chairing an IRC meeting, keeping stuff out of the weeds, kicking things to the ML, etc., is something that happens after chairing a few times, and that meeting let a few topics wander and go on too long, which exacerbated the "not getting as much done" issue.22:45
* rm_work agrees with dougwig 22:46
*** jorgem has quit IRC22:59
blogan_dougwig: +123:02
blogan_plus I think we got a lot more people able to voice their opinions than if it were in webex23:03
rm_workit's hard to actually physically interrupt someone else speaking when it's on IRC, even if you're talking in parallel :P23:04
rm_workwhich kept happening in webex >_<23:04
rm_workI often didn't say things I was thinking just because it wasn't easy to get in the flow of conversation23:04
rm_workthen again, some people would consider that a positive :P23:04
*** crc32 has quit IRC23:05
rm_workbrb boarding23:05
blogan_rm_work: yes it has the potential to be a positive23:05
blogan_but I think its a negative more times than not23:05
rm_workyes, because hopefully people DO want to hear my thoughts :)23:05
rm_workerr, right, boarding23:05
blogan_have fun23:06
*** rm_work is now known as rm_work|away23:08
*** sbfox has quit IRC23:23
*** dlundquist has left #openstack-lbaas23:25
