blogan | will the lbaas master only have lbaas tests? | 00:00 |
blogan | well either way, this wouldn't happen | 00:01 |
blogan | if lbaas has all the tests, then the co-gate would fail, if it only has lbaas tests then these tests would not have run anyway | 00:01 |
blogan | and why would it have all the tests? doesn't make sense, so i retract my questions | 00:01 |
xgerman | rm_work once asked me to start talking with the HP security guys and now they want pictures | 00:02 |
blogan | whoa | 00:02 |
blogan | invasion of privacy there | 00:03 |
xgerman | well, diagrams to be precise, with networks in them | 00:03 |
blogan | ah much more reasonable | 00:03 |
xgerman | well, they want to know which traffic/function runs over which network | 00:04 |
xgerman | did we ever draw something like that (I hate to duplicate work) or do i need to fire up my Visio | 00:04 |
xgerman | (or asciidiagrammer) | 00:04 |
blogan | don't think we've ever drawn up all the types of traffic going over which networks | 00:05 |
xgerman | that's what I thought -- just making sure :-) | 00:05 |
blogan | 0.5 and 1.0 have only defined one lb network | 00:05 |
blogan | that all amphorae and controllers are connected to, which is used for communication between controllers and amphorae | 00:06 |
blogan | however, there is worry that one neutron network cannot scale | 00:06 |
xgerman | yep, that is one of my worries + they want it as a diagram | 00:06 |
xgerman | so I need to draw some boxes for them | 00:07 |
xgerman | saying exactly that :-) | 00:07 |
blogan | get your crayons out | 00:07 |
xgerman | yep, or Visio as we call them | 00:07 |
blogan | i think the options are 1) one lb network for all 2) one lb network per controller (scales with controllers) 3) one lb network per load balancer amphorae group 4) one lb network per amphorae | 00:08 |
blogan | pros and cons to all of course | 00:08 |
xgerman | well, I don't like this controller -> amphora assignment - I prefer more of a controller cluster where all controllers can talk to all amphora | 00:09 |
xgerman | so I would just shard over networks randomly but that is me :-) | 00:09 |
xgerman | and connect all controllers to all networks but an amphora only to one network | 00:10 |
*** mlavalle has quit IRC | 00:11 | |
xgerman | but the security guys will be happy with a big box "lbaas network" anyway | 00:11 |
*** mlavalle has joined #openstack-lbaas | 00:19 | |
*** mlavalle_ has joined #openstack-lbaas | 00:23 | |
blogan | xgerman: yeah all controllers being able to talk to amphora is ideal | 00:25 |
sbalukoff | So my mostly-malleable opinion on all of this so far: | 00:25 |
blogan | oh no | 00:25 |
*** mlavalle has quit IRC | 00:25 | |
sbalukoff | I think one LB network is fine for v0.5. Though German is right in that that probably won't scale for v1.0 | 00:25 |
blogan | he's back, quick everyone scurry away | 00:25 |
sbalukoff | So of your options: | 00:26 |
sbalukoff | 1) Doesn't scale. | 00:26 |
sbalukoff | 2) Also doesn't scale, but in another way (ie. who says one controller can handle all amphorae on a subnet?) | 00:26 |
blogan | i do | 00:26 |
blogan | without any evidence | 00:26 |
rm_work | xgerman: yeah I think if Amphorae are assigned to a specific controller, we've done something horribly wrong | 00:26 |
sbalukoff | 3) probably the preferred solution I'm leaning toward, though again we don't have solid requirements as to what amphorae groups are | 00:26 |
sbalukoff | 4) Doesn't scale (wow, that would eat through networks fast!) | 00:27 |
blogan | lol indeed | 00:27 |
sbalukoff | There are probably other options we aren't considering yet either. | 00:27 |
rm_work | yeah I think it's really between 1 and 4 | 00:27 |
rm_work | err | 00:27 |
rm_work | 1 and 3 | 00:27 |
rm_work | !! | 00:27 |
openstack | rm_work: Error: "!" is not a valid command. | 00:27 |
blogan | well #3 was more grouping by load balancer, so each load balancer would have a network, which wouldn't scale either | 00:27 |
sbalukoff | There's the option of deploying amphorae directly on tenant networks and binding them to a FLIP so they can be configured. | 00:27 |
rm_work | I've been saying "HA Set of Amphora" | 00:27 |
sbalukoff | (which foregoes the whole concept of an LB network) | 00:28 |
sbalukoff | Though that doesn't scale with Octavia v2.0 | 00:28 |
rm_work | that's one network per LB and I don't think that's unreasonable honestly <_< | 00:28 |
sbalukoff | rm_work: Sure, but there might be other definitions we'd consider for "amphorae group" | 00:28 |
xgerman | yeah, we can either bolt them on a management network or on the tenant network | 00:28 |
blogan | well if we're talking active-passive setup, then that is basically #4 divided by 2 | 00:28 |
sbalukoff | Um... what? | 00:29 |
sbalukoff | Oh! | 00:29 |
sbalukoff | Yes. | 00:29 |
sbalukoff | By "amphorae group" I'm thinking something to the effect of a spares pool + all active amphorae for a given "location" | 00:29 |
sbalukoff | Whatever "location" means. | 00:29 |
sbalukoff | Though it will probably have something to do with physical or geographic proximity | 00:29 |
xgerman | I need to see that in a diagram | 00:29 |
rm_work | ugh #2 binds us in a bad way architecturally | 00:29 |
rm_work | controller failover becomes a nightmare of bookkeeping and scheduling | 00:30 |
sbalukoff | rm_work: True. | 00:30 |
rm_work | we have this stuff kinda marked up on our whiteboard here... | 00:30 |
sbalukoff | well, it's already going to be a bit of a nightmare. | 00:30 |
sbalukoff | Well.. | 00:30 |
sbalukoff | Sort of. | 00:30 |
xgerman | rm_work is there a link I am missing | 00:30 |
rm_work | xgerman: with which? | 00:30 |
xgerman | the options explained | 00:31 |
sbalukoff | rm_work: Can y'all take pictures and / or write it down in some way for the rest of us? ;) | 00:31 |
rm_work | heh well | 00:31 |
xgerman | yeah, I feel like a mind reader | 00:31 |
blogan | xgerman: i just came up with those off the top of my head | 00:31 |
sbalukoff | xgerman: I was just talking about the options blogan just mentioned above in this conversation. | 00:31 |
rm_work | it might not make sense without the conversation we had at the time of writing... | 00:31 |
rm_work | this is the #1 on my list for whiteboard topics in Seattle | 00:31 |
rm_work | I think it is #1 on the etherpad too :P | 00:32 |
sbalukoff | rm_work: That's my fear when it comes to white-boarding sessions where nobody wrote anything down other than the diagram. | 00:32 |
blogan | we cant take pictures of our whiteboard, we started making caricatures of sbalukoff | 00:32 |
rm_work | I can regurgitate most of it | 00:32 |
sbalukoff | By the way, at the hack-a-thon, I'm going to want a notes-keeper at each of the whiteboard sessions so we don't have this problem. XD | 00:32 |
xgerman | sure you can take pictures -- we just can't share them with sbalukoff | 00:32 |
blogan | we have you on there too :( | 00:32 |
sbalukoff | blogan: Aaw! I want to see how many warts you gave me! | 00:32 |
rm_work | ugh this is frustrating for another reason as well -- a lot of this has tie-ins to our specific network architecture and our FLIP implementation | 00:33 |
xgerman | I guess the whole whiteboard is carricatures then | 00:33 |
rm_work | which I think is different than yours... | 00:33 |
xgerman | yep | 00:33 |
blogan | yeah its going to be a minefield | 00:33 |
sbalukoff | Anyway, with regard to multiple LB networks: Yes, we'll probably have to do this, but not until v1.0 | 00:33 |
sbalukoff | But v1.0 we want to follow relatively quickly after v0.5, so it is something we should put some thought into now. | 00:34 |
xgerman | my fear is that if we do a poor job with that in 0.5, going to 1.0 will be a pain | 00:34 |
blogan | sbalukoff: yes, .5 only has one controller, however we can also write the code in a way that allows for both | 00:34 |
xgerman | making it scale | 00:34 |
rm_work | for us, the VMs will spin up on a rackspace servicenet type thing and will get "internal only" IPs, which the FLIP will bind to and handle the ACTIVE-ACTIVE stuff, exposing the FLIP IP externally and routing between the internal IPs for the VMs | 00:34 |
sbalukoff | xgerman: Hence the reason we do talk about it now, but we remain open to just one LB network in v0.5 | 00:34 |
xgerman | yep, I am fine with that | 00:34 |
rm_work | I still kinda hate that we have a 0.5 at all | 00:34 |
sbalukoff | Like we've done with limiting relationships between logical objects in v0.5 | 00:34 |
rm_work | I'd much rather roll 0.5 into 1.0 and just do it all at once >_< | 00:34 |
rm_work | rather than having to do any temp work at all | 00:35 |
xgerman | well, if we say put the amphora on the lbaas network and then decide tenant network is the way to go... | 00:35 |
rm_work | and realizing later "oh shit, this won't work" | 00:35 |
blogan | i think its a good stepping stone | 00:35 |
blogan | but we do have to be mindful of what we want with 1.0 | 00:35 |
sbalukoff | rm_work: I understand but don't agree: There are enough challenges to getting v0.5 out the door, and I don't see most of this as temporary work anyway. | 00:35 |
blogan | and honestly we're going to have oh shit moments like that anyway bc of the nature of plugging into many services | 00:35 |
sbalukoff | blogan: +1 | 00:36 |
blogan | ideally we could build out a huge test environment and test all of these scenarios at scale | 00:37 |
xgerman | we will | 00:37 |
xgerman | we are starting a new group just for that purpose | 00:37 |
sbalukoff | Wow! | 00:37 |
xgerman | well, they will also do DNS, DBaaS, etc. | 00:37 |
blogan | so yall can test that out before octavia is even completed? | 00:37 |
xgerman | we hope so | 00:38 |
rm_work | hmm | 00:38 |
sbalukoff | Huh. | 00:38 |
rm_work | I was going to offer to Hangouts or something to explain what we've been whiteboarding about Controller stuff, but it really won't help unless I have an actual whiteboard :( | 00:38 |
rm_work | Online Whiteboards still suck too much | 00:39 |
blogan | i need to go home too | 00:39 |
xgerman | you can always point your camera at the whiteboard :-) | 00:39 |
rm_work | ah yeah, true | 00:39 |
rm_work | xgerman: heh | 00:39 |
sbalukoff | Heh! Indeed. | 00:39 |
rm_work | xgerman: not sure you'd be able to read anything, but maybe :P | 00:39 |
xgerman | depends on the camera | 00:39 |
rm_work | would have to get some boxes and put them on my chair | 00:39 |
rm_work | MBP camera | 00:40 |
rm_work | iSight? | 00:40 |
rm_work | dunno wtf they call it | 00:40 |
rm_work | I could bring in my 1080p webcam from home | 00:40 |
rm_work | would be easier to maneuver that | 00:40 |
xgerman | or just use somebody's fancy Android phone | 00:40 |
rm_work | lol | 00:40 |
rm_work | could try the camera on my N5 | 00:40 |
xgerman | yeah, use the backward facing one | 00:41 |
blogan | finally going home | 01:15 |
xgerman | I am already home -- you should work from home :-) | 01:16 |
*** xgerman has quit IRC | 01:26 | |
*** mlavalle_ has quit IRC | 01:52 | |
*** fnaval has quit IRC | 02:34 | |
rm_you | heh | 02:42 |
rm_you | sbalukoff: apparently there'll be an email going out detailing our approach to FLIPs, and it'll be directed especially in the direction of BBG/HP, so maybe we can stop handwaving whenever we talk about architecture soon :P | 02:43 |
*** fnaval has joined #openstack-lbaas | 02:58 | |
sbalukoff | rm_you: That's good to hear! | 03:19 |
rm_you | :P | 03:19 |
sbalukoff | No, I'm serious about that. | 03:34 |
sbalukoff | I've done a lot of hand waving on certain details, but we do need to get down to brass tacks at some point, eh. | 03:35 |
rm_you | yeah | 03:35 |
rm_you | i am not sure but I think the point of the email might actually be "let's all do FLIPs this way *together*" not just "here's some info" | 03:36 |
sbalukoff | I look forward to reading and responding. | 03:36 |
*** woodster_ has quit IRC | 03:40 | |
*** ptoohill_ has joined #openstack-lbaas | 03:50 | |
*** ptoohill_ has quit IRC | 04:09 | |
*** amotoki has joined #openstack-lbaas | 04:53 | |
*** fnaval has quit IRC | 06:00 | |
*** woodster_ has joined #openstack-lbaas | 06:13 | |
*** kobis has joined #openstack-lbaas | 06:22 | |
*** cipcosma has joined #openstack-lbaas | 06:32 | |
*** ptoohill_ has joined #openstack-lbaas | 07:04 | |
*** ptoohill_ has quit IRC | 07:04 | |
*** ptoohill_ has joined #openstack-lbaas | 07:08 | |
*** Putns has joined #openstack-lbaas | 07:29 | |
*** openstackgerrit has quit IRC | 07:50 | |
*** openstackgerrit has joined #openstack-lbaas | 07:50 | |
*** ptoohill_ has quit IRC | 07:54 | |
*** ptoohill_ has joined #openstack-lbaas | 08:18 | |
*** woodster_ has quit IRC | 08:20 | |
*** jschwarz has joined #openstack-lbaas | 08:23 | |
*** ptoohill_ has quit IRC | 08:28 | |
*** maishsk has joined #openstack-lbaas | 09:22 | |
maishsk | enikanorov: HI are you around? | 09:22 |
enikanorov | maishsk: hi | 09:23 |
maishsk | mind if I ask you something? | 09:23 |
maishsk | can an haproxy LB with a VIP on an external network - serve requests to an instance on a private network? | 09:24 |
maishsk | ie my LB has an IP of 10.10.10.100 with two members that are on a private network - 1.1.1.10 and 1.1.1.20 | 09:25 |
enikanorov | maishsk: no | 09:26 |
enikanorov | you can't have VIP on external network with haproxy | 09:26 |
enikanorov | external network is floating ip pool, which you need to associate to VIP's ip | 09:27 |
maishsk | enikanorov: I can choose to associate a free address from the subnet (ext) to VIP ? | 09:27 |
maishsk | Can I not? | 09:27 |
enikanorov | no, you can't. you can't create ports on external network directly, afaik | 09:31 |
maishsk | I am going over the use cases doc - https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit# | 09:31 |
maishsk | I think what I am trying to accomplish is point #8 | 09:32 |
maishsk | LB VIP which is accessible from the outside - forwarding to two internal (private) instances | 09:32 |
enikanorov | it's possible if you create vip on internal network and associate floating ip to it | 09:33 |
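A minimal sketch of the flow enikanorov describes, using the python-neutronclient LBaaS v1 API. This is illustrative only: every UUID is a placeholder, and the auth arguments depend on your cloud.

    # Create the VIP on the *internal* subnet, then hang a floating IP
    # off its port -- ports cannot be created on the external network
    # directly. All UUIDs below are placeholders.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    vip = neutron.create_vip({'vip': {
        'name': 'web-vip',
        'protocol': 'HTTP',
        'protocol_port': 80,
        'subnet_id': '<internal-subnet-uuid>',
        'pool_id': '<pool-uuid>',
    }})['vip']

    # Associate a floating IP from the external network with the VIP's
    # port; this is what makes the private members reachable from outside.
    fip = neutron.create_floatingip({'floatingip': {
        'floating_network_id': '<external-net-uuid>',
        'port_id': vip['port_id'],
    }})['floatingip']
    print(fip['floating_ip_address'])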
maishsk | is that only possible with CLI - or also in the GUI ? | 09:34 |
*** jschwarz has quit IRC | 09:53 | |
maishsk | enikanorov: GUI as well - just had to find it.... | 09:54 |
*** jschwarz has joined #openstack-lbaas | 10:03 | |
*** jschwarz has quit IRC | 10:49 | |
*** woodster_ has joined #openstack-lbaas | 13:00 | |
*** busterswt has joined #openstack-lbaas | 13:11 | |
*** busterswt has quit IRC | 13:11 | |
*** mestery has quit IRC | 13:17 | |
*** fnaval has joined #openstack-lbaas | 13:54 | |
*** mestery has joined #openstack-lbaas | 14:10 | |
*** busterswt has joined #openstack-lbaas | 14:32 | |
*** busterswt has quit IRC | 14:32 | |
*** mestery has quit IRC | 14:46 | |
*** mestery has joined #openstack-lbaas | 15:10 | |
*** woodster_ has quit IRC | 15:10 | |
*** TrevorV_ has joined #openstack-lbaas | 15:17 | |
*** TrevorV_ has quit IRC | 15:21 | |
*** TrevorV_ has joined #openstack-lbaas | 15:27 | |
*** fnaval has quit IRC | 15:28 | |
*** amotoki_ has joined #openstack-lbaas | 15:43 | |
*** fnaval has joined #openstack-lbaas | 15:51 | |
*** mlavalle has joined #openstack-lbaas | 15:58 | |
*** fnaval_ has joined #openstack-lbaas | 16:02 | |
*** maishsk has quit IRC | 16:02 | |
*** fnaval has quit IRC | 16:02 | |
*** mlavalle has quit IRC | 16:06 | |
*** mlavalle has joined #openstack-lbaas | 16:07 | |
*** xgerman has joined #openstack-lbaas | 16:07 | |
*** mlavalle has quit IRC | 16:13 | |
*** SumitNaiksatam has quit IRC | 16:14 | |
*** woodster_ has joined #openstack-lbaas | 16:22 | |
TrevorV_ | For all who may be concerned: http://www.collegehumor.com/post/7004936/stephen-colbert-addressed-the-star-wars-lightsaber-controversey# | 16:54 |
*** mlavalle has joined #openstack-lbaas | 16:56 | |
*** mlavalle has quit IRC | 17:11 | |
dougwig | split spec, with nits cleaned, this is the one that will hopefully merge: https://review.openstack.org/#/c/136835/ | 17:59 |
*** sbalukoff has quit IRC | 18:08 | |
*** mlavalle has joined #openstack-lbaas | 18:08 | |
*** kobis has quit IRC | 18:16 | |
*** SumitNaiksatam has joined #openstack-lbaas | 18:23 | |
TrevorV_ | No balukoff right now? Eeek! | 18:32 |
dougwig | it's before noon somewhere, of course he's not on yet. :) | 18:40 |
TrevorV_ | Ha ha ha dougwig nice | 18:48 |
*** jorgem has joined #openstack-lbaas | 18:52 | |
rohara | who knows about pbr's version_string? doesn't that pick up git tags? | 19:10 |
blogan | not i | 19:26 |
rohara | this is driving me mad :/ | 19:27 |
blogan | then you're doing your job right! | 19:27 |
rohara | heh | 19:27 |
*** sbalukoff has joined #openstack-lbaas | 19:30 | |
rm_work | rohara: it is supposed to... but I have seen times where it doesn't do it right | 19:35 |
rm_work | that's all I can say tho, so probably not helpful :/ | 19:35 |
rohara | rm_work: ok so maybe i am not insane. it definitely does not want to pick up git tags | 19:35 |
rohara | i'm punting on this for now | 19:35 |
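For the record, a quick sanity check (a sketch, assuming the package in question is octavia): pbr consults git tags only at build time (sdist/develop), while at runtime version_string() reads the installed package metadata, so a stale egg-info/PKG-INFO is a common reason a freshly pushed tag doesn't show up.

    # Print the version pbr derived for an installed package. If this is
    # stale, reinstall so the egg-info regenerates from the git tags.
    from pbr.version import VersionInfo

    print(VersionInfo('octavia').version_string())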
*** mlavalle has quit IRC | 19:36 | |
blogan | rohara: welcome to the punt side | 19:40 |
sbalukoff | Heh! | 19:40 |
*** mlavalle has joined #openstack-lbaas | 19:41 | |
*** mestery is now known as mestery_afk | 19:42 | |
RamaKrishna | Hi mlavalle | 19:43 |
RamaKrishna | I sent you an email | 19:43 |
mlavalle | RamaKrishna: hi | 19:43 |
RamaKrishna | Have some hiccups in running the tempest on top of LBaaS 2.0 | 19:43 |
mlavalle | RamaKrishna: is it ok if I look at it during the afternoon today? | 19:43 |
RamaKrishna | ok | 19:44 |
mlavalle | RamaKrishna: cool :-0 | 19:44 |
mlavalle | :-) | 19:44 |
RamaKrishna | Thanks for your help | 19:44 |
sballe | sbalukoff: ping | 19:44 |
sbalukoff | pong | 19:44 |
sbalukoff | What's up? | 19:44 |
sbalukoff | Er... can anyone see what I'm typing? | 19:45 |
sballe | Hey :-) Just saw your latest comment on https://review.openstack.org/#/c/132130/ and I agree. the sequence diagrams might not be useful even though they did help me understand the flow as it might work. | 19:45 |
sbalukoff | Oh, good, you can. :D | 19:46 |
sballe | sbalukoff: Yes. I was writing my IRC | 19:46 |
sbalukoff | Oh, it's not you: I said something earlier without anyone commenting on it. Being a diva, this is a shocking experience for me. | 19:46 |
TrevorV_ | sbalukoff, had some questions, maybe we can talk a bit more in General Discussion in meeting today? | 19:46 |
sbalukoff | TrevorV_: Sounds good | 19:47 |
sballe | What I am thinking is to drop it until we can create workflows for our most important items such as create LB, etc. | 19:47 |
TrevorV_ | If not I'll probably bother you here afterward | 19:47 |
sbalukoff | sballe: Ok-- I do think having a good diagram of what talks to what is a good idea... I just think a sequence diagram probably isn't the right representation since we're not actually talking about a sequence. | 19:47 |
sbalukoff | sballe: That works too, eh. | 19:47 |
rm_work | blockdiag also works | 19:47 |
sbalukoff | rm_work: True. | 19:48 |
rm_work | I think I prefer that, in general | 19:48 |
sbalukoff | If the diagram has relatively few elements, I agree: It's then readable in the ASCII, too. :) | 19:48 |
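For anyone unfamiliar with the tool, a tiny hypothetical blockdiag source (node names invented, rendered with `blockdiag lbnet.diag`) stays readable as plain ASCII and still produces a PNG:

    blockdiag {
      controller -> amphora1;
      controller -> amphora2;
      amphora1 -> members [label = "tenant net"];
      amphora2 -> members [label = "tenant net"];
    }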
sbalukoff | I'm also a fan of traditional flow charts. | 19:49 |
sballe | blockdiagram makes sense to me too BUT at this point I am not sure we could agree on the lines going between the boxes | 19:49 |
rm_work | heh | 19:49 |
sbalukoff | sballe: True. | 19:49 |
sbalukoff | Ok, gotta run off for about 5 minutes. Will be back just before the meeting. | 19:50 |
sballe | I am happy to work on this later if it makes sense but for now I feel we are not far enough in the design process. it did help me to understand the various pieces. | 19:50 |
rm_work | yeah, excited for much whiteboarding | 19:50 |
openstackgerrit | Brandon Logan proposed stackforge/octavia: Spec defining the networking driver interface https://review.openstack.org/135495 | 19:53 |
sbalukoff | sballe: Sounds good. Feel free to either abandon the gerrit review or let it languish until we're ready to create those diagrams. | 19:55 |
TrevorV_ | aren't we supposed to be in openstack-meeting-alt? | 20:00 |
sbalukoff | Yes. | 20:00 |
sbalukoff | They're finishing up. | 20:00 |
rm_work | yes | 20:00 |
TrevorV_ | right, got it, just saw that :P | 20:00 |
*** barclaac has joined #openstack-lbaas | 20:05 | |
*** barclaac has quit IRC | 20:09 | |
*** crc32 has joined #openstack-lbaas | 20:20 | |
*** Putns has quit IRC | 20:50 | |
sbalukoff | Ok, TrevorV_: Give me an earful! | 20:56 |
* blogan throws corn at sbalukoff | 20:56 | |
sbalukoff | Mmmm... delicious corn | 20:57 |
sballe | I am interested in the lb versus management network discussion ;-) | 20:58 |
dougwig | while everyone is around, split and flavors need to harden this week: | 20:58 |
dougwig | split - https://review.openstack.org/#/c/136835/ | 20:58 |
dougwig | flavors - https://review.openstack.org/#/c/102723/ | 20:58 |
TrevorV_ | sbalukoff, I can't say I have an earful to give honestly | 20:58 |
rm_work | oh yeah, we got some extra info on the management network stuff yesterday too | 20:58 |
sbalukoff | dougwig: Good to know! Thanks! | 20:58 |
sbalukoff | Ok, well, let me know what your thoughts / concerns are thus far. | 20:59 |
sbalukoff | Also, rm_work: Is that that e-mail you've alluded to sending me? | 20:59 |
TrevorV_ | rm_work, want to elaborate some? | 20:59 |
rm_work | sbalukoff: no, but kinda related | 21:00 |
rm_work | do you guys want to Hangout or something? | 21:00 |
TrevorV_ | We could, I'd have to switch to Windows though | 21:00 |
rm_work | eh, I can type, was just feeling lazy | 21:00 |
sbalukoff | Heh! | 21:00 |
rm_work | do hangouts not work on OSX? | 21:00 |
TrevorV_ | rm_work, I'm on my tower | 21:00 |
rm_work | ah | 21:00 |
* TrevorV_ working from home gives me the power to use linux, but my headsets don't like it ha ha | 21:00 | |
rm_work | do hangouts not work in linux? | 21:00 |
rm_work | :P | 21:00 |
rm_work | ah | 21:00 |
rm_work | T_T | 21:00 |
sbalukoff | I'm up for either, though, I don't know if I've tested hangout on my Ubuntu laptop yet... | 21:00 |
rm_work | anywho | 21:01 |
rm_work | we talked about some options previously -- management network per amphora, per controller, per ha-set, whatev | 21:01 |
sbalukoff | Yes | 21:01 |
rm_work | so at least for OUR architecture, it looks like we're limited to 255 machines per neutron network, so doing it as one neutron network for all amphorae obviously won't work | 21:02 |
TrevorV_ | Wait, rm_work is this going to talk about the necessity for multiple management networks, or the use of L3 layers on the same network Operators will use for amphora management (so to speak) | 21:02 |
sbalukoff | That's true. | 21:02 |
rm_work | but doing one per ha-set doesn't work either, because we can't put that many vifs on each controller :/ | 21:02 |
sbalukoff | That's true.though, you can always use something larger than a /24 | 21:02 |
rm_work | sbalukoff: no, i mean, we literally can't do it | 21:02 |
rm_work | it's a limitation of the implementation we use | 21:03 |
sbalukoff | rm_work: I agree that one per HA set is not scalable either. | 21:03 |
rm_work | we are not on ML2 | 21:03 |
sbalukoff | rm_work: Oh really? | 21:03 |
rm_work | we use... Juniper or something, i don't remember, blogan do you remember? | 21:03 |
blogan | rm_work: totally hijacked TrevorV_'s discussion | 21:03 |
TrevorV_ | sbalukoff, rm_work This is a different conversation than I'm concerned with at this point.... | 21:03 |
rm_work | anyway, yeah, it isn't doable. period | 21:03 |
rm_work | well, i'll get to there | 21:03 |
rm_work | it's related | 21:03 |
sbalukoff | rm_work: That's a really big limitation. | 21:03 |
rm_work | yes. | 21:03 |
sbalukoff | Ok. | 21:04 |
rm_work | BUT, it sounds like we can sort of... emulate it with some magic (this is always what I end up with from talking with jkoelker) | 21:04 |
sbalukoff | Oh boy. | 21:04 |
rm_work | anywho, we get to the "management_network_id" | 21:04 |
sbalukoff | I tend to cringe at "magic" solutions. | 21:04 |
blogan | well we'd just essentially be hooking into a tenant network | 21:04 |
sbalukoff | Ok. | 21:04 |
sbalukoff | Well, the Octavia service account is essentially "just another tenant" | 21:05 |
blogan | so from neutron standpoint thats how it is, hwo that tenant network is setup behind the scenes here is the "magic" | 21:05 |
sbalukoff | So, it makes sense that it's loadbalancer network(s) would be the same. | 21:05 |
sbalukoff | Aah | 21:05 |
sbalukoff | Also s/it's/its/ | 21:05 |
blogan | one lb network would work for us in this case | 21:05 |
rm_work | So the debate around "management_network_id" is in regard to whether that will actually be a single network_id stored in config, or if we'll have to move that to the DB -- as well as what to call it, and what kind of traffic actually passes over it, right? | 21:06 |
sbalukoff | (Sorry, the pedantic grammatician in me is annoyed.) | 21:06 |
sbalukoff | (and annoying) | 21:06 |
TrevorV_ | rm_work, that's what I'm concerned with, yes | 21:06 |
johnsom | I lean towards one management network per controller | 21:06 |
sbalukoff | rm_work: I suspect in the long run we're going to have to move it to the DB. | 21:06 |
sbalukoff | We don't have to do this for v0.5 though. | 21:06 |
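If it does start life in config, it's a one-liner with oslo.config. A sketch only: the option name and group here are illustrative, not a settled Octavia setting.

    # Hypothetical option: a single LB network ID in config for v0.5,
    # expected to move to the DB once there are many such networks.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.StrOpt('lb_network_id',
                   help='Neutron network carrying controller<->amphora '
                        'traffic.'),
    ], group='networking')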
rm_work | johnsom: so, that is another interesting problem, if we are trying to architect such that ANY controller can talk to ANY amphora | 21:07 |
rm_work | if we assign one per controller, then we have to make sure each amphora has a VIF on each controller's network | 21:07 |
sbalukoff | I see v0.5 being a sort of advance proof-of-concept where we work out other nastiness like hot-plugging interfaces to amphorae across tenants and whatnot. | 21:07 |
johnsom | Yeah, exactly. | 21:07 |
rm_work | which will be fun to bookkeep if controllers go up/down | 21:07 |
rm_work | since I am hoping the controllers will scale | 21:07 |
rm_work | IE, auto-scale | 21:07 |
rm_work | well | 21:08 |
johnsom | We still had a limitation around amphora and the managing controller due to the health monitoring scheme (checking for missing heartbeats) | 21:08 |
sbalukoff | The controllers will scale, will hopefully auto-scale at some later date (Octavia v3.0 perhaps?) | 21:08 |
rm_work | this is one thing where i REALLY don't want to see us punt it to 1.0 | 21:08 |
rm_work | auto-scaling, yeah, 3.0 or whatever | 21:08 |
blogan | octavia 5.0 will become skynet | 21:08 |
TrevorV_ | None of these concerns is mine yet, ha ha | 21:08 |
rm_work | but, the way the networks are assigned... i want to get it right in 0.5 | 21:08 |
sbalukoff | So, I don't quite understand the issue of an amphora being assigned to a specific controller. | 21:08 |
rm_work | sorry TrevorV_ got distracted again | 21:08 |
rm_work | anyway | 21:08 |
rm_work | gah | 21:08 |
rm_work | sbalukoff: well | 21:08 |
blogan | rm_work: we're not going to get it right in 0.5, we can make a best guess though | 21:08 |
TrevorV_ | They're all valid concerns, just not concerning my point is all :D | 21:09 |
sbalukoff | I originally threw that idea forward because it seems to scale... and we're already going to have to deal with certain bookkeeping problems anyway. This actually reduces some of them. | 21:09 |
sballe | sbalukoff: +1 | 21:09 |
TrevorV_ | Much of this can be fleshed out in whiteboarding sessions during the hackathon too, which would be awesome | 21:09 |
sbalukoff | blogan: +1 | 21:09 |
rm_work | sbalukoff: THIS is the stuff we were whiteboarding :P | 21:09 |
rm_work | blogan: right, and a best guess doesn't involve us saying "well, this probably won't work for 1.0, but this is 0.5 so whatever" | 21:10 |
rm_work | it means we should at least think it MIGHT work for 1.0 | 21:10 |
rm_work | anyway | 21:10 |
sbalukoff | rm_work: I feel like some of this is re-hashing ideas already considered. But you're right, I don't think we properly documented the ideas we considered and rejected... meaning that these kinds of discussions will happen until we do. ;) | 21:10 |
blogan | im just saying we shouldn't spend eternity trying to solve for things we do not know | 21:10 |
sbalukoff | blogan: +1 | 21:11 |
rm_work | there were two parts i mentioned earlier and i am not sure which one TrevorV_ was more concerned with -- the naming of the thing (being related to the question of what traffic actually goes over that network and what its primary purpose is), and whether it's one ID in config versus possibly even per-amphora in the DB | 21:11 |
sbalukoff | rm_work: Part of the concern is that we can't expect perfection on day 1. | 21:11 |
sbalukoff | Or with the initial release | 21:11 |
sbalukoff | We have to make some compromises to make progress at all. | 21:11 |
rm_work | either way it does tie in to what I'm talking about above | 21:12 |
TrevorV_ | rm_work, I have zero qualms with making many networks in DB or in config or whatever. My biggest concern is utilizing a network that has customer data/traffic for use as an Operator | 21:12 |
blogan | but we can decide on the goal of 1.0, and design 0.5 in that direction | 21:12 |
rm_work | ok | 21:12 |
rm_work | so to that point | 21:12 |
rm_work | There will be at least three distinct networks attached to each amphora | 21:12 |
rm_work | 1) Public(ish) -- this is what the FLIP is pointed to | 21:13 |
sbalukoff | rm_work: If I had a crystal ball, I would suspect that the "lb network" becomes a database object, and that an LB network contains controllers and their amphorae in a sort of hierarchy. | 21:13 |
sbalukoff | But... there are a lot of leaps of faith to get there for me right now. | 21:13 |
rm_work | 2) Management -- the controller sending config updates and communicating with the amphora for stats/heartbeat | 21:13 |
rm_work | 3) Customer network for backend nodes (members) | 21:13 |
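Wiring-wise, those three networks would just be three NICs at boot time. A rough python-novaclient sketch, where every credential and ID is a placeholder:

    # Boot an amphora attached to the three networks rm_work lists.
    from novaclient import client

    nova = client.Client('2', 'user', 'password', 'tenant',
                         'http://controller:5000/v2.0')

    amphora = nova.servers.create(
        name='amphora-1',
        image='<amp-image-uuid>',
        flavor='<amp-flavor-id>',
        nics=[
            {'net-id': '<publicish-net-uuid>'},   # 1) FLIP target
            {'net-id': '<management-net-uuid>'},  # 2) controller traffic
            {'net-id': '<tenant-net-uuid>'},      # 3) member/backend net
        ])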
blogan | every amphora will be on the Public net? | 21:13 |
johnsom | Could be two, if it is a one-armed reflex LB (lb servers inside a tenant network) | 21:13 |
rm_work | that's why i said public(ish) | 21:13 |
rm_work | not actually public net | 21:13 |
sbalukoff | In my proposal I had 1 and 2 combined. | 21:13 |
rm_work | but available to the FLIPs | 21:14 |
sbalukoff | blogan: The LB network becomes a target for FLIPs. | 21:14 |
rm_work | sbalukoff: ok, that is not something we want | 21:14 |
johnsom | Interesting, I missed that there was a proposal for public and management net to be one network | 21:14 |
sbalukoff | That is to say, the "back end" for the FLIP. | 21:14 |
rm_work | public traffic should be on a completely different network from control/management traffic | 21:14 |
sballe | johnsom: I didn't read that either | 21:15 |
johnsom | rm_work: +1 | 21:15 |
TrevorV_ | sbalukoff, I'm vehemently against using the same network for customer traffic along with access as an Operator. | 21:15 |
sballe | rm_work: +1 | 21:15 |
rm_work | ok, looks like we're all in agreement except sbalukoff. :P | 21:15 |
sbalukoff | So... there's a lot of hand-waving going on. | 21:15 |
rm_work | sbalukoff: not so much with this part... | 21:15 |
sballe | TrevorV_: I agree since it could mean that we cannot kill the amphora if it is under attack | 21:15 |
sbalukoff | And I suspect a proper discussion of topologies is going to shed some light on this. | 21:15 |
*** crc32 has quit IRC | 21:15 | |
TrevorV_ | sballe, that's a great summation to my concern ha ha sbalukoff ^^ | 21:16 |
sbalukoff | sballe: Not necessarily. | 21:16 |
johnsom | Or customer load can isolate and fail over an LB | 21:16 |
sbalukoff | Again, I think you're assuming I'm saying something I'm not actually saying. | 21:16 |
TrevorV_ | sbalukoff, That's possible, since I'm fairly naive to many things :( | 21:16 |
sbalukoff | My point is that if you're using layer-2 connectivity for the front end, then yes, management and "public service network" are separate networks. | 21:17 |
sballe | I agree that some topology discussion and going over various use cases will make this more clear | 21:17 |
rm_work | sbalukoff: i think what I'm saying is fairly unambiguous, and this is definitely the least hand-wavey magic thing we've talked about today | 21:17 |
sbalukoff | Much the same way that layer-2 networking on the back end (to members) is a separate network. | 21:17 |
rm_work | hmm | 21:17 |
rm_work | so you're not talking about L2? or... ? | 21:18 |
sbalukoff | But if you're using layer-3 routing, then it doesn't make a whole lot of sense to keep the "public service network" separate. | 21:18 |
xgerman | well, per se sbalukoff is just talking about where to put the Public FLIPs | 21:18 |
sbalukoff | Since you can kill the amphora, reroute traffic, etc. | 21:18 |
sbalukoff | Yeah, public FLIPs (the public side of these anyway) is not on the LB network. | 21:18 |
sbalukoff | At all. | 21:18 |
sballe | sbalukoff: +1 | 21:18 |
*** mestery_afk is now known as mestery | 21:18 | |
rm_work | sure | 21:18 |
sbalukoff | rm_work: I'm not talking about L2. | 21:19 |
sbalukoff | Also, to be honest: In Octavia v2.0 in order to get horizontal scalability of a loadbalancer object (as defined in Neutron LBaaS v2) across amphorae, you *can't* do layer-2 front-ends. | 21:19 |
johnsom | sbalukoff: with your layer 3 proposal (route injection) the traffic would still share the interface on the amphora instance right? | 21:19 |
sbalukoff | You have to go with layer-3 routing for that functionality. | 21:19 |
sbalukoff | This is why I'm thinking mostly about the layer-3 front-end case. | 21:20 |
xgerman | yeah, I don't think we want layer-2 connectivity to the outside world -- should all be done with FLIPs | 21:20 |
sbalukoff | johnsom: If I understand what you're asking, yes. :) | 21:20 |
TrevorV_ | So sbalukoff you're saying the same network I would use to kill a rogue amphora would NOT see customer traffic at all, or would it? | 21:20 |
sbalukoff | So... by the way, I hate the term FLIPs... | 21:20 |
sbalukoff | It's really just a static NAT. | 21:21 |
xgerman | me, too | 21:21 |
xgerman | but rm_work gets confused if I say VIP | 21:21 |
sbalukoff | Why did they have to re-define what "floating IP" means? | 21:21 |
sbalukoff | Very annoying. | 21:21 |
johnsom | Yeah, and we are confusing FLIPs, sbalukoff's layer 2 proposal, and layer 3 proposal | 21:21 |
sbalukoff | TrevorV_: You kill a rogue amphora through the controller and interaction with Nova, right? | 21:22 |
sbalukoff | You don't actually have to talk to the amphora to do this. | 21:22 |
sbalukoff | johnsom: This is why I think we need a larger discussion where we define terms and propose layouts. | 21:22 |
sbalukoff | And whatnot. | 21:22 |
rm_work | sec, i have been furiously drawing on paper | 21:22 |
TrevorV_ | Actually that's fair sbalukoff, even though it still leaves a bad taste in my mouth for some reason | 21:23 |
sbalukoff | But I was hoping to see Rackspace's info on FLIPs that's been alluded to before getting into that. | 21:23 |
xgerman | sbalukoff you might want to hive off data first so you can troubleshoot | 21:23 |
* TrevorV_ thinks he wasn't furious about it at all | 21:23 | |
rm_work | i mean, with great gusto | 21:23 |
rm_work | not angrily :P | 21:23 |
TrevorV_ | :D | 21:23 |
xgerman | let me tell you, just killing the Amphora under attack doesn't give you enough info to not get the same attack the next minute | 21:23 |
xgerman | (real life experience) | 21:24 |
rm_work | imgur is being slow T_T | 21:24 |
rm_work | or else my phone us | 21:24 |
rm_work | *is | 21:24 |
sbalukoff | TrevorV_: The trade off is an extra network per "group" (whatever that means) and an extra interface per amphora, versus keeping "management" traffic on a "management network" that is pretty close to the traditional definition for the same. | 21:24 |
sballe | and I know our security team wants to be able to do forensics | 21:24 |
sbalukoff | I could definitely be swayed on this, but my perception was that extra complication to the networking of amphorae was not a good idea. | 21:24 |
rm_work | it's kind of an analog of our Servicenet concept | 21:24 |
rm_work | on RS you spin new VMs with Publicnet and Servicenet | 21:25 |
rm_work | this would like... Publicnet, Servicenet, and LBnet | 21:25 |
rm_work | *be like | 21:25 |
sbalukoff | xgerman: True, but you can also re-route the attack traffic to the bit bucket since the attack traffic is coming in for a loadbalancer VIP, which is not the same IP as the "primary" or (ugh) "management" interface on the amphora | 21:26 |
xgerman | yep | 21:26 |
xgerman | was just responding to your nova delete | 21:26 |
sbalukoff | xgerman: Agreed. | 21:26 |
sbalukoff | My point was that control does not compete with attack traffic in a DDOS situation. | 21:27 |
xgerman | ok, for me we spin up the amphora with the tenant network and our control network | 21:27 |
sbalukoff | It's also a really good idea to be doing healthchecks and whatnot over the same interface that actual client traffic traverses. | 21:27 |
johnsom | What about port conflicts? If we are managing amp on same network as public side (inside) couldn't we hit a port conflict? | 21:27 |
xgerman | customer assigns public FLIP to the port in the tenant network if he needs to route a public IP | 21:27 |
rm_work | http://i.imgur.com/9lJXmzy.jpg | 21:27 |
xgerman | rationale is that the customer should own the public IP so if he loses it he has only himself to blame | 21:28 |
sbalukoff | johnsom: I'm not sure I follow you. Could you elaborate? | 21:28 |
johnsom | rm_work: that is what I had been thinking | 21:28 |
rm_work | took longer to get it uploaded to imgur than to draw it T_T | 21:28 |
xgerman | and I am thinking we should be good without the public part | 21:29 |
sbalukoff | Haha | 21:29 |
xgerman | since DVR will route it straight to the LB from the VIP | 21:29 |
sbalukoff | xgerman: That's sort of my point as well. | 21:29 |
rm_work | well, that part is up to the network | 21:29 |
rm_work | in RS, we need something for the FLIP to point to | 21:30 |
sbalukoff | If we're doing layer-3 routing, we're assuming we can control the network. | 21:30 |
rm_work | and we don't want it to point to the management network | 21:30 |
sbalukoff | rm_work: Define "something" | 21:30 |
johnsom | If public and management are the same network, and NAT'd IP points to the listener IP on the NAT'd private network, you could have a port conflict. I guess you are assuming AMP is on a separate internal IP? | 21:30 |
rm_work | sbalukoff: an IP | 21:30 |
sbalukoff | rm_work: Right. It will. | 21:30 |
rm_work | sbalukoff: hopefully not the same IP as the management network O_o | 21:30 |
sbalukoff | Why don't you want that going to the loadbalancer network? (Again, I don't like the name "management network") | 21:30 |
sbalukoff | rm_work: Why not? | 21:31 |
sbalukoff | Also... | 21:31 |
sbalukoff | Ok... so... | 21:31 |
rm_work | I'd like to only expose the Amphora API on a network that can't be reached by anything external | 21:31 |
rm_work | for one | 21:31 |
sbalukoff | The way FLIPs work is akin to the "layer-2" front-end which won't work with horizontal scaling in Octavia v2.0 | 21:31 |
TrevorV_ | sbalukoff, with your distinction calling it management network doesn't make sense. With my concern (and rm_work may also be arguing this) management network is appropriate | 21:31 |
xgerman | well, I like the user to assign the FLIP so he needs to have admin access | 21:31 |
xgerman | and he can't have that on the managment network | 21:32 |
xgerman | hence, the FLIP should be on the tenant network | 21:32 |
rm_work | sbalukoff: maybe the way YOUR FLIPs work? >_> | 21:32 |
sbalukoff | rm_work: That's entirely possible. | 21:32 |
sballe | xgerman +1 | 21:33 |
xgerman | same here | 21:33 |
TrevorV_ | xgerman, correct me if I'm wrong, but I thought the user would provide that via a request to the API? We can utilize a "management network" to do the assignment without the user being involved | 21:33 |
rm_work | sbalukoff: thus, the larger explanation of exactly how our FLIP setup works, which I had hoped would be emailed out soon | 21:33 |
johnsom | sbalukoff Couldn't the flip point to the vip (ugh) used for horizontal scaling? | 21:33 |
* TrevorV_ thinks about what he said, may have not had the right idea there | 21:33 | |
sbalukoff | It's possible that Neutron doesn't have the same kind of concept I'm going for. It isn't exactly a floating IP in the way they've defined it. It's more like a layer-3 route for a specific IP. | 21:33 |
rm_work | TrevorV_: since HP lets customers control FLIPs, they want the customer to bring up a LB and then do the FLIP creation/assignment themselves | 21:33 |
xgerman | johnsom that is what I am thinking, too - and so making the scaling case a specialization of the normal one | 21:33 |
rm_work | since we don't allow customers to control their own FLIPs, we'd need to do it inside Octavia for them | 21:34 |
xgerman | rm_work +1 | 21:34 |
rm_work | jkoelker: how goes that email drafting? :) | 21:34 |
sbalukoff | johnsom: Yes, that is possible. | 21:34 |
xgerman | and that's a clean way to do it as well | 21:35 |
sbalukoff | It would be a hack-ish work around for what I'd really like to see, but it ought to work. | 21:35 |
sbalukoff | Again, assuming I'm understanding what you're saying correctly. | 21:35 |
rm_work | in our setup, the FLIP *is* what provides the scaling | 21:35 |
rm_work | T_T | 21:35 |
xgerman | yeah, the RAX FLIP will point at multiple amphora | 21:35 |
rm_work | the FLIP is the horizontal scaling IP. the Amphorae themselves all have individual private IPs | 21:35 |
xgerman | what's called a VIP in johnsom's terms | 21:35 |
rm_work | we'll say "FLIP, point to these 3 amphorae IPs" | 21:35 |
sbalukoff | Ok, cool! | 21:35 |
sbalukoff | That's not what a Neutron FLIP is, just to be clear | 21:36 |
johnsom | I was just trying to come up with some kind of distinction | 21:36 |
sbalukoff | So, yes, we've been using the same term to mean different things. | 21:36 |
sbalukoff | Hence... confusion. | 21:36 |
rm_work | I think an HP FLIP can only have one IP in the backend? | 21:36 |
xgerman | agreed | 21:36 |
rm_work | well | 21:36 |
rm_work | maybe not soon :P | 21:36 |
sbalukoff | Mostly.... | 21:36 |
rm_work | I think the idea is we patch neutron to allow FLIPs to point to multiple machines | 21:36 |
xgerman | well, we wanted to hide the fanning out behind some 10.x FLIP | 21:36 |
rm_work | via the thing our network guys are working on... something with OVS | 21:36 |
sbalukoff | I'd rather see an actual IPAM of some kind where the user can get a public IP and then do what they want with it... | 21:37 |
xgerman | yep, we want to use the same but haven't got very far | 21:37 |
rm_work | xgerman: ok so you'll have TWO FLIPs, one to expose to the user as the "LB IP" and then the user makes their own FLIP to point to the LB FLIP? | 21:37 |
xgerman | yep, that is my plan which might be flawed | 21:37 |
sbalukoff | Be that deploying it as a 1:1 static NAT (ie. Neutron FLIP), or using it as a layer-3 load balancer destination. | 21:37 |
rm_work | xgerman: i think the idea is jkoelker is hoping you guys will collaborate via the method they've worked out | 21:37 |
xgerman | so they can "upgrade" to a scalable LB | 21:37 |
johnsom | rm_work: how does session persistence work? (out of curiosity) | 21:37 |
sbalukoff | Which, AFAIK doesn't exist yet. | 21:37 |
rm_work | johnsom: OVS handles it via | 21:38 |
rm_work | err | 21:38 |
rm_work | i could explain it | 21:38 |
rm_work | i think we have a whiteboard picture somewhere | 21:38 |
rm_work | there's a hash based on a bunch of data | 21:38 |
rm_work | so it uses a hashtable to ensure each connection from the same source goes to the same dest | 21:38 |
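In toy form, the scheme rm_work is describing looks like the sketch below. This is purely illustrative (the real datapath does this in OVS, not Python): hash the connection tuple so every packet of a flow lands on the same amphora.

    import hashlib

    AMPHORAE = ['10.0.0.11', '10.0.0.12', '10.0.0.13']

    def pick_amphora(src_ip, src_port, dst_ip, dst_port, proto='tcp'):
        # Same 5-tuple -> same hash -> same destination amphora.
        key = '%s:%s:%s:%s:%s' % (src_ip, src_port, dst_ip, dst_port, proto)
        digest = hashlib.sha1(key.encode()).hexdigest()
        return AMPHORAE[int(digest, 16) % len(AMPHORAE)]

    # Every packet of this connection maps to the same backend:
    assert (pick_amphora('198.51.100.7', 40312, '203.0.113.5', 80) ==
            pick_amphora('198.51.100.7', 40312, '203.0.113.5', 80))

As xgerman points out next, naive modulo hashing reshuffles existing flows when amphorae are added or removed; that is where consistent hashing or connection tracking has to come in.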
xgerman | yep, but that's not very helpful if you scale up or down :-) | 21:38 |
rm_work | no, it works | 21:39 |
*** ptoohill_ has joined #openstack-lbaas | 21:39 | |
rm_work | or, it should | 21:39 |
xgerman | ok, so you keep track of all open connections... | 21:39 |
rm_work | kinda | 21:39 |
rm_work | I always suck at explaining this | 21:39 |
*** ptoohill_ has quit IRC | 21:39 | |
xgerman | well, gotcha -- we spent an hour thinking about that, too | 21:39 |
rm_work | jkoelker is writing an email. NOW. which will explain this hopefully better than I ever could | 21:39 |
TrevorV_ | rm_work, is this the thing we whiteboarded a long time ago that I made the diagram for in jira? | 21:39 |
rm_work | TrevorV_: yes I think so | 21:39 |
sbalukoff | You all are talking about the layer-4 router / load balancing service that is part of the Octavia v2 desing... | 21:40 |
rm_work | not sure if it is still up to date | 21:40 |
sbalukoff | design | 21:40 |
*** ptoohill_ has joined #openstack-lbaas | 21:40 | |
rm_work | IE, they may have messed with the design since then | 21:40 |
TrevorV_ | rm_work, I don't think it is honestly. | 21:40 |
rm_work | but yeah if you had that jira diagram... | 21:40 |
TrevorV_ | I'll try to find it | 21:40 |
TrevorV_ | Requires laptop and VPN, brb | 21:40 |
rm_work | TrevorV_: this is the Unicorn (which is what you did the diagram for -- with OVS and Ryu) | 21:40 |
xgerman | yeah, my hope was to hide all of that behind some 10.x address the user assigns his 15.x FLIP to | 21:40 |
rm_work | i am looking in our jira too | 21:41 |
rm_work | T_T | 21:41 |
rm_work | this diagram would be very helpful | 21:41 |
TrevorV_ | rm_work, Check issues for me | 21:42 |
TrevorV_ | kk? | 21:42 |
rm_work | trying | 21:43 |
rm_work | TrevorV_: I think its OPENCLB-82 but there is nothing attached | 21:45 |
rm_work | ah | 21:45 |
rm_work | it's on OPENCLB-80 | 21:45 |
rm_work | http://i.imgur.com/IIqloLM.png | 21:46 |
sbalukoff | Ok, I gotta hit the head. BBIAB. | 21:46 |
rm_work | sbalukoff: ^^^^ hit your head with that diagram | 21:46 |
johnsom | These lunch time meetings are a challenge.... | 21:47 |
sbalukoff | johnsom: Agreed! | 21:47 |
dougwig | balls, i walk away for an hour. i'm gonna need cliff's notes. | 21:47 |
rm_work | ah yeah it is noon for you guys T_T | 21:47 |
sbalukoff | rm_work: The HAproxy VM is an amphora, correct? | 21:47 |
sbalukoff | What is DHT? | 21:47 |
TrevorV_ | rm_work, you found it? | 21:47 |
rm_work | dougwig: tl;dr: no one agrees about what FLIPs are | 21:47 |
rm_work | TrevorV_: yes | 21:47 |
rm_work | sbalukoff: yes | 21:47 |
dougwig | they're nat ips. | 21:48 |
sbalukoff | rm_work: +1 on the tl;dr | 21:48 |
rm_work | http://en.wikipedia.org/wiki/Distributed_hash_table | 21:48 |
rm_work | sbalukoff: ever use torrents? :P | 21:48 |
sbalukoff | I think we need to define some terms for this discussion (and be very detailed, with examples) | 21:48 |
sbalukoff | rm_work: You mean, like bittorrent? | 21:49 |
sbalukoff | Or something else? | 21:49 |
rm_work | sbalukoff: as I have said maybe 8 times now, jkoelker is currently drafting an email :P | 21:49 |
rm_work | sbalukoff: yes | 21:49 |
sbalukoff | Haha! Awesome. | 21:49 |
rm_work | hopefully I don't over-hype his email :P | 21:49 |
sbalukoff | Ok, well... let's table this discussion until then, perhaps? | 21:49 |
rm_work | TrevorV_++ for the awesome diagram tho :P | 21:49 |
sbalukoff | Just having a comprehensive common set of terms will help us not to miss each others' meanings. | 21:50 |
TrevorV_ | sbalukoff, so my review will have changes pending this discussion yeah? | 21:50 |
sbalukoff | TrevorV_: More than likely, yes. | 21:50 |
rm_work | I would assume so | 21:50 |
sbalukoff | So, let's hope we can get that discussion underway soon. | 21:50 |
TrevorV_ | Yeah, sounds good to me, just making sure I understood the results :D | 21:51 |
rm_work | I will check in with jkoelker (who I am surprised hasn't chimed in yet) about the status of his email :P | 21:51 |
rm_work | oh | 21:51 |
TrevorV_ | rm_work, He probably got caught up talking to someone again, like he does :D | 21:51 |
rm_work | that's because i can see him sitting around a whiteboard 10 feet away T_T | 21:51 |
sbalukoff | Or didn't want to enter the fray here. ;) | 21:51 |
sbalukoff | Can't imagine why not. XD | 21:51 |
rm_work | yeah he is thoroughly distracted. k | 21:51 |
rm_work | oh wat | 21:52 |
rm_work | jorge and brandon are in that discussion... uhhh | 21:52 |
rm_work | brb | 21:52 |
sbalukoff | What is Ryu in this diagram? | 21:52 |
sbalukoff | Haha! | 21:52 |
johnsom | Would like to see the Octavia to VM path in this diagram | 21:52 |
xgerman | I love that they use Redis!! | 21:52 |
johnsom | Or I guess, Octavia to AMP API | 21:53 |
sbalukoff | johnsom: Agreed! | 21:53 |
TrevorV_ | sbalukoff, we have some concerns with Bison | 21:54 |
sbalukoff | Controller to amp API | 21:54 |
TrevorV_ | Lulz jp | 21:54 |
sbalukoff | TrevorV_: I recommend eating them. They're delicious. | 21:54 |
* TrevorV_ comes up with street fighter reference, sbalukoff counters with animal definition... joke averted | 21:54 | |
sbalukoff | Haha | 21:55 |
sbalukoff | OH! | 21:55 |
sbalukoff | Ryu, | 21:55 |
sbalukoff | Yes, that totally sailed over my head. | 21:55 |
TrevorV_ | Ha ha ha I figured, but sometimes people just like to ruin the joke :D | 21:55 |
sbalukoff | Yes, had I gotten the joke, I am asshole-ish enough to derail it like that. | 21:55 |
sbalukoff | Though everyone knows that Chun-Li would kick Ryu's ass. | 21:56 |
sbalukoff | (Like, many many times in rapid succession) | 21:56 |
johnsom | sbalukoff: +1 | 21:56 |
johnsom | Strange that I still remember that.... | 21:57 |
*** cipcosma has quit IRC | 21:57 | |
sbalukoff | Dude, it was one of the cheapest moves that newbs used constantly. | 21:57 |
sbalukoff | That and Guile's unnaturally long reach with his leg sweep. | 21:58 |
sbalukoff | I use the past tense as if people aren't still playing that game. XD | 21:58 |
TrevorV_ | Well, to be honest I never really played the games, but I always hated the fat sumo dude flying through the air | 21:58 |
TrevorV_ | Spinning like a top mid-air wasn't fair | 21:59 |
sbalukoff | Just... too unrealistic for you? | 21:59 |
TrevorV_ | The imagery was devastating... and I couldn't figure out how to do it in the game | 21:59 |
TrevorV_ | lulz | 21:59 |
sballe | sbalukoff: Is that a reference to the Tekken game? | 21:59 |
rm_work | ok so turns out they were literally discussing THIS | 21:59 |
rm_work | and i missed it T_T | 21:59 |
rm_work | soooo | 21:59 |
sbalukoff | Not that Indian dude's extendable arms weren't either, eh. | 21:59 |
rm_work | whatev, maybe blogan can chime in now | 21:59 |
sbalukoff | Heh! | 22:00 |
TrevorV_ | Context people, CONTEXT | 22:00 |
rm_work | Ryu is http://osrg.github.io/ryu/ | 22:00 |
sbalukoff | Ok, I really do need to hit the head now. BBIAB | 22:00 |
TrevorV_ | ha ha just kidding | 22:00 |
rm_work | sbalukoff: ^^ | 22:00 |
rm_work | SDN thing | 22:00 |
rm_work | "Ryu is a component-based software defined networking framework. Ryu provides software components with well defined API that make it easy for developers to create new network management and control applications. Ryu supports various protocols for managing network devices, such as OpenFlow, Netconf, OF-config, etc. About OpenFlow, Ryu supports fully 1.0, 1.2, 1.3, 1.4 and Nicira Extensions. All of the code is freely available under | 22:00 |
rm_work | the Apache 2.0 license." | 22:00 |
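To make the name concrete: the smallest possible Ryu app is just an OpenFlow controller class. This is the canonical hello-world shape, not anything from the whiteboarded design.

    # Log every OpenFlow packet-in event; run with: ryu-manager thisfile.py
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class LogPacketIn(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
        def packet_in_handler(self, ev):
            self.logger.info('packet-in on datapath %s', ev.msg.datapath.id)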
TrevorV_ | I'm actually going to step away from IRC and do some reviews that were mentioned in the meeting today, and then I'm going to boot into windows to kiss my work day goodbye :D | 22:01 |
TrevorV_ | I'll talk to you guys tomorrow! Have a good night! | 22:01 |
dougwig | you have to boot windows to kiss your computer? | 22:01 |
*** TrevorV_ has quit IRC | 22:01 | |
sballe | lol | 22:02 |
rm_work | sbalukoff: let me know when you're back | 22:02 |
johnsom | It's an SDN controller... | 22:02 |
rm_work | alright I've been told we're now moving away from what that diagram shows, and to something a bit different T_T | 22:03 |
rm_work | so | 22:03 |
* rm_work throws up his hands | 22:03 | |
rm_work | will await the prophesied jkoelker email | 22:03 |
ptoohill | Ryu and Openflow is pretty cool stuffs | 22:03 |
rm_work | ptoohill: you're alive!?! | 22:03 |
rm_work | ptoohill: sup | 22:03 |
ptoohill | sorta | 22:03 |
ptoohill | Just wanted to share my love of those technologies :P | 22:04 |
rm_work | brb | 22:07 |
sbalukoff | rm_work: I'm back. | 22:07 |
xgerman | we were looking at OpenDaylight | 22:07 |
ptoohill | java impl? | 22:07 |
sbalukoff | Ok, so RYU is an SDN platform. | 22:07 |
ptoohill | SDN controller | 22:07 |
sbalukoff | Ok. | 22:08 |
*** jorgem has quit IRC | 22:08 | |
ptoohill | http://osrg.github.io/ryu/ | 22:09 |
dougwig | in case any of y'all missed this happening: https://review.openstack.org/#/c/138870/ | 22:10 |
ptoohill | +1'd it | 22:11 |
ptoohill | :P | 22:11 |
sbalukoff | Test review, do not merge? | 22:11 |
sbalukoff | Oh! | 22:12 |
sbalukoff | Jenkins failures. | 22:12 |
sbalukoff | Joy. | 22:12 |
mlavalle | RamaKrishna: ping | 22:12 |
sbalukoff | Well, it's good to see this discussion happening. Looking forward to jkeolker's e-mail. ;) | 22:14 |
dougwig | note the repo that it's filed against. :) | 22:14 |
johnsom | Yeah, nice dougwig | 22:14 |
sbalukoff | Yes, neutron-lbaas | 22:14 |
sbalukoff | OH! | 22:15 |
sbalukoff | NICE! | 22:15 |
sbalukoff | So that's happening. Awesome! | 22:16 |
*** vivek-ebay has joined #openstack-lbaas | 22:28 | |
jkoelker | sorry guys been in a bunch of meetings | 22:29 |
dougwig | that's ok, they left you a short novel to read. | 22:29 |
jkoelker | https://i.imgur.com/bivSdcC.png | 22:29 |
jkoelker | that's the current image of the architecture | 22:29 |
jkoelker | that i'm playing with | 22:29 |
jkoelker | the DHT thing, i'm leaning towards using BGP for now | 22:30 |
jkoelker | L2VPN EVPN (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) | 22:30 |
sbalukoff | If it's site-internal, I'd probably say OSPF. | 22:30 |
sbalukoff | But... eh... whatever. ;) | 22:31 |
jkoelker | well this is for advertising the tunnel endpoints up to the NAT boxes | 22:31 |
jkoelker | s/NAT/FLIP/ | 22:31 |
jkoelker | ;) | 22:31 |
sbalukoff | You'll need to define what you mean by tunnel in these diagrams (as in, is it a "usual" tunnel, or something else?) | 22:31 |
jkoelker | vxlan or stt, or gre | 22:31 |
sbalukoff | Ok. | 22:32 |
dougwig | those don't look like FLIPs so much as IPs on an external network. | 22:32 |
dougwig | ahh, you mean those as nat gateways, ok | 22:32 |
jkoelker | yea | 22:32 |
sbalukoff | dougwig: Agreed. But we need to have a better definition of what people mean when they say "FLIP", as it's been redefined several times throughout the conversation. | 22:32 |
sbalukoff | We do need a glossary here, eh! | 22:32 |
jkoelker | sorry, there are only 2 hard problems in computer science, cache invalidation, naming things, and off by 1 errors | 22:32 |
sbalukoff | :D | 22:32 |
sbalukoff | +1 | 22:32 |
*** vivek-ebay has quit IRC | 22:33 | |
dougwig | so where is the LB vip in that picture? (internal and/or external nat) ? | 22:33 |
dougwig | or is that current topology for VMs? | 22:33 |
*** vivek-ebay has joined #openstack-lbaas | 22:33 | |
jkoelker | the vip is physically is at the FLIP boxes | 22:33 |
jkoelker | on all of them | 22:33 |
jkoelker | the idea is that we get traffic shunted via ecmp to any of the available boxes | 22:34 |
sbalukoff | jkoelker: It would be great to see how Octavia components fit into this too. (where are the amphorae? To which networks do the amphorae connect? To which networks do the controllers connect? etc.) | 22:34 |
dougwig | so the amphoras and member VMs live in the same internal networks, and the VIP is a nat'ed IP to an amphora's internal VM ip? | 22:34 |
jkoelker | use an ovs flow table set that flips the DST ip to the one that the LB uses (see, it's punny!) | 22:34 |
jkoelker | they all live on the hypervisors, as guests/containers | 22:35 |
jkoelker | or could possibly be bare metal with ironic | 22:35 |
sbalukoff | Again, seeing that diagrammed out will help with understanding this. | 22:36 |
jkoelker | one of my goals with this was to make it so it's just another piece of lego that we use to build up the systems | 22:36 |
dougwig | right. two use-case problems: how do amps talk to members, and how does the outside world talk to the amp (LB)? from your diagram, amp -> member is internal routing, and internet -> amp is via a nat'ed ip to an internal VM ip, correct? | 22:36 |
sbalukoff | Right. I'm looking for context and trying to get my bearings using terms and concepts I'm already familiar with. | 22:36 |
jkoelker | whether that is octavia vips or neutron floating ips for customers, we would like to use the same arch | 22:36 |
jkoelker | yea i need to clean up a bunch more diagrams and remove all the crud | 22:37 |
sbalukoff | Ok. | 22:37 |
jkoelker | this was the one i had already ready and handy-ish | 22:37 |
sbalukoff | I also need to take care of some non-Octavia related stuff for a bit, so I'mma go effectively AFK for a while unless someone pings me by name here. | 22:38 |
jkoelker | dougwig: correct | 22:38 |
jkoelker | if i'm understanding what amphorae means ;) | 22:39 |
jkoelker | that's the lb processes right? | 22:39 |
dougwig | amphorae == vm/container/bare-metal that's running the actual load balancing goo (harpy) | 22:39 |
dougwig | haproxy | 22:39 |
jkoelker | kk | 22:39 |
dougwig | it's roman for container, since we couldn't just pick vm or container. | 22:39 |
jkoelker | so amps would then talk to members via existing overlay neutron networks | 22:40 |
jkoelker | and the outside talks to the amp via the NAT/FLIP boxen | 22:40 |
dougwig | via implicit l3 routing, or do you need ports into member subnets plugged? | 22:40 |
dougwig | for amp -> member | 22:40 |
jkoelker | implicit l3 routing | 22:40 |
dougwig | sweet, that's a common and straight-forward use case. | 22:41 |
jkoelker | so via those 2 draft rfcs, BGP can advertise out a tuple of network-id, ip address, mac address, and tunnel endpoint | 22:41 |
jkoelker | so the idea is the host that is running the amps would advertise up to the NAT boxes the current location of that tuple (which is pretty much just the neutron port) | 22:42 |
dougwig | and can i assume that your neutron and/or sdn fabric is handling that part of the plumbing? | 22:42 |
jkoelker | the NAT boxes then listen to the update messages and modify the ovs flows such that traffic from a particular vip/flip will tunnel (via stt, gre, or vxlan (also able to be advertised up via BGP)) and terminate on the tunnel endpoint | 22:43 |
jkoelker | correct | 22:43 |
dougwig | that's even simpler than the current namespace haproxy driver, to be honest. (very similar to many of the current hardware drivers.) | 22:44 |
jkoelker | on the amp hosts then, the "public" side of the LB would then plug into an intermediary ovs bridge that will determine if the outbound traffic should go to the NAT/FLIP boxes or not, and take the inbound traffic and deliver it to the vif | 22:44 |
jkoelker | the cool thing i like is that on the NAT/FLIP boxes we can use the ovs bundle action to spread the traffic out to one of many potential amps | 22:46 |
jkoelker | so a FLIP could be associated with more than 1 neutron port to scale out, without having to have a ton of different flavor sizes for the lb's | 22:46 |
jkoelker | the downside of course is draining will be interesting | 22:46 |
dougwig | i think the mountain of debate up above might be because folks were assuming that octavia had to do this wiring, which, unless i'm misunderstanding, is all stock (heck, the topology would almost work with nova-network, even.) | 22:47 |
jkoelker | ;) | 22:47 |
jkoelker | yea i'm all about legos | 22:48 |
jkoelker | so just to give ya'll a heads up, the email is gonna be pretty light on implementation ideas, mostly because we're trying to start a discussion about it | 22:49 |
jkoelker | and i'm under the gun to get that discussion rolling | 22:49 |
dougwig | i think the email can say, "shut up and do less work" and be close to adding clarity. | 22:50 |
jkoelker | lulz | 22:50 |
*** mugu has joined #openstack-lbaas | 22:53 | |
*** BeardyMcBeards has joined #openstack-lbaas | 22:53 | |
*** mlavalle has quit IRC | 22:53 | |
*** Clev_ has joined #openstack-lbaas | 22:54 | |
*** jorgem has joined #openstack-lbaas | 22:55 | |
johnsom | Would be interested to understand the session persistence | 22:58 |
jkoelker | tcp session? | 22:59 |
jkoelker | the bundle command takes the tcp tuple as the input to its hash | 22:59 |
johnsom | HTTP session persistence for customer traffic. example is incoming HTTP session needs to hit the same amp listener and backend | 22:59 |
jkoelker | so as long as it's the same tcp session, it will get to the same host | 22:59 |
rm_work | see, jkoelker does about 1000000x better at explaining this low-level networking stuff than me trying to regurgitate everything badly | 23:00 |
jkoelker | ah, ok, that's a function of the lb then | 23:00 |
rm_work | johnsom: session persistence is guaranteed to the amphora, so the amphora is in charge of session persistence at that point, as would be expected | 23:00 |
jkoelker | since we get all tcp sessions to the same lb, haproxy can then take the http session into account to deliver it to the right member | 23:00 |
johnsom | That is what I was trying to make sure we had covered, the path upstream of the amp | 23:00 |
rm_work | johnsom: yeah, that is handled via the hashing of TCP source/dest/etc combo | 23:01 |
johnsom | bundle command is a function of the flip/nat boxen? | 23:01 |
jkoelker | yea its an openflow action | 23:01 |
jkoelker | might be an NXM extension, but it's in ovs ;) | 23:01 |
jkoelker | which is good enough for me ;) | 23:02 |
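For the curious, the bundle action jkoelker mentions looks like this in ovs-ofctl flow syntax. This is a hypothetical rule — bridge name, FLIP address, and port numbers are invented — but hashing the symmetric L4 fields is what pins a TCP session to one slave port while spreading sessions across all of them:

    # On a NAT/FLIP box: spread traffic for FLIP 203.0.113.10 across
    # three ports (one per amphora) via highest-random-weight hashing
    # of the symmetric L4 fields.
    ovs-ofctl add-flow br0 \
      "ip,nw_dst=203.0.113.10,actions=bundle(symmetric_l4,0,hrw,ofport,slaves:1,2,3)"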
johnsom | Cool. I am very rusty on the state of openflow match/actions | 23:02 |
rohara | blogan: you still kickin' around? | 23:02 |
mugu | rm_work: huehuhehuehueheuhue | 23:04 |
rm_work | mugu: >_< | 23:05 |
mugu | rm_work: ^_^ | 23:05 |
rm_work | mugu: T_T | 23:05 |
mugu | i dont even know what that one is | 23:06 |
blogan | rohara: here but trying to catch up on the thesis above | 23:06 |
*** vivek-ebay has quit IRC | 23:06 | |
*** vivek-ebay has joined #openstack-lbaas | 23:07 | |
rohara | blogan: np, i annoyed sbalukoff with my musings | 23:14 |
*** Clev_ has quit IRC | 23:20 | |
jkoelker | so clev sent the email out with the header "[openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration" | 23:26 |
jkoelker | i'm still working on cleaning up diagrams and will reply adding context and the like as I get them finished | 23:26 |
rm_work | jkoelker: so you weren't deemed worthy enough to actually SEND the email? or clev just wanted all the credit? :P | 23:28 |
clev | just stealing all the credit for jkoelker's ideas | 23:28 |
clev | ;-) | 23:28 |
rm_work | woo I will read it now, avoided reading the draft because *spoilers* :P | 23:29 |
clev | We all know who the real brains of this operation are | 23:29 |
sbalukoff | Thanks guys! I shall read and comment. | 23:29 |
clev | Thanks in advance for any thoughts or contributions | 23:31 |
*** vivek-eb_ has joined #openstack-lbaas | 23:44 | |
*** vivek-ebay has quit IRC | 23:44 |