*** armax has quit IRC | 01:00 | |
*** openstack has joined #openstack-neutron-ovn | 01:21 | |
*** openstack has joined #openstack-neutron-ovn | 01:36 | |
*** openstack has quit IRC | 01:52 | |
*** openstack has joined #openstack-neutron-ovn | 01:53 | |
*** armax has joined #openstack-neutron-ovn | 05:36 | |
*** armax has quit IRC | 08:11 | |
gsagie | russellb : i am looking at the security groups implementation, and we probably will need to hold all the ports and security rules in memory of the ML2 | 12:45 |
gsagie | mech driver | 12:45 |
russellb | ok | 12:46 |
gsagie | because when security rule changes, need to update all relevant ports | 12:46 |
russellb | right, but i'd expect at that point you'd go pull the port info you need from the db | 12:47 |
russellb | i don't think we want to have a copy of the db in memory and have to be sure we keep it up to date properly | 12:48 |
gsagie | what do you mean? i know that a security rule changed, now i need to know all the ports that have that security group set as their security group | 12:48 |
gsagie | will need to iterate the entire db | 12:48 |
gsagie | unless you have another idea | 12:49 |
russellb | i haven't looked at the db api | 12:51 |
russellb | if there's not already a query written, i'd write a query that can give you all the ports with that security group | 12:52 |
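The lookup russellb suggests can be sketched as a single join. This is a hypothetical, simplified schema built in sqlite for illustration, not Neutron's actual DB models; real table and column names differ. The point is that one query answers "which ports use this security group?" without caching a copy of the DB in the mech driver's memory.

```python
import sqlite3

# Invented stand-in schema: a ports table plus a port<->security-group
# binding table, loosely mirroring what Neutron stores.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ports (id TEXT PRIMARY KEY);
    CREATE TABLE securitygroupportbindings (
        port_id TEXT REFERENCES ports(id),
        security_group_id TEXT,
        PRIMARY KEY (port_id, security_group_id)
    );
    INSERT INTO ports VALUES ('port-1'), ('port-2'), ('port-3');
    INSERT INTO securitygroupportbindings VALUES
        ('port-1', 'sg-web'), ('port-2', 'sg-web'), ('port-3', 'sg-db');
""")

def ports_for_security_group(conn, sg_id):
    # Join bindings to ports so a security-rule change only touches
    # the affected ports, instead of iterating the entire db.
    rows = conn.execute(
        "SELECT p.id FROM ports p "
        "JOIN securitygroupportbindings b ON b.port_id = p.id "
        "WHERE b.security_group_id = ? ORDER BY p.id",
        (sg_id,),
    )
    return [r[0] for r in rows]

print(ports_for_security_group(conn, "sg-web"))  # ['port-1', 'port-2']
```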
gsagie | ok will do :) or try to | 13:27 |
gsagie | haven't touched the db layer much, but guess it's time to learn | 13:27 |
russellb | :) | 13:28 |
russellb | i haven't touched it in neutron yet | 13:28 |
russellb | gsagie: are you familiar with provider networks in neutron? i'm writing up something about provider networks and OVN, curious if you'd like to review | 13:33 |
gsagie | i can look if you want | 13:34 |
russellb | ok | 13:34 |
russellb | maybe i should just post to our gerrit | 13:34 |
russellb | yeah, i'll do that ... | 13:34 |
openstackgerrit | Russell Bryant proposed stackforge/networking-ovn: docs: separate design docs from general docs https://review.openstack.org/188393 | 13:38 |
*** shettyg has joined #openstack-neutron-ovn | 14:01 | |
mestery | russellb: Curious about you using the DCO in your recent commits. :) | 14:17 |
russellb | kind of a habit, and we clarified in the dev docs what it means, and that it's welcome in openstack commits since it doesn't hurt: http://docs.openstack.org/infra/manual/developers.html#using-signed-off-by | 14:25 |
shettyg | Question for OpenStack guys. Kubernetes has a feature called "services". To summarize the feature, it provides multiple public IP addresses that all point to a single private IP address. I am trying to do a 1:1 mapping between Kubernetes features and OpenStack features and was wondering whether it is possible. | 14:28 |
russellb | i guess that would be "floating IPs" in OpenStack | 14:28 |
russellb | you can allocate floating IPs, which are generally public, and have them mapped to existing ports | 14:29 |
*** armax has joined #openstack-neutron-ovn | 14:31 | |
openstackgerrit | Merged stackforge/networking-ovn: docs: separate design docs from general docs https://review.openstack.org/188393 | 14:34 |
shettyg | russellb: thanks. can one update a floating ip to point to a different IP address, via some openstack api? | 14:36 |
russellb | yes | 14:36 |
russellb | that's the "floating" part, it can be dynamically moved around to be mapped to different addresses | 14:36 |
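The "floating" part russellb describes maps to a single Neutron API update: `PUT /v2.0/floatingips/{id}` with a new `port_id`. A minimal sketch that just builds the request (ids are made up, and nothing is sent over the network here):

```python
import json

def reassociate_floatingip_request(floatingip_id, new_port_id):
    """Build the Neutron API call that re-points a floating IP at a
    different port; setting port_id dynamically moves the public
    address to a new private address."""
    body = {"floatingip": {"port_id": new_port_id}}
    return ("PUT", f"/v2.0/floatingips/{floatingip_id}", json.dumps(body))

method, path, payload = reassociate_floatingip_request("fip-1234", "port-5678")
```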
gsagie | shettyg : do you happen to know if any work started regarding the L3 design ? or the distributed ovsdb ? | 14:37 |
shettyg | gsagie: Work has started on distributed ovsdb. blp had done some work on that front using the Raft algorithm. Andy Zhou looks to have taken over from him to take it to a practical conclusion, I think. | 14:39 |
shettyg | L3 design: There is talk on the approach. But I haven't asked questions to know the thoughts | 14:40 |
gsagie | shettyg : thanks | 14:41 |
russellb | would be great to see more of that on the ovs-dev list | 15:27 |
russellb | completely opaque to us | 15:27 |
shettyg | russellb: I think it is pretty much still in 'thought' phase or the right approach phase. I am sure people will post it to dev mailing list once there is something concrete. | 15:43 |
shettyg | Btw, I am looking at Kubernetes integration with OpenStack OVN. If you are familiar with Kubernetes, would you be interested in designing it? | 15:45 |
russellb | I'm not familiar with it | 16:12 |
russellb | not enough to design that | 16:12 |
russellb | i know what it is, but that's about it :) | 16:12 |
shettyg | I have only started looking at it. I can write a github summary sometime today. Providing per-container networking for Kubernetes containers with OVN is the easier part. But additional features like floating ips, load balancers, etc. are more OpenStack-involved and will likely need OpenStack experts. | 16:16 |
russellb | definitely happy to contribute! | 16:18 |
russellb | probably others that would be interested too, i'll let you know | 16:20 |
*** marun has joined #openstack-neutron-ovn | 16:22 | |
marun | russellb: hi! | 16:22 |
russellb | shettyg: marun (just joined) is much more experienced with Neutron than me, and has also been looking at kubernetes. I would definitely ping him with your ideas | 16:22 |
russellb | marun: backlog here http://eavesdrop.openstack.org/irclogs/%23openstack-neutron-ovn/%23openstack-neutron-ovn.2015-06-04.log.html | 16:24 |
* marun looking | 16:24 | |
russellb | marun: shettyg was also the one who drove the OVN design that led to http://networking-ovn.readthedocs.org/en/latest/containers.html | 16:25 |
shettyg | marun: I don't know how familiar you are with the OVN+OpenStack+containers design. To summarize, OVN is capable of providing per-container networking for containers running in VMs via neutron. | 16:25 |
marun | shettyg: I'm going to review that link real quick. | 16:25 |
russellb | marun: which i implemented for now using a data bag ;-p | 16:25 |
marun | heh | 16:26 |
russellb | but i think the VLAN-aware VMs proposal can be made to fit this use case | 16:26 |
marun | russellb: hmmm | 16:27 |
marun | russellb: I'm not sure I understand the use case | 16:28 |
russellb | k, i'll try to clarify | 16:28 |
russellb | start with an openstack cloud using OVN as the neutron backend | 16:28 |
marun | russellb: I remember you sent me a ml link, I didn't get a chance to read in detail. Should I do so to spare you some trouble? | 16:28 |
russellb | i can boot VMs on it, i have tenant networks, the usual | 16:28 |
russellb | i don't mind trying a tl;dr | 16:29 |
marun | ok :) | 16:29 |
russellb | of course, you can also run containers in those VMs | 16:29 |
russellb | it's also common to create overlay networks among containers (flannel, etc) | 16:29 |
russellb | OVN can be used inside those VMs to do the same thing | 16:29 |
russellb | but there's a possible optimization here | 16:30 |
russellb | what we're proposing is that you tell Neutron about the networks you want for your containers | 16:30 |
russellb | and let them be implemented by OVN providing networking for Neutron | 16:30 |
russellb | which should provide better performance | 16:31 |
russellb | and digging into details of how that works, you tell neutron that traffic from each container will be tagged with a VLAN tag | 16:31 |
russellb | so the hypervisor can differentiate traffic from the VM from the traffic from each container | 16:31 |
russellb | that's really not a very good tl;dr, because it's still kind of long | 16:32 |
russellb | https://github.com/openvswitch/ovs/blob/ovn/ovn/CONTAINERS.OpenStack.md | 16:32 |
marun | So the idea is not to have l2 segmentation at the vm level and at the container level | 16:33 |
marun | since there would be performance and complexity costs | 16:33 |
russellb | logically still have that segmentation | 16:33 |
russellb | just that we let the underlying network implement it | 16:34 |
russellb | instead of as another layer of overlay | 16:34 |
marun | I've almost got it (I'm slow, apologies) | 16:35 |
marun | so logically there is nesting | 16:35 |
russellb | no worries! | 16:35 |
marun | russellb: coming at it from the other side, how would kub communicate with neutron? | 16:37 |
* russellb has no idea | 16:38 | |
russellb | i haven't thought that far | 16:38 |
russellb | shettyg was looking at that though | 16:38 |
marun | russellb: at least today, the kublet process would be in the vm and that would be where port creation would originate from | 16:38 |
russellb | i've only really thought it through from a connectivity point of view | 16:38 |
russellb | ok, so i guess kublet would need some neutron credentials and know where the API is | 16:39 |
russellb | also assumes the neutron API is accessible from the VM | 16:39 |
marun | that's reasonable I think | 16:39 |
marun | I'm less sure how the 'vif plug' would work | 16:39 |
russellb | so for that, we were assuming the VM would run OVS | 16:40 |
russellb | and each container would get hooked up to OVS | 16:40 |
russellb | and have its traffic tagged with a VLAN id | 16:40 |
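The in-VM wiring russellb describes amounts to attaching each container's interface to the VM's local OVS bridge with a VLAN tag, so the hypervisor can tell container traffic apart from the VM's own. A sketch that only builds the `ovs-vsctl` command (names are illustrative, and the command is not executed here):

```python
def ovs_attach_command(bridge, ifname, vlan_tag):
    """Build the ovs-vsctl invocation that adds a container's interface
    to the VM-local bridge, tagging its traffic with a VLAN id."""
    return ["ovs-vsctl", "add-port", bridge, ifname, f"tag={vlan_tag}"]

cmd = ovs_attach_command("br-int", "veth-c1", 42)
```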
marun | that's a bit messier for a generic neutron solution | 16:40 |
marun | but maybe I'm overthinking | 16:40 |
russellb | well, yeah | 16:40 |
russellb | this concept doesn't exist in neutron today | 16:40 |
russellb | but there's a proposal for "VLAN aware VMs" | 16:40 |
russellb | where you define a port, and then create sub-ports | 16:41 |
russellb | which basically matches what we're trying to achieve | 16:41 |
russellb | i haven't really followed up on the spec yet, i just found out it existed at summit | 16:41 |
russellb | but in general, we need to be able to create child ports in neutron | 16:41 |
marun | right | 16:41 |
marun | It is good food for thought | 16:41 |
russellb | so the VM has to know its own port | 16:42 |
russellb | and then it can create ports for containers that run in the VM, listing the VM's port as the parent | 16:42 |
marun | When I've thought about vm-hosted kub with neutron-managed networking, I was confused about how we would allow something like the ovs agent on the vm to talk to the server via rpc | 16:42 |
marun | But if the vm simply has a consistent ovs setup, then the underlying neutron implementation is completely separate | 16:43 |
marun | That's a good approach, I think. | 16:43 |
russellb | awesome, that's what we were hoping | 16:43 |
russellb | some of the complexity is left to the VM to implement | 16:43 |
russellb | but not sure how else to do it | 16:43 |
marun | someone running kub wouldn't care if ovs was setup locally so long as they could use the cloud networking backend | 16:43 |
shettyg | marun: I was thinking of it this way. Today in each minion, you can place a network plugin. So when a pod gets created, the network plugin gets called. But other than the pod id, the network plugin does not have any context, so it will contact a daemon running in the kubernetes master with an unused vlan in the minion + vif id of minion. The daemon in the master queries the api server to get networking context and makes a call to Neutron to create the port. N | 16:43 |
marun | so as you say, the trick is to enable logical child ports so that the vm vlans could be handled by the compute host properly | 16:44 |
russellb | marun: yep, and for the OVN ML2 mech driver today, we do it with binding:profile ... which made me feel dirty | 16:44 |
russellb | binding:profile includes the parent and vlan tag | 16:45 |
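The binding:profile workaround russellb mentions looks roughly like this when creating a container's port (keys per the CONTAINERS.OpenStack.md doc linked earlier; the ids here are made up for illustration):

```python
# Port-create request body for a container's port: the VM's own Neutron
# port is the parent, and a VLAN tag distinguishes this container's
# traffic inside the VM's trunk.
container_port_request = {
    "port": {
        "network_id": "container-net-uuid",
        "binding:profile": {
            "parent_name": "vm-port-uuid",  # the VM's Neutron port
            "tag": 42,                      # VLAN id carried by this container's traffic
        },
    }
}
```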
marun | shettyg: hmmm | 16:45 |
shettyg | The usp is that you can now have a kubernetes pod talk to a VM in OpenStack for a service. You can also use load balancers of OpenStack. The bigger USp is that you can now apply security policies to your containers in the hypervisors. | 16:45 |
marun | USp? | 16:46 |
shettyg | usp = unique selling point. | 16:47 |
marun | shettyg: haven't heard that one before :) | 16:47 |
russellb | i haven't either | 16:48 |
marun | shettyg: I have a poc for kub/neutron integration that just uses hard-coded values for network etc for now | 16:48 |
marun | shettyg: but yeah, it makes sense to have looked up via a centralized service | 16:49 |
shettyg | With a centralized service, you only need to store neutron credentials at one place. | 16:49 |
shettyg | But, if the master goes down, you are screwed. But as I see it, in case of Kubernetes, the master going down is a problem anyway | 16:50 |
shettyg | marun: What about Kubernetes services? They provide you a public ip for a pod. And when the pod goes down and gets created in a different host, that public ip should still access the pod. Are using Neutron floating ips for that? | 16:52 |
marun | shettyg: floating ips are a bit of a mess | 16:56 |
marun | shettyg: at least by default | 16:56 |
shettyg | So what is your approach towards Kubernetes services concept? | 16:58 |
marun | shettyg: I haven't thought that through, to be honest. | 16:59 |
marun | shettyg: In the scheme we're discussing, neutron would be responsible for private address assignment? | 17:02 |
shettyg | marun: yes | 17:03 |
marun | shettyg: Is there a reason not to have kub 'public ips' be neutron private ips? | 17:03 |
marun | shettyg: and then associate a floating ip with the kub service ips? | 17:03 |
marun | shettyg: that wouldn't require modifying any neutron abstraction | 17:03 |
shettyg | marun: I see what you mean. Would that not mean that traffic going to a pod will always need to go through kube master? | 17:04 |
marun | shettyg: If a pod moves to a different host, does it retain the same ip? | 17:04 |
marun | shettyg: It would mean that pod ips would be private by default | 17:05 |
marun | shettyg: and service ips could optionally be made public by associating a floating ip | 17:06 |
shettyg | marun: It need not retain the same ip. If it gets a different ip, the idea was that we need an api in Neutron that will now point the public ip to that. | 17:06 |
marun | shettyg: change the floating ip association, right | 17:07 |
marun | shettyg: that already exists | 17:07 |
marun | shettyg: So it would probably be a matter of watching for kub events that signified a pod move and update the networking accordingly | 17:07 |
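The watcher marun describes could look like the sketch below: on a kub event saying a pod backing a service was rescheduled (new host, new port), re-point the service's floating IP at the new port. The event shape, the `fip_by_service` mapping, and the stand-in client are all invented for illustration; a real version would use a Neutron client whose floating-IP update issues `PUT /v2.0/floatingips/{id}`.

```python
def handle_pod_rescheduled(event, neutron, fip_by_service):
    """Re-associate the service's floating IP with the pod's new port."""
    fip_id = fip_by_service[event["service"]]
    neutron.update_floatingip(
        fip_id, {"floatingip": {"port_id": event["new_port_id"]}}
    )

class FakeNeutron:
    """Stand-in client that just records the update it was asked for."""
    def __init__(self):
        self.calls = []

    def update_floatingip(self, fip_id, body):
        self.calls.append((fip_id, body))

neutron = FakeNeutron()
handle_pod_rescheduled(
    {"service": "web", "new_port_id": "port-9"}, neutron, {"web": "fip-1"}
)
```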
marun | shettyg: as I understand it, there isn't much desire to integrate this capability directly into kub itself | 17:08 |
shettyg | I see what you mean. | 17:10 |
shettyg | I like your approach too. I will have to think about it. | 17:10 |
marun | Cool, me too. | 17:11 |
marun | If you're interested I'd suggest touching base periodically. There aren't many people focused on this problem and I appreciate having other perspectives. | 17:12 |
*** marun has quit IRC | 17:12 | |
shettyg | marun: me too. Always good to talk things out loud, as you get to know other possibilities. | 17:12 |
*** marun has joined #openstack-neutron-ovn | 17:13 | |
marun | shettyg: I'm having trouble wrapping my head around how to support both vm and non-vm deployed scenarios in a reasonable way. | 17:14 |
marun | shettyg: It's not clear to me which use case is most important. | 17:14 |
marun | shettyg: In either case, there is the suggestion that segmentation is desirable, but I find that pretty confusing given that kub doesn't really have the concept of 'users'. | 17:16 |
marun | Not in the way openstack does, at least. | 17:16 |
marun | Which for me calls into question how segmentation would be managed. | 17:16 |
openstackgerrit | Russell Bryant proposed stackforge/networking-ovn: Document ideas for supporting provider networks https://review.openstack.org/188519 | 17:17 |
marun | shettyg: Have you seen calico's coreosfest presentation? | 17:17 |
shettyg | marun: No. I haven't. Is it interesting? | 17:17 |
marun | shettyg: I think so: https://www.youtube.com/watch?list=PLlh6TqkU8kg8Ld0Zu1aRWATiqBkxseZ9g&v=44wOK9ObAzk | 17:17 |
marun | shettyg: It gets away from l2 entirely, but I think it's interesting to think about providing isolation with edge-only filtering. | 17:18 |
marun | shettyg: Even in the case of l2, though, their suggestion of augmenting the kub pod config to support 'intent' could be useful. | 17:18 |
shettyg | marun: listening | 17:19 |
marun | shettyg: Such that it would allow app developers to indicate who should be able to talk to who without necessarily specifying the implementation | 17:19 |
marun | shettyg: such that either l2 segmentation or l3 isolation could accomplish the configured state | 17:20 |
marun | shettyg: I'm not sure the k8s team would be amenable to supporting that kind of configuration, though. | 17:20 |
marun | shettyg: since my understanding is they want core kub to closely match what they want to support on gke | 17:21 |
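A purely hypothetical sketch of the "intent" idea from the calico talk: the pod config declares who may talk to whom, and the backend is free to realize it with l2 segmentation or l3-only filtering. `network_intent` is an invented key, not a real kubernetes field.

```python
# App developers state intent ("who should be able to talk to who")
# without specifying the implementation mechanism.
pod_spec = {
    "id": "web-frontend",
    "labels": {"tier": "frontend"},
    "network_intent": {
        "allow_from": [{"tier": "loadbalancer"}],
        "allow_to": [{"tier": "api"}],
    },
}
```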
shettyg | marun: In OVN, we will have distributed firewall too. Wherein firewall follows the container interface. | 17:21 |
marun | shettyg: does that require that ovs firewall be ready? | 17:21 |
shettyg | marun: yes. | 17:22 |
marun | shettyg: cool | 17:23 |
shettyg | I am not able to digest that only developers add the firewall rules via pod json, though. I would imagine that it would also be the cloud manager's job. | 17:24 |
marun | shettyg: I'm sure there would have to be a review process for config that was going to be released to production, and non-production would be limited by default. | 17:25 |
marun | shettyg: But I think my desire to trust developers by default may be at odds with how some organizations work. | 17:26 |
marun | (I'm just glad I don't work at those places) | 17:26 |
russellb | +1 | 17:26 |
russellb | and it's kind of anti-cloud | 17:26 |
marun | russellb: I'm starting to realize just how limited we are in the networking world by established practice and convention. | 17:27 |
russellb | don't crush my spirit so soon | 17:27 |
marun | russellb: hah | 17:27 |
marun | russellb: ovn/ovs has the advantage of using l2 precepts that are generally accepted | 17:27 |
marun | russellb: it's not revolutionary in any sense of the word, it's basically virtualized l2 | 17:28 |
marun | russellb: in the same way that virtualized computers are way easier to sell than something like containers, virtualized l2 is an easy sell | 17:28 |
* russellb nods | 17:28 | |
marun | russellb: but when people talk about ditching l2 because we can do more interesting things at the l3-only layer (calico, contrail, facebook, etc), I don't think it's going to become commonplace anytime soon | 17:29 |
marun | russellb: the people in charge will have to retire first | 17:29 |
marun | (or the org has to have competitive pressures and talent to justify the radical moves) | 17:30 |
russellb | and then we've got NFV folks wanting to use this stuff for non-IP traffic | 17:31 |
marun | russellb: yeah, that will definitely persist | 17:31 |
russellb | so at least the virtual L2 solutions serve them too | 17:32 |
marun | russellb: for sure. legacy business will continue for some time | 17:32 |
russellb | but like you said, just part of how the existing world limits things | 17:32 |
russellb | interesting to think about | 17:32 |
marun | russellb: I'm a software engineer, though, not a network engineer. I'm a bit frustrated at how slow things work in the ops-y world. | 17:33 |
russellb | same | 17:33 |
marun | russellb, shettyg: interesting conversation, thank you. I'm off to lunch! | 17:34 |
russellb | thanks marun! hope we can stay in touch on all of this | 17:35 |
marun | russellb: for sure :) | 17:35 |
*** marun has quit IRC | 17:39 | |
*** yapeng has joined #openstack-neutron-ovn | 17:47 | |
*** yapeng has quit IRC | 18:39 | |
*** ajo has quit IRC | 18:47 | |
*** hitalia has joined #openstack-neutron-ovn | 18:58 | |
*** hitalia has quit IRC | 19:00 | |
*** marun has joined #openstack-neutron-ovn | 20:00 | |
*** shettyg has quit IRC | 23:53 |