Wednesday, 2024-10-16

00:56 <opendevreview> Takashi Kajinami proposed openstack/os-vif master: Clean up Windows support  https://review.opendev.org/c/openstack/os-vif/+/932436
02:17 <opendevreview> Takashi Kajinami proposed openstack/os-vif master: Drop kuryr-kubernetes job  https://review.opendev.org/c/openstack/os-vif/+/932474
03:56 <opendevreview> Liushy proposed openstack/neutron master: Only consider the IPv4 subnets when creating a FIP  https://review.opendev.org/c/openstack/neutron/+/932405
06:01 <gsamfira> ralonsoh skrobul: I am ready whenever you folks are.
06:08 <ralonsoh> gsamfira, hello
06:08 <ralonsoh> can you post the LP bug id?
06:08 <gsamfira> yup. One sec
06:08 <ralonsoh> https://bugs.launchpad.net/neutron/+bug/1995078
06:08 <ralonsoh> found it
06:09 <gsamfira> there's also this one: https://bugs.launchpad.net/neutron/+bug/2084536
06:09 <gsamfira> you mentioned last time we should have one that states we need to use internal VLAN networks connected to virtual routers
06:09 <gsamfira> as a use case
06:10 <ralonsoh> the second is a bit different
06:10 <ralonsoh> in the second, the network that is connected to the baremetal node is the internal one
06:10 <ralonsoh> which one are you interested in?
06:11 <gsamfira> Yes. But it will probably help the first one as well.
06:11 <gsamfira> That bug is how I ended up opening the second
06:11 <ralonsoh> no, the second is for routed traffic
06:12 <ralonsoh> if you use the same scheduler for GW ports and external ports, that is using HA_Chassis_Group, the problem is solved
06:12 <gsamfira> hmm
06:12 <gsamfira> even if we connect multiple VLAN networks to a router that has an external port set?
06:12 <ralonsoh> that is different
06:13 <gsamfira> ahh, right
06:13 <ralonsoh> in the 1st LP, the VLAN network is the GW network
06:13 <gsamfira> correct
06:13 <gsamfira> they are indeed different
06:14 <gsamfira> I am willing to work on the first one as well as the second one. But we need to make sure that the second one is something that is acceptable to have in neutron
06:14 <gsamfira> I can detail our user story if it helps
06:15 <ralonsoh> the problem in LP#2084536 is that 1) I don't know how OVN will react to binding an internal interface (I need to ask the core OVN folks about this) and 2) if you bind the internal interface to a node, the GW port must be in this node too
06:15 <ralonsoh> because, if I'm not wrong, what you need is to route the baremetal nodes' traffic
06:15 <gsamfira> yes
06:16 <gsamfira> we did this in our lab
06:16 <ralonsoh> if you send the traffic of the baremetal nodes to one node and then you SNAT this traffic in another, then you won't have any benefit
06:16 <ralonsoh> the SNAT should be done in the same GW chassis too
06:17 <ralonsoh> but my first issue, and I need to ask this first, is what the effect of binding an internal interface is
06:17 <gsamfira> as far as I can tell, there is no SNAT. It's just L3 routing. The difference when binding the internal router port to the same HA chassis group as the network is that, for external ports at least, traffic will go through the chassis where the port is bound
06:17 <gsamfira> for VMs, the distributed router port feature still works
06:18 <gsamfira> if we have an external port attached to the router, the traffic from the internal VLAN network is redirected from the chassis where the internal port is bound to the chassis where the external port is bound
06:18 <gsamfira> and then NAT takes place
06:18 <gsamfira> and this happens when you need to communicate with a network not directly connected to the router
06:19 <gsamfira> basically the router sending the traffic to its default GW
06:19 <gsamfira> this essentially mimics how neutron-gateway works
06:19 <gsamfira> via the veth pairs and the qrouter netns
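[editor's note] In plain OVN terms, pinning a logical router port to a chassis the way the discussion above describes can be done with `ovn-nbctl lrp-set-gateway-chassis`; the resulting binding appears as a chassis-redirect (`cr-`) port in the southbound database. This is only a sketch, not the proposed patch: the port and chassis names are made up, and the commands are echoed rather than executed so the snippet runs without a live OVN deployment.

```shell
# Sketch only: pin an internal VLAN router port to a chassis.
# Names below (lrp-internal-vlan, chassis01.cloud) are hypothetical.
LRP="lrp-internal-vlan"     # logical router port attached to the VLAN network
CHASSIS="chassis01.cloud"   # chassis that should answer ARP for this port
PRIORITY=20

# On a live OVN deployment you would run these; here they are only printed:
echo "ovn-nbctl lrp-set-gateway-chassis ${LRP} ${CHASSIS} ${PRIORITY}"
# The binding shows up as a chassis-redirect port (cr-<lrp>) in the SB DB:
echo "ovn-sbctl find Port_Binding logical_port=cr-${LRP}"
```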
06:20 <gsamfira> If you'd like, I can share my screen and walk through the setup
06:20 <gsamfira> if you're willing to join a call
06:20 <ralonsoh> not necessary, it is better if you share this info in the LP bug
06:20 <gsamfira> sure!
06:20 <ralonsoh> my question is about the VLAN traffic
06:20 <ralonsoh> if we have other VMs in this VLAN network
06:21 <ralonsoh> I don't know if the traffic for these VMs must go to the chassis with the internal interface bound
06:21 <ralonsoh> instead of being redirected directly to the chassis with the other VM in the geneve network
06:21 <gsamfira> I can do a packet capture on the compute nodes and confirm whether the distributed port feature works, i.e. the vrouter MAC is replaced with the local bridge MAC and put on the wire
06:22 <gsamfira> without redirecting to the gw chassis
06:22 <ralonsoh> in your LP bug, there should not be a GW
06:22 <ralonsoh> only two internal networks
06:23 <ralonsoh> right?
06:23 <gsamfira> Yes, but attaching a GW also needs to work. And it does
06:23 <gsamfira> I will spend today doing some ovn-trace
06:23 <gsamfira> and packet captures
06:23 <gsamfira> will include everything in the LP bug
06:24 <ralonsoh> in the case of having a GW network, the GW port must be located in the same chassis
06:24 <ralonsoh> as the VLAN internal interface
06:24 <gsamfira> it seems it does not need to be for things to work. The GW chassis can be on one set of GW chassis/HA chassis group, while the internal VLAN port can be on another
06:24 <gsamfira> if both are bound
06:25 <ralonsoh> ok, I'll try to check this this week
06:25 <gsamfira> and it kind of makes sense. Once they're bound, the chassis to which they're bound will respond to ARP
06:25 <gsamfira> and this being VLAN, it works like any normal L2 network
06:26 <ralonsoh> I don't understand the last part, who replies to ARP?
06:26 <ralonsoh> what IP?
06:27 <ralonsoh> ah, this chassis in particular is capturing the ARPs and replying, ok
06:27 <gsamfira> the internal router port IP. If we have 192.168.100.0/24, which we attach to a router, the router interface IP will probably be 192.168.100.1
06:27 <gsamfira> yup
06:27 <gsamfira> if that interface is bound to chassis01.cloud
06:27 <gsamfira> then chassis01.cloud will reply with the MAC address of the port
06:28 <gsamfira> the internal router port
06:28 <gsamfira> it's VLAN, so this will go on the physnet to which the VLAN is mapped
06:28 <gsamfira> and everything works
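[editor's note] The addressing in gsamfira's example ("the router interface IP will probably be 192.168.100.1") follows Neutron's default of giving the router interface the first usable address of the subnet. A quick stdlib check of that, nothing more:

```python
import ipaddress

# The subnet from the example above; the router interface typically takes
# the first usable host address (Neutron's default, though any free IP
# in the subnet can be chosen instead).
subnet = ipaddress.ip_network("192.168.100.0/24")
router_ip = next(subnet.hosts())   # first usable host

print(router_ip)           # 192.168.100.1
print(router_ip in subnet) # True
```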
06:28 <gsamfira> the same thing happens in external networks. The upstream GW needs to communicate with the external port of the virtual router
06:28 <gsamfira> the internal VLANs are no different when it comes to neutron
06:29 <gsamfira> the only difference is that we have a BM node, not an upstream router
06:29 <gsamfira> but conceptually they are the same
06:29 <ralonsoh> ok, I don't have baremetal nodes but I don't need them. Actually, my main concern is the other VMs' traffic with the interface bound
06:29 <gsamfira> the internal VLANs are no different when it comes to neutron --> the internal VLANs are no different when it comes to ironic
06:30 <gsamfira> fair enough. I can do some packet captures, traces and flow dumps to confirm nothing undesirable happens
06:30 <ralonsoh> yeah, the main issue I want to check is the following scenario:
06:31 <ralonsoh> you have the internal interface bound to chassis 1, a VLAN VM in chassis 2 and a geneve VM in chassis 3
06:31 <ralonsoh> if you route traffic from the VLAN VM to the geneve VM, this traffic must not go to chassis 1
06:32 <ralonsoh> that's my only concern (so far)
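[editor's note] ralonsoh's scenario is the kind of path that `ovn-trace` can walk through the logical pipeline. A hedged sketch of what such an invocation might look like: every datapath, port, MAC, and IP below is a placeholder, and the command is echoed rather than executed so the snippet runs without an OVN deployment.

```shell
# Sketch only: trace routed traffic from the VLAN VM (chassis 2) to the
# geneve VM (chassis 3); the trace output would show whether the flow is
# redirected toward the internal interface's chassis (chassis 1).
DATAPATH="net-vlan"               # logical switch of the VLAN VM (hypothetical)
INPORT="vlan-vm-port"             # logical port of the VLAN VM (hypothetical)
ROUTER_MAC="00:00:5e:00:01:01"    # MAC of the router interface (hypothetical)
SRC="192.168.100.10"              # VLAN VM address (hypothetical)
DST="10.0.0.10"                   # geneve VM address behind the router (hypothetical)

# Printed rather than executed so the sketch runs anywhere:
echo "ovn-trace ${DATAPATH} 'inport == \"${INPORT}\" && eth.dst == ${ROUTER_MAC} && ip4.src == ${SRC} && ip4.dst == ${DST} && ip.ttl == 64'"
```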
06:32 <gsamfira> ack!
06:32 <gsamfira> noted
06:32 <gsamfira> as far as I can tell, it does, but I will do some sleuthing to confirm with flows, tcpdump, etc
06:33 <gsamfira> and I will add it to the LP bug
06:34 <gsamfira> it's in our best interest to get this to work. This cloud will probably reach around 800 BM nodes by mid next year. OVN is desirable :D
06:35 <gsamfira> thanks ralonsoh!
06:35 <ralonsoh> yw
07:41 <opendevreview> Merged openstack/neutron stable/2023.2: Revert "[OVN] Prevent Trunk creation/deletion with parent port bound"  https://review.opendev.org/c/openstack/neutron/+/921607
07:41 <opendevreview> Merged openstack/neutron stable/2023.1: Revert "[OVN] Prevent Trunk creation/deletion with parent port bound"  https://review.opendev.org/c/openstack/neutron/+/921608
08:39 <opendevreview> Liushy proposed openstack/neutron master: Only consider the IPv4 subnets when creating a FIP  https://review.opendev.org/c/openstack/neutron/+/932405
08:41 *** liuxie is now known as liushy
09:57 <liushy> hi folks, please feel free to review: https://review.opendev.org/c/openstack/neutron/+/932405   ^-^
10:07 <ralonsoh> liushy, you need some testing there. You have, in tests.unit.objects.test_subnet, some tests on ``test_find_candidate_subnets``
10:07 <ralonsoh> you should improve them to consider this new scenario
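[editor's note] The rule the patch under review enforces ("Only consider the IPv4 subnets when creating a FIP") boils down to filtering IPv6 subnets out of the candidate list. This is not Neutron's actual implementation, just a minimal stdlib sketch of that rule; the function name mirrors the test mentioned above but is otherwise hypothetical:

```python
import ipaddress

def find_candidate_subnets(cidrs):
    """Return only the IPv4 CIDRs from a list of subnet CIDRs.

    Sketch of the selection rule only -- Neutron operates on subnet DB
    objects, not plain CIDR strings.
    """
    return [c for c in cidrs if ipaddress.ip_network(c).version == 4]

subnets = ["10.0.0.0/24", "2001:db8::/64", "192.0.2.0/26"]
print(find_candidate_subnets(subnets))  # ['10.0.0.0/24', '192.0.2.0/26']
```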
10:08 <liushy> ralonsoh, OK
10:42 <opendevreview> Liushy proposed openstack/neutron master: Only consider the IPv4 subnets when creating a FIP  https://review.opendev.org/c/openstack/neutron/+/932405
11:43 <opendevreview> Dmitriy Rabotyagov proposed openstack/neutron master: [OVN] Do not supply gateway_port if it's not binded to chassis  https://review.opendev.org/c/openstack/neutron/+/931495
12:01 <opendevreview> Liushy proposed openstack/neutron master: Only consider the IPv4 subnets when creating a FIP  https://review.opendev.org/c/openstack/neutron/+/932405
12:18 <opendevreview> Merged openstack/neutron stable/2024.1: "security_group_rules" is not a SG selectable field  https://review.opendev.org/c/openstack/neutron/+/932333
12:59 <opendevreview> Merged openstack/neutron stable/2024.2: "security_group_rules" is not a SG selectable field  https://review.opendev.org/c/openstack/neutron/+/932331
12:59 <opendevreview> Merged openstack/os-vif master: Remove Python 3.8 support  https://review.opendev.org/c/openstack/os-vif/+/931480
13:27 <opendevreview> Liushy proposed openstack/neutron master: Only consider the IPv4 subnets when creating a FIP  https://review.opendev.org/c/openstack/neutron/+/932405
14:26 <ralonsoh> haleyb, hello! https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/930743 is so big (in testing terms) that it will never pass the CI
14:27 <ralonsoh> I'm going to divide it into chunks (changing one job in each) in order to pass the CI
14:27 <ralonsoh> it's a shame for us to have an unstable CI...
14:38 <haleyb> ralonsoh: ack, we definitely see some issues there
14:38 <haleyb> with the -em of antelope we can maybe remove the 2023.1 jobs from there, not that i think they failed
15:01 <opendevreview> Rodolfo Alonso proposed openstack/neutron-tempest-plugin master: [WSGI] Move all OVN jobs to use WSGI API module (1)  https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/932520
15:07 <opendevreview> Rodolfo Alonso proposed openstack/neutron-tempest-plugin master: [WSGI] Move all OVN jobs to use WSGI API module (2)  https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/932524
15:19 <opendevreview> Brian Haley proposed openstack/neutron-tempest-plugin master: Stop running 2023.1 (Antelope) jobs in check/gate  https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/932527
18:36 <opendevreview> Brian Haley proposed openstack/neutron-tempest-plugin master: Fix trunk transition waiting message  https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/932549
19:26 <msaravan> Hi Network Gurus!! This is Saravanan from NetApp. I have a query on the latest devstack's OVN configuration. I find that a ping to the floating IP works from the same host, but not from any external entity. I wanted to have a floating IP in the same network as the one where the devstack instance resides.
19:27 <msaravan> The command outputs and all other details are captured here:
19:27 <msaravan> https://paste.opendev.org/show/b7EbXTrdf4HCH2GCfYKW/
19:28 <msaravan> Can you please help me fix this issue?
19:31 <msaravan> It used to work fine when OVS was in place in older openstack releases. Now we are on the latest (Dalmatian), and as we are using OVN here, I am not sure what is missing and why connectivity from the external network to the VM is not working.
20:23 <haleyb> msaravan: it sounds to me like a network infrastructure issue, or a missing route on the source host?
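[editor's note] haleyb's suggestion (infrastructure issue or a missing route on the source host) can be narrowed down with a few standard tools run on the external host. A minimal sketch, assuming a placeholder floating IP of 203.0.113.50 and interface eth0; the commands are echoed rather than executed so the snippet runs anywhere:

```shell
# Sketch only: triage "FIP pings locally but not from outside".
FIP="203.0.113.50"   # hypothetical floating IP

# Printed rather than executed; run them for real on the source host:
echo "ip route get ${FIP}"           # which route/interface does the host use?
echo "arping -I eth0 ${FIP}"         # does anything answer ARP for the FIP?
echo "tcpdump -nei eth0 host ${FIP}" # do requests leave, and do replies return?
```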
20:24 <haleyb> cardoe: you around? maybe we can pick an interlock time?
20:24 <cardoe> Yep
20:24 <cardoe> I think I saw we had some overlapping slots on Wednesday.
20:26 <haleyb> i had booked tue/wed/thu/fri as placeholders, but sometime Wednesday would work
20:35 <cardoe> So for me 1400 UTC or later is easiest.
20:38 <cardoe> I got 1 other nod on that time from the channel if that sounds okay.
20:43 <haleyb> 1400 UTC works
20:47 <haleyb> cardoe: argh, actually there is an eventlet-removal discussion at 1400 so that won't work
20:47 <haleyb> would have to be 1500 or later
20:47 <cardoe> 1500 is fine
20:58 *** gmann is now known as gmann_afk
21:41 *** gmann_afk is now known as gmann_
21:41 *** gmann_ is now known as gmann
21:59 <cardoe> maybe gmann will help me to get that set up correctly in the events

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!