Friday, 2025-07-11

03:06 <opendevreview> OpenStack Proposal Bot proposed openstack/openstack-ansible master: Imported Translations from Zanata  https://review.opendev.org/c/openstack/openstack-ansible/+/954681
06:02 <opendevreview> Jonathan Rosser proposed openstack/ansible-role-pki master: Generate ca_bundle during cert creation for standalone backend  https://review.opendev.org/c/openstack/ansible-role-pki/+/954628
06:02 <opendevreview> Jonathan Rosser proposed openstack/ansible-role-pki master: Allow certificates to be installed by specifying them by name  https://review.opendev.org/c/openstack/ansible-role-pki/+/954239
07:29 <opendevreview> Merged openstack/openstack-ansible master: Imported Translations from Zanata  https://review.opendev.org/c/openstack/openstack-ansible/+/954681
08:45 <f0o> Hi, I'm poking around stabilizing IPv6 for tenant networks and I'm struggling to wrap my head around how to distribute the tenant networks onto the top-of-rack switches. I figured as much that I need to use OVN-BGP for this, but I'm a bit confused about what is going to run the BGPD. Is it going to be only the hypervisors? Only the gateway hosts? Both? Also, what's the state of
08:45 <f0o> `enable_distributed_ipv6` in [ovn]?
08:45 <f0o> Feels like IPv6 is still very much experimental in OpenStack/Neutron for anything that isn't a vlan/flat network
08:46 <f0o> or at least the documentation is very obscure about it, showing a million ways how things *could* work but no clear indication of what the preferred/reference way is
08:49 <noonedeadpunk> f0o: so I think there are 2 options available (at least)
08:49 <noonedeadpunk> if we are not going back to the discussion of ovn-bgp-agent (which we had some time ago), I'd guess it's worth relying on the old bgp dragent
08:49 <noonedeadpunk> so it would be quite the same setup as for OVS afaik
08:50 <noonedeadpunk> I have not tried that personally with OVN, but I have heard multiple times that it just works
08:50 <noonedeadpunk> we're having IPv6, but with the OVN driver we do have ovn-bgp-agent
08:51 <f0o> my main fear is the placement of an FRR instance on our gateway hosts, which would break those as they host FRR themselves for their own connectivity to the rest of the fabric
08:51 <noonedeadpunk> so ipv6 and ipv4 work quite alike, except that for IPv6 we do have a subnet pool and allocations
08:52 <noonedeadpunk> so FRR should be placed on each gateway node
08:52 <f0o> I have IPv6 working entirely fine without BGP by just using a flat-networked OVN router which then ties into every tenant's vxlan network. But this obviously puts an immense stress on the gateway host, which becomes a singular bottleneck
08:52 <noonedeadpunk> if gateway != compute ofc
08:52 <f0o> ok that's very good to know
08:53 <f0o> then regardless of what I do, I need to move the gateway hosts away from the top-of-rack routers
08:53 <noonedeadpunk> so in our scenario we set FRR on just a couple of nodes which act as a gateway
08:53 <noonedeadpunk> well
08:54 <noonedeadpunk> maybe you can continue doing the same with a flat network... not 100% sure....
08:55 <noonedeadpunk> but regardless of bgp, what you would need to do is create an address scope and subnet pool with an IPv6 prefix in it
08:56 <noonedeadpunk> then you create an external network, or add a subnet from this subnet pool to an existing one (it can be the same as for ipv4)
08:56 <noonedeadpunk> and client routers should allocate their own /64 or smth from this same subnet pool and add it to their internal geneve/vxlan network
08:57 <noonedeadpunk> while connecting their router to the public subnet as the external one, so that the router gets ipv6 as well
08:57 <noonedeadpunk> traffic would still come through the gateway, but you're kinda not limited to a single router, and they are spread across multiple ones.
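[The address-scope / subnet-pool flow described above can be sketched with the standard openstack CLI; all names and the 2001:db8::/48 prefix here are illustrative placeholders, not anything from the log.]

```shell
# Shared IPv6 address scope and a subnet pool that hands out /64s
openstack address scope create --share --ip-version 6 public-v6
openstack subnet pool create --share --address-scope public-v6 \
    --pool-prefix 2001:db8::/48 --default-prefix-length 64 tenant-v6

# External subnet on the existing provider network, from the same pool
openstack subnet create --network public --subnet-pool tenant-v6 \
    --ip-version 6 public-v6-subnet

# Tenant side: a /64 from the same pool on the geneve/vxlan network,
# attached to a router whose gateway is the external network
openstack subnet create --network tenant-net --subnet-pool tenant-v6 \
    --ip-version 6 --ipv6-ra-mode slaac --ipv6-address-mode slaac tenant-v6-subnet
openstack router set --external-gateway public tenant-router
openstack router add subnet tenant-router tenant-v6-subnet
```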
08:58 <f0o> The way out is not the problem here
08:58 <f0o> but the way back in
08:59 <noonedeadpunk> well, thus we announce the router IP and tenant subnet with ovn-bgp-agent
08:59 <f0o> if I have a /48 on vlan123 and set an allocation pool to distribute /64s to tenants, and then create a tenant router which sits on that vlan123 network as well as on the tenant's vxlan432 which now is in charge of 2001:db8:1::/64, how would the fabric know that it is in charge of it and route the traffic to that specific tenant router?
09:00 <f0o> so regardless of how I spin this, it seems I will always need ovn-bgp-agent in some form or shape to make that return path available to my network infrastructure
09:00 <noonedeadpunk> but bgp dragent should be announcing that as well
09:00 <noonedeadpunk> at least we had the exact same setup in OVS with it
09:00 <noonedeadpunk> (and I heard it does work in OVN)
09:01 <f0o> and since the agent needs to sit on the gateway hosts, I will need to migrate those out of the current ToR infrastructure and place them on some new dedicated machines somewhere
09:01 <f0o> or am I thinking this wrong?
09:02 <noonedeadpunk> I think for IPv6 you need BGP in some way or form, yes
09:02 <noonedeadpunk> though this could be the second option, if not ovn-bgp-agent: https://docs.openstack.org/neutron/latest/admin/config-bgp-dynamic-routing.html
09:02 <noonedeadpunk> as you would need to re-do ipv4 as well for it
09:04 <f0o> trying to find a place to upload a diagram to show where my brain-knot is :D
09:04 <f0o> https://imgur.com/a/VWnJ0Cg
09:05 <f0o> so if those two links with `?` become "Gateway Hosts" and "OVN-BGP-Agent" then I know a path forward
09:05 <noonedeadpunk> so with ovn-bgp-agent, the vlan ext net is really a virtual vlan, which is present only inside of OVN
09:06 <noonedeadpunk> so you don't need to have it on the switches
09:06 <f0o> I do for distributed_fip tho
09:06 <noonedeadpunk> instead, you need to figure out a peering vlan or just use a default route for that
09:07 <noonedeadpunk> then, ovn-bgp-agent creates a vrf on the ovn gateway node, where it fakes the vlan as a noop device. And then frr picks up addresses from the VRF for announcement
09:07 <noonedeadpunk> so if you do distributed fip, then I guess you'd need that on all computes actually
09:08 <noonedeadpunk> but also you may apply that not to all vlan networks...
09:08 <noonedeadpunk> you can still have just regular vlans, and announce on the others...
09:08 <noonedeadpunk> but yeah
09:08 <noonedeadpunk> btw, fip is not really applicable for ipv6... not helpful if you wanna have a dual-stack network though
09:09 <f0o> I know, but the fip stuff already made me span the vlan to all computes
09:09 <noonedeadpunk> right...
09:09 <f0o> the dragent; that just needs to run on *some* host and sets a foreign next-hop like a route server, right?
09:10 <noonedeadpunk> well, I mean, if it's only about ipv6 and you don't wanna touch ipv4 - I'd really suggest giving the old dragent a try
09:10 <noonedeadpunk> I think it runs on neutron-api, but not 100% sure tbh....
09:10 <noonedeadpunk> maybe indeed on gateways...
09:10 <f0o> yeah I'm reading the docs right now and I'm intrigued whether it doesn't become its own next-hop
09:10 <f0o> if it can just announce the prefix with the OVN router's interface as the next-hop directly, then that would be absolutely ideal
09:11 <noonedeadpunk> well, the ovn router does not have a directly exposed interface, does it?
09:12 <f0o> it does, because the extnet is a vlan
09:12 <noonedeadpunk> oh, ok, yes...
09:12 <f0o> so I can place the extnet's v6 subnet as a linknet
09:12 <noonedeadpunk> I don't think it does that though tbh...
09:12 <noonedeadpunk> but not sure
09:12 <f0o> then it and the switches (or even the ToR) have direct L2 access
09:12 <f0o> so if I had some BGP speaker that just tells me "2001:db:1::/48 next-hop 2001:db:0::12ab"
09:13 <f0o> where 2001:db:0::12ab is the IP of the tenant router's external interface
09:14 <f0o> although in reality the linknet might just be a ULA instead of a GUA
09:14 <f0o> doesn't really matter whether ULA or GUA in the end. IPv6 is free anyway heh
09:15 <f0o> gonna take a look at the dragent codebase and investigate what it announces as the next-hop and whether I can just patch it to do what I have in mind
09:15 <f0o> thanks for the inputs!
09:16 <f0o> will make a PR if there's anything of value coming out (but first walking the dog)
10:11 <f0o> noonedeadpunk: okay it seems that dragent is spot-on what I want/need to get this to work for me
10:11 <noonedeadpunk> okay, awesome then :)
10:11 <noonedeadpunk> there's code in OSA to make it work, at least for OVS
10:11 <f0o> You don't happen to have a link to the OSA docs for it, do you? :D
10:11 <noonedeadpunk> no idea about ovn, to have that said
10:12 <f0o> I mean if I can just recycle some of it, that would be good enough. The dragent itself doesn't seem to care about OVN/OVS
10:12 <noonedeadpunk> I think it was this, but not sure: https://docs.openstack.org/openstack-ansible-os_neutron/latest/configure-network-services.html#bgp-dynamic-routing-service-optional
10:14 <noonedeadpunk> looking at local config overrides - that indeed seems to be it
10:15 <noonedeadpunk> and yeah, just this line triggers a lot of logic under the curtain: https://opendev.org/openstack/openstack-ansible-os_neutron/src/branch/master/vars/main.yml#L300
10:16 <f0o> gonna have to do some grep'ing
10:16 <opendevreview> Jonathan Rosser proposed openstack/ansible-role-pki master: Allow certificates to be installed by specifying them by name  https://review.opendev.org/c/openstack/ansible-role-pki/+/954239
10:16 <noonedeadpunk> we just added `neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin` to `neutron_plugin_base` and that was it...
10:16 <noonedeadpunk> and then adding speakers through the API
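[The "adding speakers through the API" step maps to the neutron-dynamic-routing CLI; a rough sketch, where speaker/peer names, ASNs and the peer address are placeholders:]

```shell
# Create an IPv6 BGP speaker and peer it with the upstream router
openstack bgp speaker create --ip-version 6 --local-as 64512 v6-speaker
openstack bgp peer create --peer-ip 2001:db8:ffff::1 --remote-as 64513 tor-peer
openstack bgp speaker add peer v6-speaker tor-peer

# Associate the external network so tenant prefixes get picked up
openstack bgp speaker add network v6-speaker public

# Verify what would be advertised (prefix + next-hop)
openstack bgp speaker list advertised routes v6-speaker
```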
10:17 <f0o> yeah it doesn't seem to have any other relation to OVS/OVN in the role either
10:17 <f0o> so that's cool
10:19 <jrosser> i think we do the same, and took some care to ensure that the bgp session between the dragent and the upstream router happens over some interface/route that is set up specifically for that
10:19 <jrosser> usual business of not mixing up the data plane and control plane
10:21 <f0o> jrosser: yeah that's a given :)
10:22 <f0o> need to grep for `network-agent_containers` to find what hosts map to it
10:23 <f0o> but I seem to be too stupid today to find it. hound only shows it in inventory/env.d/neutron.yml where it is defined
10:35 <noonedeadpunk> f0o: there should be no network-agent_containers for OVN by default....
10:38 <f0o> I'm struggling to connect the dots between neutron_bgp_dragent -> network-agent_containers -> network-agent_hosts; I see dragent belongs to agent_containers; but agent_containers and agent_hosts have no connection other than a similar name.
10:38 <noonedeadpunk> this indeed was the group for ovs/lxb
10:39 <noonedeadpunk> yeah, that's the name :)
10:39 <noonedeadpunk> as it's split via [-|_]hosts$
10:39 <noonedeadpunk> same for containers
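[The suffix matching noonedeadpunk describes can be illustrated roughly like this - a standalone approximation of the name split, not the actual dynamic_inventory code:]

```shell
# Strip the "[-_]hosts" / "[-_]containers" suffix to recover the
# component name that ties the two groups together (approximation).
component_of() {
    echo "$1" | sed -E 's/[-_](hosts|containers)$//'
}

component_of network-agent_containers   # network-agent
component_of network-agent_hosts        # network-agent
```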
10:39 <f0o> huh
10:40 <f0o> that explains my brain-knot
10:40 <noonedeadpunk> dynamic_inventory.....
10:40 <noonedeadpunk> but it may add more things as well...
10:40 <noonedeadpunk> (defining the group)
10:41 <f0o> so the safest is to create an own group that includes neutron_bgp_dragent and nothing else?
10:41 <noonedeadpunk> yeah
10:41 <f0o> or tbh I might just add neutron_bgp_dragent to the network-infra_containers group
10:41 <f0o> because those would make the most sense to have them
10:41 <noonedeadpunk> or that
10:42 <noonedeadpunk> but it's lxc
10:42 <noonedeadpunk> or well
10:42 <noonedeadpunk> depending on if you use lxc
10:42 <f0o> current network-infra is also lxc
10:42 <f0o> so should be fine
10:42 <noonedeadpunk> I don't think it is?
10:42 <noonedeadpunk> I mean - network-agent is not
10:43 <f0o> network-infra_* is
10:43 <noonedeadpunk> https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/env.d/neutron.yml#L72-L73
10:43 <noonedeadpunk> yes, but neutron_bgp_dragent was on agents
10:43 <f0o> yeah but I can change that :D
10:43 <noonedeadpunk> and agents are kind of gateway nodes on ovn
10:43 <f0o> to infra
10:44 <noonedeadpunk> unless it announces only what is locally served
10:44 <noonedeadpunk> but dunno
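[For the "own group" route, an env.d override could look roughly like this. This is an untested sketch: the file name and container name are made up, and the skeleton keys only mirror the shape of inventory/env.d/neutron.yml - verify against your OSA version before using it.]

```yaml
# /etc/openstack_deploy/env.d/neutron_bgp.yml - hypothetical sketch
component_skel:
  neutron_bgp_dragent:
    belongs_to:
      - neutron_all
container_skel:
  neutron_bgp_dragent_container:
    belongs_to:
      - network-infra_containers
    contains:
      - neutron_bgp_dragent
```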
10:44 <f0o> I'm just re-re-reading the ovn-bgp-agent docs, to make sure I didn't miss some detail last time. Because dragent also took me 2-3x reading carefully to realize it has the correct next-hop (in the way that I want/need them)
10:45 <f0o> specifically https://docs.openstack.org/ovn-bgp-agent/latest/contributor/drivers/bgp_mode_stretched_l2_design.html
10:45 <f0o> because this reads very similar to dragent
10:47 <noonedeadpunk> it probably is very similar
10:47 <noonedeadpunk> the problem with that specific one is that I think it's SB DB oriented
10:48 <noonedeadpunk> and that is not really reliable, as ovn does not intend the SB DB to be used directly, so they tend to change its api/behavior without much of a grace period
10:49 <f0o> so I should aim for the NB DB?
10:50 <noonedeadpunk> with ovn-bgp-agent - yes
11:05 <f0o> noonedeadpunk: https://imgur.com/a/8MCLvkh does this make sense?
11:06 <f0o> it's getting messy with all the layers piled on top of each other
11:07 <f0o> maybe I should clarify internal and external IPs of the router; right now it's mixed up
11:07 <noonedeadpunk> eh, probably it makes...
11:07 <noonedeadpunk> but I will neither confirm nor deny it, as I'm a bit afraid of misleading you now
11:08 <f0o> :D
11:13 <f0o> Although... a TenantRouter will only ever use _1_ GatewayNode, right? it doesn't do active-active DVR, right?
11:14 <noonedeadpunk> no, it does not
11:14 <noonedeadpunk> so that is correct
11:14 <f0o> v6 simplifies and complicates things so much
11:14 <noonedeadpunk> oh yes
11:15 <f0o> and I guess ovn-bgp-agent does not do ECMP?
11:15 <f0o> so if I were to go full bloated ovn-bgp-agent, ripping out the gateway nodes; would that implement active-active DVR through ECMP? like how VARP works
11:17 <f0o> because dragent might solve the flow, but it pins it on one host which will likely just get overwhelmed eventually
11:18 <f0o> so here I'm back at ovn-bgp-agent, if that implements ECMP active-active flows, as in all 3 gateway nodes advertise the tenant network
11:18 <noonedeadpunk> um. so OVN does have some internal logic for distribution of routers across gateway hosts
11:19 <noonedeadpunk> but if each compute would act as a gateway node - then it's unlikely kinda
11:20 <f0o> I've seen vswitchd easily eat up 100% cpu a few times
11:20 <f0o> feels like a bad idea to place this on compute nodes
11:21 <f0o> just checked, 149% CPU for ovs-vswitchd on one GW node
11:22 <noonedeadpunk> huh
11:22 <f0o> so while I would spread the routers out more, it still wouldn't reduce the load of vswitchd. I can say that this 150% is just one tenant router doing ~15Gbit/s
11:22 <f0o> so this 150% would remain, just be unavailable to tenants and show as cpu-steal
11:24 <f0o> ironically this is also 150% just in v6 traffic haha
11:24 <f0o> which is why it's back on my prio list
11:25 <noonedeadpunk> yeah, ok, we probably don't have such throughput from a single customer
11:25 <noonedeadpunk> likely due to QoS rules :D
11:29 <f0o> at this point it feels like just making 2 interfaces - one tenant-networked for v4 and fips, one public vlan-networked for v6 - is the most scalable approach
11:29 <f0o> because then the switches do all the routing and they can do (W)ECMP :|
11:30 <f0o> but that's horrible UX
11:30 <f0o> and here I am back at the beginning, circling around
11:31 <f0o> oh well, Friday 1:30pm in the Swedish summer. Might as well just finish up and call it a day in a few to grab a beer with the others
11:32 <noonedeadpunk> oh yes, it's almost midsommar
11:33 <f0o> Midsommar was three-ish weeks ago
11:34 <f0o> idk if that's the same as the meteorological one
11:34 <f0o> or is it astronomical? too many words
11:43 <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-os_glance master: Use 'name' to specify SSL certificates to the PKI role  https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/954269
11:48 <noonedeadpunk> ah, doh, indeed
11:49 <noonedeadpunk> anyway, the time of 5-week vacations has already started :D
11:49 <f0o> indeed
11:49 <f0o> there's nobody around the office anymore
11:53 <opendevreview> Merged openstack/openstack-ansible-ceph_client master: tox: Remove ineffective ignore_basepython_conflict and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-ceph_client/+/952292
11:53 <opendevreview> Merged openstack/openstack-ansible-ceph_client master: [doc] correct using of code-block directive  https://review.opendev.org/c/openstack/openstack-ansible-ceph_client/+/954199
12:10 <f0o> noonedeadpunk: do you still have half a braincell? I have tossed the whole ovn-bgp documentation through an LLM and it came up with something that I need to have double-checked, because it sounds `too good to be true` but also somewhat contradicts what I believed about how OVN handles router ports
12:11 <f0o> https://paste.opendev.org/show/bT8xvJ1UW4aUSU4CcwCg/
12:13 <f0o> because this claims that I can get ECMP _ingress_ by default with the ovn-bgp-agent as it announces each gateway node as a potential target; and I can get _egress_ ECMP by simply attaching the extnet multiple times
12:13 <f0o> I can live with this
12:15 <noonedeadpunk> eh
12:15 <f0o> I know right :D
12:16 <noonedeadpunk> So, I am not sure about all drivers/exposure methods
12:16 <noonedeadpunk> but what ovn-bgp-agent generally does is check where a port is bound, and announce it only from that node
12:17 <noonedeadpunk> I think that this is the part you can't really do?
12:17 <noonedeadpunk> `Your tenant router (`TenRt`) must be configured with multiple gateway ports scheduled across your three gateway nodes`
12:17 <f0o> right, but if I can get each GatewayNode to host a port for the tenant router by allegedly just giving it multiple extnet interfaces, then it should be fine, right?
12:17 <noonedeadpunk> ah yes, if there are 3 external gateway ports, each of them will be announced
12:18 <noonedeadpunk> but the neutron api regarding multiple external gateways on a router is completely borked right now
12:18 <f0o> oh
12:18 <f0o> so it's a "yes in theory but not right now" kind of thing?
12:18 <noonedeadpunk> and even then I'm not sure if it would be 3 ports
12:19 <noonedeadpunk> and then there's no way to ensure these ports would be on different gateway nodes
12:19 <noonedeadpunk> as with an ovn-controller stop/start they would get shuffled...
12:19 <f0o> :/
12:19 <noonedeadpunk> well.. you can kind of drop the other nodes from the target list with ovn-nbctl...
12:20 <noonedeadpunk> as you don't need it to be moved...
12:20 <noonedeadpunk> but dunno
12:20 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-ceph_client master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-ceph_client/+/954202
12:20 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-ceph_client master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-ceph_client/+/954202
12:21 <noonedeadpunk> so like in theory you might be able to get to a point of it working...
12:21 <noonedeadpunk> but it would not be api-driven for sure at this point
12:21 <noonedeadpunk> imo
12:21 <f0o> sad
12:22 <f0o> would be great to hear from somebody who runs v6 in OpenStack at scale to see what the heck they do... because it seems like every supported way has its own set of scaling problems
12:23 <noonedeadpunk> we unfortunately or fortunately don't have that much traffic from a single tenant, so yeah :(
12:24 <f0o> I'm just thinking, how do you even go about handling upwards of 100G in this situation
12:25 <f0o> we do 30G+ of v4 no problem, but 10-15G of v6 clogs the gateway node the router happens to be assigned to
12:25 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-haproxy_server master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/954720
12:25 <noonedeadpunk> f0o: I'm frankly confused about why this happens
12:25 <noonedeadpunk> as ipv4 should be waaaaaay more heavy in terms of CPU
12:25 <noonedeadpunk> as you need to do nat
12:26 <noonedeadpunk> while ipv6 is routing, which should be very fast
12:26 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-haproxy_server master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/954720
12:26 <f0o> v4 is just dropped from the compute node onto the vlan extnet and from there the switch just handles it
12:26 <noonedeadpunk> oh, ok, so it's ipv4 without FIPs and a router in between, I see
12:26 <f0o> but v6 is moved to the gateway node and then idk what it does before it puts it on the wire
12:27 <f0o> is there an ovs-ctl command I can run to see what is consuming CPU?
12:27 <noonedeadpunk> like in OVN ideally there's just 1 route which propagates to openflow
12:27 <noonedeadpunk> it should be nothing for it..
12:28 <f0o> https://paste.opendev.org/show/bTK8LawWrDJWn27sfqpi/ only know this one
12:28 <noonedeadpunk> um, not sure about such a command
12:30 <f0o> `ovs-appctl dpctl/dump-conntrack | grep 2a0d:bbc7 | wc -l` => 23621
12:31 <f0o> excluding 2a0d:bbc7 (our v6) => 72656
12:34 <noonedeadpunk> well - kinda not that much?
12:34 <noonedeadpunk> or well, 25%, which I think is a fair distribution?
12:35 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_container_create master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/954721
12:36 <f0o> noonedeadpunk: so I have the same numbers on rt2, but there ovs-vswitchd runs at 10% CPU
12:36 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_container_create master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/954721
12:36 <f0o> 9.4% even
12:36 <f0o> but on rt1 with similar conntrack it is at 150%
12:36 <f0o> probably because it is running most of the openstack routers
12:37 <f0o> yeah, `ovs-appctl coverage/show | grep -e put -e xlate` shows barely any translations happening on rt2
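[A few other stock ovs-appctl / procps views that can help compare the two nodes; nothing OVN-specific, just standard OVS tooling run on each gateway:]

```shell
# Upcalls handed to userspace - high counts mean lots of slow-path work
ovs-appctl upcall/show

# Datapath flow count - a churning table forces constant re-translation
ovs-appctl dpctl/dump-flows | wc -l

# Per-thread CPU of ovs-vswitchd, to see handler vs revalidator load
top -b -n 1 -H -p "$(pidof ovs-vswitchd)" | head -n 25
```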
13:01 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_container_create master: Remove outdate task - Create container (cow)  https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/954725
13:01 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_container_create master: wip  https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/954725
13:02 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_container_create master: wip  https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/954725
13:04 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_hosts master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/954726
13:11 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_hosts master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/954726
13:18 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-memcached_server master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-memcached_server/+/954728
13:19 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-memcached_server master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-memcached_server/+/954728
13:20 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-memcached_server master: tox: Remove ineffective ignore_basepython_conflict and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-memcached_server/+/954729
13:22 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-memcached_server master: tox: Remove ineffective ignore_basepython_conflict and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-memcached_server/+/954729
13:23 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_hosts master: tox: Remove ineffective ignore_basepython_conflict and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/954730
13:24 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_hosts master: tox: Remove ineffective ignore_basepython_conflict and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/954730
13:25 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_hosts master: tox: Drop basepython (used only python3) and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/954730
13:26 <f0o> noonedeadpunk: If I add more gateway nodes, is that a seamless experience or do I need to do something for them to start taking workloads?
13:26 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_container_create master: tox: Remove ineffective ignore_basepython_conflict and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/954731
13:27 <f0o> because I'm thinking of adding 3 and removing the 2 that I have now. I don't want to end up in a situation where the current LRPs are not being assigned to the 3 new nodes when both old ones vanish
13:27 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_container_create master: tox: Remove ineffective ignore_basepython_conflict and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/954731
13:27 <noonedeadpunk> eh
13:27 <f0o> I have a deja vu about having asked this before...
13:27 <noonedeadpunk> I'd guess it should be just added... but that is a good question
13:28 <noonedeadpunk> as once you asked that - I realized I don't know for sure
13:28 <f0o> heh
13:28 <noonedeadpunk> as we did not have a need to scale our OVN region yet
13:28 <noonedeadpunk> and indeed, the mapping of gateway nodes to lrps is stored in the nb db
13:29 <noonedeadpunk> the new node should be trivial to add ofc...
13:29 <noonedeadpunk> but whether that would be handled by neutron - no idea
13:29 <f0o> guess I'll find out
13:30 <f0o> I faintly remember how to add chassis to LRPs manually
13:30 <f0o> should have a virtual post-it (AKA notepad tab) somewhere
13:30 <noonedeadpunk> `ovn-nbctl lrp-set-gateway-chassis lrp-$uid $chassis $prio`
13:31 <f0o> YAS!
13:31 <f0o> <3
13:31 <f0o> will just add the new gateways, then just stop ovs-vswitchd on the old ones and see what breaks
13:31 <noonedeadpunk> you can also find all lrps bound to an existing chassis like this: ovn-sbctl --no-leader-only --format json --columns=options find Port_Binding chassis=${CHASSIS} | jq -r '.data[][][1][] | select(.[0] == "distributed-port") | .[1]'
13:31 <noonedeadpunk> then with a bit of bash....
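[Stitched together, the two commands above could become something like this. Hypothetical sketch only: the chassis names and priority are placeholders, and it assumes the `distributed-port` value from the find is the name `lrp-set-gateway-chassis` expects - dry-run the ovn-nbctl line before looping.]

```shell
# Re-pin every distributed router port bound to an old chassis onto a
# new gateway chassis.
OLD_CHASSIS=rt1
NEW_CHASSIS=gw1
for lrp in $(ovn-sbctl --no-leader-only --format json --columns=options \
      find Port_Binding chassis="${OLD_CHASSIS}" \
    | jq -r '.data[][][1][] | select(.[0] == "distributed-port") | .[1]'); do
  # priority 100: prefer NEW_CHASSIS for this router port
  ovn-nbctl lrp-set-gateway-chassis "${lrp}" "${NEW_CHASSIS}" 100
done
```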
13:32 <f0o> TIL that ovn-sbctl can provide JSON
13:32 <noonedeadpunk> yeah, it can
13:32 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-openstack_hosts master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/954732
13:32 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-openstack_hosts master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/954732
13:33 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-openstack_hosts master: tox: Remove ineffective ignore_basepython_conflict and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/954733
13:38 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-openstack_hosts master: tox: Remove ineffective ignore_basepython_conflict and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/954733
13:41 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-openstack_openrc master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-openstack_openrc/+/954734
13:41 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-openstack_openrc master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-openstack_openrc/+/954734
13:43 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-openstack_openrc master: tox: Remove ineffective ignore_basepython_conflict and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-openstack_openrc/+/954735
13:44 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-openstack_openrc master: tox: Remove ineffective ignore_basepython_conflict and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-openstack_openrc/+/954735
13:49 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-repo_server master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/954736
13:49 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-repo_server master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/954736
13:50 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-repo_server master: tox: Remove ineffective ignore_basepython_conflict and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/954737
13:51 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-repo_server master: tox: Remove ineffective ignore_basepython_conflict and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/954737
13:53 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-plugins master: Remove outdate file manual-test.rc  https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/953969
13:54 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-plugins master: tox: Remove ineffective ignore_basepython_conflict and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/954740
13:55 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-plugins master: tox: Remove ineffective ignore_basepython_conflict and bump minimum version  https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/954740
14:57 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_container_create master: wip  https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/954725
17:12 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_container_create master: wip  https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/954725
17:14 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_container_create master: wip  https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/954725
20:51 <opendevreview> Ivan Anfimov proposed openstack/openstack-ansible-lxc_container_create master: B flag replaced to backingstorage  https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/954725

Generated by irclog2html.py 4.0.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!