Sunday, 2021-10-24

opendevreview: yangjianfeng proposed openstack/neutron master: Improve Router callback system's publish events  https://review.opendev.org/c/openstack/neutron/+/804846  09:17
opendevreview: yangjianfeng proposed openstack/neutron master: [Server Side] L3 router support ndp proxy  https://review.opendev.org/c/openstack/neutron/+/743142  09:17
opendevreview: yangjianfeng proposed openstack/neutron master: [Agent Side] L3 router support ndp proxy  https://review.opendev.org/c/openstack/neutron/+/744815  09:17
EugenMayer: Hello. All my compute (4) and controller (1) nodes are connected via 2 vswitches (provider networks using 2 separate VLANs). On each node I configured the interfaces, e.g. ens9s0.4001 / ens9s0.4002, and the networking is set up and working. One network is for management of the cluster (VIP). The other should be a flat provider network for all the VMs hosted on the compute nodes.  13:31
EugenMayer: AFAIU I need to configure a provider network (not sure I need to do this for the management network, but at least for the "common VM LAN network"). I'm very new to OpenStack and having trouble understanding how this is supposed to happen, thus a couple of questions:  13:33
EugenMayer: a) The neutron service (server?), when deployed using kolla - where does it usually go? I have put it on the controller. Is that about right, or does it need to be on each compute node (I suppose those have agents only)?  13:34
EugenMayer: b) When configuring the "provider network" for the "VM LAN", how do I do that cleanly? Do I remove the existing ens9s0.4002 which I created using ENI and set it up entirely from scratch using OpenStack tooling? Or do I rather create a bridge on that interface which I then define as a provider network using OpenStack?  13:35
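For question b), with a kolla deployment the usual pattern is not to build the bridge by hand: the VLAN sub-interface stays configured but without an IP address, and it is handed to kolla-ansible as neutron_external_interface, which attaches it to the provider bridge (br-ex by default). A minimal globals.yml sketch, assuming kolla-ansible defaults; the VIP address and the agent choice are assumptions, not values from the log:

    # /etc/kolla/globals.yml (sketch)
    network_interface: "ens9s0.4001"          # management VLAN carrying API/VIP traffic
    kolla_internal_vip_address: "10.0.1.250"  # hypothetical VIP on the management network
    neutron_plugin_agent: "openvswitch"       # or "linuxbridge" / "ovn"
    neutron_external_interface: "ens9s0.4002" # left unaddressed; kolla plugs it into the provider bridge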
EugenMayer: c) The configuration of the provider network - does this happen on the controller and is then somehow propagated to the compute nodes, or do I configure it on each compute node? All compute nodes have the same interface names, but the controller has different ones (ens18 / ens19), so I cannot imagine that OpenStack handles that automatically. I would assume I declare a "provider network" named "X" on the controller, configure which existing interface (ens19) provides it, and then configure each compute node like "you are attached to X via interface Y" - is that the way to go?  13:37
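On question c), the split is: the provider network object is created once against the neutron API (served from the controller), while the mapping from its physical-network label to a local interface is per-node agent configuration. A hedged sketch of the API side; the "physnet1" label, the network name and the subnet range are assumptions:

    openstack network create --share --external \
        --provider-network-type flat \
        --provider-physical-network physnet1 \
        vm-lan
    openstack subnet create --network vm-lan \
        --subnet-range 192.0.2.0/24 --gateway 192.0.2.1 \
        --allocation-pool start=192.0.2.100,end=192.0.2.200 \
        vm-lan-subnet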
EugenMayer: c) If I understand it right, it depends on the mechanism driver whether I need to configure the compute nodes too. If I pick OVS or Linux bridge, I have to configure the agents; OVN seems different. Still not sure though, could use some guidance. Reading up on this at https://docs.openstack.org/neutron/latest/admin/config-ml2.html right now.  14:04
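The ML2 guide linked above boils down to a server-side plugin config plus a per-node agent config. A sketch of the server side for the flat provider network case discussed here ("physnet1" is an assumed label, and the driver choice must match the agent actually deployed):

    # /etc/neutron/plugins/ml2/ml2_conf.ini (sketch)
    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch        # or linuxbridge / ovn

    [ml2_type_flat]
    flat_networks = physnet1               # labels that flat provider networks may reference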
EugenMayer: a) I think it is clear to me that the neutron server is to be located on the controller node, as explained in https://docs.openstack.org/neutron/latest/admin/ovn/refarch/refarch.html#refarch-refarch  14:06
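For question a), with kolla-ansible that placement follows the inventory groups: neutron-server and the L3/DHCP/metadata agents go to hosts in the network group (typically the controller), while the compute group only runs the per-node agent. A sketch of the relevant part of the multinode inventory; the hostnames are assumptions:

    [control]
    controller1

    [network]
    controller1            # neutron-server + L3/DHCP/metadata agents

    [compute]
    compute[1:4]           # per-node OVS/linuxbridge agent (or ovn-controller) only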
EugenMayer: c) I guess e.g. with Linux bridge I would configure the linuxbridge-agent via physical_interface_mappings (https://docs.openstack.org/neutron/latest/configuration/linuxbridge-agent.html) so that OpenStack understands which physical interface maps to which network.  14:50
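That matches how physical_interface_mappings is used: the same physical-network label maps to whatever the interface is called on each host, so the differing controller interface names are not a problem. A per-node sketch, reusing the assumed "physnet1" label:

    # /etc/neutron/plugins/ml2/linuxbridge_agent.ini (sketch)
    [linux_bridge]
    # compute nodes:            physical_interface_mappings = physnet1:ens9s0.4002
    # controller/network node:  physical_interface_mappings = physnet1:ens19
    physical_interface_mappings = physnet1:ens9s0.4002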
