Tuesday, 2023-01-24

moha7If it's not required to set up br-vxlan as a bridge, then how should its interface be defined directly in `openstack_user_config.yml`? The same way as the *br-vlan* section here: https://paste.opendev.org/show/bb714u2x5OD377JJtp14/ ?06:15
moha7In the AIO, you use large /22 IP ranges (1022 usable IPs per subnet)! Why such big ranges for br-mgmt, br-vxlan, etc?08:14
jrossermoha7: why not?! :)08:25
jrosseri have a lab deployment which started with /24 sized networks and it was not enough08:26
jrosseralso if you use an L2 networking approach, which almost everyone does, expanding those ranges in the future is really very difficult08:27
jrosserif you use some L3 approach it's easier08:28
moha7Each controller needs 13 mgmt IPs for its containers and one for itself = 14; excluding the LB IP and keeping 15 for computes leaves 238 free IPs in a /24, and 238 / 14 works out to about 17 controllers08:41
moha7jrosser: If that calculation is correct, you really had >17 controllers in your lab?08:42
moha7Maybe I'm missing something there!08:44
noonedeadpunkmornings08:47
noonedeadpunkmoha7: you will likely want to separate out net nodes as well. and well, 200 computes is not _that_ big a deployment :)08:48
noonedeadpunkBut indeed it depends on your needs and how much there's intent to scale08:48
noonedeadpunkbut IMO better to have spare than to struggle with extending the network08:49
moha7Do you recommend the separation of net nodes like this: https://docs.openstack.org/security-guide/_images/1aa-network-domains-diagram.png ?08:50
noonedeadpunkyes, exactly. though it's applicable for lxb/ovs scenarios; ovn is a bit different, though gateway nodes are still a thing08:51
jrossermoha7: no not 17 controllers08:51
jrosserbut every physical node and every container needs an IP on br-mgmt08:51
jrosserand routers and mirrors and bastions and deploy host and monitoring and on it goes :)08:51
jrosserand then you might want to divide the address space in some sane way to be able to write concise iptables/firewall rules using CIDR boundaries, for example08:52
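For illustration, a minimal sketch of how such /22 ranges and a reserved block look in openstack_user_config.yml, following the documented example addressing; the concrete values are placeholders rather than anything from jrosser's lab:

    cidr_networks:
      container: 172.29.236.0/22   # br-mgmt: every host and every container gets an IP here
      tunnel: 172.29.240.0/22      # br-vxlan: overlay/VXLAN endpoints
      storage: 172.29.244.0/22     # br-storage
    used_ips:
      - "172.29.236.1,172.29.236.50"   # reserve a block for VIPs, routers, bastions, monitoring, etc.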
jrosserthere is also no reason at all why the internal networking of your openstack deployment needs to be accessible from outside the deployment08:54
jrosserthough you kind of force that to happen by putting the external vip on the mgmt network which is sort of OK for a lab but really not for a production deployment08:55
moha7In the new lab, I'm going to take a more prod-like approach, separating the internal/external LB VIPs onto two different VLANs09:00
jrosserright - i have a small /27 or something for external stuff09:01
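A hedged sketch of keeping the two VIPs apart in openstack_user_config.yml, with placeholder addresses and the external VIP sitting in a small dedicated subnet such as a /27:

    global_overrides:
      internal_lb_vip_address: 172.29.236.9   # VIP on br-mgmt, only reachable inside the deployment
      external_lb_vip_address: 203.0.113.9    # VIP on a separate external VLAN/subnet, fronted by firewall rules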
moha7jamesdenton: Do you recommend network nodes separation from the controllers in a prod env that has OVN as its network backend?09:01
*** cloudnull2 is now known as cloudnull09:09
moha7What do you recommend for the mgmt, external api, vlan (provider) and vxlan networks: 1G network interface or 10G?10:38
noonedeadpunkjrosser: I've tested https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/871517/1/tasks/main.yml#76 and it does exactly what is needed10:47
*** dviroel|out is now known as dviroel11:18
admin1moha7, "If it's not required to set br-vxlan as a bridge," -- a bridge makes a lot of sense and keeps the configs sane .. in prod you might have multiple interfaces, or a bond, or a single interface doing east-west .. and each company/deployment has its own .. so by just using it as a bridge, it frees you to use the same config for multiple11:37
admin1deployments 11:37
admin1so what you want to plug into br-mgmt, br-vlan or br-vxlan is decided outside of openstack and is up to the deployer 11:38
admin1so when you want to plug in a new interface/bond to grow traffic, you just add to the bridge without having to touch openstack 11:39
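To illustrate that point, a rough netplan sketch: growing br-vxlan from one NIC to a bond only changes the host network config, while OSA keeps referring to the bridge name; the interface names and addresses here are assumptions:

    network:
      version: 2
      ethernets:
        ens3: {}
        ens4: {}
      bonds:
        bond1:
          interfaces: [ens3, ens4]      # add or swap members here without touching openstack config
          parameters: {mode: 802.3ad}
      vlans:
        bond1.30:
          id: 30
          link: bond1
      bridges:
        br-vxlan:
          interfaces: [bond1.30]        # OSA still just sees "br-vxlan"
          addresses: [172.29.240.16/22]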
admin1 in prod ( i have worked and deployed openstack at big public cloud providers ), usually the compute node is also set up as a network node . that way, you guarantee that one node failure is not bringing down 33% of your network ( as it would when 3 controllers are also used as network nodes ) 11:40
admin1for prod, the external vip is most of the time on a different network .. sometimes an isolated /30 between the router and the controllers, with firewalls only allowing openstack ports, and sometimes a waf 11:42
jrosserdoing distributed gateway or DVR is a pretty big decision with large implications; i don't think 'usually' there can be read as "how everyone usually does it"11:42
jrosserits how admin1 usually does it :)11:42
admin1i do not do dvr 11:43
admin1i just also use the compute nodes as network nodes -- that is the only thing i do differently 11:43
admin1btw, has anyone migrated from ovs -> ovn  ? i would like to hear their success stories and tips 11:46
moha7"it frees you to use the same config for multiple"12:01
moha7admin1: it's also possible by renaming interfaces, for example with `set_name` in the netplan configuration12:02
moha7But it's you recommending not to use a bridge here:12:03
moha7In the deployment documentation: "Note that br-vxlan is not required to be a bridge at all, a physical interface or a bond VLAN subinterface can be used directly and will be more efficient." <-- https://docs.openstack.org/project-deploy-guide/openstack-ansible/zed/targethosts.html12:03
noonedeadpunkadmin1: I've heard plenty of success stories on summit but never did that on my own12:07
noonedeadpunkit generally works like a charm, except things with MTU can go weird if there's no support for jumbo frames12:07
noonedeadpunkso changing the MTU is the messiest part of the migration, people said12:08
jrossermoha7: what do *you* actually want to do? OSA is a toolkit and you can set up most things as you wish12:10
jrosseryou can use those bridges to have high uniformity across the hosts, or you can use the underlying interfaces/bonds directly12:10
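As a rough sketch of the bridgeless variant the docs mention, assuming a bond VLAN subinterface bond1.30 carries the overlay traffic; the keys follow the usual provider_networks schema under global_overrides, but verify them against the release in use:

    global_overrides:
      provider_networks:
        - network:
            container_bridge: "bond1.30"     # physical/bond subinterface used directly, no br-vxlan
            container_type: "veth"
            container_interface: "eth10"
            ip_from_q: "tunnel"
            type: "vxlan"
            range: "1:1000"
            net_name: "vxlan"
            group_binds:
              - neutron_openvswitch_agent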
dokeeffe85Hi all, trying to install octavia via ansible and getting this: Invalid input for operation: physical_network 'lbaas' unknown for VLAN provider network. Do I need to create a new physical_network or can I use the physnet1 that I already have?12:14
noonedeadpunkSo for octavia you must have a dedicated network that is present in neutron and is also available on the controllers inside the octavia_api containers12:15
noonedeadpunkdokeeffe85: most convenient option for that is having a VLAN network. You can use vlan on your already existing physnet1 - that's not an issue12:16
dokeeffe85Perfect, thanks noonedeadpunk12:17
noonedeadpunkbut either way you will need to define the network you want to use. If you want to create this network in neutron manually, you can just set `octavia_service_net_setup: false`12:17
dokeeffe85Ok will do, thanks 12:18
noonedeadpunkthese are the variables you can set for the net if you want the playbook to manage the network after all: https://opendev.org/openstack/openstack-ansible-os_octavia/src/branch/master/defaults/main.yml#L342-L35712:18
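A hedged sketch of the sort of user_variables.yml overrides that could reuse physnet1 with a dedicated VLAN; the VLAN ID and CIDR are placeholders, and the variable names should be checked against the defaults file linked above for the branch in use:

    octavia_provider_network_name: physnet1      # reuse the existing physical network instead of 'lbaas'
    octavia_provider_network_type: vlan
    octavia_provider_segmentation_id: 400        # placeholder VLAN ID for the octavia management network
    octavia_management_net_subnet_cidr: 172.29.252.0/22
    # octavia_service_net_setup: false           # set this instead if the neutron network is created by hand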
jrosserdokeeffe85: there is a worked example here (of one way, but not the only way) https://satishdotpatel.github.io/openstack-ansible-octavia/12:18
dokeeffe85Thanks jrosser12:22
opendevreviewMerged openstack/ansible-role-zookeeper master: Ensure zookeeper is not stopped after role re-run  https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/87151714:22
opendevreviewDmitriy Rabotyagov proposed openstack/ansible-role-zookeeper stable/zed: Ensure zookeeper is not stopped after role re-run  https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/87155614:40
noonedeadpunk#startmeeting openstack_ansible_meeting15:00
opendevmeetMeeting started Tue Jan 24 15:00:18 2023 UTC and is due to finish in 60 minutes.  The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.15:00
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:00
opendevmeetThe meeting name has been set to 'openstack_ansible_meeting'15:00
noonedeadpunk#topic rollcall15:00
noonedeadpunko/15:00
mgariepyhey15:03
noonedeadpunk#topic office hours15:05
noonedeadpunkwell I don't have much tbh15:09
noonedeadpunkI've held up Y due to discovering a quite critical CVE regarding the erlang version there15:10
noonedeadpunk#link https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/87130415:13
mgariepyi don't have anything, i'm quite busy with other stuff atm so i won't have much time for osa besides quick reviews. 15:13
NeilHanlonhey, sorry am late. not much for me. preparing for centos connect and fosdem next week (gasp)15:19
*** dviroel is now known as dviroel|lunch15:19
noonedeadpunkWill try to find you there :)15:21
noonedeadpunkWell, yes, basically we have business as usual. As always - a bit of a lack of reviews, but bug fixes seem to land on time15:22
noonedeadpunkGates seem to be quite healthy as well now15:22
jrosserapologies but andrewbonney and i aren't available for the meeting today15:26
noonedeadpunksure, no worries!15:29
noonedeadpunkI think I will close the meeting early today then, basically due to lack of agenda15:30
mgariepy+115:30
noonedeadpunkOne thing though - I won't be around next week for the meeting15:30
noonedeadpunkAnd eventually I won't be around next week at all :)15:30
noonedeadpunkSo is there anybody who wants to be a meeting chair?15:31
noonedeadpunkAs otherwise I will just cancel the meeting15:31
noonedeadpunkOk, I think I will cancel it then:)15:41
noonedeadpunk#endmeeting15:41
opendevmeetMeeting ended Tue Jan 24 15:41:28 2023 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:41
opendevmeetMinutes:        https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-01-24-15.00.html15:41
opendevmeetMinutes (text): https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-01-24-15.00.txt15:41
opendevmeetLog:            https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-01-24-15.00.log.html15:41
*** dviroel|lunch is now known as dviroel16:30
noonedeadpunkOh my...16:43
*** dviroel is now known as dviroel|doc_appt16:43
jamesdenton?16:48
noonedeadpunkhttps://lists.openstack.org/pipermail/openstack-discuss/2023-January/031886.html16:56
NeilHanlonoh my indeed17:01
*** dviroel|doc_appt is now known as dviroel19:13
jrosseri would think that https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031886.html is a really good test case to see if some of the systemd private stuff can mitigate bugs like that20:12
*** dviroel is now known as dviroel|out22:44
