Sunday, 2024-03-17

07:30 <f0o> how can I re-generate the inventory? I removed some hosts but Ansible still attempts to do stuff on them
08:03 <f0o> scripts/inventory-manage.py -r seems to have done the trick after finding all of the occurrences
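[editor's note] The removal flow above can be sketched like this, on the OSA deploy host; the host name `compute03` and the grep target are illustrative, and the exact `inventory-manage.py` flags should be checked against your release:

```shell
# On the deploy host, from the openstack-ansible checkout.
cd /opt/openstack-ansible

# Find leftover references to the removed host in the deploy configs
# (openstack_user_config.yml, conf.d, etc.).
grep -r 'compute03' /etc/openstack_deploy/

# Show what the dynamic inventory currently contains.
scripts/inventory-manage.py -l

# Remove the stale host entry from the inventory.
scripts/inventory-manage.py -r compute03
```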
08:21 <f0o> also TIL northd cannot be deployed on 2 nodes - by that I mean OSA will happily deploy it on 2 nodes, but northd uses RAFT underneath, which will at some point completely break northd and prevent either of the two from ever recovering
08:22 <f0o> all I wanted was to debug haproxy, and here I am migrating northd to a 3-node controller cluster because it can't run with the rest of the networking stuff in a 2-node active/failover scheme
10:33 <jrosser> f0o: your control plane really should be one node for non-HA and 3 for HA
10:33 <jrosser> otherwise you have trouble with the database too
10:38 <f0o> the controllers are 3 nodes; I assumed OVN, being networking, would follow networking conventions: active/standby
10:38 <f0o> didn't know OVN uses RAFT, which implicitly requires an odd number of nodes for quorum
10:39 <f0o> so now I'm moving northd away from the TOR and onto the controllers while keeping the gateway chassis on the TOR
10:40 <f0o> silly if you ask me, because the TOR could run northd just fine if it supported a 2-node setup
12:59 <f0o> gonna see if I can alter northd to use a full mesh with active/standby instead... OVS does it, just OVN doesn't
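[editor's note] For reference, the RAFT quorum state of the clustered OVN databases can be inspected on a controller roughly like this; the socket paths vary by distro and deployment, so treat them as assumptions:

```shell
# Ask the clustered northbound database for its RAFT status
# (role, term, and which cluster members are connected).
ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound

# Same for the southbound database.
ovn-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound

# northd itself is active/standby; check which role this instance holds.
ovn-appctl -t ovn-northd status
```

With only 2 database servers, losing either one leaves 1 of 2 members, which is not a majority, so the cluster can never re-elect a leader - which is why 3 (or any odd count of) nodes is needed.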
13:00 <f0o> jrosser noonedeadpunk I have also discovered my HAProxy issue, and part of it is because I have defined an entry in haproxy_service_configs - apparently that made it ignore _everything_ else
13:00 <f0o> even on a completely fresh install it would only deploy what I defined in haproxy_service_configs and none of the OpenStack configs
13:02 <f0o> that was an interesting gotcha there; sadly I need haproxy_service_configs for letsencrypt/certbot - but all in due time. first things first, move northd or make it not use RAFT
13:07 <f0o> I think I just shouldn't have configured that variable in the user_vars and instead created a role for it and called that instead
13:39 <jrosser> f0o: you should not need to redefine haproxy_service_configs for LetsEncrypt - there is native support in the haproxy role
14:33 <f0o> that's not what the docs said
14:34 <f0o> https://docs.openstack.org/openstack-ansible-haproxy_server/latest/configure-haproxy.html#using-certificates-from-letsencrypt even shows the additional services
14:37 <f0o> but I just saw, half a page down, haproxy_extra_services - so that solves that
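[editor's note] The distinction being discovered here: overriding `haproxy_service_configs` replaces the whole service list, while `haproxy_extra_services` appends to it. A hypothetical user_variables.yml fragment, purely illustrative - the exact schema is version-dependent and should be checked against the haproxy_server role defaults:

```yaml
# user_variables.yml - untested sketch; appends one extra frontend/backend
# instead of replacing the OpenStack-provided service definitions.
haproxy_extra_services:
  - haproxy_service_name: certbot
    haproxy_backend_nodes: "{{ groups['haproxy_all'] }}"
    haproxy_port: 8888
    haproxy_balance_type: http
```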
15:33 <f0o> off-topic: is there a good way to restrict the cpu/memory cgroups of the LXC containers through OSA?
15:34 <f0o> I'm looking at lxc_container_config_list from the docs, but it has no real docs, so I'm crawling through the playbook now to understand what it does
15:35 <f0o> or if it's as simple as popping something into the "properties" object
16:21 <jrosser> f0o: there are no defaults to override for cpu/memory on the lxc containers
16:21 <jrosser> also the docs might not be up to date for letsencrypt there, I can check tomorrow
16:24 <jrosser> the enablement for LE is here https://github.com/openstack/openstack-ansible/blob/master/inventory/group_vars/haproxy/haproxy.yml#L22
16:25 <jrosser> the haproxy service which supports LE is here https://github.com/openstack/openstack-ansible/blob/master/inventory/group_vars/haproxy/haproxy.yml#L71-L91
16:33 <f0o> appreciate it!
16:33 <jrosser> we changed the haproxy setup very recently and it looks like this was missed
16:35 <jrosser> on the cgroup stuff, whilst we have no specific vars for that, many things in OSA provide generic hooks to apply any config you want
16:35 <jrosser> though I don't know of anyone who has wanted to do that before
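[editor's note] One such generic hook is the `lxc_container_config_list` variable mentioned above, whose entries end up as raw lines in each container's LXC config. An untested sketch of what cgroup limits might look like through it - the key names depend on whether the host runs cgroup v1 or v2, and the list applies to every container it targets:

```yaml
# user_variables.yml - illustrative only; verify against the
# lxc_container_create role and lxc.container.conf(5) for your release.
lxc_container_config_list:
  # cgroup v2 hosts:
  - "lxc.cgroup2.memory.max = 4G"
  - "lxc.cgroup2.cpuset.cpus = 0-3"
  # cgroup v1 equivalents would be lxc.cgroup.memory.limit_in_bytes
  # and lxc.cgroup.cpuset.cpus.
```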
21:34 <opendevreview> Jimmy McCrory proposed openstack/openstack-ansible-os_nova stable/2023.2: Ensure nova_device_spec is templated as JSON string  https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/913501
21:34 <opendevreview> Jimmy McCrory proposed openstack/openstack-ansible-os_nova stable/2023.1: Ensure nova_device_spec is templated as JSON string  https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/913502

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!