Thursday, 2024-03-07

f0oGood morning/evening - I let ansible run overnight and it did set up everything and the provider net seems correct as per the output of ovs-vsctl (bond0 on compute and mlagXYZ on the network nodes); but I'm having an issue with traffic flow on plain vlan (not tested geneve yet). When I start cirros and tap bond0 and br-ext of the hypervisor I see ARP lookups from cirros07:34
f0oforwarded to the wire. But when I arping cirros' IP from a physical device on the same vlan I only see the ARP requests on bond0 and not in br-ext. So somewhere bond0 is not forwarding packets into br-ext but is happily forwarding them out of br-ext. Ideas? (SGs are -1 allow)07:34
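A rough sketch of the capture being described, for reference (interface names bond0/br-ext are taken from the chat; the VLAN id and target IP are placeholders):

    # on the hypervisor uplink: ARP from the VM should be visible here in both directions
    tcpdump -eni bond0 'vlan 2 and arp'
    # on the OVS provider bridge: check whether inbound requests make it this far
    tcpdump -eni br-ext arp
    # from a physical box on the same VLAN, trigger the lookups
    arping -I eth0.2 192.0.2.10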
f0oI might have a suspicion; I do use the same vlan on a vlan subinterface on the same bond0... Let me test with a different vlan for the sake of completeness07:50
noonedeadpunko/07:58
noonedeadpunkf0o: so. if your externel connectivity happens over vlan (or you can do that over vlan), I'd suggest just forget about flat networks07:59
noonedeadpunkas you can't really use flat interface for vlan interface08:00
noonedeadpunkthe hack I did back in the days for flat network to work - was manually creating bond0.1000 (or smth) and adding it as "flat" network08:00
noonedeadpunkbut it was back in the days when I was not sure about policies/rbac, so had no idea if users with their default permissions can/can not create vlan networks on their own, so decided to "isolate" that way08:01
noonedeadpunkso my suggestion would be to forget about flat networks, and just use vlan. You can create an external/shared network in neutron using VLAN as well08:02
f0othis isnt a flat network tho08:09
noonedeadpunkah, ok08:10
f0oI think that the linux kernel might yank the packet from bond0 and toss it into bond0.2 instead of br-ext08:10
f0osince bond0.2 is not on a bridge so it might not cause the packet to fanout08:10
noonedeadpunkhm, let me check my ovn sandbox....08:10
f0ojust went through the lovely Cisco IOS terminals to get myself a new dummy vlan to just test that theory08:11
f0oif I can ping a vm with vlan3 from a non-hypervisor device on vlan3 then that explains why the vlan2 was swallowed08:12
f0oalso low on caffeine so words are super hard08:12
noonedeadpunkI'm just not sure that bond0.2 (if that's the external vlan) should be present for kernel networking at all08:15
noonedeadpunkbut seems that I crashed my sandbox lately with some other experiment :D08:15
noonedeadpunk(or it was some of my colleagues)08:15
f0oah no that's not external networking - it's just some workloads we will run cant do geneve and need to be on L2 connectivity for god knows why08:16
f0oI was already looking into just abusing our arista TOR switches to do geneve<>vlan translations but I guess that will pour hell into ansible to add foreign vteps08:17
jrosserisn’t geneve/vxlan tunnel completely invisible to the workload?08:19
jrosseror you need to integrate some other not-openstack thing?08:19
f0onon-openstack thing sadly08:20
noonedeadpunkI guess external/non-external doesn't really matter...08:21
f0obut I just confirmed my theory; I see arp-requests being pushed from bond0->br-ext on vlan308:21
f0ovlan2 is swallowed by kernel and tossed into bond0.2 instead of bond0->.2&&br-ext08:21
jrosserah well we do a hybrid for this08:21
jrosserwe use vxlan for general purpose project networks08:21
noonedeadpunkI got an octavia (amphora) instance that use a vlan as an example - it's also "internal" vlan08:22
jrosserand then for “special” things that need to extend to physical devices we have a bunch of provider vlans in openstack that are shared into projects with neutron RBAC08:24
jrosserbut actually those are vxlan-evpn on the leaf/spine but no mast integration needed between that and openstack08:24
jrosserno *nasty integration08:24
noonedeadpunkok, crap... and how to find vlan wiring in ovn....08:26
noonedeadpunkI bet I saw that in ovs-vsctl output....08:26
* jrosser goes to hide behind linuxbridge08:27
f0owoop we have connectivity. So I can confirm that if you use a physical interface as backing for your ovs bridge and you define a vlan subinterface via your OS on that same physical interface, then no packets with the tag will enter OVS but packets will exit OVS no problem08:27
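A minimal sketch of how that conflicting wiring shows up (names bond0/bond0.2/br-ext as used in the chat; the comments restate f0o's observation rather than a general guarantee):

    # bond0 is the uplink of the OVS provider bridge
    ovs-vsctl list-ports br-ext      # expect: bond0
    # ...while the kernel also owns a subinterface on the same tag
    ip -d link show bond0.2          # 802.1Q, id 2, on top of bond0
    # observed result: frames tagged 2 arriving on bond0 are delivered to bond0.2
    # and never fan out into br-ext, while egress from OVS still works fine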
noonedeadpunkbut as a matter of fact, I for sure have no "vlan" interfaces anywhere exposed08:27
noonedeadpunkso basically bond0 is indeed part of br-ext08:27
f0ogonna see if I can use a linux-bridge as physical backing instead which should make the packet be delivered to both OVS and subinterface08:28
noonedeadpunkBut then all ports of VM are part of br-int08:28
noonedeadpunkincluding the vlan one....08:28
f0oyep08:28
f0oovs doing some 5D-Chess flow policies to move packets around08:28
noonedeadpunkBut why do you have bond0.2 in kernel space at all?08:28
f0oI made the mistake of running dump-flows on br-int and was slapped with a wall of text08:28
noonedeadpunkI guess that's the question I'm trying to understand08:28
f0o.2 is management network08:29
noonedeadpunkaha08:29
f0oI was just lazy and tried an existing vlan08:29
f0odidnt want to configure a test vlan on the switches and mlags and yaddayadda - thought it would "just work" like with linux-bridges08:29
* noonedeadpunk still loves linuxbridges08:29
f0obut it's a non-issue since vlans do just work, as long as I don't consume them through the kernel on the hypervisors - I see this as a security plus08:30
f0oBut I can guarantee somebody will ask me to allow spawning some vm that has access to management-vlan so for that I will do the flat-network like you suggested earlier08:30
noonedeadpunkyeah, I don't think you can use vlan on kernel space and use that as neutron vlan...08:30
noonedeadpunkI guess that's indeed what confused me08:30
noonedeadpunkProbavbly you can do some kind of wiring though.... Like ovn-bgp-agent does08:31
noonedeadpunkBut it adds ovs flows to redirect traffic to kernel space and vrf for the way back08:31
noonedeadpunkbut it's slightly /o\08:31
f0ohavent even started looking at BGP since that'll be likely needed too08:31
f0ofor floating IPs08:31
f0obabysteps08:31
noonedeadpunkI am working on it right now08:32
noonedeadpunkand what I see right now - all security and isolation benefits you get with ovn - wades away with bgp08:32
noonedeadpunk*fades08:32
f0oI looked at the docs very briefly and there was a way to only advertise floating IPs instead of all tenant networks but that might be outdated08:33
noonedeadpunkthe one more or less decent in theory way, is using a standalone ovn cluster, which I haven't even tried yet, as that requires a newer OVN version than present in repos, and has quite serious limitations08:33
noonedeadpunkso... I think exposing tenant networks is potentially not what you're thinking about. As it's not about east-west traffic... And moreover, these tenant networks can not overlap in ranges08:34
noonedeadpunkso it's... really very private-cloudish-specific-thingy08:35
jrosserthere’s multiple things isn’t there?08:35
jrosseradvertising fip to upstream routers would be nice08:35
noonedeadpunkyeah, but still tenant networks are all using same vrf from what I got so far08:35
noonedeadpunkand that's not for geneve I assume anyway08:36
jrosserand I didn’t yet look at all about how that would be for ipv608:36
noonedeadpunkyeah, actually, for ipv6 you probably indeed wanna expose tenant networks08:36
noonedeadpunkwhen each tenant has it's own ipv6 range...08:36
jrosserright just like you have to today without ovn08:36
jrosseryes and they are unique anyway just because v608:37
noonedeadpunkyeah08:37
noonedeadpunkbut for ipv4 it's non-multitenant, imo08:37
jrosserwould be great to write some guidance on this08:39
jrosserjust notes in the neutron role docs08:39
noonedeadpunkI'm still struggling to understand how exactly it works/ should work08:39
jrossercalico was very similar iirc08:39
jrossergreat, but also not great, depending08:39
noonedeadpunkright now playing with external networks, got SRC-NAT and gateway reachable. But a bit struggling with how FIPs are to be exposed.08:40
noonedeadpunkAs docs are a bit contradictory about - do you need frr on computes or not08:40
noonedeadpunkas ideally, when FIPs are centralized - it should be just on net nodes.08:41
noonedeadpunkBut agent somehow does not catch FIP changes there...08:41
f0ojust to jump in here; east-west traffic is pretty easy with geneve/vxlan since vteps take away the wire-noise by moving everything into ptp. BGP would only be useful/beneficial for north-south traffic like floating-ips to not have those onlink routed anymore. In our old setup we had quite a lot of broadcast noise, I think ~100mbps just junk noise on the wire since there08:41
f0owas no deterministic way to know if an IP existed or not. We ended up writing some go-bgp service that periodically crawls the neutron-db for assignments and whitelisted them in our TOR switches08:41
f0ohoping to avoid the workaround and get some native neutron<>bgp integration here08:42
noonedeadpunkBut even for src-nat usecase - traffic just rushes through the default gateway, which is logical...08:42
noonedeadpunkbut not so good if you wanna have air-gapped environment...08:42
noonedeadpunkAs I feel uncomfortable seeing all VMs' traffic to the world coming through the FORWARDING table of a node where you have all sorts of management traffic as well...08:43
noonedeadpunkBut then ofc you can create own routing table and redirect traffic towards different gateway - but it's slightly a mess and lack of isolation, imo08:44
noonedeadpunknot 100% sure I'm right though08:44
noonedeadpunkAs I'm not good at many things... (not sure I'm good at anything nowadays :D)08:45
noonedeadpunkBut networking is probably one of them08:45
noonedeadpunkf0o: I think this patch might be a good start for doing that: https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/90978008:46
f0oTIL br-ovs-flat:br-mgmt will kill your networking09:12
opendevreviewJonathan Rosser proposed openstack/openstack-ansible master: Determine if upgrade source branch is stable/ or unmaintained/  https://review.opendev.org/c/openstack/openstack-ansible/+/91158309:19
noonedeadpunkfrankly speaking, for me networking is so far the hardest part in openstack09:25
f0ocant agree more09:25
f0oit was simple with linuxbridge... but now with OVN it adds a whole new layer of complexity. I mean I get the benefits which is why I want to actually get this done "right" but still.. this is now week2 on this adventure09:26
f0ogranted most time is resetting the hardware09:26
jrosserf0o: are you combining the mgmt network in with the OVN/OVS stuff?09:27
noonedeadpunkyeah, true.  lxb was very easy to understand and debug 09:28
jrosserok so we need to find someone who can merge unmaintained patches09:29
f0ojrosser: well not anymore hah09:29
jrosserf0o: well, i would say "good move" :)09:29
noonedeadpunkjrosser: not so many ppl on the list https://review.opendev.org/admin/groups/4d728691952c04b8b2ec828eabc96b98dc124d69,members09:29
f0ojrosser: but yeah it was a low-ball attempt to get management-vlan into openstack neutron for consumption because I just *know* that request will come sooner or later09:30
jrosseri think you should never never do that tbh09:30
noonedeadpunkeasier to find ones who will approve our ACL change in gerrit09:30
jrosseras you have a massive trust boundary that you should not cross there into the control plane09:30
f0ojrosser: yes but... "I just need to test this software in a vm real quick so I need access from the OpenStack for this very short PoC yaddayadda"09:31
f0oI can already predict those requests09:31
noonedeadpunkactually, you can get full control of your cluster relatively easily by having access to that network09:31
jrosserf0o: perhaps some misunderstanding?09:31
jrosserthe mgmt network is private and internal to the inside workings of openstack09:31
noonedeadpunkIt's totally fine to get external vlans, but not management one09:32
jrosseryou don't get access to it, ever, to do anything in a VM as a user09:32
noonedeadpunkyeah09:32
noonedeadpunkaccess openstack through public endpoint from VMs09:32
noonedeadpunkif you need to interact with API09:32
f0ohrm I just smacked it into the general management network...09:32
jrosserso, for example09:33
f0obut I see your point, could just split it into its own vlan09:33
jrosseri have an out-of-band network, which PXE the hosts and whatnot, and for monitoring09:33
jrosserthats mine exclusively as cloud-operator09:33
jrosserthen separate is the openstack mgmt network, which deals with the internals of the control plane and which you should consider as "if this is accessible, my cloud is compromised"09:34
jrosser^ these two you can totally make be the same network, and most deployments probably do that09:34
jrosserthen what your users and workloads see and use is something else altogether09:35
f0oI get your point; I over-interpreted management network and used the actual management-network we use for all our gear instead of an openstack-specific management network09:37
f0omea culpa09:38
jrosserso to complete the picture, you've then got some "external" networks09:38
f0owhich immediately fixes the issue I tried to solve because this makes the general management network accessible again since it's a different vlan09:38
jrosserand you might have a subnet containing your external haproxy endpoint, perhaps ceph radosgw endpoints, perhaps bind instances for Designate and so on09:39
f0ogood thing OVN nuked my hosts just now, linuxbridge would've been fine09:39
f0ohah09:39
jrosser^ these things all have some outward facing presence from your cloud, toward your users09:39
jrosserand that's a separate thing from your public network, but often, a lot of deployments might combine the two together09:40
jrosserby reserving a bunch of addresses in the public network for the API endpoints and so on09:40
jrosserbut you have really total freedom to set that up however you need09:41
ThiagoCMCFolks, morning! Which variable should I set so that the `rabbitmq_server` Ansible playbooks (`/etc/ansible/roles/rabbitmq_server`) will only use the Ubuntu packages without adding the third-party APT repository from this "novemberain.org" place?09:57
noonedeadpunk`rabbitmq_install_method: distro` I assume10:03
ThiagoCMCHmmm... Thanks! I'll try it now.10:06
ThiagoCMCAlso, I can't see a similar `_install_method` for the MariaDB. Any other variable name for it so I can also install it from Ubuntu's repositories?10:07
noonedeadpunkI'm not sure about mariadb - there's probably no such easy way. But I think you should be on the safe side just undefining the external repo vars10:36
noonedeadpunklike `galera_repo: {}`, `galera_gpg_keys: []`10:37
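A hedged sketch of how those overrides might sit together in /etc/openstack_deploy/user_variables.yml (rabbitmq_install_method is the variable named above; whether emptying the galera repo vars is enough should be verified against the galera_server role defaults):

    # user_variables.yml
    rabbitmq_install_method: distro   # use the distro's rabbitmq-server packages
    galera_repo: {}                   # assumption: drops the external MariaDB apt repo
    galera_gpg_keys: []               # ...and its signing keys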
jrosseryou would fall outside what we test for mariadb versions though11:01
jrosserso it might be good to have some local testing of what you actually get from the ubuntu repo11:02
opendevreviewMerged openstack/openstack-ansible-haproxy_server stable/2023.2: Use correct permissions for haproxy log mount  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/91160311:04
kleiniCurrently my LXC containers get the default MTU of 1500 on their network interfaces. I also see that in the LXC container configuration files. I am digging into the Ansible roles now. What is the best way to configure MTU 9000 for some networks (br-storage and br-lbaas)? I currently have the issue that heartbeats of some amphora VMs are sent with MTU 9000 but they cannot reach the Octavia health manager.11:14
noonedeadpunkkleini: hold on, I should have a sample somewhere11:18
noonedeadpunkkleini: you'd need to add `container_mtu: 9000` to openstack_user_config.yml under global_overrides/provider_networks11:19
noonedeadpunkand then dynamic_inventory should take care of that...11:19
noonedeadpunkbut then my guess would be to run lxc-containers-create --limit octavia_all,lxc_hosts and restart containers11:20
noonedeadpunkand actually here's example out of docs: https://opendev.org/openstack/openstack-ansible/src/branch/master/etc/openstack_deploy/openstack_user_config.yml.example#L26811:21
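A minimal sketch of that override, modelled on the linked example (bridge/interface names and the ip_from_q queue are placeholders for the storage network):

    global_overrides:
      provider_networks:
        - network:
            container_bridge: "br-storage"
            container_type: "veth"
            container_interface: "eth2"
            container_mtu: "9000"
            type: "raw"
            ip_from_q: "storage"
            group_binds:
              - glance_api
              - cinder_volume
              - nova_compute
    # then re-create the affected containers, e.g.:
    # openstack-ansible lxc-containers-create.yml --limit octavia_all,lxc_hosts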
ThiagoCMCnoonedeadpunk, it worked! RabbitMQ is from `distro` now. Thanks!11:38
noonedeadpunkwe actually should add a variable for mariadb11:41
noonedeadpunkand add these overrides as part of distro job testing11:41
noonedeadpunkI'm thinking about that for a couple of months now...11:41
g3t1nf0hey its me again with the questions :D 12:08
g3t1nf0https://docs.openstack.org/openstack-ansible/2023.2/user/prod/example.html according to this I need rsyslog server as well 12:08
noonedeadpunknah12:09
noonedeadpunkyou don't12:09
g3t1nf0also I know that it's preferred to have an application LB in front of the whole openstack, but can I use a server there as well running HAProxy12:09
noonedeadpunkwe've deprecated rsyslog for a while now...12:09
noonedeadpunkActually, I assume that haproxy we're deploying can be that application LB already12:10
noonedeadpunkand you can deploy it wherever you want basically12:10
g3t1nf0so then 3 servers for control plane 3 for the ceph and then just compute? 12:10
noonedeadpunkpretty much yes12:10
g3t1nf0okay then do I need 2 networking servers this way the LB will be on it and not on the control plane12:10
noonedeadpunkI guess it's up to you?12:11
g3t1nf0or running it on the control plane will not eat that much cpu ?12:11
noonedeadpunkI guess mainly ppl don't bother themselves with a standalone LB hosts12:11
noonedeadpunkofc depends on scale12:12
g3t1nf0usually they have f5 :D12:12
noonedeadpunkand potentially RGW is a bigger question12:12
noonedeadpunkmeh, I find haproxy way more performant than nginx12:12
noonedeadpunkunless you're talking about hardware ones12:12
g3t1nf0rgw ? 12:13
noonedeadpunkRados Gateway - ceph component that provides object storage12:13
g3t1nf0oh yeah I think of letting it sit on the 3 ceph servers 12:13
noonedeadpunkhttps://docs.ceph.com/en/latest/radosgw/12:13
noonedeadpunkand also - you can ignore haproxy installation and do f5 or whatever else you want12:14
g3t1nf0no I prefer haproxy12:14
noonedeadpunkwe're really trying not to invent any lock-in here and to leave operators a variety of choices (for good and for bad)12:15
g3t1nf0last two questions. For the OS I'm gonna go with debian - can I use LVM for the main storage? this way I can use snapshots and revert if my install fails, thus saving time on reprovisioning12:15
g3t1nf0and the second one 12:16
noonedeadpunkmain storage meaning storage for OS?12:16
g3t1nf0yea12:16
noonedeadpunkdoesn't matter from our perspective12:16
noonedeadpunkyou can do ZFS as well :D12:17
g3t1nf0and the second question was about the firewall somewhere I've read that its up to the deployer to configure it, so not sure I understand 12:17
noonedeadpunkactually, I guess you can also use lvm backend for containers if you want to12:17
noonedeadpunk*for lxc containers12:17
noonedeadpunkyeah, there's nothing we have that configures the firewall on nodes.12:18
noonedeadpunkthat's true12:18
g3t1nf0prefered storage for OS is nvme or ssd does it matter like in k8s ?12:18
g3t1nf0on k8s for etcd its prefered nvme 12:19
noonedeadpunkWell, faster storage is always better, but probably not that crucial after all12:19
noonedeadpunkor well. rabbit quorum queues reside on the storage... but they're not enabled by default for now.12:20
noonedeadpunkso potentially rabbitmq might get hungry for storage like etcd is12:20
g3t1nf0gotcha 12:20
noonedeadpunkbut with classic queues it's all in memory iirc12:20
g3t1nf0I've heard people complaining about rabbitmq issues 12:21
noonedeadpunkyeah12:21
g3t1nf0so doing it with nvme then 2tb should be sufficient 12:21
noonedeadpunkbut it's way better lately12:21
g3t1nf0no idea, about to test it out :D12:21
g3t1nf0so back to the firewall, routing and firewall rules I have to figure out on my own ?12:21
noonedeadpunkand I'd suggest enabling quorum queues then right away12:22
noonedeadpunkSo, routing is super trivial (close to absent) unless you're talking about neutron + bgp.12:22
noonedeadpunk(or well ovn+bgp)12:23
noonedeadpunkabout firewall - indeed we don't have anything right now for that.12:23
g3t1nf0I do want public not private so I guess I'll have to figure out bgp 12:23
noonedeadpunkit could be interesting to get some implementation for firewall actually, in a way we do with haproxy...12:23
noonedeadpunkwell, it's not that you have to...12:24
g3t1nf0is apparmor still supported? is the hardening playbook enabled by default12:24
noonedeadpunkI guess plenty of deployments just do use vlan passing to net/compute nodes12:24
noonedeadpunkyeah, the hardening playbook is enabled as well as apparmor is12:24
noonedeadpunkThough we haven't added new stigs to hardening role for a while12:25
g3t1nf0I like my networks strict, so far what I have has always been specific ports open on the host and on the FW tables12:25
noonedeadpunklike contributions in this area would be very welcome...12:25
noonedeadpunkYeah, there're just a lot of them12:25
noonedeadpunkSo probably nobody has come to a really good combined view yet, also considering all sorts of parts/bits for monitoring, etc12:26
g3t1nf0never touched apparmor, but you convinced me not to go the centos way so selinux is out for me 12:26
noonedeadpunkI wasn't convincing against Rocky Linux. I did against centos solely :D12:26
g3t1nf0sorry I just don't trust Rocky at all 12:26
g3t1nf0still have bad taste in my mouth for centos state before rh bought them 12:27
g3t1nf0one last thing, all the management for openstack should be done through the deployment host, correct?12:28
g3t1nf0and I should avoid making hand changes ?12:29
g3t1nf0stick to iac12:30
noonedeadpunkso yes, the idea is that all configuration changes are made with playbooks, and you just need to maintain state/changes in /etc/openstack_deploy12:32
noonedeadpunkplaybooks/roles are pretty much idempotent, so running them should not hurt deployment12:32
noonedeadpunkunless manual changes were made - then they will be overridden12:32
g3t1nf0perfect, thank you for the time 12:33
g3t1nf0going to get my hands dirty 12:33
noonedeadpunkbut ofc you can make them if you want/need for some testing, just don't forget to reflect them in config for future you12:33
g3t1nf0how often should I pull from gh 12:34
noonedeadpunkwe actually have here one of Rocky Linux maintainers - so can't say we don't trust them :)12:34
noonedeadpunkwhen you need to do minor upgrade?12:34
noonedeadpunkas we usually suggest checking out to a specific release/tag12:34
noonedeadpunkeach tag has fixed SHAs of versions for all and each component, which makes environment pretty much fixed/stable12:35
noonedeadpunkand reproducible12:35
g3t1nf0and if I want to contribute just extra branch and push ?12:35
noonedeadpunkwhen you wanna make minor upgrade - you checkout to the new tag, do ./scripts/bootstrap-ansible.sh, which pulls in new versions12:36
noonedeadpunkYou can override SHA of specific component anytime12:36
noonedeadpunkand if you want to contribute - we're using gerrit for that. It's a slightly different flow than github, but imo a more trivial one. Basically you'd need to install the git-review plugin, and then - git commit; git review12:37
noonedeadpunkno need to do git push or checkout for branch. Though, checking out to branch is handy - branch name is considered as a "topic" for the patch. and you can make series of patches to different repos this way12:38
noonedeadpunkalso `git review -f` will delete this branch once the change is pushed12:38
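A rough sketch of the two workflows described above (the tag is a placeholder - use the actual release you are moving to):

    # minor upgrade: move the checkout to the new tag and re-bootstrap
    cd /opt/openstack-ansible
    git fetch --tags
    git checkout 28.1.0                # placeholder tag
    ./scripts/bootstrap-ansible.sh     # pulls the pinned role/collection SHAs for that tag

    # contributing a change through gerrit
    pip install git-review
    git checkout -b my-topic           # the branch name becomes the gerrit topic
    git commit
    git review -f                      # push for review and drop the local branch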
jrossernoonedeadpunk: we have full coverage here of iptables with the role that logan wrote12:39
jrossercould be interesting to see if that can be somehow contributed12:40
noonedeadpunkwe have also pretty much full coverage but I don't think the way it's done is applicable and can be contributed - long legacy history behind what we have12:40
noonedeadpunkbut can help with rules for sure12:41
jrosserdo have to be careful though with neutron12:41
noonedeadpunkI was also a bit thinking of nftables that are currently a backend for iptables...12:41
noonedeadpunkyeah, as geneve vs vxlan using different protos?12:42
noonedeadpunk*ports12:42
noonedeadpunkand then whole BGP12:42
jrosserif whatever neutron plugins you use mess with iptables, care is needed not to disturb that with rules from OSA12:42
jrosserparticularly as the neutron ones are not persistent rules, so you don’t know what they are12:43
jrossermaybe less relevant problem for OVN?12:43
jrosserg3t1nf0: another thing you can think about for network “strictness” is which actually need to route to each other (most in OSA don’t, regardless what the docs say)12:45
jrosserand also which need any egress or NAT, again most actually don’t12:45
noonedeadpunkjrosser: I don't think it's relevant even with OVS today, unless you do hybrid iptables (like we do /o\)12:54
noonedeadpunkas ovs native is known to work nicely for a while now. Though I can't get myself up for testing migration path12:54
g3t1nf0hmm sorry I didn't get that, what? Once I get to Neutron I'll have maybe a bit more understanding what you've meant 12:54
noonedeadpunkso, VM security groups/permissions and filtering are made by neutron.12:55
noonedeadpunkwith some drivers/implementations they are made on a kernel space12:55
noonedeadpunkthrough iptables basically12:56
noonedeadpunkjrosser: I wonder if we should just document a set of rules that might be needed and some sample of how to configure them using logan-'s role12:57
noonedeadpunkas in fact thinking about neutron and all kind of extra stuff, not sure if it's feasible to enforce them anywhere except maybe control plane...12:57
noonedeadpunkbut dunno12:57
noonedeadpunkmaybe it's doable after all through group vars and extending/merging config....12:58
jrosserwell, we do apply everywhere so it is possible13:11
jrossereven with linuxbridge13:12
noonedeadpunkjrosser: I guess I was more thinking of how to do that with diversity of deployments13:28
jrosseryes that would be very difficult13:28
jrosserbtw also looks like we have openstack-ansible-unmaintained-core group now 13:28
noonedeadpunk\o/13:31
jrosseri added you https://review.opendev.org/admin/groups/1d7433bc7e2c46fd333e6b1b7bfeaa9a324803d0,members13:31
noonedeadpunkawesome, thanks13:32
jrosserprobably we should just add the existing core group to that13:33
g3t1nf0on the OS that openstack's control node will run on, can I make some modifications or is it preferred to keep it as stock as possible?13:40
jrosserg3t1nf0: you can make modifications if you need - OSA tries to stay away from everything you might need specific to your deployment13:44
jrosserso you can make whatever local setup you need for things like ntp, mta, ssh etc13:45
jrosserit's typical to have some additional "base" setup either via whatever host provisioning you use or some more ansible of your own13:45
g3t1nf0on my rhel systems I'm setting a lot of stuff but never touched debian so I have to do a base setup for it as well, like ssh chrony systemd timers ...13:47
jrosserandrewbonney: if you log out/in of gerrit can you see the usual voting options on https://review.opendev.org/c/openstack/openstack-ansible/+/911621 ?13:49
andrewbonneyNo need to log in/out. I can see them now13:49
jrosserexcellent, thanks13:51
kleininoonedeadpunk, thank you very much. With our growing kubernetes cluster on OpenStack I think, we will some day run into https://bugs.launchpad.net/octavia/+bug/202526213:56
opendevreviewJonathan Rosser proposed openstack/openstack-ansible master: Determine if upgrade source branch is stable/ or unmaintained/  https://review.opendev.org/c/openstack/openstack-ansible/+/91158314:11
f0oI couldn't leave it be after having a working MVP... I absolutely and entirely nuked the setup again attempting a lift&shift into a new management vlan space15:04
f0othe joy of going through a whole resetup again15:05
noonedeadpunkf0o: so you have had a working MVP - that's smth already :D15:13
nixbuilderI have installed OSA v.27.4.0 and it seems like none of the policy.yaml files were created on the system. Not sure why. Which OSA scripts are responsible for creating the policy files.15:14
f0onoonedeadpunk: oh yeah after those pointers with the host_vars and user_vars I basically 1-hit got something running across the halfrack I'm using. That really solved everything for me. But then I went and tweaked stuff like container_mtu, dedicated vlan, ... and Obviously I attempted it without redeployment, just re-running the playbooks... I should've known better :D15:17
noonedeadpunknixbuilder: they're not created unless you have overrides, since all policies defaults are implemented as defaults15:17
f0oSo now I'm about to grab a beer at the pub while the rack re-re-re-resets itself once again15:17
noonedeadpunkso they're present only when you define overrides15:17
noonedeadpunkf0o: very nice to hear, and beer part is especially valuable hehe15:22
noonedeadpunkbut actually.. .changing mtu should have worked15:22
f0onoonedeadpunk: 100%; if I bump into you or jrosser remind me to get you a round too15:22
noonedeadpunkthough with adding another network/re-wiring stuff might be quite breaking for clusters indeed15:22
f0oI think if I would've done the MTU first and then vlan change after (or vice-versa) it would've been fine but I did it all at once and just had the vlans routed between assuming it was "good enough"15:23
noonedeadpunkWell, you would likely also be fine by just re-creating containers per controller15:23
noonedeadpunkthough at the mvp stage it might indeed be easier to re-run things15:24
jrosserf0o: please do raise bugs for anything broken you find, thats very handy for us15:24
f0oMAAS is fast enough to just run through setup once more and have ansible fired afterwards - it'll just do its thing over the evening and tomorrow I'll see the result15:25
f0ojrosser:will do, taking notes of all the gotchas already15:25
jrosserand also if you want to change / improve something this whole endeavor is "by operators, for operators" so contributions are absolutely welcome15:25
nixbuildernoonedeadpunk: Hmmm... well... I had a couple of overrides and yes, the policy.yaml was created.  However when using horizon I keep getting errors saying policies do not allow the operations I am trying to execute. If I manually enter the policy the errors go away.15:25
f0o100% will upstream my openstack_user_config.prod.example with the OVN fixes and some comments to why what how15:26
f0oincluding that gotcha that vlan works fine but if your kernel consumes a tag it wont show up in OVS15:26
nixbuildernoonedeadpunk: Where are the policy defaults located?15:28
jrossernixbuilder: https://governance.openstack.org/tc/goals/completed/queens/policy-in-code.html15:29
noonedeadpunknixbuilder: actually... for horizon it could be this patch you should try15:29
noonedeadpunkhttps://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/91072615:30
opendevreviewMerged openstack/openstack-ansible-os_octavia master: Adopt for usage openstack_resources role  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/88987916:19
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible stable/2023.2: Bump SHAs for 2023.2  https://review.opendev.org/c/openstack/openstack-ansible/+/91194316:20
opendevreviewMerged openstack/openstack-ansible-os_magnum master: Adopt for usage openstack_resources role  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/90118516:21
nixbuilderWhere do you guys put the change logs between versions?  In other words where would I find the change logs for 27.2.0 and 27.4.0?16:23
opendevreviewMerged openstack/openstack-ansible stable/zed: Determine if upgrade source branch is stable/ or unmaintained/  https://review.opendev.org/c/openstack/openstack-ansible/+/91158416:26
noonedeadpunknixbuilder: https://docs.openstack.org/releasenotes/openstack-ansible/2023.1.html16:28
nixbuildernoonedeadpunk: As always... thank you!16:30
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_magnum master: Move insecure param to keystone_auth section  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/90511016:32
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-os_magnum master: Add job to test Vexxhost cluster API driver  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/90519916:32
noonedeadpunkhuh, so we've landed all openstack_resources patches - sweet16:34
jrosserfeels like we are also very close to getting the capi stuff merged too16:46
noonedeadpunkyeah16:47
*** jamesdenton_ is now known as jamesdenton16:48
jrosserthis should be good to go now https://review.opendev.org/c/openstack/openstack-ansible/+/91162116:50
-opendevstatus- NOTICE: Jobs that fail due to being unable to resolve mirror.dfw.rackspace.opendev.org can be rechecked. This error was an unexpected side effect of some nodepool configuration changes which have been reverted.16:53
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-plugins master: Add support for the apply_to parameter for policies  https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/91194917:07
f0ois there an easy way to have openstack-ansible create some predefined flavors?17:54
f0otried googling but I got a whole lot of other results unrelated to openstack-ansible including the obvious " just use the openstack module from ansible community"17:54
g3t1nf0what is the standard network called not the OVN the other one the old one ?18:05
f0og3t1nf0: linuxbridge18:05
g3t1nf0thanks 18:05
g3t1nf0jrosser: if I provide a list with selinux policies and firewall ports for the different services is that sufficient for selinux enforcing ?18:06
jrosserf0o: there is improvement in that regard in the next release18:07
g3t1nf0https://paste.rs/LUWz4.txt so far18:07
jrosserwe have ansible role “openstack resources” that can manage those things for you18:07
f0oneat!18:08
noonedeadpunkbut it's only on master18:08
noonedeadpunkpretty much new and still needs some polishing/improvement18:08
noonedeadpunkBut I assume you can try using it18:08
noonedeadpunkhttps://opendev.org/openstack/openstack-ansible-plugins/src/branch/master/roles/openstack_resources/defaults/main.yml18:08
f0oI'll try at least heh18:09
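For reference, the fallback f0o mentioned (the community collection) would look roughly like this; the flavor values are placeholders and the play assumes the clouds.yaml entry named "default" that is typically present on the utility host:

    - hosts: utility_all[0]
      tasks:
        - name: Create a predefined flavor
          openstack.cloud.compute_flavor:
            cloud: default      # assumption: clouds.yaml entry on the utility host
            name: m1.small      # placeholder values below
            vcpus: 1
            ram: 2048
            disk: 20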
noonedeadpunkg3t1nf0: there's also OVS which sits in-between18:09
noonedeadpunkg3t1nf0: um, I guess you would need to teach us a bit more on how it's expected to be set up18:11
g3t1nf0noonedeadpunk: should I continue with the policies or its of no use 18:11
g3t1nf0what you mean ?18:12
g3t1nf0the commands are there 18:12
g3t1nf0you create the custom policy then apply it if there is no policy18:12
noonedeadpunkwell, regarding the firewall, I think me and jrosser have some view on it, though the question of how to implement it in a meaningful way for others is still open18:13
noonedeadpunkregarding selinux - that is super useful thing to hove, imo18:13
noonedeadpunk*have18:13
jrosserI think for selinux we need someone to step forward and offer to develop/support that feature18:15
g3t1nf0jrosser: so just the policies is not enough ?18:15
noonedeadpunkwe for sure can provide some guidance there18:15
jrosserg3t1nf0: I have little to no practical experience with RHEL variants18:16
noonedeadpunkI guess a question here - what if they change18:16
jrosserand therefore motivation to keep the CI in good order for selinux is not high, for me18:16
noonedeadpunkI think selinux can work nicely on ubuntu/debian as well though18:17
g3t1nf0there is selinux for ubuntu and debian, debian 100% not sure ubuntu18:17
g3t1nf0https://wiki.debian.org/SELinux18:17
noonedeadpunkg3t1nf0: so, can you please submit a bug report with your work/policies then, so we won't lose them, unless you wanna push them directly18:18
noonedeadpunkgiven amount of things in this paste - is there some "automated" way to come up with a policy?18:18
g3t1nf0https://wiki.ubuntu.com/SELinux lol guess not so much 18:18
g3t1nf0automated way? maybe go to chatgpt give it the auditd logs with the denials and ask for a policy, no idea I got them from a person I know that runs openstack18:19
jrosserthis is the sort of feature we would want to run gate CI on I think18:20
noonedeadpunkyeah18:20
jrosserso that’s where the challenge comes - needing to maintain that, because if it breaks we have to either fix it or drop it18:20
jrosserotherwise we cannot merge anything18:20
g3t1nf0Let me check with the older versions if much have been changed 18:21
jrosserg3t1nf0: ^ this is why having active contributors is so important - particularly if you have a specific interest in a feature and need that to be maintained18:21
noonedeadpunkI've added that topic to etherpad for PTG18:22
noonedeadpunkg3t1nf0: or at least document/educate a bit on how you'd have to fix that :D18:23
noonedeadpunkBut also a question - how are we to implement their distribution? Per role? On playbook level?18:24
g3t1nf0okay I'll create the list first for bobcat then compare with older releases - at first glance not much has changed since Yoga 18:24
noonedeadpunkg3t1nf0: I guess interesting might be nova/manila18:24
noonedeadpunkneutron as well18:25
g3t1nf0neutron has different configs for different servers I've started with the OVN first I'm on linuxbridge now18:25
g3t1nf0Nova is different only between the nova-scheduler, nova-conductor vs nova-compute18:26
noonedeadpunkI totally agree with jrosser though on maintaining it part - we are running ubuntu everywhere...18:26
noonedeadpunkand I guess main "problem" here - you need to cover either 100% or 0% more or less18:27
noonedeadpunkyou can't cover one scenario and leave another one broken.18:28
g3t1nf0with the list that jrosser gave me I have the policies for all of them 18:28
noonedeadpunkoh, huh18:28
noonedeadpunkI'm really pretty much interested to see what we can do here, as I don't want this work to be wasted18:29
g3t1nf0keystone, nova, glance, horizon, swift, cinder, neutron, octavia, heat, ceilometer, cloudkitty, trove, magnum, sahara, ironic, manila, designate, barbican18:29
jrosserwriting an ansible role to apply this stuff is pretty much the trivial bit18:29
g3t1nf0I can do the ansible part as well 18:30
g3t1nf0its a bit tricky with the neutron because you have to separate the policy where it is installed and which network is being used ovn or linuxbridge18:30
jrosserI’m about to run out of battery here :/18:31
noonedeadpunkyeah, let's continue this discussion later, ok?18:31
noonedeadpunkI need to think about how better to implement that anyway18:32
noonedeadpunkas you said - policy should be aware of the context18:32
g3t1nf0https://www.server-world.info/en/note?os=CentOS_Stream_9&p=openstack_bobcat&f=1 here is his blogs with documentation that he did18:32
noonedeadpunkso we likely need to place them in service roles.18:32
noonedeadpunkand call smth out of that18:32
noonedeadpunkbut then due to the nature of selinux it indeed feels like the more appropriate way would be - run the role against * 18:36
g3t1nf0see you tomorrow guys I need to take the dog for a walk 18:37
noonedeadpunko/18:37
gokhaniHello folks, I am getting weird issues on galera after upgrading primary infra host. galera cluster is working and sync. But on haproxy galera backend is down because it can't reach 9200 port. I can curl 9200 port on galera container but on infra host I can't reach 9200 port. https://paste.openstack.org/show/bKWpCIfVrwSz9v2ZzqiV/ . what can prevent this ?18:46
gokhaniI can curl 3306 port both from host and container.18:46
noonedeadpunkgokhani: I guess you're exiting the controller from some weird IP18:48
noonedeadpunklike having VIP added as /24 instead of /32 or smth like that18:48
noonedeadpunksince service on 9200 will accept connections only from specific IPs18:48
gokhaninoonedeadpunk: I am trying to curl with galera container ip not with VIP18:53
gokhaniit also can not be reached with the galera container ip 18:53
noonedeadpunksrc IP I meant18:56
noonedeadpunkit matters18:56
noonedeadpunkso /etc/systemd/system/mariadbcheck.socket defines an explicit list of IPs from which it can be accessed18:58
noonedeadpunkthrough `IPAddressAllow`18:58
noonedeadpunkwhich is defined by `galera_monitoring_allowed_source` variable18:58
noonedeadpunkand it allows only haproxy mgmt ip and localhost by default18:59
noonedeadpunkhttps://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/group_vars/galera_all.yml#L33-L3918:59
noonedeadpunkgokhani: ^18:59
noonedeadpunkand in case the VIP is added with some netmask like /24 or smth - traffic from the controller might flow through the VIP rather than from the management IP as it should19:00
noonedeadpunkcausing haproxy to mark galera as down19:00
gokhaninoonedeadpunk: I am checking now 19:04
gokhaninoonedeadpunk: thanks, this issue is because of setting ips different from the br-mgmt ips in /etc/openstack_deploy/openstack_user_config.yml. I don't know what this ip is called. 19:11
gokhaniafter allowing infra host br-mgmt ips in /etc/systemd/system/mariadbcheck.socket, it worked 19:12
gokhaniand one last but basic question, my bash is not completing the container name when typing lxc-* commands, which app do I need to install? 19:13
noonedeadpunkyeah19:15
noonedeadpunkI dunno why it's broken, never had time to look at it19:16
noonedeadpunkbut it's difference of `sudo -s` vs `sudo -i`19:16
noonedeadpunkso smth with environment19:16
noonedeadpunkso I just taught myself to use `sudo -i` for auto-completion to work19:17
noonedeadpunkgokhani: you can really use `galera_monitoring_allowed_source` for more persistent fix 19:18
noonedeadpunkbut there might be really more surprises though19:18
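A hedged sketch of that persistent fix in user_variables.yml (the IPs are placeholders; going by the group_vars linked above the value appears to be a space-separated list of allowed source addresses, so check the expected format there):

    # user_variables.yml
    galera_monitoring_allowed_source: "172.29.236.11 172.29.236.12 172.29.236.13 127.0.0.1"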
gokhaninoonedeadpunk: now with sudo -i it worked. I previously tried with sudo -i and it didn't work, but now it does :)  19:19
gokhanithanks noonedeadpunk, I have also added galera_server_proxy_protocol_networks for no surprises :)19:20
noonedeadpunkreally would be good to take time and figure out why sudo -s not working anymore...19:21
noonedeadpunkI haven't managed in couple of years now... so chances are low :D19:21
gokhaninoonedeadpunk: yes it would be good if it could also work with sudo -s19:24
noonedeadpunkalso - auto-logout is not working with sudo -s19:24
noonedeadpunkwhich is way more critical19:25
noonedeadpunkso if you find the reason - let us know)19:25
gokhaninoonedeadpunk: I have completed the upgrade of infra hosts from victoria to antelope and also from focal to jammy.19:25
noonedeadpunksweet19:26
noonedeadpunkthat sounds pretty much like an achievement19:26
gokhaniI only had problems with galera, with these ip issues, because of my env 19:26
noonedeadpunkI guess, we talked last year that upgrades are not that scary and better/easier just to keep things up to date :)19:28
noonedeadpunkbut again - I wonder what kind of issues you had19:28
noonedeadpunkSo you have separate network for SSH and another as openstack mgmt net?19:29
noonedeadpunkbecause then we have smth new in inventory for these cases19:29
gokhanithere is a difference when copying .my.cnf because the galera root user was changed from root to admin  19:29
noonedeadpunkoh, well19:30
noonedeadpunkin fact...19:30
gokhaniYes I have seperate network for ssh 19:30
noonedeadpunkin fact root should use just socket auth19:30
noonedeadpunknot password auth19:30
noonedeadpunkthen there's no need for .my.cnf on galera containers19:31
noonedeadpunkbut we obviously missed that thing out of sight historically19:31
noonedeadpunkfor ssh != mgmt - have you checked on https://opendev.org/openstack/openstack-ansible/src/branch/master/doc/source/reference/inventory/configure-inventory.rst#having-ssh-network-different-from-openstack-management-network ?19:32
gokhaniyes if I am not wrong it is only needed on first utility container 19:32
noonedeadpunkactually - for all hardware hosts19:34
gokhaniohh this is awesome, it is first time I have seen this doc :( 19:34
noonedeadpunklike for everything defined in openstack_user_config19:34
noonedeadpunkah, about mariadb19:34
gokhaniI haven't ever checked this. 19:34
noonedeadpunkyes, mariadb just once19:34
gokhaninoonedeadpunk: thanks a lot, I have learned new things today :) 19:36
ncuxoare there any recommendations for partitioning of the OS drive ?19:40
noonedeadpunkncuxo: I'm not sure we have written that anywhere19:41
noonedeadpunkalso I guess it kinda depends19:41
noonedeadpunkIf you're using LXC containers, and rootfs as backing storage (just folders on the FS) - then they're stored in /var/lib/lxc/ and you might want to have a partition for that19:42
ncuxousually I'm separating /usr /var /var/log /var/log/audit /var/crash /var/tmp /opt /tmp /home /.snapshots19:43
noonedeadpunkthen there're things in /openstack/ - like backup, log, venvs...19:43
noonedeadpunkbasically folders that are bind mounted inside containers19:44
noonedeadpunkand venvs if smth happens to be on bare metal19:44
noonedeadpunkso /openstack might be another thing19:44
ncuxocan I use podman or docker ? or only lxc ?19:46
ncuxoisn't kolla the one that is using containers, or I'm on the wrong place19:48
jrosserncuxo: openstack-ansible uses lxc containers by default, but that is optional19:50
jrosserthese are nothing at all like docker/podman, but can be considered more like virtual machines19:50
ncuxoso for bare metal ?19:50
jrossercan you be more specific?19:50
ncuxookay I need to read some more on LXC then; I thought they were some other kind of container 19:51
jrosserthey are analogous to a vm and run the full systemd and everything19:51
ncuxoso lxc is like virtual machine thus openstack-ansible is on bare metal ?19:51
jrosserjust happen to share the host kernel19:51
jrosserno not really, it’s like having multiple hosts inside one bare metal host19:52
jrosserwhen we say a “metal”19:52
ncuxoI'm perplexed then - why are we using docker or podman at all? does lxc have the same "containerisation"? are there worries of escapes? 19:52
ncuxosorry I'm just finding a whole new world here and I'm a bit confused why I've never heard of those19:53
jrosserwell they are not particularly fashionable in a world full of docker and k8s19:54
jrosserthey are “machine containers” where docker and podman are usually “application containers”19:54
ncuxowhat happens if one of them is stopped can it be restarted with systemd like the others ?19:55
jrosserthere are command line tools to manage lxc containers19:55
jrosserstart / stop / create / delete etc19:55
ncuxookay I'll be back you gave me homework 19:56
jrosserncuxo: see https://linuxcontainers.org/19:57
ncuxoalready there19:57
ncuxocan I use rocky or centos or rhel as lxc ?19:57
ncuxoor everything is alpine 19:57
jrosserfor openstack-ansible we build our own image during deployment that matches the host OS19:59
jrosserwith debootstrap or dnf to build a minimal filesystem19:59
jrosserbut for the general case outside OSA there are many different OS container images available to use20:00
ncuxowhich playbook does that container-lxc-create ?20:00
jrosserthe image build is done in the lxc_hosts role20:00
noonedeadpunkncuxo: you can do just bare metal as well20:01
noonedeadpunkyes, and it's backed up by 2 roles basically: https://opendev.org/openstack/openstack-ansible-lxc_container_create - which handles container creation20:03
noonedeadpunkand https://opendev.org/openstack/openstack-ansible-lxc_hosts where we prepare basic image for them20:03
jrosseryes osa makes the lxc containers optional - they’re not mandatory, and would be arguments both ways depending on circumstances20:03
noonedeadpunkbut yes, if you run rocky - you will get rocky in the container, if you run centos - you will get centos. We actually talked about letting ppl select smth different as well, but it's not high in prio20:04
ncuxothe thing is, if I go with containers anyway, why not deploy openstack on k8s20:04
noonedeadpunkI guess because you can't really use much of k8s features?20:05
noonedeadpunkbut again - we do run CI without containers as well20:05
noonedeadpunkreally easy to opt out from them20:06
noonedeadpunkLike - auto-scaling can't be used as it breaks all kinds of things20:06
jrosserwell this is all factors in choosing your deployment tool, if you want docker then you have to use kolla, if you want bare metal then you have to choose osa, if you want k8s based then there are other choices besides these20:06
noonedeadpunkSelf-healing? But that kinda systemd does pretty well by restarting the service20:06
ncuxoauto scaling of openstack is broken when deployed with or without lxc20:06
noonedeadpunkso to disable any container things, just add `no_containers: true` to openstack_user_config.yml under global_overrides20:07
ncuxosure but I didn't understand what is breaking the autoscaling and self-healing - if I disable the lxc and those get broken, then I prefer to do it with lxc 20:08
noonedeadpunknah20:09
noonedeadpunkthere's no autoscaling regardless20:09
jrosserI feel we are having a very confused conversation here20:09
noonedeadpunkand nothing is broken20:09
noonedeadpunkhttps://opendev.org/openstack/openstack-ansible/src/branch/master/doc/source/reference/inventory/configure-inventory.rst#deploying-directly-on-hosts20:09
noonedeadpunkbut as I said - you can just define no_containers once under global_overrides20:10
noonedeadpunkAnd actually - then you can fully skip the `provider_networks` definition20:10
noonedeadpunkwill just need to define neutron_provider_networks for mappings, which is a way more trivial thing to do20:10
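A hedged sketch of what that bare-metal setup might look like (key names follow the OSA examples, but the physnet/bridge/interface names and VLAN range are placeholders to adjust):

    # openstack_user_config.yml
    global_overrides:
      no_containers: true

    # user_variables.yml
    neutron_provider_networks:
      network_types: "vlan"
      network_vlan_ranges: "physnet1:100:200"
      network_mappings: "physnet1:br-provider"
      network_interface_mappings: "br-provider:bond0"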
ncuxoare there any benefits of running one over the other 20:11
ncuxolxc over baremetal and vise versa 20:11
jrossersome people feel that the without-lxc setup is simpler, and the deployment is certainly faster20:11
noonedeadpunkcontainer is easy to re-create, no conflicting dependencies20:11
noonedeadpunkand yes - +1 to metal benefits20:12
jrosserhowever, day-2 operations and maintenance are certainly easier with the lxc setup20:12
ncuxowith those LXC you made me rethink our whole infra at work20:12
noonedeadpunkwell, I guess ppl can argue on day2 as well though20:12
noonedeadpunkbut I guess it's a matter of taste20:13
noonedeadpunkso we allow both of these and operator is free to choose more or less20:13
jrosserfor me, being able to just delete and reprovision a container if I mess up is a good benefit20:13
ncuxohow is security implemented with LXC? as an example, if I run debian then I first have to harden my os, and then lxc is copying my OS as a base :?20:14
jrosserbut a downside is you now have N times more “systems” to maintain and patch20:14
noonedeadpunkncuxo: pretty much20:14
noonedeadpunkwell, not fully...20:14
noonedeadpunkSorry, I answered wrongly...20:14
ncuxojrosser: why ? isn't it just redeploy with the new image ?20:14
noonedeadpunkncuxo: the lxc image is built from scratch using debootstrap/rpm as jrosser wrote earlier20:15
noonedeadpunkand there's some caching time, when re-running role won't try to update it20:15
jrosserand then ansible deploys “the things” into the image once it’s booted20:15
ncuxoso it is taking my OS as base thus I need to upgrade my OS and recreate all the LXC images ?20:15
noonedeadpunkbut there's variable that can be adjusted20:15
noonedeadpunknah, it's not taking OS as base20:16
jrosserno, it builds a fresh OS filesystem20:16
jrossersee debootstrap tool20:16
noonedeadpunkI guess the biggest difference from docker here is that the image is just a bare minimal OS20:16
ncuxosorry for the confusion guys, never used LXC, I had a totally different idea of the thing20:16
noonedeadpunkit does not contain any services. So if you re-create, then you'd need to re-install service inside it20:17
jrosserif lxc looks interesting, you should also check out lxd/incus20:17
noonedeadpunkyeah, incus is sweet....20:17
jrosserthey are a slightly higher level abstraction of the same thing20:17
jrosserwith api and fancy stuff20:17
noonedeadpunkbut then - you can just run `dnf upgrade` inside lxc container20:17
noonedeadpunkand it will get updated20:17
ncuxoso I can have rhel9 and use LXC to create a minimal rhel9 lxc ?20:18
ncuxowhy I can't find any rhel lxc templates20:18
jrosserwell RHEL is non free and can’t be distributed20:19
noonedeadpunksorry, folks, I need to leave now - already quite late 20:20
noonedeadpunko/20:20
jrossero/20:20
noonedeadpunkbut... if you run el9 - lxc will be also el9 I assume20:20
noonedeadpunkas the way how we build images - is to take repos from the host20:21
ncuxoI have to test this ... I've always build everything on rhel using the ubi images20:21
noonedeadpunkand run dnf to create minimal image of a thing20:21
noonedeadpunkyou can start with AIO really easily20:21
ncuxocan AIO be on a cluster ?20:22
ncuxoor its meant just for one machine20:22
noonedeadpunkgit clone https://opendev.org/openstack/openstack-ansible; cd openstack-ansible; git checkout stable/2023.2; ./scripts/gate-check-commit.sh aio_lxc20:22
noonedeadpunkncuxo: well, for cluster we have MNAIO but it's slightly unmaintained: https://opendev.org/openstack/openstack-ansible-ops/src/branch/master/multi-node-aio20:23
noonedeadpunkI've wanted to re-work this concept for a while and have a couple of ideas, but ENOTIME20:23
ThiagoCMCjrosser, so, back to Ceph quickly. I managed to install Ceph 18 (Reef) on Ubuntu 22.04 with Bobcat, using Ceph Ansible `stable-7.0`! The `stable-8.0` is definitely broken for Ubuntu 22.04 + Bobcat (with Reef) and 24.04. It seems that Ceph Ansible is not good for Ubuntu 24.04, perhaps if downgrading Ansible.20:35
jrosserThiagoCMC: maybe you can make a patch to deploy Reef on OSA master?20:42
jrosserright so currently we certainly test quincy https://zuul.opendev.org/t/openstack/build/55e1e978d267477eb52e21574d4152f6/log/logs/host/apt/history.log.txt#5720:46
nixbuilderI am stumped... I have been trying to fix this all day to no avail... fresh 27.4.0 install and I get this https://paste.openstack.org/show/beNkSK8rCKyzRKTi9LxP/22:28
nixbuilderThis is weird... back to back commands... the first an error the second it works (https://paste.openstack.org/show/bvQOCb1qTdiQDtRcP4Rh/)23:03
opendevreviewMerged openstack/openstack-ansible-os_neutron master: Add VPNaaS OVN support  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/90834123:36
