jrosser | noonedeadpunk: i was just starting to think about the glusterfs stuff - spatel has a use case already for glance here https://satishdotpatel.github.io/openstack-ansible-glance-with-glusterfs/ | 07:59 |
---|---|---|
jrosser | if we also want to use the same thing for the repo server then there is opportunity to use some common stuff? | 07:59 |
noonedeadpunk | mm, and there's also https://github.com/gluster/gluster-ansible-collection | 08:00 |
noonedeadpunk | but regarding glance - tbh - we should just re-use systemd-mount role | 08:01 |
noonedeadpunk | and same thing for repo eventually | 08:01 |
noonedeadpunk | as creating a volume is likely just a matter of running a module | 08:02 |
noonedeadpunk | so not sure how common that is | 08:02 |
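For context on the "just running a module" point above, a minimal sketch (the module and its parameters are real, but the volume name, brick path, and host group are placeholder assumptions, not anything from the actual patches):

```yaml
# Hypothetical task: create and start a replica-3 gluster volume using the
# gluster_volume module (now shipped in the gluster.gluster collection).
# "gv-images", the brick path and the host group are illustrative only.
- name: Create a replicated gluster volume for glance images
  gluster.gluster.gluster_volume:
    state: present
    name: gv-images
    bricks: /gluster/bricks/gv-images
    replicas: 3
    cluster: "{{ groups['glance_all'] }}"
    force: true
```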
jrosser | yes exactly - so if in AIO or 'default' deployment we would want to create the gluster stuff | 08:02 |
noonedeadpunk | will make glance patch to rework what we have... | 08:03 |
jrosser | but i wonder where we do that | 08:03 |
noonedeadpunk | I think we should do that just in repo-server role? | 08:03 |
noonedeadpunk | but dunno | 08:03 |
jrosser | yeah, i was not sure what was best | 08:03 |
jrosser | if to decouple the deployment of gluster from any repo / glance / whatever | 08:03 |
jrosser | or to make it part of one of those | 08:04 |
jrosser | and then were actually to run the gluster daemons - infra1/2/3 metal? or do they deserve their own env.d definitions | 08:05 |
noonedeadpunk | well we obviously have 2 paths here :) just make a drop-in replacement for lsyncd which means we just install it inside containers. And provide an option for deployers to disable installation of gluster (instead another mount can be provided - like cephfs). | 08:07 |
noonedeadpunk | Or try to make it global | 08:07 |
jrosser | right | 08:07 |
jrosser | i was thinking to do the drop-in replacement of lsyncd but then remembered spatel/glance stuff and was not so sure | 08:08 |
noonedeadpunk | and tbh I'd say that making it global would have quite limited value atm. | 08:08 |
jrosser | it might also be the case that someone already has NFS or something which would work for both of those | 08:08 |
noonedeadpunk | yup | 08:08 |
jrosser | so i was thinking making it external / optional made more sense | 08:08 |
noonedeadpunk | I would most likely just use cephfs... | 08:08 |
jrosser | right - and if we use systemd_mount role and defaults in repo and glance role then any of that is possible | 08:09 |
jrosser | so that comes back to where in openstack-ansible repo do we deploy glusterfs..... | 08:09 |
noonedeadpunk | I'd just deploy it inside repo containers tbh | 08:12 |
noonedeadpunk | we can indeed make some simple small role we call from there and place in plugins repo | 08:13 |
noonedeadpunk | when we need to setup gluster | 08:13 |
noonedeadpunk | as running glusterfs-server on bare metal controller is meh.... | 08:14 |
noonedeadpunk | as then you can also pass another volume or mount point inside container, which will be used for creating fs | 08:15 |
noonedeadpunk | (likely) | 08:15 |
noonedeadpunk | or it requires some kernel module so we can't run it inside container? | 08:16 |
noonedeadpunk | as if you already have something and don't need osa to deploy gluster there would be 2 options - either rely on systemd-mount or just bind mount inside the container with list_of_bind_mounts | 08:18 |
jrosser | maybe fuse is involved | 08:18 |
noonedeadpunk | but I agree that it might make sense to create small simple role in plugins so it could be re-used | 08:20 |
jrosser | urgh https://github.com/gluster/glusterfs/issues/1239 | 08:21 |
noonedeadpunk | pfff | 08:33 |
noonedeadpunk | that is really bad... | 08:48 |
noonedeadpunk | I mean - thinking about bare-metal installation, we indeed should likely provide env.d | 08:49 |
noonedeadpunk | and some upgrade path | 08:49 |
noonedeadpunk | and imo it will be quite messy and opinionated.... | 08:49 |
noonedeadpunk | as to avoid gluster being installed you would need to be very careful during upgrade | 08:51 |
jrosser | it could be made opt-in | 09:04 |
jrosser | i'll do some experiments | 09:06 |
jrosser | i have glusterfs working with the client and server both inside lxc | 10:52 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_glance master: Use systemd_mount native syntax for mounts https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/837550 | 10:56 |
noonedeadpunk | jrosser: oh, that's nice | 10:56 |
noonedeadpunk | you got it working on focal? | 10:57 |
jrosser | only thing it needed was the fuse device node creating | 10:57 |
jrosser | yes | 10:57 |
noonedeadpunk | well, requirement of fuse is fair kind of | 10:57 |
jrosser | the install is really pretty trivial too, i'll try to make some ansible for the repo_server role | 10:58 |
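The fuse workaround jrosser mentions can be sketched roughly like this (the device numbers c 10:229 are the standard ones for /dev/fuse; the task wording, config path and delegation are assumptions, not the actual patch):

```yaml
# Hypothetical tasks: allow and create /dev/fuse inside an LXC container
# so the glusterfs fuse client can mount volumes. Paths are illustrative.
- name: Allow the fuse character device in the container config
  lineinfile:
    path: "/var/lib/lxc/{{ inventory_hostname }}/config"
    line: "lxc.cgroup.devices.allow = c 10:229 rwm"
  delegate_to: "{{ physical_host }}"

- name: Create the fuse device node inside the container
  command: mknod /dev/fuse c 10 229
  args:
    creates: /dev/fuse
```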
damiandabrowski[m] | just an open suggestion: do You think it's also worth considering some lightweight replicated object storage like minIO? then we can replace nginx&lsyncd with it | 11:05 |
damiandabrowski[m] | (ofc it will require some changes in python_venv_build role) | 11:06 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Replace glance_nfs_client https://review.opendev.org/c/openstack/openstack-ansible/+/837551 | 11:07 |
noonedeadpunk | I'd say it's more complicated than nginx plus gluster | 11:07 |
noonedeadpunk | as you need to store wheels somewhere first and then upload to minio? | 11:07 |
noonedeadpunk | but not sure - wasn't really running minio much. But I guess to get object replicated you need to pass it through api anyway | 11:09 |
noonedeadpunk | and then to clean things up it might be also more complicated... | 11:09 |
noonedeadpunk | but dunno | 11:09 |
damiandabrowski[m] | yes, but now IIRC we also build wheels in tmp and then move them to repo dir. So the change would be to replace 'mv' task with s3 upload (minio is s3 compatible) | 11:10 |
damiandabrowski[m] | but i agree, it may be just easier to replace lsyncd with gluster, we may consider minIO if we'll see some problems with gluster | 11:11 |
damiandabrowski[m] | as object storage seems to be more reliable way for our use case :D | 11:11 |
noonedeadpunk | well, I kind of like that idea.... | 11:13 |
noonedeadpunk | jrosser: wdyt? | 11:14 |
noonedeadpunk | as basically it's single binary... | 11:15 |
opendevreview | Merged openstack/openstack-ansible-galera_server master: Updated from OpenStack Ansible Tests https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/835669 | 11:15 |
jrosser | that involves loadbalancer too? | 11:16 |
jrosser | i never used minio but i did read the docs recently | 11:16 |
noonedeadpunk | I think yes | 11:16 |
jrosser | and was not really happy about it at any scale | 11:16 |
noonedeadpunk | nah, it's mainly to test things out locally :) | 11:16 |
jrosser | but for small things maybe ok | 11:16 |
noonedeadpunk | we should not have much traffic though | 11:17 |
noonedeadpunk | basically gluster at scale kind of sucks afaik as well | 11:17 |
jrosser | feels like whatever we do should leverage systemd_mount | 11:17 |
jrosser | then there is a nice decoupling | 11:17 |
noonedeadpunk | not really if it's s3.... | 11:18 |
noonedeadpunk | well, we could mount it as s3fs ofc... | 11:18 |
jrosser | didnt you do some s3fs stuff? | 11:18 |
noonedeadpunk | to manage files | 11:18 |
jrosser | ^that yes | 11:18 |
noonedeadpunk | it was suuuuper slow just in case... but I was running that on a slow ceph cluster as well, so not sure if it's s3fs design or the cluster sucking | 11:19 |
noonedeadpunk | but as an alternative we could just use that https://docs.ansible.com/ansible/latest/collections/community/aws/s3_sync_module.html | 11:19 |
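A hedged sketch of what using that module for wheel uploads could look like (the bucket name, wheel path and minio endpoint are made up, and the endpoint connection parameter name has varied across community.aws releases):

```yaml
# Hypothetical task: push built wheels to an S3-compatible (minio) bucket
# instead of moving them into the repo dir.
- name: Sync the wheel directory to the object store
  community.aws.s3_sync:
    bucket: osa-wheels
    file_root: /var/www/repo/os-releases
    mode: push
    endpoint_url: "https://minio.internal:9000"  # parameter name may differ by collection version
```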
noonedeadpunk | but yeah, python_venv_build is not designed for that | 11:20 |
noonedeadpunk | could be adjusted though.... | 11:21 |
noonedeadpunk | but you're right minio with s3fs is suuuuper simple thing to do (considering it would sync objects) | 11:22 |
noonedeadpunk | ok, minio is not something that would fly | 11:24 |
noonedeadpunk | `All the nodes running distributed MinIO setup are recommended to be homogeneous, i.e. same operating system, same number of disks and same network interconnects.` | 11:24 |
noonedeadpunk | And basically that would be EC pools | 11:24 |
damiandabrowski[m] | but i'm trying to figure out if 'bucket replication' would be better for us | 11:24 |
noonedeadpunk | btw I also wonder how gluster would like different OS and versions of itself... | 11:25 |
damiandabrowski[m] | https://docs.min.io/minio/baremetal/replication/enable-server-side-multi-site-bucket-replication.html | 11:26 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Add galera_data_dir variable https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/831552 | 11:27 |
* noonedeadpunk wonders if `mc mirror` has something to do with Midnight Commander | 11:28 | |
damiandabrowski[m] | true, that's a bit confusing :D | 11:28 |
noonedeadpunk | at this point I find gluster more simple tbh | 11:29 |
noonedeadpunk | but it was really good call | 11:30 |
noonedeadpunk | I mean - as long as we manage that with systemd_mount - we can use whatever we want on backend | 11:30 |
damiandabrowski[m] | You may be right, the most important for us is probably what tool would be more reliable when it comes to recovery, node failures etc. :D but it's hard to answer this question now | 11:31 |
damiandabrowski[m] | but it sounds promising that jrosser already has gluster running on lxc | 11:32 |
jrosser | i think in an hour or so i have some ansible for glusterfs - then we can test and see what happens | 11:33 |
noonedeadpunk | to be fair - we don't store stateful data there. it's unfortunate to lose wheels, but not anything really bad | 11:33 |
jrosser | like delete / recreate containers and see if it recovers | 11:33 |
noonedeadpunk | so we can just drop all repo containers and recreate them anytime if anything | 11:34 |
NeilHanlon | noonedeadpunk: there be dragons with Minio, too.. in my experience. Several times at $lastjob they introduced serious regressions into master | 11:47 |
jrosser | first attempt at the repo server gluster stuff https://paste.opendev.org/show/bqSs3XDGb1oUk7h9mA3i/ | 12:59 |
jrosser | i will move it to the plugins repo next as a role | 12:59 |
noonedeadpunk | `Ensure that the mount point exists` should not be needed as that is handled inside systemd_mount role https://opendev.org/openstack/ansible-role-systemd_mount/src/branch/master/tasks/systemd_mounts.yml#L27-L40 | 13:06 |
jrosser | ah cool | 13:06 |
noonedeadpunk | but it looks too simple :D | 13:09 |
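For reference, the systemd_mount-based hook being discussed might surface to deployers as something like this (the variable name and volume layout are assumptions; the mount entry keys follow the role's documented format):

```yaml
# Hypothetical override: back /var/www/repo with any shared filesystem the
# systemd_mount role understands - gluster here, but cephfs/NFS work too.
repo_server_systemd_mounts:
  - what: "infra1:/gv-repo"
    where: "/var/www/repo"
    type: "glusterfs"
    options: "backup-volfile-servers=infra2:infra3"
    state: "started"
    enabled: true
```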
opendevreview | Siavash Sardari proposed openstack/openstack-ansible-os_cinder master: Add the ability to disable send_actions option in cinder-volume-usage-audit service. https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/837570 | 13:50 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Do not encrypt SSL for CentOS distro path https://review.opendev.org/c/openstack/openstack-ansible/+/837571 | 14:03 |
*** dviroel is now known as dviroel|mtg | 14:15 | |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_glance master: Do not deploy api-paste for CentOS distro deployment https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/837576 | 14:21 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-plugins master: Add role for creating simple glusterfs clients/servers https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/837582 | 14:55 |
noonedeadpunk | #startmeeting openstack_ansible_meeting | 15:02 |
opendevmeet | Meeting started Tue Apr 12 15:02:26 2022 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:02 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:02 |
opendevmeet | The meeting name has been set to 'openstack_ansible_meeting' | 15:02 |
noonedeadpunk | #topic rollcall | 15:02 |
jrosser | o/ hello | 15:02 |
noonedeadpunk | hey everyone o/ | 15:02 |
damiandabrowski[m] | hey! (on a phone for next 20min) | 15:03 |
mgariepy | hey | 15:03 |
noonedeadpunk | #topic office hours | 15:04 |
noonedeadpunk | So PTG results. I had sent email tomorrow, but seems I mixed up a bit emails it should go from, so I guess it haven't landed on ML and waits for moderation.... | 15:05 |
noonedeadpunk | s/tomorow/today lol | 15:06 |
noonedeadpunk | I also started looking at CentOS distro jobs... And they are broken in such stupid/nasty ways.... | 15:06 |
noonedeadpunk | I would actually say - rhel'ish way | 15:07 |
noonedeadpunk | I think that intention to drop them was quite reasonable.... | 15:08 |
jrosser | so related - Z will not support python3.6..... | 15:08 |
jrosser | which i think means that Y *has* to be centos8->9 transition release? | 15:08 |
noonedeadpunk | yup.... | 15:08 |
jrosser | /o\ | 15:09 |
noonedeadpunk | well, iirc centos 8 has py3.8 | 15:09 |
noonedeadpunk | but as usual without libs built | 15:09 |
noonedeadpunk | as we run ansible in 3.8 venv right now | 15:10 |
noonedeadpunk | https://opendev.org/openstack/openstack-ansible/src/branch/master/scripts/bootstrap-ansible.sh#L76 | 15:11 |
jrosser | hrrm | 15:11 |
noonedeadpunk | I wonder if we even use that for service venv | 15:12 |
jrosser | it would be possible - though i expect ceph and libvirt bindings are going to be troublesome | 15:12 |
noonedeadpunk | nah, we don't now | 15:13 |
jrosser | i don't expect it to go well tbh | 15:13 |
noonedeadpunk | yeah, exactly | 15:13 |
jrosser | i think we tried this right at the start for centos8 | 15:13 |
noonedeadpunk | I just recalled that we tried indeed :) but as you said - they don't ship bindings for $reason | 15:13 |
jrosser | @NeilHanlon whats the position on python versions for Rocky? | 15:14 |
jrosser | eg Zed needing > 3.6 | 15:14 |
noonedeadpunk | Btw I will propose patches for switching to Y tomorrow first thing in the morning | 15:17 |
jrosser | cool | 15:17 |
noonedeadpunk | I also have a proposal of adding damiandabrowski[m] to the core reviewers team. If nobody is against it, I will send an ML so everybody can vote. | 15:22 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-repo_server master: Add facility to store repo contents on a remote mount https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/837587 | 15:22 |
noonedeadpunk | If there're any other proposals or nominations - I'm all ears :) | 15:22 |
noonedeadpunk | s/adding/nominating | 15:24 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-repo_server master: Remove all code for lsync, rsync and ssh https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/837588 | 15:24 |
damiandabrowski[m] | thank You! I'd be grateful for joining the core team (and now when I'm done with distro upgrades, I'll have much more time to contribute) | 15:25 |
damiandabrowski[m] | btw. let me copy-paste my message from previous week as it didn't get much attention | 15:26 |
noonedeadpunk | I guess we need also to patch systemd_mount role to install gluster_client? | 15:27 |
mgariepy | congrats damiandabrowski[m] ! | 15:27 |
damiandabrowski[m] | Will You be able to have a look at my tempest changes and leave initial review? https://review.opendev.org/q/topic:tempest-damian-2021-12 | 15:27 |
damiandabrowski[m] | There is a huge relation chain but I don't think we can avoid it as all of these changes focus on the same files :/ However, they are pretty straightforward so I hope all of them will be merged. | 15:27 |
damiandabrowski[m] | So my idea is to: gather initial reviews from You -> (make corrections) -> rebase all of them -> merge all. What do You think? | 15:27 |
damiandabrowski[m] | thanks! :D | 15:27 |
noonedeadpunk | sounds like a plan:) | 15:27 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible master: Use glusterfs as a backend for synchronising repo server contents https://review.opendev.org/c/openstack/openstack-ansible/+/837589 | 15:29 |
jrosser | ^ this is basically working locally though i expect upgrades to break | 15:29 |
jrosser | maybe we discussed this enough already earlier but adding systemd_mount into repo_server as the hook into external shared storage seems neat | 15:30 |
damiandabrowski[m] | nice, i didn't expect gluster changes to land that fast :D | 15:31 |
noonedeadpunk | These are usually lest words before huge issues | 15:32 |
noonedeadpunk | *last | 15:32 |
noonedeadpunk | :D | 15:32 |
jrosser | yeah indeed | 15:33 |
noonedeadpunk | jrosser: what I mean is that systemd_mount should be capable of mounting gluster on their own | 15:33 |
noonedeadpunk | so it should likely include client part when type is gluster | 15:34 |
jrosser | oh yes, it does and it works | 15:34 |
jrosser | the glusterfs role i made has bools for installing the client / server parts but i don't yet know what to connect those to | 15:34 |
noonedeadpunk | eventually we have there logic already https://opendev.org/openstack/ansible-role-systemd_mount/src/branch/master/vars/main.yml#L25-L37 | 15:35 |
noonedeadpunk | So it could be relatively independent | 15:35 |
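Extending that vars logic for gluster could be as small as one more mapping entry (the structure is guessed from the linked nfs/ceph handling; the exact variable name in the role is an assumption, though the package names are the real distro ones):

```yaml
# Hypothetical vars addition: make `type: glusterfs` pull in the client
# packages automatically, mirroring the existing nfs/cephfs entries.
systemd_mount_packages:
  glusterfs:
    debian:
      - glusterfs-client   # Debian/Ubuntu fuse client package
    redhat:
      - glusterfs-fuse     # RHEL/CentOS fuse client package
```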
damiandabrowski[m] | btw. noonedeadpunk i received Your 'Zed PTG results' email so it actually has landed properly on ML ;) | 15:36 |
noonedeadpunk | oh, ok | 15:36 |
jrosser | only ugly bit is needing the fuse device for lxc | 15:36 |
noonedeadpunk | I haven't received it :/ | 15:37 |
noonedeadpunk | yeah..... | 15:37 |
jrosser | and that has to happen in-line in the code rather than in the lxc base image to cover upgrades | 15:37 |
noonedeadpunk | Just in case I already landed some ugly stuff there https://opendev.org/openstack/ansible-role-systemd_mount/src/branch/master/tasks/systemd_install.yml#L16-L47 but yeah, it's not container-specific, but distro specific | 15:39 |
noonedeadpunk | so I agree that in role there shouldn't be lxc crap preferably... | 15:40 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-repo_server master: Add facility to store repo contents on a remote mount https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/837587 | 15:41 |
noonedeadpunk | so dunno what to do better here | 15:41 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-plugins master: Add role for creating simple glusterfs clients/servers https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/837582 | 15:42 |
noonedeadpunk | I think we can start from that for sure and then re-arrange code if decide to do so. | 15:44 |
jrosser | do we have a priority order for doing the things from https://etherpad.opendev.org/p/osa-Z-ptg | 15:44 |
noonedeadpunk | very good question. I think ssl and keystone rbac should go first ? | 15:46 |
noonedeadpunk | ubuntu 22.04/centos 9 is based on time on hands? | 15:46 |
noonedeadpunk | as basically centos 9 likely rely on gluster (which thankfully landed) | 15:47 |
jrosser | yeah, thats removed the need to lsyncd and EPEL in the repo server, which is ++ | 15:47 |
mgariepy | we will be able to remove lsyncd ;) | 15:48 |
noonedeadpunk | nice to see how everybody loves lsyncd :p | 15:48 |
damiandabrowski[m] | if nobody took it yet, I can work on 'cover octavia with PKI role' when I'm done with tempest changes | 15:48 |
mgariepy | lsyncd is not fun ;) but the good part about it is that stuff can be regenerated :) haha | 15:49 |
jrosser | damiandabrowski[m]: that would be good to get some insight into the workings of the PKI role | 15:50 |
jrosser | and also i think there is some careful handling needed for upgrades | 15:50 |
damiandabrowski[m] | ack | 15:51 |
jrosser | the bigger piece of SSL work is doing the backends for all the services | 15:53 |
noonedeadpunk | yup, but also a good one. As in some places we're reinventing the wheel to achieve that goal | 15:55 |
noonedeadpunk | which would be really awesome to land things and get them supported | 15:55 |
noonedeadpunk | (that said, they are quite historical) | 15:56 |
noonedeadpunk | #endmeeting | 16:00 |
opendevmeet | Meeting ended Tue Apr 12 16:00:12 2022 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 16:00 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2022/openstack_ansible_meeting.2022-04-12-15.02.html | 16:00 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/openstack_ansible_meeting/2022/openstack_ansible_meeting.2022-04-12-15.02.txt | 16:00 |
opendevmeet | Log: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2022/openstack_ansible_meeting.2022-04-12-15.02.log.html | 16:00 |
*** dviroel|mtg is now known as dviroel|lunch | 16:02 | |
*** dviroel|lunch is now known as dviroel | 16:58 | |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-repo_server master: Remove all code for lsync, rsync and ssh https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/837588 | 17:09 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-repo_server master: Add facility to store repo contents on a remote mount https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/837587 | 17:14 |
jrosser | noonedeadpunk: so we already have /var/www as a bind mount onto the host in /openstack/<repo_server> | 17:21 |
jrosser | i wonder if that is going to interfere with /var/www/repo being a filesystem mount | 17:21 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible master: Use glusterfs to synchronise repo server contents https://review.opendev.org/c/openstack/openstack-ansible/+/837589 | 17:28 |
*** dviroel is now known as dviroel|out | 20:55 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!