*** odyssey4me <odyssey4me!~odyssey4m@2a00:23c5:fe26:1001:35e0:d99e:2e06:2738> has quit IRC (Ping timeout: 480 seconds) | 00:29 | |
*** priteau <priteau!~priteau@93.186.40.84> has quit IRC (Ping timeout: 480 seconds) | 02:04 | |
*** priteau <priteau!~priteau@93.186.40.84> has joined #openstack-ansible | 02:06 | |
*** Gilou <Gilou!~me@8VQAABLIT.tor-irc.dnsbl.oftc.net> has quit IRC (Remote host closed the connection) | 04:03 | |
*** Gilou <Gilou!~me@7YZAAAN2L.tor-irc.dnsbl.oftc.net> has joined #openstack-ansible | 04:03 | |
noonedeadpunk | snadge: I guess it's already a running deployment? I wonder if it could be that the galera container was recreated or something like that? | 05:38
noonedeadpunk | but, um... ultimately `galera_cluster_members` is just `groups['galera_all']` by default. So I think the more relevant question here is whether you also happen to override this variable somewhere? | 05:41
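A quick way to check for such an override is to grep both the deployer config and the role defaults; a rough sketch, assuming the usual OSA layout (deployer config under /etc/openstack_deploy, roles under /etc/ansible/roles, paths may differ between releases):

```sh
# Any hit under /etc/openstack_deploy means the deployer config overrides the default
grep -rn "galera_cluster_members" /etc/openstack_deploy/

# The role default itself (galera_cluster_members: "{{ groups['galera_all'] }}")
grep -rn "galera_cluster_members" /etc/ansible/roles/galera_server/defaults/
```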
*** rpittau|afk is now known as rpittau | 07:07 | |
*** luksky <luksky!~luksky@hC1F2D42A.cust.netmar.net.pl> has joined #openstack-ansible | 07:16 | |
*** andrewbonney <andrewbonney!uid417545@id-417545.highgate.irccloud.com> has joined #openstack-ansible | 07:21 | |
*** tosky <tosky!~luigi@dynamic-adsl-78-13-253-141.clienti.tiscali.it> has joined #openstack-ansible | 07:45 | |
*** CeeMac <CeeMac!uid366483@id-366483.brockwell.irccloud.com> has joined #openstack-ansible | 08:05 | |
*** jnamdar <jnamdar!~oftc-webi@static-176-139-14-39.ftth.abo.bbox.fr> has joined #openstack-ansible | 08:32 | |
*** ysastri <ysastri!~oftc-webi@p200300c6ff4c2c257b1e6e9269f84cc9.dip0.t-ipconnect.de> has joined #openstack-ansible | 09:08 | |
*** ysastri <ysastri!~oftc-webi@p200300c6ff4c2c257b1e6e9269f84cc9.dip0.t-ipconnect.de> has quit IRC (Remote host closed the connection) | 09:12 | |
*** odyssey4me <odyssey4me!~odyssey4m@2a00:23c5:fe26:1001:41af:96f:d159:4d3e> has joined #openstack-ansible | 09:35 | |
*** MrClayPole <MrClayPole!~quassel@2a02:8010:6538:104:216:3eff:fed5:272f> has joined #openstack-ansible | 09:58 | |
jnamdar | hi all o/ | 09:59 |
jnamdar | I'm trying to install manila in an AIO (which includes a ceph install), and I have a question about ceph | 10:00 |
jnamdar | I understand OSA uses ceph-ansible to deploy ceph, but I think it's not compatible with my distribution. I use Debian 10 (buster), and at some point the install fails because it's trying to resolve an apt repo at http://download.ceph.com/nfs-ganesha/ | 10:01
jnamdar | and I can't find any repo for debian buster in this folder. So I was wondering, did you guys get to deploy manila (and ceph) from scratch in an AIO? Which distro did you use? | 10:02 |
jrosser | Ubuntu Focal probably has the best support | 10:03
jrosser | it's certainly possible to do Debian, but you'd have to figure out the ceph side of things yourself, maybe with something like https://mirror.croit.io/debian-octopus/ | 10:04
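If you do stay on Debian, a mirror like that would be wired in as an apt source. A sketch only: the suite and component names below are assumptions, so check the mirror's own index for the real layout, and the repository signing key still needs importing (not shown).

```sh
# Hypothetical apt source for the croit Ceph Octopus mirror on buster; verify the
# suite/component names against the mirror's dists/ index before using it
echo "deb https://mirror.croit.io/debian-octopus buster main" \
  | sudo tee /etc/apt/sources.list.d/ceph-octopus.list
sudo apt-get update
```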
jnamdar | jrosser: ok, I'll try with Ubuntu Focal. Actually, it may be a dumb question, but can't I deploy manila without ceph in OSA? | 10:13
noonedeadpunk | jnamdar: you can, why not? As long as the `ceph-nfs` group is not defined (along with the other ceph groups), ceph won't be installed | 10:18
noonedeadpunk | but you would need to configure a proper manila share driver | 10:18
noonedeadpunk | if we're talking about the AIO, we define "reasonable" defaults that we want to test in CI | 10:18
noonedeadpunk | and testing manila with Ganesha NFS feels like the most production-ready option, and the one that's most commonly picked up | 10:19
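A sketch of how you might confirm that no ceph groups are populated in a given deployment, assuming the standard /opt/openstack-ansible checkout and the inventory-manage.py helper that ships with it (the group names in the comment are indicative, not exhaustive):

```sh
# List the generated inventory groups and look for any populated ceph ones
# (ceph-mon, ceph-osd, ceph-nfs, ...); if nothing shows up, the ceph playbooks
# have no hosts to act on and ceph will not be deployed
cd /opt/openstack-ansible
./scripts/inventory-manage.py -g | grep -i ceph || echo "no ceph groups populated"
```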
jrosser | jnamdar: I think it's important to remember that OSA is a tool that lets you deploy many, many combinations of all these things; it's not really a 'shrink-wrap' type installer with particularly fixed ideas of what, for example, "OSA + manila" means | 10:28
jrosser | also, the AIO is primarily a test/development tool with specific scenarios baked in that are of value for our CI | 10:28
jrosser | those may or may not be suitable for what you would want in a production deployment | 10:28
jnamdar | I see. It's only for testing for now, though | 10:29
jrosser | right, but you might want to start moving beyond the AIO to your own constructed config for a lab with OSA + ceph + manila | 10:29 |
jrosser | jnamdar: it might be useful to see how the manila CI tests are done; you can see the patches for the manila role here: https://review.opendev.org/q/project:openstack%252Fopenstack-ansible-os_manila | 10:33
jrosser | then you could look at the output of one of those jobs, like this one: https://zuul.opendev.org/t/openstack/build/0276c13346fa46adb2dac525446a43fa/logs | 10:35
jrosser | and you can see in the log that it's a regular AIO deployment with SCENARIO=aio_metal_manila, from here: https://zuul.opendev.org/t/openstack/build/0276c13346fa46adb2dac525446a43fa/log/job-output.txt#2530 | 10:37
jrosser | so to reproduce that exact same test locally you'd use that scenario environment variable, which would deploy manila and ceph for you | 10:37 |
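For reference, reproducing that locally follows the usual AIO quickstart flow with the scenario exported before bootstrapping; a sketch, with the checkout path and branch left as assumptions (pick the branch you actually want to test):

```sh
# Standard AIO flow, with the scenario exported before bootstrapping
git clone https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
./scripts/bootstrap-ansible.sh

# SCENARIO is read by bootstrap-aio.sh and decides which services (manila, ceph,
# metal vs. lxc, ...) the AIO config will include
export SCENARIO='aio_metal_manila'
./scripts/bootstrap-aio.sh

cd /opt/openstack-ansible/playbooks
openstack-ansible setup-hosts.yml setup-infrastructure.yml setup-openstack.yml
```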
*** mgariepy <mgariepy!~mgariepy@0002bc4d.user.oftc.net> has quit IRC (Ping timeout: 480 seconds) | 10:55 | |
*** odyssey4me is now known as Guest2088 | 11:21 | |
*** odyssey4me <odyssey4me!~odyssey4m@2a00:23c5:fe26:1001:542d:e6b:e1ef:a346> has joined #openstack-ansible | 11:22 | |
*** Guest2088 <Guest2088!~odyssey4m@2a00:23c5:fe26:1001:41af:96f:d159:4d3e> has quit IRC (Ping timeout: 480 seconds) | 11:25 | |
*** mgoddard- <mgoddard-!~mgoddard@240.240.125.91.dyn.plus.net> has joined #openstack-ansible | 11:35 | |
*** mgoddard <mgoddard!~mgoddard@238.240.125.91.dyn.plus.net> has quit IRC (Ping timeout: 480 seconds) | 11:38 | |
*** mgoddard- is now known as mgoddard | 11:38 | |
jnamdar | jrosser: thanks a lot, I was using SCENARIO=aio_lxc_manila. I'll check out your links | 11:52 |
jrosser | that's the same, just with the services in containers | 11:53
jrosser | the _metal_ variant is used a lot in CI to keep the runtime down | 11:53
jnamdar | I don't mind using metal as well, tbh. No haproxy then, though, right? | 11:53
jnamdar | horizon* | 11:54 |
jrosser | no, there is still haproxy | 11:54 |
jnamdar | I meant horizon | 11:54
jrosser | only if the scenario calls for it | 11:54 |
jrosser | this is why you should maybe consider something AIO-like but with more of a production-style config, if that's what you want to test | 11:54
jrosser | the manila scenario will only contain what is needed to run some basic tempest tests against manila, which doesn't need horizon | 11:55
jnamdar | yeah I see | 11:55 |
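To see what a given scenario actually pulls in, the scenario handling lives in the bootstrap-host role inside the openstack-ansible tree; a rough sketch, assuming that file layout hasn't moved in the release you're on:

```sh
# The AIO scenario logic and per-scenario config live in the bootstrap-host test
# role; grepping for a component name shows roughly what enabling it brings in
cd /opt/openstack-ansible
grep -rn "manila" tests/roles/bootstrap-host/ | head -n 20
```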
*** mgariepy <mgariepy!~mgariepy@styx-204.ccs.usherbrooke.ca> has joined #openstack-ansible | 12:09 | |
snadge | how do I figure out where galera_cluster_members is defined? | 12:41
snadge | I'm not overriding it; scratching my head trying to figure out why the galera server install is failing | 12:42
snadge | the docs show that it's set to groups['galera_all'], but I don't know how to verify whether it's set to that, or what that even is | 13:05
CeeMac | snadge: 'galera_all' comes from the environment skeleton (env.d) and the OSA inventory | 13:51
CeeMac | https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/env.d/galera.yml#L16-L19 | 13:51 |
CeeMac | you should be able to use 'inventory-manage.py -g' to list out all of the groups; I can't remember if you can limit that to a particular group, or whether you just have to manually parse the output | 13:53
CeeMac | but basically, in your o_u_c (openstack_user_config.yml), if you have database_hosts: defined, those hosts will all belong to galera_all; otherwise it will be anything defined under shared-infra_hosts: | 13:56
CeeMac | depending of course on whether you have it deployed to metal or containers, it'll be either the hosts themselves or the associated galera containers running on those hosts | 13:57
jrosser | snadge: codesearch is really useful for finding where things are used/defined https://codesearch.opendev.org/?q=galera_cluster_members&i=nope&files=&excludeFiles=&repos= | 14:08 |
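Putting CeeMac's and jrosser's suggestions together, a sketch of inspecting both the generated groups and the config that feeds them (assuming the standard paths; the host names in your config will differ):

```sh
cd /opt/openstack-ansible

# Show every generated group and its members; galera_all should list either your
# infra hosts (metal) or their galera containers (LXC)
./scripts/inventory-manage.py -g

# Check which hosts feed it: an explicit database_hosts: block wins; otherwise it
# is whatever sits under shared-infra_hosts: in openstack_user_config.yml
grep -A3 -E '^(database_hosts|shared-infra_hosts):' /etc/openstack_deploy/openstack_user_config.yml
```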
*** rpittau is now known as rpittau|afk | 16:08 | |
snadge | according to the inventory, galera_all contains the galera children, which is just infra1_galera_container-8156b01d... so why is it trying to run the galera server install inside the utility container? | 23:27
snadge | TASK [galera_server : Fail when the host is not in galera_cluster_members] | 23:55 |
snadge | container_name: "infra1_utility_container-47ac6ab1" | 23:55 |
snadge | well, of course the utility container is not in galera_cluster_members... why is setup-infrastructure trying to install galera into the utility container? that's so weird, it didn't happen during testing | 23:56
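A sketch of how to narrow that down: list the hosts the galera play would actually target, then check where the utility container appears in the saved inventory. The container name is the one from the task output above; --list-hosts is a standard ansible-playbook option, which the openstack-ansible wrapper should pass through.

```sh
# Dry-run the host selection for the galera play - nothing gets changed
cd /opt/openstack-ansible/playbooks
openstack-ansible galera-install.yml --list-hosts

# See which groups in the saved inventory mention the utility container; it should
# not appear anywhere near the galera groups
grep -n "infra1_utility_container-47ac6ab1" /etc/openstack_deploy/openstack_inventory.json
```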