Wednesday, 2022-11-16

*** ysandeep|out is now known as ysandeep05:41
*** ysandeep is now known as ysandeep|brb06:16
*** ysandeep|brb is now known as ysandeep06:57
*** akahat|ruck is now known as akahat|rover|lunch07:35
*** akahat|rover|lunch is now known as akahat|rover08:28
kleiniI am on W and I see a lot of "AMQP server on 10.20.150.201:5671 is unreachable: Server unexpectedly closed connection. Trying again in 1 seconds.: OSError: Server unexpectedly closed connection". Is it somehow possible to make these connections to RabbitMQ more stable?08:34
opendevreviewAndrej Babolcai proposed openstack/openstack-ansible-os_swift master: Add support for running object-servers Per Disk  https://review.opendev.org/c/openstack/openstack-ansible-os_swift/+/86468508:38
*** ysandeep is now known as ysandeep|lunch08:52
dokeeffe85Morning, another quick one please. I pointed my deploy host at a subnet 10.37.100.0/24 (used a few IPs for the controller & computes). I now need to point it at 3 different machines in a 10.37.115.0/24 subnet. I changed all the IPs in openstack_user_config and user_variables and finally regenerated the openstack_inventory.json. My compute1, compute2 and infra1 IPs have all updated, but all the container "ansible_host" entries have kept the 09:16
dokeeffe85"10.37.110" addresses. They're also in other places too, as seen here https://paste.openstack.org/show/bdojHYvfdc3M1QNEr97B/, so the question being: is there somewhere else I need to change the IPs or regenerate any files, or will they be updated for the fields in the paste when ansible is run? 09:16
jrosserdokeeffe85: changing the subnet for the management network after deployment is one of the trickiest things you can do09:23
jrosserand that's not a scenario we expect the ansible playbooks to be able to deal with *at all*09:23
jrosserpart of the complexity is that some of the IP addresses, like rabbitmq, get written into the config files for pretty much all of the services09:26
jrosserso there is no way to do this without some kind of downtime09:26
dokeeffe85Oh ok, We thought maybe once we got the deploy host set up that we could just keep it as it was and point it at a new cluster so we could just deploy a full openstack after making all the adjustments we needed on our test environment09:27
jrosserisnt that a slightly different question?09:29
jrosser"can i use a deploy host for more than one deployment" vs. "can i adjust all the mgmt IP for a deployment i already have"09:29
dokeeffe85My bad, sorry if I asked incorrectly. So yes your question is phrased better, can I do that?09:30
jrosseri think that if you have more than one deployment then really you should have more than one set of /etc/openstack_deploy/*09:33
jrosserso there would be two ways to do that: have multiple deploy hosts, which you could do with physical machines, VMs, LXC/LXD or maybe even docker. This is what I do, one LXD deploy host per deployment09:35
jrosseralternatively you can have multiple sets of /etc/openstack_deploy data on the same host, controlled by the OSA_CONFIG_DIR environment variable09:36
jrossersee https://opendev.org/openstack/openstack-ansible/src/branch/master/scripts/openstack-ansible.rc#L1509:36
jrosseranother thing you can do is keep the openstack_user_config pretty minimal, so it expresses really the deployment specific things such as IP addresses of the physical hosts09:37
jrosserthen customisation that is common you put in user_variables.yml09:38
jrossernote that you can have multiple user_variables files, with the pattern user_variables*.yml09:38
noonedeadpunkfwiw the pattern is user_*.yml09:52
noonedeadpunk(that's the reason why/how user_secrets.yml is loaded, actually)09:53
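
A minimal sketch of the multi-deployment layout described above; the directory names are illustrative assumptions, and only OSA_CONFIG_DIR and the user_*.yml loading pattern come from the discussion:

    # One config tree per deployment; the directory names are arbitrary examples.
    mkdir -p /etc/openstack_deploy_siteA /etc/openstack_deploy_siteB

    # Select which deployment the openstack-ansible wrapper should use;
    # openstack-ansible.rc falls back to /etc/openstack_deploy when this is unset.
    export OSA_CONFIG_DIR=/etc/openstack_deploy_siteA

    # Shared customisation can be split across several files: anything matching
    # user_*.yml (user_variables.yml, user_variables_ceph.yml, user_secrets.yml, ...)
    # is picked up automatically.
    ls "${OSA_CONFIG_DIR}"/user_*.yml
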
noonedeadpunkwell, I've figured out why sahara breaks... It's the jsonschema version, and basically sahara itself is just broken for Zed 09:53
*** ysandeep|lunch is now known as ysandeep10:04
dokeeffe85Ok, jrosser, thanks for that. I will look into that right away. 10:11
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Use /healthcheck URI for backends  https://review.opendev.org/c/openstack/openstack-ansible/+/86442410:20
dokeeffe85Hey jrosser, I promise I'll leave you alone after this one (for today). I copied /opt/openstack-ansible/etc/openstack_deploy/ to /etc/openstack_deploy_OSA/ and did my config files from scratch. I changed export OSA_CONFIG_DIR="${OSA_CONFIG_DIR:-/etc/openstack_deploy}" to the new directory and generated the inventory file to look at it, but changed it to --config /etc/openstack_deploy_OSA/. Now the question I have is: it took the variables from 10:40
dokeeffe85the files in that directory but wrote the inventory file to the old directory (/etc/openstack_deploy/), so do I need to add a flag to tell it where to write the inventory to, and where to use it from, when I run the playbooks? Sorry for all the questions10:40
jrosserif it's written to the wrong place then that sounds like a bug10:42
jrosserbut also the idea is that you set the environment variable OSA_CONFIG_DIR in your shell, you should not edit the OSA files to do that10:43
jrosserOSA_CONFIG_DIR="${OSA_CONFIG_DIR:-/etc/openstack_deploy}"    this means "take the value set in the OSA_CONFIG_DIR environment variable, falling back to /etc/openstack_deploy if it is not defined"10:44
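
A quick illustration of that shell parameter expansion; the override value /etc/openstack_deploy_OSA is just the example from this conversation:

    # Unset: the fallback applies.
    unset OSA_CONFIG_DIR
    echo "${OSA_CONFIG_DIR:-/etc/openstack_deploy}"    # prints /etc/openstack_deploy

    # Exported in the shell before running openstack-ansible: the export wins.
    export OSA_CONFIG_DIR=/etc/openstack_deploy_OSA
    echo "${OSA_CONFIG_DIR:-/etc/openstack_deploy}"    # prints /etc/openstack_deploy_OSA
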
jrossernoonedeadpunk: you use OSA_CONFIG_DIR ?10:51
noonedeadpunknot as of today10:51
noonedeadpunkBut used that a couple of years ago...10:52
noonedeadpunkdokeeffe85: I think the question is - how did you generate the inventory? Running the dynamic_inventory script manually, or by running some openstack-ansible playbook?10:53
noonedeadpunkAs I don't think the dynamic_inventory script consumes OSA_CONFIG_DIR, but it does have a flag that can be passed to it10:54
*** dviroel|afk is now known as dviroel11:20
dokeeffe85Ah ok, noonedeadpunk I used this command inventory/dynamic_inventory.py --config /etc/openstack_deploy_OSA/ to generate it (just as a test to see what it would generate in the file) but realised it was generated in the other directory. jrosser, sorry I thought you meant edit the openstack.rc file to change the directory11:24
dokeeffe85Ok, not sure you'd recommend it but I renamed the /etc/openstack_deploy dir and used the new one as /etc/openstack_deploy and the inventory is now correct11:53
jrosserdokeeffe85: there is also the standard ansible command `ansible-inventory` if you want to generate / dump the inventory data12:05
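
For reference, a hedged sketch of the two ways to generate or inspect the inventory mentioned above; the checkout path /opt/openstack-ansible and the output file are assumptions:

    # Regenerate the inventory against a specific config directory (the --config
    # flag discussed above); the JSON is also written to stdout.
    /opt/openstack-ansible/inventory/dynamic_inventory.py \
        --config /etc/openstack_deploy_OSA/ > /tmp/inventory.json

    # Or dump the inventory with stock Ansible tooling (jrosser's suggestion).
    # Note: per the discussion above, the script may not honour OSA_CONFIG_DIR,
    # so verify which /etc/openstack_deploy* it actually read.
    ansible-inventory -i /opt/openstack-ansible/inventory/dynamic_inventory.py --list
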
*** ysandeep is now known as ysandeep|brb12:06
*** ysandeep|brb is now known as ysandeep12:17
*** ysandeep is now known as ysandeep|afk12:47
dokeeffe85Thanks jrosser, will keep that in mind. For now what I did seems to suit, so let's hope the cinder override I put in works and I'll be very happy :)12:51
*** ysandeep|afk is now known as ysandeep13:26
*** akahat|rover is now known as akahat|ruck|afk14:03
noonedeadpunkI'm looking at all these ansible_host everywhere in the code and just (>д<)14:45
noonedeadpunkKind of wonder if we should use management_address or openstack_service_bind_address instead.14:45
noonedeadpunkfor things like https://opendev.org/openstack/openstack-ansible-galera_server/src/branch/master/defaults/main.yml#L12914:46
*** ysandeep is now known as ysandeep|retro14:58
*** akahat|ruck|afk is now known as akahat|ruck15:00
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Add zookeeper deployment  https://review.opendev.org/c/openstack/openstack-ansible/+/86475015:06
anskiyI have a question about using Ceph with OpenStack: if I'm supposed to move the `cinder-volume` service to the control-plane nodes, would I then need to manually configure a frontend in HAProxy and add the `cinder-volume` service in Keystone by hand, so that it points to the load-balanced internal address? The reason I'm asking is that I see there is an `os-vol-host-attr:host` attribute for the volume and I wonder what it would be set to in15:08
opendevreviewDmitriy Rabotyagov proposed openstack/ansible-role-zookeeper master: Initial commit to zookeeper role  https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/86475215:08
noonedeadpunkanskiy: I totally didn't get the part about haproxy. But os-vol-host-attr:host will be set to the name of the container (or wherever cinder-volume runs). In active/active mode it doesn't matter much, since there's a `cluster` parameter that is set for each volume. So each cinder-volume service that is in the same "cluster" will be able to manage volumes regardless of this attribute15:11
noonedeadpunkFor active/passive you would need to set a backend_name when defining the backend; this way all cinder-volume services will be discovered with this name, so each of them can manage these volumes if needed15:12
noonedeadpunkIf it's not active/active and the cinder-volume service names are unique, then in case of an outage of the service you won't be able to perform any action on volumes assigned to it, except updating the host using cinder-manage (to re-assign them to another service)15:14
anskiynoonedeadpunk: ah, I see, so there would be another attribute for active/active. This perfectly answers my question, thank you.15:14
noonedeadpunkShould be that: https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/templates/cinder.conf.j2#L17-L1915:15
anskiygot it :)15:15
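
A hedged sketch of the recovery path noonedeadpunk describes for the case without a shared cluster/backend name; the volume UUID and host names are placeholders, and the cinder-manage syntax should be checked against your release:

    # See which cinder-volume service currently "owns" a volume.
    openstack volume show <volume-uuid> -c os-vol-host-attr:host

    # Active/passive without a shared name: if that service is permanently gone,
    # re-assign its volumes to a surviving service with cinder-manage
    # (run where cinder-manage and cinder.conf are available).
    cinder-manage volume update_host \
        --currenthost old-host@rbd-backend --newhost new-host@rbd-backend

    # With active/active (the `cluster` option set for every cinder-volume, as in
    # the template linked above), this manual re-assignment is normally not needed.
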
*** ysandeep|retro is now known as ysandeep16:02
*** ysandeep is now known as ysandeep|out16:08
*** dviroel is now known as dviroel|lunch16:32
noonedeadpunkfolks, any idea how to properly echo stuff to a socket? Like `echo isro | nc 127.0.0.1 2181` but with some proper ansible module?17:04
*** dviroel|lunch is now known as dviroel17:27
jrossernoonedeadpunk: other than writing a very trivial module and putting it in the plugins repo, just with `command`?17:49
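
A minimal sketch of the `command`/`shell` fallback mentioned here, pending a proper module; the target (localhost) and the presence of nc on it are assumptions:

    # Ad-hoc equivalent of `echo isro | nc 127.0.0.1 2181`; ansible.builtin.command
    # cannot express the pipe, hence the shell module.
    ansible localhost -m ansible.builtin.shell \
        -a "echo isro | nc 127.0.0.1 2181"

A tiny custom module using Python's socket library would avoid the nc dependency, which is what the plugins-repo suggestion above is about.
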
jrosseralso is there any way to see your zookeeper module/role given the redirect in gitea? a mirror somewhere else?17:59
jrosseroh i found https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/86475218:20
jrosserare we going to want to do TLS here too?18:20
*** dviroel is now known as dviroel|afk19:34
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-specs master: Add proposal for enabling TLS on all internal communications  https://review.opendev.org/c/openstack/openstack-ansible-specs/+/82285019:38
nixbuilderI just re-installed using OSA 25.1.1 and my installation broke. I was previously using OSA 25.0.0.0rc1. The installation failed when it tried to start haproxy on the infra nodes; it was missing a certificate in /etc/haproxy/ssl.19:48
nixbuilderNow that I have rolled back to 25.0.0.0rc1 the installation is working fine.19:50
opendevreviewDamian Dąbrowski proposed openstack/ansible-role-uwsgi master: Install OpenSSL development headers  https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/86478319:54
noonedeadpunkjrosser: yup, for sure we want tls there19:57
noonedeadpunkI haven't done it yet though. wanted to do mvp first and add complexity right after that19:58
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-specs master: Add proposal for enabling TLS on all internal communications  https://review.opendev.org/c/openstack/openstack-ansible-specs/+/82285020:13
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Change 'vip_bind' to 'vip_address' in templates/service-redirect.j2  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/86478420:16
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Accept both HTTP and HTTPS also for external VIP during upgrade  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/86478520:16
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Fix warnings in haproxy config  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/86478620:16
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-specs master: Add proposal for enabling TLS on all internal communications  https://review.opendev.org/c/openstack/openstack-ansible-specs/+/82285020:20
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Add zookeeper deployment  https://review.opendev.org/c/openstack/openstack-ansible/+/86475020:32
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-os_glance master: Add support for TLS to Glance  https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/82101120:32
opendevreviewDmitriy Rabotyagov proposed openstack/ansible-role-zookeeper master: Initial commit to zookeeper role  https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/86475220:33
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Add zookeeper deployment  https://review.opendev.org/c/openstack/openstack-ansible/+/86475020:34
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Add zookeeper deployment  https://review.opendev.org/c/openstack/openstack-ansible/+/86475020:58
jrossernixbuilder: you need to provide some debug info if you want some insights21:05
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible master: Add support for enabling TLS to haproxy backends  https://review.opendev.org/c/openstack/openstack-ansible/+/82109021:09
damiandabrowskihmm does anybody know how I can mark somebody else's change as "work in progress"? seems like i have this option available only for my own changes21:11
damiandabrowskii'm talking about https://review.opendev.org/c/openstack/openstack-ansible/+/82109021:11
jrosserdamiandabrowski: you can override haproxy_stick_table per service21:12
jrossersee https://github.com/openstack/openstack-ansible-haproxy_server/blob/06e76706c7818843137add470c8c6cc2166eed62/releasenotes/notes/custom-stick-tables-1c790fe223bb0d5d.yaml21:12
damiandabrowskiyeah i see we're doing that for horizon, but decided to skip it for now as I'm trying to focus on internal TLS21:12
jrosserso haproxy_stick_table in group_vars applies to all services, but if you want to remove it from galera because it makes no sense then set service.haproxy_stick_table=[]21:13
jrosseri was looking at this https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/864786/121:13
damiandabrowskiahh that's actually right, stick tables for galera do not make any sense as only 1 backend is active21:14
jrosserand we primarily use them for rate limiting on the external API as well, which also makes no sense for galera21:14
damiandabrowskiokok i'll try to fix it in a moment21:15
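
A sketch of what that per-service override might look like in user_variables.yml; the wrapper variable name haproxy_galera_service_overrides is a hypothetical placeholder (check your branch's haproxy group_vars for the real service definition), and only haproxy_stick_table: [] is the knob confirmed above:

    # Hypothetical user_variables.yml fragment; verify the variable name against
    # the actual galera service definition before using it.
    cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
    # Disable stick tables for galera only; haproxy_stick_table in group_vars
    # still applies to every other service.
    haproxy_galera_service_overrides:
      haproxy_stick_table: []
    EOF
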
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible master: [WIP] Add support for enabling TLS to haproxy backends  https://review.opendev.org/c/openstack/openstack-ansible/+/82109021:17
jrosserthe WIP status thing is interesting21:23
jrosserbecause we by convention use [WIP] in the commit message and anyone can push a new patch to take that away21:24
jrosseror add it21:24
damiandabrowskiyeah and I tried to use `git review` with -w and got this:21:24
damiandabrowski ! [remote rejected]     HEAD -> refs/for/master%topic=tls-backend,wip (only users with Toogle-Wip-State permission can modify Work-in-Progress)21:25
damiandabrowskiso i guess it's something with privileges21:25
damiandabrowskiwell, I can live with having [WIP] in commit message :D 21:25
jrosserdamiandabrowski: example of manipulating rules for WIP flag https://review.opendev.org/c/openstack/project-config/+/86393121:30
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible master: Disable stick tables for galera  https://review.opendev.org/c/openstack/openstack-ansible/+/86479221:37
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible master: [WIP] Add support for enabling TLS to haproxy backends  https://review.opendev.org/c/openstack/openstack-ansible/+/82109021:37
jrosserdamiandabrowski: did you need another patch somewhere to ensure that the ssh headers are present in the repo container before we try to build uwsgi for the first time?21:58
jrosserssl headers i mean - to ensure we always build uwsgi with TLS capability21:59
damiandabrowskiit's not covered now, i thought we decided it's not needed as we're about to bump uWSGI version22:08
damiandabrowskiso theoretically speaking, during the next OpenStack upgrade all environments will have uWSGI upgraded and the OpenSSL dev headers installed22:08
damiandabrowskido you think we still need to find a way to ensure that the "correct" uWSGI is installed?22:09
damiandabrowskiahh and there's one more thing: the ssl dev headers need to be present in the container where uWSGI is running, not on the repo container22:15
damiandabrowskiAFAIK we don't store uwsgi wheel on repo container22:17
jrosserisn't it required at the point that the wheel is built though?22:20
jrosserbecause the repo container is where the compiling / linking against the ssl libraries takes place22:21
jrosserthe ssl-dev package will contain C function headers that are no good at runtime22:21
damiandabrowskiactually I'm looking for an answer why I don't have uWSGI wheel on my repo container22:22
damiandabrowskido you have any idea?22:22
jrosserdoes it come direct from pypi instead?22:22
damiandabrowskisounds like uWSGI was built on the glance container and the wheel was stored in /root/.cache22:25
damiandabrowskihttps://paste.openstack.org/raw/bwQZqAj0aHU7Z5RP8LIF/22:26
jrosserthat looks like the install22:28
jrosserrather than compliation22:28
jrosserthe implementation is in C so it would need an entire toolchain which i'm not sure is in the glance container22:28
damiandabrowskihttps://paste.openstack.org/show/bQ3ssU5gpOpu2xXkG1Rm/22:30
damiandabrowskihmm there is information about 'uWSGI compiling server core'22:30
damiandabrowskimaybe it's the reason why gcc is set as a requirement? https://opendev.org/openstack/ansible-role-uwsgi/src/branch/master/vars/source_install.yml#L1922:31
jrosser^ that is on the repo server?22:31
damiandabrowskino, it's all on glance container22:32
jrosserwell that is interesting22:33
jrosseri wonder if we intend that to be the case22:33
jrosserit means we install gcc everywhere and do the build everywhere22:35
jrosserso to ensure that uwsgi has TLS support we need to add libssl-dev or whatever it was to the list of packages along with gcc in the uwsgi role22:36
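
A sketch of that fix as a user_variables.yml override; the variable name uwsgi_devel_distro_packages and the Debian/Ubuntu package name libssl-dev are assumptions to verify against the uwsgi role's vars/source_install.yml linked above:

    # Hypothetical override: make sure the SSL development headers are installed
    # wherever uWSGI gets compiled, alongside gcc.
    cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
    uwsgi_devel_distro_packages:
      - gcc
      - libssl-dev
    EOF
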
jrosserthen it's a different question again if we want to be using the repo server here or not22:37
damiandabrowskiyeah, it sounds convenient to build uWSGI on the repo host and store the wheel there22:39
damiandabrowskiI only wonder if there was some reason behind current behavior...22:39
damiandabrowskii'll try to find something tomorrow as my brain is not working anymore... :D 22:40
damiandabrowskihave a good night22:40

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!