Friday, 2024-01-12

08:14 *** noonedeadpunk_ is now known as noonedeadpunk
08:18 <noonedeadpunk> TheCompWiz: for a clean/new deployment feel free to re-run with `-e galera_ignore_cluster_state=true -e galera_force_bootstrap=true` as suggested by the role
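
A minimal sketch of such a re-run, assuming the standard /opt/openstack-ansible checkout and the galera-install.yml playbook name (both are assumptions, not taken from the chat):

    # Re-bootstrap the galera cluster while ignoring the recorded cluster
    # state; only appropriate for a clean/new deployment, as noted above.
    cd /opt/openstack-ansible/playbooks
    openstack-ansible galera-install.yml \
        -e galera_ignore_cluster_state=true \
        -e galera_force_bootstrap=true
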
09:22 <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible master: DNM Allow to skip pre-step bootstrap  https://review.opendev.org/c/openstack/openstack-ansible/+/905290
10:26 <noonedeadpunk> ugh, spent a couple of hours trying to understand how dynamic_inventory checks for existing containers and where it decides to create a new one - giving up....
10:27 <noonedeadpunk> It's just /o\
10:27 <noonedeadpunk> I thought it would be quite an easy hack in dictutils, but no, the decision to generate a new UUID comes from somewhere else....
10:32 <noonedeadpunk> I guess I should have whined earlier to find the thing I'm looking for lol
11:23 <admin1> TheCompWiz, why are your container interfaces modified?
11:29 <Tadios> jrosser: the comment here https://github.com/openstack/openstack-ansible-ops/blob/1c0bd2ae111b57ba9366587d1c4542655b215d14/elk_metrics_6x/README.rst?plain=1#L54C21-L54C21 states that kibana nodes are also elasticsearch coordination nodes, and haproxy_backend_nodes for elastic-logstash is set to groups['Kibana'], is this accurate?
11:29 <Tadios> or should the var haproxy_backend_nodes for elastic-logstash be set to groups['elastic']? I'm confused
11:36 <admin1> noonedeadpunk, https://docs.openstack.org/openstack-ansible/latest/admin/upgrades/major-upgrades.html => "This release is under development. The current supported release is 2023.2." -- what does the 28.0.0 tag correspond to, and is it a SLURP release?
11:37 <admin1> "This guide provides information about the upgrade process from 2023.1 to 2023.2 for OpenStack-Ansible." -> maybe we can also have corresponding tags?
11:41 <noonedeadpunk> admin1: tags are on the main page: https://docs.openstack.org/openstack-ansible/latest/
11:41 <noonedeadpunk> 2023.2 is not SLURP and it's 28.0.0
11:48 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Treat dashes/underscores as interchangeable symbols for container names  https://review.opendev.org/c/openstack/openstack-ansible/+/905432
11:55 <admin1> aha .. so easy .. odd is SLURP and even is non-SLURP in our case as well
11:56 <admin1> actually not :D
12:18 <admin1> upgrading a platform from 27.0.1 -> 28.0.0
12:30 <noonedeadpunk> 27 is SLURP btw
12:31 <noonedeadpunk> admin1: fwiw our versioning is actually the same as for nova
12:32 <noonedeadpunk> also all release information can be found here: https://releases.openstack.org/
12:32 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Do not use underscores in container names  https://review.opendev.org/c/openstack/openstack-ansible/+/905433
13:28 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Do not use underscores in container names  https://review.opendev.org/c/openstack/openstack-ansible/+/905433
13:32 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Allow env.d to contain underscores in physical_skel  https://review.opendev.org/c/openstack/openstack-ansible/+/905438
13:32 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Allow env.d to contain underscores in physical_skel  https://review.opendev.org/c/openstack/openstack-ansible/+/905438
13:32 <noonedeadpunk> I _think_ now we should be able to rename our env.d and get rid of the Ansible warning about using dashes in group names
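
For context, a hypothetical custom env.d skeleton of the kind these patches target, with underscores rather than dashes in the physical_skel group names (the file name and group names are invented for illustration):

    # Print a hypothetical /etc/openstack_deploy/env.d/custom.yml fragment;
    # the group names use underscores so Ansible does not warn about dashes.
    cat <<'EOF'
    physical_skel:
      custom_containers:
        belongs_to:
          - all_containers
      custom_hosts:
        belongs_to:
          - hosts
    EOF
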
13:33 <mgariepy> I'm a bit confused by that. I somewhat recall something about glance and uwsgi/haproxy and ceph? https://bugs.launchpad.net/glance/+bug/1916482
13:36 <noonedeadpunk> Yeah, and I think there was even a fix proposed for that
13:38 <mgariepy> this? https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/879699
13:54 <noonedeadpunk> mgariepy: I think this: https://review.opendev.org/c/openstack/glance_store/+/885581
13:55 <noonedeadpunk> or well, not using uwsgi works for us as well
14:10 <mgariepy> I really don't like using code that isn't merged..
14:12 <mgariepy> even more so when the patch has been sitting for months.
14:14 <noonedeadpunk> yeah
15:05 <mgariepy> not sure what has changed, but the snapshot was working last October and is broken now. that's really fun.
15:14 <admin1> I am upgrading one env to 28.0.0 .. how to validate/check if this patch -> https://review.opendev.org/c/openstack/neutron-fwaas/+/845756 is there, or how to use this kind of custom patch when using OSA
15:34 <noonedeadpunk> admin1: well, I'd wait for the next point release
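
One rough way to answer the first half of that question (whether a given fix is already contained in the neutron-fwaas commit a release pins) using plain git; the pinned SHA has to be looked up for your release, and both values below are placeholders:

    # Check whether the merged fix is an ancestor of the commit your
    # deployment pins for neutron-fwaas (placeholder values, not real SHAs).
    FIX_SHA='<sha-of-the-merged-fix>'
    PINNED_SHA='<sha-pinned-by-your-osa-release>'
    git clone https://opendev.org/openstack/neutron-fwaas
    cd neutron-fwaas
    git merge-base --is-ancestor "$FIX_SHA" "$PINNED_SHA" \
        && echo "fix included" || echo "fix not included"
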
15:35 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_ironic stable/2023.2: Stop generating ssh keypair for ironic user  https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/903543
15:36 <noonedeadpunk> damiandabrowski: would be sweet if you could take a look at these backports: https://review.opendev.org/c/openstack/openstack-ansible/+/905321 https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/905081
15:36 <noonedeadpunk> So I could propose bumps for 2023.2
15:41 <NeilHanlon> I can review if you need
15:45 <noonedeadpunk> yes, thanks, that will work as well :)
16:00 <dmsimard[m]> btw if you'll be at upcoming FOSDEM: https://www.meetup.com/brussels-openinfra-meetup-group/events/298420649/
16:05 <noonedeadpunk> dmsimard[m]: thanks!
16:05 <noonedeadpunk> Ugh, maybe it's worth going there after all....
16:05 * noonedeadpunk got paperwork sorted
16:42 <NeilHanlon> thanks dmsimard[m]!
16:42 * NeilHanlon wonders if we just bullied noonedeadpunk into coming to FOSDEM...
16:45 <noonedeadpunk> need to check if there's anything interesting in the schedule :D
16:45 <noonedeadpunk> (except beer)
16:49 <NeilHanlon> heh
16:52 <dmsimard[m]> ¯\_(ツ)_/¯
18:18 <opendevreview> Merged openstack/openstack-ansible-lxc_hosts stable/2023.2: Fix resolved config on Debian  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/905081
18:41 <opendevreview> Merged openstack/openstack-ansible stable/2023.2: Return back /healtcheck URI verification  https://review.opendev.org/c/openstack/openstack-ansible/+/905321
19:58 <TheCompWiz> Is there any way to get ansible to re-deploy the database on a galera container? ... I'm not sure why, but after setup-infrastructure.yml, the database has none of the permissions set, and it appears as if no data has been set either.
20:03 <TheCompWiz> https://paste.openstack.org/show/bo2OQsf0Han0dHhfC7EX/
21:29 <jrosser> TheCompWiz: the database will only have root and admin users after setup-infrastructure
21:30 <jrosser> each service that gets deployed during setup-openstack creates the needed db and user at the point they are required
21:31 <TheCompWiz> jrosser: ok... then any ideas why keystone fails on the task "Create database for service"?
21:31 <jrosser> well, seeing the output of the task with no_log disabled would help
21:32 <TheCompWiz> see above :)
21:33 <jrosser> does the MySQL cli client work on the utility container?
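
For reference, a quick way to run that check from the infra host, assuming an LXC-based deployment where OSA's utility container carries a /root/.my.cnf pointing at the internal VIP (the container-name match below is an assumption):

    # Find the utility container and run the MySQL client inside it; the
    # client reads /root/.my.cnf and connects through haproxy to the VIP.
    UTIL=$(lxc-ls -1 | grep utility)
    lxc-attach -n "$UTIL" -- mysql -e 'SHOW DATABASES;'
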
21:35 <TheCompWiz> ERROR 2013 (HY000): Lost connection to server at 'handshake: reading initial communication packet', system error: 11
21:35 <TheCompWiz> hmmmm.
21:35 <jrosser> ok, so that cli connects to the db via haproxy
21:35 <jrosser> haproxy needs to think that the db backend is up
21:36 <jrosser> you can check that using hatop on the haproxy node
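
A sketch of that check, assuming the stats socket path OSA configures by default (/var/run/haproxy.stat; verify against your haproxy config):

    # Interactive view of frontends/backends and their up/down state:
    hatop -s /var/run/haproxy.stat
    # Or non-interactively: dump the stats and filter for the galera backend.
    echo "show stat" | socat unix-connect:/var/run/haproxy.stat stdio | grep -i galera
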
21:44 <TheCompWiz> hmm... showing all of the "galera" containers as down... but none are.
21:45 <TheCompWiz> I can lxc-attach to each of them... and just run "mysql" and query everything.
21:46 <TheCompWiz> (using root@localhost with no password)
21:55 <jrosser> there is an http healthcheck run on port 9200, which is what haproxy uses to determine the backend health
21:55 <jrosser> from the perspective of haproxy that healthcheck must be failing if the backends are marked as down
22:04 <TheCompWiz> I can connect to the mysql db directly from the haproxy node using the credentials that haproxy is supposedly using...
22:04 <TheCompWiz> why is it using an http healthcheck?
22:08 <jrosser> because haproxy cannot determine the galera cluster status just by probing port 3306
22:08 <jrosser> haproxy does not have credentials for the database
22:09 <jrosser> a service is run on the galera nodes on port 9200 which returns good/bad depending on whether each backend database node is successfully part of the db cluster
22:10 <jrosser> it would be wrong to mark a db backend as good if it was “working” but not actually part of the cluster
22:11 <jrosser> so the http-based healthcheck gives haproxy a more meaningful status to use when deciding if the db backends are usable or not
22:11 <jrosser> try using curl on port 9200 to the IP of the galera container from the haproxy node
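
For example (the address is a placeholder for a galera container's management IP):

    # From the haproxy node, query the clustercheck-style service directly:
    curl -i http://172.29.239.15:9200/
    # A synced node typically returns HTTP 200; a node that is not part of
    # the cluster returns 503, which is what makes haproxy mark it as down.
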
22:17 <TheCompWiz> hmmm... I think I figured it out. The stupid mariadb check is only accepting connections from the "external_lb_vip_address" and not the "internal_lb_vip_address"
22:19 <TheCompWiz> ... or in short... from my haproxy server... it has 2 IPs it *could* use... and it picked the wrong one.
22:19 <TheCompWiz> ... /grumble.
22:21 <jrosser> TheCompWiz: it is defined here https://github.com/openstack/openstack-ansible/blob/master/inventory/group_vars/galera_all.yml#L33-L39
22:23 <jrosser> this is happening because you have specified 192.168.122.100 as the IP for your loadbalancer node in haproxy_hosts
22:24 <jrosser> it is not correct to use either of the VIP addresses as the IP for that node in openstack_user_config
22:25 <jrosser> in an H/A deployment the internal and external VIP will float between all haproxy nodes using keepalived
22:25 <jrosser> you need a unique and fixed address on the mgmt network for each node in openstack_user_config, which then becomes `management_address`
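
A hypothetical openstack_user_config.yml fragment illustrating that layout (all addresses and the node name are placeholders):

    # Print the fragment; the haproxy node gets its own fixed mgmt address,
    # distinct from both the internal and external VIPs.
    cat <<'EOF'
    global_overrides:
      internal_lb_vip_address: 172.29.236.9
      external_lb_vip_address: 192.168.122.100
    haproxy_hosts:
      lb1:
        ip: 172.29.236.11   # unique per-node address -> management_address
    EOF
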
