-opendevstatus- NOTICE: The maintenance of the review.opendev.org Gerrit service is now complete and service has been restored. Please alert us in #opendev if you have any issues. Thank you | 03:25 | |
*** dpawlik0 is now known as dpawlik | 05:10 | |
*** mgoddard- is now known as mgoddard | 06:04 | |
*** dpawlik5 is now known as dpawlik | 06:50 | |
*** dpawlik7 is now known as dpawlik | 07:06 | |
*** dpawlik3 is now known as dpawlik | 08:30 | |
*** dpawlik4 is now known as dpawlik | 08:44 | |
*** rpittau|afk is now known as rpittau | 09:31 | |
opendevreview | Merged openstack/openstack-ansible-ceph_client stable/victoria: Enable fmt package installation from EPEL https://review.opendev.org/c/openstack/openstack-ansible-ceph_client/+/800871 | 10:41 |
opendevreview | Merged openstack/openstack-ansible-ceph_client stable/wallaby: Enable fmt package installation from EPEL https://review.opendev.org/c/openstack/openstack-ansible-ceph_client/+/800870 | 10:45 |
kleini | With which OpenStack release would it be best to upgrade the OS on the target hosts from bionic to focal? | 11:20 |
noonedeadpunk | Victoria/Wallaby | 11:21 |
kleini | Is there already some guide on how to do this? one controller into maintenance, install focal there, repo/utility container installation, use that repo for other hosts to upgrade and so on? | 11:23 |
jrosser | andrewbonney: ^ one for you perhaps too | 11:29 |
noonedeadpunk | kleini: we don't have one now. The only one we had was for 16.04 > 18.04, but I think the logic is still applicable | 11:30 |
noonedeadpunk | https://docs.openstack.org/openstack-ansible/rocky/admin/upgrades/distribution-upgrades.html | 11:30 |
noonedeadpunk | Would be great if we could add some docs to the master with regards to distro upgrades | 11:30 |
noonedeadpunk | For example, instead of forgetting the rabbitmq cluster node, I think we could leverage drain (upgrade mode) | 11:32 |
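The two approaches contrasted above can be sketched as commands. This is a hedged illustration only: the wrapper function and node name are hypothetical, although `rabbitmq-upgrade drain` (maintenance mode, RabbitMQ 3.8+) and `rabbitmqctl forget_cluster_node` are real RabbitMQ CLI subcommands.

```python
# Hypothetical helper contrasting the two approaches from the chat:
# draining a node for maintenance vs. forgetting it from the cluster.
# The subcommands are real; the wrapper is just an illustration.
def rabbitmq_maintenance_cmd(node, drain=True):
    if drain:
        # newer approach: put the node into maintenance mode in place
        return ["rabbitmq-upgrade", "drain"]
    # older approach: remove the node entirely and re-cluster it later
    return ["rabbitmqctl", "forget_cluster_node", f"rabbit@{node}"]

print(rabbitmq_maintenance_cmd("infra1"))
print(rabbitmq_maintenance_cmd("infra1", drain=False))
```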
jrosser | also really not sure about this "primary node is last" for current deployments | 11:34 |
jrosser | noonedeadpunk: andrewbonney is doing bionic->focal for V here this week on a small cloud, should be able to get some new docs from that i think? | 11:35 |
jrosser | i would be selecting the node that was the lsyncd master and doing that *first*? | 11:36 |
noonedeadpunk | I think, that lsyncd master then should have both bionic and focal wheels. | 11:37 |
noonedeadpunk | But if we're talking about non-master - I guess lsyncd shouldn't -delete other distro folders... but not sure here | 11:38 |
jrosser | seems some confusing terms though - i guess i'm assuming that primary == venv_build_host | 11:38 |
jrosser | this is all different now from rocky | 11:39 |
noonedeadpunk | Um, but we select venv_build_host based on the OS as well | 11:39 |
noonedeadpunk | So in case we have only 1 random repo with focal - it would be venv_build_host anyway | 11:39 |
jrosser | good point | 11:39 |
noonedeadpunk | In terms of lsyncd it hasn't changed that much | 11:39 |
noonedeadpunk | So I think it still has some logic. The only concern here is that lsyncd might delete focal content, and that it needs to be manually synced later to primary container | 11:41 |
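The point noonedeadpunk makes above, that a lone focal repo container would become the venv build host automatically because selection is keyed on the target OS, can be sketched roughly like this. This is not OSA's actual code; function and host names are hypothetical.

```python
# Hypothetical sketch (not OSA's real selection logic): pick a wheel
# build host whose distro matches the target, so a single focal repo
# container naturally becomes venv_build_host during the upgrade.
def pick_venv_build_host(repo_hosts, target_distro):
    """repo_hosts: list of (hostname, distro) tuples."""
    matching = [host for host, distro in repo_hosts if distro == target_distro]
    if not matching:
        raise LookupError(f"no repo host runs {target_distro}")
    return matching[0]  # first matching host wins

hosts = [("repo1", "ubuntu-18.04"), ("repo2", "ubuntu-20.04")]
print(pick_venv_build_host(hosts, "ubuntu-20.04"))  # repo2
```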
*** sshnaidm|afk is now known as sshnaidm | 13:10 | |
kleini | How can I recreate the repo-container/os-releases/21.2.0/ubuntu-18.04-x86_64/requirements_constraints.txt? | 14:12 |
noonedeadpunk | It should be created with any env built iirc | 14:33 |
kleini | I did not run OSA for a long time now and wanted to check with the healthcheck-infrastructure.yml, that everything is fine. How can I run some env built? | 14:34 |
jrosser | kleini: there isn't a file specifically called os-releases/21.2.0/ubuntu-18.04-x86_64/requirements_constraints.txt | 14:54 |
jrosser | there is /var/www/repo/os-releases/<osa-release>/<component>-<version>-[constraints|requirements|source-constraints|global-constraints].txt | 14:57 |
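The naming scheme jrosser spells out can be illustrated with a small path helper. The scheme itself is from the chat; the helper and the example component/version values are hypothetical.

```python
# Illustrative only: compose the constraint-file paths described above,
# /var/www/repo/os-releases/<osa-release>/<component>-<version>-<kind>.txt.
# The component name and version below are made-up example values.
def constraint_paths(osa_release, component, version):
    base = f"/var/www/repo/os-releases/{osa_release}"
    kinds = ["constraints", "requirements", "source-constraints", "global-constraints"]
    return [f"{base}/{component}-{version}-{kind}.txt" for kind in kinds]

for path in constraint_paths("21.2.0", "keystone", "17.0.1"):
    print(path)
```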
jrosser | noonedeadpunk: do you use the standard OSA setup for memcached, i.e not via the haproxy? | 15:10 |
noonedeadpunk | Do both | 15:19 |
noonedeadpunk | Still have some prod env with memcahed under haproxy, but all current ones are not (maybe worth changing this) | 15:19 |
jrosser | we're just having a lot of weirdness with the bionic->focal upgrade | 15:23 |
jrosser | as soon as the first memcached is taken down the performance goes horrible across the cloud | 15:23 |
jrosser | and just looking at one service like keystone, seems that there are CPUs * processes * threads (i.e. lots and lots), each possibly having their own connection pool to memcached | 15:24 |
jrosser | and something is happening like the oslo.cache stuff isn't marking the missing memcached as dead for long enough so you kind of race against that for when the next request hits the same memcached pool | 15:26 |
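The behaviour jrosser suspects, that a failed memcached is not marked dead for long enough, can be sketched as a minimal dead-retry pool. This is NOT oslo.cache's real implementation, just the idea: a failed server is skipped until `dead_retry` seconds pass, after which clients race back onto it.

```python
import time

# Minimal sketch of dead-server marking (not oslo.cache's actual code):
# once marked dead, a server is excluded from the live set until
# dead_retry seconds have elapsed, then it becomes eligible again.
class DeadRetryPool:
    def __init__(self, servers, dead_retry=30):
        self.servers = list(servers)
        self.dead_retry = dead_retry
        self.dead_until = {}  # server -> time at which it may be retried

    def mark_dead(self, server, now=None):
        now = time.monotonic() if now is None else now
        self.dead_until[server] = now + self.dead_retry

    def live_servers(self, now=None):
        now = time.monotonic() if now is None else now
        return [s for s in self.servers if self.dead_until.get(s, 0) <= now]

pool = DeadRetryPool(["mc1:11211", "mc2:11211"], dead_retry=60)
pool.mark_dead("mc1:11211", now=0)
print(pool.live_servers(now=10))   # ['mc2:11211'] (mc1 still dead)
print(pool.live_servers(now=100))  # both servers eligible again
```

If `dead_retry` is too short relative to request volume, each of the many per-worker pools rediscovers the dead server independently, which matches the cloud-wide slowdown described above.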
*** rpittau is now known as rpittau|afk | 17:38 | |
kleini | Is healthcheck-infrastructure.yml supposed to work for 21.2.0? It tries to run "/tmp/rabbitmqtest/bin/python2 /tmp/rabbitmqtest/rabbitmq-test.py" but I am sure, python2 is not used any more. | 18:27 |
kleini | And there is of course no /tmp/rabbitmqtest/bin/python2 | 18:27 |
kleini | works with python3 in same place. will look at the ansible playbook and provide a fix | 18:29 |
kleini | 97f249f3f7bc6fb10464a0aad46bdc5ecec3cd3b <- fix for my problem in V but not in U. I can cherry-pick that | 18:46 |
jrosser | those healthcheck scripts were not used in CI back in ussuri | 18:49 |
jrosser | they're now part of the regular test jobs so this stuff is better exercised | 18:50 |
kleini | no problem. I am just preparing the upgrade to 21.2.4 and then to V. I would like to utilize these healthchecks and os_tempest but I am struggling a bit with some failing tests and I am not sure, whether my setup is broken or the tests are broken. | 18:53 |
jrosser | it would be a trivial backport of that python2->python3 were it not for ussuri also supporting centos-7 | 18:54 |
jrosser | without re-running i'd not like to say which python would be in use there | 18:54 |
kleini | designate_tempest_plugin.tests.api.v2.test_zones.ZonesTest <- this one e.g. gives "No route to host" although designate seems to run fine | 18:54 |
kleini | I switched to python3 during upgrade to U | 18:54 |
kleini | on bionic | 18:55 |
jrosser | remember tempest is run from the utility container, so in a real deployment that may / may not have routes to things that are assumed to be contactable in devstack | 18:55 |
jrosser | for example in my lab i have to mess with the utility container in a way which gives it an external IP | 18:56 |
jrosser | for testing a production deployment you might want to look at refstack and host it outside the deployment | 18:56 |
jrosser | if your hosts can route to anywhere you should find that tempest from the utility container can also do that as it should have a default route via eth0 | 18:59 |
jrosser | if your hosts are somehow isolated / firewalled that they can't do that, then it will also apply to tempest | 18:59 |
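A quick way to verify the routing point above from inside the utility container is a plain TCP connect test against the endpoints tempest needs (VIPs, floating IPs). The helper and the addresses below are illustrative placeholders, not anything from the deployment in the chat.

```python
import socket

# Hedged connectivity check: attempt a TCP connect from wherever this
# runs (e.g. the utility container) to each endpoint tempest would use.
# The example addresses are documentation/test ranges, not real hosts.
def can_connect(host, port, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unresolvable, unroutable
        return False

for endpoint in [("203.0.113.10", 443), ("192.0.2.20", 22)]:
    print(endpoint, can_connect(*endpoint))
```

If this fails for the external VIP or a floating IP, the same "No route to host" kleini saw from the designate tempest tests would be expected.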
kleini | utility-containers can connect to internal and external VIP and floating IPs. so routing should be fine in this case. | 19:15 |
*** gilou_ is now known as Gilou | 19:42 | |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!