Monday, 2021-07-19

03:25 -opendevstatus- NOTICE: The maintenance of the review.opendev.org Gerrit service is now complete and service has been restored. Please alert us in #opendev if you have any issues. Thank you
05:10 *** dpawlik0 is now known as dpawlik
06:04 *** mgoddard- is now known as mgoddard
06:50 *** dpawlik5 is now known as dpawlik
07:06 *** dpawlik7 is now known as dpawlik
08:30 *** dpawlik3 is now known as dpawlik
08:44 *** dpawlik4 is now known as dpawlik
09:31 *** rpittau|afk is now known as rpittau
10:41 <opendevreview> Merged openstack/openstack-ansible-ceph_client stable/victoria: Enable fmt package instalaltion from EPEL  https://review.opendev.org/c/openstack/openstack-ansible-ceph_client/+/800871
10:45 <opendevreview> Merged openstack/openstack-ansible-ceph_client stable/wallaby: Enable fmt package instalaltion from EPEL  https://review.opendev.org/c/openstack/openstack-ansible-ceph_client/+/800870
11:20 <kleini> With which OpenStack release would it be best to upgrade the OS on the target hosts from bionic to focal?
11:21 <noonedeadpunk> Victoria/Wallaby
11:23 <kleini> Is there already some guide on how to do this? One controller into maintenance, install focal there, repo/utility container installation, use that repo for the other hosts to upgrade, and so on?
11:29 <jrosser> andrewbonney: ^ one for you perhaps too
11:30 <noonedeadpunk> kleini: we don't have one now. The only one we had was for 16.04 -> 18.04, but I think the logic is still applicable
11:30 <noonedeadpunk> https://docs.openstack.org/openstack-ansible/rocky/admin/upgrades/distribution-upgrades.html
11:30 <noonedeadpunk> Would be great if we could add some docs to master with regards to distro upgrades
11:32 <noonedeadpunk> For example, instead of forgetting the rabbitmq cluster node, I think we could leverage drain (upgrade mode)
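[The drain idea mentioned above could look roughly like this. A hypothetical sketch only: `rabbitmq-upgrade drain`/`revive` require RabbitMQ 3.8.16 or newer, and the surrounding steps are illustrative, not an OSA-documented procedure:]

```
# On the node about to be reinstalled (inside the rabbitmq container):
rabbitmq-upgrade drain    # enter maintenance mode: transfer queue leaders, close client connections
# ...reinstall the OS on the host, re-run the rabbitmq playbooks...
rabbitmq-upgrade revive   # take the node back out of maintenance mode
```

[Compared with `forget_cluster_node`, this keeps the node's cluster membership, so nothing has to be re-joined afterwards.]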
11:34 <jrosser> also really not sure about this "primary node is last" for current deployments
11:35 <jrosser> noonedeadpunk: andrewbonney is doing bionic->focal for V here this week on a small cloud, should be able to get some new docs from that I think?
11:36 <jrosser> I would be selecting the node that was the lsyncd master and doing that *first*?
11:37 <noonedeadpunk> I think that lsyncd master should then have both bionic and focal wheels.
11:38 <noonedeadpunk> But if we're talking about a non-master - I guess lsyncd shouldn't -delete the other distro's folders... but not sure here
11:38 <jrosser> seems some confusing terms though - I guess I'm assuming that primary == venv_build_host
11:39 <jrosser> this is all different now from rocky
11:39 <noonedeadpunk> Um, but we select venv_build_host based on the OS as well
11:39 <noonedeadpunk> So in case we have only 1 random repo with focal - it would be venv_build_host anyway
11:39 <jrosser> good point
11:39 <noonedeadpunk> In terms of lsyncd it hasn't changed that much
11:41 <noonedeadpunk> So I think it still has some logic. The only concern here is that lsyncd might delete focal content, and that it would need to be manually synced later to the primary container
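[If one wanted to stop lsyncd from pruning the other distro's wheel tree during the transition, a hypothetical stanza might look like this. The paths and target are illustrative, not the config OSA actually generates; `delete = false` is a standard lsyncd sync{} setting:]

```
-- lsyncd.conf.lua sketch: replicate the repo but never delete files that
-- exist only on the target, so a focal wheel tree survives a bionic master.
sync {
    default.rsync,
    source = "/var/www/repo/",
    target = "repo-replica:/var/www/repo/",
    delete = false,
}
```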
13:10 *** sshnaidm|afk is now known as sshnaidm
14:12 <kleini> How can I recreate the repo-container/os-releases/21.2.0/ubuntu-18.04-x86_64/requirements_constraints.txt?
14:33 <noonedeadpunk> It should be created with any env build, iirc
14:34 <kleini> I have not run OSA for a long time now and wanted to check with healthcheck-infrastructure.yml that everything is fine. How can I run some env build?
14:54 <jrosser> kleini: there isn't a file specifically called os-releases/21.2.0/ubuntu-18.04-x86_64/requirements_constraints.txt
14:57 <jrosser> there is /var/www/repo/os-releases/<osa-release>/<component>-<version>-[constraints|requirements|source-constraints|global-constraints].txt
15:10 <jrosser> noonedeadpunk: do you use the standard OSA setup for memcached, i.e. not via haproxy?
15:19 <noonedeadpunk> Do both
15:19 <noonedeadpunk> Still have some prod envs with memcached under haproxy, but all current ones are not (maybe worth changing this)
15:23 <jrosser> we're just having a lot of weirdness with the bionic->focal upgrade
15:23 <jrosser> as soon as the first memcached is taken down the performance goes horrible across the cloud
15:24 <jrosser> and just looking at one service like keystone, it seems that there are CPUs * processes * threads (i.e. lots and lots), each possibly having their own connection pool to memcached
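[A back-of-envelope sketch of that multiplication. All numbers are hypothetical, purely to show how quickly the connection count grows:]

```python
def memcached_connection_estimate(cpus: int, procs_per_cpu: int,
                                  threads: int, pool_size: int) -> int:
    """Worst-case memcached connections from one API service on one host."""
    return cpus * procs_per_cpu * threads * pool_size

# e.g. 16 cores, 1 uwsgi process per core, 4 threads each, pool of 10
print(memcached_connection_estimate(16, 1, 4, 10))  # -> 640 from a single keystone node
```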
15:26 <jrosser> and something is happening like the oslo.cache stuff isn't marking the missing memcached as dead for long enough, so you kind of race against that for when the next request hits the same memcached pool
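[The dead-server behaviour being described is tunable in oslo.cache. A hypothetical keystone.conf sketch - the option names are real oslo.cache options, but the addresses and values are illustrative, not recommendations:]

```
[cache]
backend = oslo_cache.memcache_pool
memcache_servers = 172.29.236.11:11211,172.29.236.12:11211,172.29.236.13:11211
# seconds a server stays marked dead before a reconnect is attempted
memcache_dead_retry = 60
# give up on an unreachable server quickly instead of stalling requests
memcache_socket_timeout = 1.0
# seconds before an unused pooled connection is closed
memcache_pool_unused_timeout = 60
```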
17:38 *** rpittau is now known as rpittau|afk
18:27 <kleini> Is healthcheck-infrastructure.yml supposed to work for 21.2.0? It tries to run "/tmp/rabbitmqtest/bin/python2 /tmp/rabbitmqtest/rabbitmq-test.py" but I am sure python2 is not used any more.
18:27 <kleini> And there is of course no /tmp/rabbitmqtest/bin/python2
18:29 <kleini> works with python3 in the same place. will look at the ansible playbook and provide a fix
18:46 <kleini> 97f249f3f7bc6fb10464a0aad46bdc5ecec3cd3b <- fix for my problem, in V but not in U. I can cherry-pick that
18:49 <jrosser> those healthcheck scripts were not used in CI back in ussuri
18:50 <jrosser> they're now part of the regular test jobs so this stuff is better exercised
18:53 <kleini> no problem. I am just preparing the upgrade to 21.2.4 and then to V. I would like to utilize these healthchecks and os_tempest, but I am struggling a bit with some failing tests and I am not sure whether my setup is broken or the tests are broken.
18:54 <jrosser> it would be a trivial backport of that python2->python3 change were it not for ussuri also supporting centos-7
18:54 <jrosser> without re-running I'd not like to say which python would be in use there
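[One hedged way such a backport could sidestep the centos-7 ambiguity is to not hardcode either interpreter. A hypothetical task sketch, reusing the venv path from the log; `ansible_facts['python']['executable']` is the interpreter Ansible itself discovered on the host, which may or may not be the one the playbook intends:]

```yaml
- name: Run the rabbitmq connectivity check with the discovered interpreter
  command: "{{ ansible_facts['python']['executable'] }} /tmp/rabbitmqtest/rabbitmq-test.py"
  changed_when: false
```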
18:54 <kleini> designate_tempest_plugin.tests.api.v2.test_zones.ZonesTest <- this one e.g. gives "No route to host" although designate seems to run fine
18:54 <kleini> I switched to python3 during the upgrade to U
18:55 <kleini> on bionic
18:55 <jrosser> remember tempest is run from the utility container, so in a real deployment that may / may not have routes to things that are assumed to be contactable in devstack
18:56 <jrosser> for example, in my lab I have to mess with the utility container in a way which gives it an external IP
18:56 <jrosser> for testing a production deployment you might want to look at refstack and host it outside the deployment
18:59 <jrosser> if your hosts can route to anywhere, you should find that tempest from the utility container can also do that, as it should have a default route via eth0
18:59 <jrosser> if your hosts are somehow isolated / firewalled so that they can't do that, then it will also apply to tempest
19:15 <kleini> utility containers can connect to the internal and external VIPs and to floating IPs, so routing should be fine in this case.
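[The kind of reachability check kleini describes can be scripted from inside the utility container with a generic TCP probe. This is not an OSA tool, and the example address is illustrative - substitute your own VIPs:]

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_reach("172.29.236.9", 443)  # internal VIP, illustrative address
```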
19:42 *** gilou_ is now known as Gilou

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!