Thursday, 2025-01-16

noonedeadpunk10 seconds should be enough for haproxy to recover backend, right?05:44
noonedeadpunkor not https://zuul.opendev.org/t/openstack/build/b59a963bf94e4bfd9ac5ec1f4bf47fb0/log/logs/etc/host/haproxy/haproxy.cfg.txt#84305:45
noonedeadpunkwait. so check interval is 12 seconds for us now05:48
noonedeadpunkso backend should be unreachable for 36 seconds to be marked as down05:48
noonedeadpunkand then same amount of time to be marked as ready again05:49
noonedeadpunkas default is in milliseconds https://www.haproxy.com/documentation/haproxy-configuration-manual/latest/#fall05:50
noonedeadpunk(thinking if default is even reasonable to start with)05:50
noonedeadpunk(our default)05:51
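For reference, the check behaviour being discussed reduces to the per-server check options in the linked haproxy.cfg; a minimal sketch with a placeholder server name and address (the 12000/3/3 values are the ones quoted later in the discussion):

    server horizon-back 172.29.236.101:80 check inter 12000 rise 3 fall 3
    # inter 12000 -> probe every 12s (the value is in milliseconds)
    # fall 3      -> 3 consecutive failed probes (~36s) before the backend is marked DOWN
    # rise 3      -> 3 consecutive successful probes (~36s) before it is marked UP again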
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_horizon master: Add retries to u_c fetch  https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/93933005:52
noonedeadpunkso this one is only the second check made by haproxy after apache is up: https://zuul.opendev.org/t/openstack/build/b59a963bf94e4bfd9ac5ec1f4bf47fb0/log/logs/host/syslog.txt#7078505:55
noonedeadpunkand it's 25sec after restart05:56
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_horizon master: Add retries to u_c fetch  https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/93933005:56
noonedeadpunkI think the question is why in the world haproxy marks it as down. It doesn't seem to me that the apache restart takes 30sec05:58
jrosserwait - don’t we put backends in maintenance during some of this?05:58
noonedeadpunkin aio?05:59
jrosserI can’t remember05:59
noonedeadpunkwe should have a condition on length of haproxy_hosts05:59
noonedeadpunkhttps://opendev.org/openstack/openstack-ansible-plugins/src/branch/master/playbooks/horizon.yml#L68-L7006:00
noonedeadpunk18:41:20 apache is still responding to healthchecks, then a service restart, and at 18:41:32 the next healthcheck response06:02
noonedeadpunkhaproxy marks as DOWN 18:41:2606:02
noonedeadpunkso it's obviously not following `inter 12000 rise 3 fall 3`06:04
noonedeadpunkit could be because of `reason: Layer4 connection problem, info: "Connection refused"`. I don't see inter documented as applying only to L7 failures, but I can assume L4 and L7 are handled differently...06:05
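As a hedged aside, haproxy also has options that change check pacing around state transitions, which could matter for the faster-than-expected DOWN seen above; whether the generated config sets any of them would need checking against the linked haproxy.cfg:

    # fastinter <ms>  - check interval used while a server transitions UP->DOWN or DOWN->UP (defaults to inter)
    # downinter <ms>  - check interval used while a server is DOWN (defaults to inter)
    # on-error <mode> - together with 'observe', reacts to errors seen on live traffic (e.g. connection refused)
    #                   and can mark a server down faster than the plain inter/fall arithmetic
    server horizon-back 172.29.236.101:80 check inter 12000 fastinter 2000 rise 3 fall 3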
jrossersomething that was unhelpful in the Apache journal was that it only seems to show logs for one vhost06:36
jrosserit looked like it started out being logs for repo, then switched to being for keystone06:37
noonedeadpunkwell, despite that, I'd assume all vhosts are up given one is up06:41
noonedeadpunkalso, I'm not 100% sure, but it could be that adding the vhost name to the CustomLog format would help: https://opendev.org/openstack/ansible-role-httpd/src/branch/master/templates/httpd_vhost.conf.j2#L1006:42
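A hedged sketch of that idea against the linked template: Apache's %v escape logs the ServerName of the vhost that handled the request, so per-vhost traffic becomes distinguishable in a shared access log (the format name and log path below are illustrative only):

    LogFormat "%v:%p %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
    CustomLog /var/log/apache2/access.log vhost_combined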
opendevreviewMerged openstack/openstack-ansible stable/2023.2: Remove senlin/sahara/murano roles from required project  https://review.opendev.org/c/openstack/openstack-ansible/+/93907208:16
birbilakosHi everyone, I'm trying to deploy mcapi and have a question regarding the SSL certification configuration in: https://docs.openstack.org/openstack-ansible-ops/latest/mcapi.html10:40
birbilakosMore specifically, for capi_client:, what's the internalURL endpoint supposed to be? Is this the magnum internal endpoint?10:40
jrosserbirbilakos: the k8s cluster in your control plane needs to talk to the internal VIP, and needs to trust the certificate on that10:46
jrosserand it is all the API on the loadbalancer, as cluster API interacts with nova/neutron/octavia/keystone etc10:47
jrosserand `endpoint: 'internalURL'` is the correct configuration there, it tells cluster_api to look in the keystone service catalog for the endpoints, and to use the "internal" ones it finds there10:48
birbilakosso I reckon no need to change the ca_file or endpoint definition for the internal one?10:50
birbilakosor should I just define the internal_lb_vip_address to it?10:52
jrosserthe endpoint already points to the internal one, in the ops repo documentation it says 'internalURL'10:53
jrosseryou just need to make sure that ca_file points to something valid for your deployment for the internal VIP10:53
birbilakosthese are auto-generated by openstack during first deployment right?10:55
jrosserthere are some default variables which you are free to override to set up the internal CA as you want it10:57
jrosserif you leave the defaults, this is what you get https://github.com/openstack/openstack-ansible/blob/1590e39e195d4e0f79f8de7ac9061f2242ec9208/inventory/group_vars/all/ssl.yml#L3410:58
jrosserand that's why the mcapi example references `ca_file: '/usr/local/share/ca-certificates/ExampleCorpRoot.crt'`, as that's the default you get with OSA and the one that's tested in our CI jobs10:59
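Putting the two settings together, the capi_client part of the ops-repo example comes out roughly like this; the enclosing magnum_config_overrides variable is an assumption here, so check the linked mcapi doc for the exact surrounding keys:

    magnum_config_overrides:
      capi_client:
        endpoint: 'internalURL'
        ca_file: '/usr/local/share/ca-certificates/ExampleCorpRoot.crt'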
birbilakosdo you know if the capi instructions will also work for 2023.2 ?11:16
jrosserit says `These playbooks require Openstack-Ansible Caracal or later.`11:18
birbilakosis it a matter of 'this hasn't been tested with 2023.2' or that there are commits missing that would make this an invalid deployment?11:21
jrossermaking the k8s cluster work inside LXC containers on the control plane is the enabling part that's only present in Caracal11:30
birbilakosgot it, thanks jrosser11:39
birbilakoson another note, I'm thinking of biting the bullet and upgrading from OSA 2023.2 to the latest release. Is there some resource that can help with that?11:40
noonedeadpunkyou can only upgrade to 2024.1 from 2023.211:40
noonedeadpunkas 2023.2 is not SLURP release11:41
noonedeadpunkbut you can pretty much refer to https://docs.openstack.org/openstack-ansible/2024.1/admin/upgrades/major-upgrades.html11:41
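The linked guide covers both a manual step-by-step path and a scripted one; a rough sketch of the scripted flow, where the tag is a placeholder for whichever 2024.1 release is targeted:

    cd /opt/openstack-ansible
    git checkout 29.0.0        # placeholder: use the actual 2024.1 tag you want
    export I_REALLY_KNOW_WHAT_I_AM_DOING=true
    ./scripts/run-upgrade.sh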
birbilakosthanks!11:41
birbilakosso I reckon it's a two step process to get to the latest version11:42
noonedeadpunkkeep in mind that from 2024.1 you can upgrade directly both to 2024.2 and to 2025.1 once it's released11:42
noonedeadpunkas 2024.1 is a SLURP release11:42
noonedeadpunkso you can always go .1 -> .1 directly, as well as .1 -> .2, but a .2 release can only go to the next .111:43
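In other words, starting from 2023.2 the supported paths look like:

    2023.2 -> 2024.1 (SLURP) -> 2024.2
    2023.2 -> 2024.1 (SLURP) -> 2025.1 (directly, once released)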
birbilakosmakes sense11:43
birbilakosi'm kind of intimidated to upgrade to be honest - this is our production env (i do have a 2024.1 staging lab where I do all the testing)11:44
jrosserwe have done it enough now that its not intimidating11:46
jrosseri think it gets that way and stays that way if you don't make upgrades part of business-as-usual11:46
jrosserbut we are lucky to have a good staging environment to test it all out first11:47
birbilakosyes, I have to be honest that I have gotten tons of support from both of you (here and in launchpad) in my OSA journey. I literally had 0 experience before on openstack, let alone OSA specifically11:48
birbilakosthank you for that11:49
birbilakosdo you know where i might be able to get some newer images for mcapi? The ones on their github go up until 1.27.4, which is rather old12:09
jrosserbirbilakos: you can see the image locations here used for the testing https://github.com/vexxhost/magnum-cluster-api/blob/main/zuul.d/jobs-ubuntu-2204.yaml12:13
jrosseryou can also build your own https://github.com/vexxhost/capo-image-elements12:14
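As a follow-on usage note, whichever image is used still has to be registered in glance for magnum; a hypothetical example with the standard openstack CLI, where the os_distro value is an assumption since the exact value magnum-cluster-api expects depends on the driver version:

    openstack image create ubuntu-2204-kube-v1.28 \
      --disk-format qcow2 --container-format bare \
      --file ubuntu-2204-kube-v1.28.qcow2 \
      --property os_distro=ubuntu-2204    # assumed value; check the magnum-cluster-api docs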
birbilakosawesome12:18
opendevreviewMerged openstack/openstack-ansible stable/2023.2: Bump SHAs for 2023.2  https://review.opendev.org/c/openstack/openstack-ansible/+/93892816:07
*** asharp is now known as Baronvaile16:17
jventuraQuestion: How can I reinstall a service after the whole cluster is already installed?16:59
jventuraAs an example: reinstall heat.17:00
jventuraI think I should delete the container, recreate it and after that run "openstack-ansible openstack.osa.heat"17:01
jventuraAm I missing something here?17:01
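What jventura describes matches the usual rebuild flow; a hedged sketch, noting that the container playbook names vary between releases (these are the long-standing ones, so check the playbooks shipped with your version):

    # destroy and recreate only the heat containers
    openstack-ansible lxc-containers-destroy.yml --limit heat_all
    openstack-ansible lxc-containers-create.yml --limit heat_all
    # then rerun the heat playbook (openstack.osa.heat on newer releases)
    openstack-ansible os-heat-install.yml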
jventuraAnother question I have is: How do I include additional packages in the container???17:02
noonedeadpunkhey17:22
noonedeadpunkwell.17:23
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-ops master: Upgrade magnum-cluster-api to 0.24.2  https://review.opendev.org/c/openstack/openstack-ansible-ops/+/93622917:36
opendevreviewMerged openstack/openstack-ansible stable/2024.2: Update test-requirements  https://review.opendev.org/c/openstack/openstack-ansible/+/93936518:00
alvinstarrHas anybody used foreman and setup the repos to pull from the foreman-proxy?20:49
jrosseryou might need to explain what that means, i googled a bit and am not sure i understand20:54
alvinstarrForeman is a great tool for managing a bunch of computers from a configuration management point of view.21:15
alvinstarrIt can get copies of things like the RPM repositories and keep versions and point-in-time snapshots of the repositories, so that when you install something next month you get the same versions you got this month.21:15
alvinstarrIt also has integrations to things like puppet and Ansible.21:15
alvinstarrFor general systems I use the downloaded repositories for installation  but in the case of OpenStack I would also like to use the same local repositories.21:15
alvinstarrForeman is also known as Satellite if you're buying it from RH.21:17
jrosserdo you use the 'source' or 'distro' installation method for openstack-ansible?21:25
jrosserin general, openstack-ansible does not care how you deploy the operating system onto your hosts21:26
jrosserby default openstack-ansible installs openstack from git source repos, so that would not be coming from any local foreman rpm repos21:29
jrosseri'm still not quite sure i'm understanding your question though - "setup the repos" <- which ones?21:34
jrosserthere should be variables that you can override to use local mirrors of rabbitmq/mariadb/ceph and so on21:34
jrosserhowever the osa code pins these to specific versions, so you will always get the same version even without a tool like foreman21:35
alvinstarrI am using the source installation method.22:38
alvinstarrIt may be a moot point then.22:38
alvinstarrIn the past I have been burned by differing versions of software installed from the internet repositories but if the versions are pinned then it may not be a worry for us.22:38
jrosserso for example the openstack service versions are defined like this https://github.com/openstack/openstack-ansible/blob/stable/2024.2/inventory/group_vars/glance_all/source_git.yml22:40
jrosserrabbitmq https://github.com/openstack/openstack-ansible-rabbitmq_server/blob/master/defaults/main.yml#L23-L2522:41
jrosserforeman sounds like it can certainly give you a stable OS install22:42
alvinstarrIt definitely makes booting from zero up to a stable OS easy.22:43
jrosserbut openstack-ansible releases also define a different set of fixed things for you22:43
alvinstarrwhen the lxc images are created, do they take the yum.repos.d info from the underlying physical hosts?22:46
jrosserthey do yes22:52
jrosserhttps://github.com/openstack/openstack-ansible-lxc_hosts/blob/b1235fbfc6a7959cd698f445a2027c75a063a1dc/vars/redhat.yml#L27-L3522:53
alvinstarrI will need to play with this a bit when I get some free time.22:59
jrosseronly a basic base image is created23:02
jrosserspecific things are then installed later, so the rabbitmq LXC container is booted from the base image then customised with the rabbitmq_server ansible role, just as if it were an actual host or VM23:03
jrosserso the repo config for things like that comes from two places: those on the host from the base image, and those added by some more specific ansible role once the container is created23:03
alvinstarrI think I have a better handle on this.23:05
alvinstarrThanks23:05
alvinstarrIt looks like if I change the EPEL and rdoproject urls I can point them at my mirrored repositories.23:06
jrosseryes, it should be possible to make all package installs come from local mirrors23:06
alvinstarrso that would be "openstack_hosts_rdo_mirror_url" and "centos_epel_mirror"23:07
jrosseri have a deployment that does not have internet access, everything comes from a mirror server23:07
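If those are indeed the right variable names for the deployed release, the overrides would go in user_variables.yml and look something like this, with the mirror hostname as a placeholder for the local Foreman/mirror server:

    # user_variables.yml - hypothetical local-mirror overrides
    openstack_hosts_rdo_mirror_url: "http://mirror.example.com/rdo"
    centos_epel_mirror: "http://mirror.example.com/epel"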
alvinstarrhave to run. THANK you for all your help.23:08
jrosserno problem23:08
