noonedeadpunk | 10 seconds should be enough for haproxy to recover backend, right? | 05:44 |
noonedeadpunk | or not https://zuul.opendev.org/t/openstack/build/b59a963bf94e4bfd9ac5ec1f4bf47fb0/log/logs/etc/host/haproxy/haproxy.cfg.txt#843 | 05:45 |
noonedeadpunk | wait. so check interval is 12 seconds for us now | 05:48 |
noonedeadpunk | so backend should be unreachable for 36 seconds to be marked as down | 05:48 |
noonedeadpunk | and then same amount of time to be marked as ready again | 05:49 |
noonedeadpunk | as default is in milliseconds https://www.haproxy.com/documentation/haproxy-configuration-manual/latest/#fall | 05:50 |
noonedeadpunk | (thinking if default is even reasonable to start with) | 05:50 |
noonedeadpunk | (our default) | 05:51 |
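For reference, the timing being discussed maps onto haproxy's per-server check options. A minimal sketch of a backend entry using the values quoted later in the log (`inter 12000 rise 3 fall 3`); the backend name, server name and address are placeholders, not copied from the real haproxy.cfg:

```
backend horizon-back-sketch
    # inter/fall/rise: one check every 12000 ms, 3 consecutive failures mark
    # the server DOWN (~36s), and 3 consecutive successes are needed to mark
    # it UP again (~36s more)
    server aio1-horizon 172.29.236.100:80 check inter 12000 rise 3 fall 3
```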
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_horizon master: Add retries to u_c fetch https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/939330 | 05:52 |
noonedeadpunk | so this one is only the second check made by haproxy after apache is up: https://zuul.opendev.org/t/openstack/build/b59a963bf94e4bfd9ac5ec1f4bf47fb0/log/logs/host/syslog.txt#70785 | 05:55 |
noonedeadpunk | and it's 25sec after restart | 05:56 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_horizon master: Add retries to u_c fetch https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/939330 | 05:56 |
noonedeadpunk | I think a question is why in the world haproxy marks it as down. It doesn't seem to me apache restart takes 30sec | 05:58 |
jrosser | wait - don’t we put backends in maintenance during some of this? | 05:58 |
noonedeadpunk | in aio? | 05:59 |
jrosser | I can’t remember | 05:59 |
noonedeadpunk | we should have a condition on length of haproxy_hosts | 05:59 |
noonedeadpunk | https://opendev.org/openstack/openstack-ansible-plugins/src/branch/master/playbooks/horizon.yml#L68-L70 | 06:00 |
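For context, "putting backends in maintenance" around a restart can be expressed with the community.general.haproxy module; the following is a hypothetical sketch including the haproxy_hosts length condition suggested above, with placeholder names, and is not a copy of the linked horizon.yml tasks:

```yaml
- name: Put the horizon backend into maintenance before restarting apache
  community.general.haproxy:
    state: disabled                    # flip back to 'enabled' once the restart is done
    backend: horizon-back              # placeholder backend name
    host: "{{ inventory_hostname }}"
    socket: /var/run/haproxy.sock      # placeholder stats socket path
  delegate_to: "{{ item }}"
  loop: "{{ groups['haproxy_hosts'] | default([]) }}"
  when: groups['haproxy_hosts'] | default([]) | length > 0
```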
noonedeadpunk | at 18:41:20 apache is still responding to healthchecks, then there's a service restart, and at 18:41:32 the next healthcheck response | 06:02 |
noonedeadpunk | haproxy marks as DOWN 18:41:26 | 06:02 |
noonedeadpunk | so it's obviously not following `inter 12000 rise 3 fall 3` | 06:04 |
noonedeadpunk | it could be because of `reason: Layer4 connection problem, info: "Connection refused"`. I don't see the docs saying inter applies only to L7 failures, but I can assume L4 and L7 failures are handled differently.... | 06:05 |
jrosser | something that was unhelpful in the Apache journal was that it only seems to show logs for one vhost | 06:36 |
jrosser | it looked like it started out being logs for repo, then switched to being for keystone | 06:37 |
noonedeadpunk | well, despite that, I'd assume all vhosts are up if one is up | 06:41 |
noonedeadpunk | also, I'm not 100% sure, but could be that adding vhost to CustomLog can help: https://opendev.org/openstack/ansible-role-httpd/src/branch/master/templates/httpd_vhost.conf.j2#L10 | 06:42 |
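For what it's worth, Apache can make the per-vhost origin visible by adding `%v` (the vhost's canonical ServerName) to the log format. A hedged sketch, not the actual contents of the linked template:

```
# %v:%p prepends the vhost's ServerName and port to each access-log entry,
# so repo and keystone requests can be told apart in a shared log stream
LogFormat "%v:%p %h %l %u %t \"%r\" %>s %b" vhost_prefixed
CustomLog /var/log/apache2/access.log vhost_prefixed
```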
opendevreview | Merged openstack/openstack-ansible stable/2023.2: Remove senlin/sahara/murano roles from required project https://review.opendev.org/c/openstack/openstack-ansible/+/939072 | 08:16 |
birbilakos | Hi everyone, I'm trying to deploy mcapi and have a question regarding the SSL certification configuration in: https://docs.openstack.org/openstack-ansible-ops/latest/mcapi.html | 10:40 |
birbilakos | More specifically, for capi_client:, what's the internalURL endpoint supposed to be? Is this the magnum internal endpoint? | 10:40 |
jrosser | birbilakos: the k8s cluster in your control plane needs to talk to the internal VIP, and needs to trust the certificate on that | 10:46 |
jrosser | and it is all of the APIs on the loadbalancer, as cluster API interacts with nova/neutron/octavia/keystone etc | 10:47 |
jrosser | and `endpoint: 'internalURL'` is the correct configuration there, it tells cluster_api to look in the keystone service catalog for the endpoints, and to use the "internal" ones it finds there | 10:48 |
birbilakos | so I reckon no need to change the ca_file or endpoint definition for the internal one? | 10:50 |
birbilakos | or should I just define the internal_lb_vip_address to it? | 10:52 |
jrosser | the endpoint already points to the internal one, in the ops repo documentation it says 'internalURL' | 10:53 |
jrosser | you just need to make sure that ca_file points to something valid for your deployment for the internal VIP | 10:53 |
birbilakos | these are auto-generated by openstack during first deployment right? | 10:55 |
jrosser | there are some default variables which you are free to override to set up the internal CA as you want it | 10:57 |
jrosser | if you leave the defaults, this is what you get https://github.com/openstack/openstack-ansible/blob/1590e39e195d4e0f79f8de7ac9061f2242ec9208/inventory/group_vars/all/ssl.yml#L34 | 10:58 |
jrosser | and thats why the mcapi example references `ca_file: '/usr/local/share/ca-certificates/ExampleCorpRoot.crt'`, as thats the default you get with OSA and the one thats tested in our CI jobs | 10:59 |
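Putting the two settings quoted above together, the magnum-side configuration looks roughly like this; the `magnum_config_overrides` wrapper is an assumption about how the ops-repo example carries it, so treat the linked mcapi document as authoritative:

```yaml
magnum_config_overrides:
  capi_client:
    # resolve service endpoints from the keystone catalogue, internal interface
    endpoint: 'internalURL'
    # OSA's default self-signed internal root CA, as linked above
    ca_file: '/usr/local/share/ca-certificates/ExampleCorpRoot.crt'
```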
birbilakos | do you know if the capi instructions will also work for 2023.2 ? | 11:16 |
jrosser | it says `These playbooks require Openstack-Ansible Caracal or later.` | 11:18 |
birbilakos | is it a matter of 'this hasn't been tested with 2023.2' or that there are commits missing that would make this an invalid deployment? | 11:21 |
jrosser | making the k8s cluster work inside LXC containers on the control plane is the enabling part thats only present in Caracal | 11:30 |
birbilakos | got it, thanks jrosser | 11:39 |
birbilakos | on another note, I'm thinking of biting the bullet and upgrading from OSA 2023.2 to the latest release. Is there some resource that can help with that? | 11:40 |
noonedeadpunk | you can only upgrade to 2024.1 from 2023.2 | 11:40 |
noonedeadpunk | as 2023.2 is not SLURP release | 11:41 |
noonedeadpunk | but you can pretty much refer to https://docs.openstack.org/openstack-ansible/2024.1/admin/upgrades/major-upgrades.html | 11:41 |
birbilakos | thanks! | 11:41 |
birbilakos | so I reckon it's a two step process to get to the latest version | 11:42 |
noonedeadpunk | keep in mind, that from 2024.1 you can upgrade both to 2024.2 and to 2025.1 once it's released directly | 11:42 |
noonedeadpunk | as 2024.1 is a SLURP release | 11:42 |
noonedeadpunk | so you can always go .1 -> .1 directly as well as .1 -> .2, but .2 can only go to the next .1 | 11:43 |
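So the two-hop path discussed above looks roughly like this; the tag series mapping (29.x = 2024.1, 30.x = 2024.2) and the run-upgrade.sh entry point are from memory, so verify both against the linked major-upgrades guide:

```
cd /opt/openstack-ansible

# hop 1: 2023.2 -> 2024.1 (SLURP)
git fetch --all --tags
git checkout <latest 29.x tag>     # placeholder: pick the current 2024.1 release tag
./scripts/run-upgrade.sh

# hop 2: 2024.1 -> 2024.2 (or directly to 2025.1 once released)
git checkout <latest 30.x tag>     # placeholder: pick the current 2024.2 release tag
./scripts/run-upgrade.sh
```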
birbilakos | makes sense | 11:43 |
birbilakos | i'm kind of intimidated to upgrade to be honest - this is our production env (i do have a 2024.1 staging lab where I do all the testing) | 11:44 |
jrosser | we have done it enough now that its not intimidating | 11:46 |
jrosser | i think it gets that way and stays that way if you dont make upgrades part of business-as-usual | 11:46 |
jrosser | but we are lucky to have a good staging environment to test it all out first | 11:47 |
birbilakos | yes, I have to be honest that I have gotten tons of support from both of you (here and in launchpad) in my OSA journey. I literally had 0 experience before on openstack, let alone OSA specifically | 11:48 |
birbilakos | thank you for that | 11:49 |
birbilakos | do you know where i might be able to get some newer images for mcapi? The ones on their github go up until 1.27.4, which is rather old | 12:09 |
jrosser | birbilakos: you can see the image locations here used for the testing https://github.com/vexxhost/magnum-cluster-api/blob/main/zuul.d/jobs-ubuntu-2204.yaml | 12:13 |
jrosser | you can also build your own https://github.com/vexxhost/capo-image-elements | 12:14 |
birbilakos | awesome | 12:18 |
opendevreview | Merged openstack/openstack-ansible stable/2023.2: Bump SHAs for 2023.2 https://review.opendev.org/c/openstack/openstack-ansible/+/938928 | 16:07 |
*** asharp is now known as Baronvaile | 16:17 | |
jventura | Question: How can I reinstall a service after the whole cluster is already installed? | 16:59 |
jventura | As an example: reinstall heat. | 17:00 |
jventura | I think I should delete the container, recreate it, and after that run "openstack-ansible openstack.osa.heat" | 17:01 |
jventura | Am I missing something here? | 17:01 |
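jventura's outline, written out as a hedged sketch; the container destroy/create playbook names vary between releases, so treat them as placeholders and check the playbooks shipped with your release:

```
# destroy and recreate only the heat containers, then re-run the heat playbook
openstack-ansible containers-lxc-destroy.yml --limit heat_all
openstack-ansible containers-lxc-create.yml --limit heat_all
openstack-ansible openstack.osa.heat
```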
jventura | Another question I have is: How do I include additional packages in the container??? | 17:02 |
noonedeadpunk | hey | 17:22 |
noonedeadpunk | well. | 17:23 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-ops master: Upgrade magnum-cluster-api to 0.24.2 https://review.opendev.org/c/openstack/openstack-ansible-ops/+/936229 | 17:36 |
opendevreview | Merged openstack/openstack-ansible stable/2024.2: Update test-requirements https://review.opendev.org/c/openstack/openstack-ansible/+/939365 | 18:00 |
alvinstarr | Has anybody used foreman and setup the repos to pull from the foreman-proxy? | 20:49 |
jrosser | you might need to explain what that means, i googled a bit and am not sure i understand | 20:54 |
alvinstarr | Foreman is a great tool for managing a bunch of computers from a configuration management point of view. | 21:15 |
alvinstarr | It can get copies of things like the RPM repositories and keep versions and point-in-time snapshots of the repositories, so that when you install something next month you get the same versions you got this month. | 21:15 |
alvinstarr | It also has integrations to things like puppet and Ansible. | 21:15 |
alvinstarr | For general systems I use the downloaded repositories for installation but in the case of OpenStack I would also like to use the same local repositories. | 21:15 |
alvinstarr | Foreman is also known as Satellite if you're buying it from RH. | 21:17 |
jrosser | do you use the 'source' or 'distro' installation method for openstack-ansible? | 21:25 |
jrosser | in general, openstack-ansible does not care how you deploy the operating system onto your hosts | 21:26 |
jrosser | by default openstack-ansible installs openstack from git source repos, so that would not be coming from any local foreman rpm repos | 21:29 |
jrosser | i'm still not quite sure i'm understanding your question though - "setup the repos" <- which ones? | 21:34 |
jrosser | there should be variables that you can override to use local mirrors of rabbitmq/mariadb/ceph and so on | 21:34 |
jrosser | however the osa code pins these to specific versions so you will always get the same version without a tool like foreman | 21:35 |
alvinstarr | I am using the source installation method. | 22:38 |
alvinstarr | It may be a moot point then. | 22:38 |
alvinstarr | In the past I have been burned by differing versions of software installed from the internet repositories but if the versions are pinned then it may not be a worry for us. | 22:38 |
jrosser | so for example the openstack service versions are defined like this https://github.com/openstack/openstack-ansible/blob/stable/2024.2/inventory/group_vars/glance_all/source_git.yml | 22:40 |
jrosser | rabbitmq https://github.com/openstack/openstack-ansible-rabbitmq_server/blob/master/defaults/main.yml#L23-L25 | 22:41 |
jrosser | foreman sounds like it can certainly give you a stable OS install | 22:42 |
alvinstarr | It definitely makes booting from 0 up to a stable OS easy. | 22:43 |
jrosser | but openstack-ansible releases also define a different set of fixed things for you | 22:43 |
alvinstarr | when the lxc images are created, do they take the yum.repos.d info from the underlying physical hosts? | 22:46 |
jrosser | they do yes | 22:52 |
jrosser | https://github.com/openstack/openstack-ansible-lxc_hosts/blob/b1235fbfc6a7959cd698f445a2027c75a063a1dc/vars/redhat.yml#L27-L35 | 22:53 |
alvinstarr | I will need to play with this a bit when I get some free time. | 22:59 |
jrosser | only a basic base image is created | 23:02 |
jrosser | specific things are then installed later, so the rabbitmq LXC container is booted from the base image then customised with the rabbitmq_server ansible role, just like it was an actual host or VM | 23:03 |
jrosser | so the repo config for things like that come from two places, those on the host from the base image, and those added by some more specific ansible role once the container is created | 23:03 |
alvinstarr | I think I have a better handle on this. | 23:05 |
alvinstarr | Thanks | 23:05 |
alvinstarr | It looks like if I change the EPEL and rdoproject urls I can point them at my mirrored repositories. | 23:06 |
jrosser | yes, it should be possible to make all package installs come from local mirrors | 23:06 |
alvinstarr | so that would be "openstack_hosts_rdo_mirror_url" and "centos_epel_mirror" | 23:07 |
jrosser | i have a deployment that does not have internet access, everything comes from a mirror server | 23:07 |
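A hedged user_variables.yml sketch using the two variables alvinstarr named above; the URLs are placeholders, and the variable names should be checked against the openstack_hosts role defaults for the release in use:

```yaml
# point RDO and EPEL package installs at an internal (e.g. foreman-managed) mirror
openstack_hosts_rdo_mirror_url: 'https://mirror.example.internal/rdo'
centos_epel_mirror: 'https://mirror.example.internal/epel'
```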
alvinstarr | have to run. THANK you for all your help. | 23:08 |
jrosser | no problem | 23:08 |