opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Prepare main repo for separated haproxy config https://review.opendev.org/c/openstack/openstack-ansible/+/871189 | 00:51 |
noonedeadpunk | prometheanfire: that looks waaay better, thank you! | 08:59 |
noonedeadpunk | I will try to have some better review shortly | 09:00 |
noonedeadpunk | jrosser: we could just `notify: systemd service restart` from any service role. The problem comes here: https://opendev.org/openstack/openstack-ansible-os_neutron/src/branch/master/handlers/main.yml | 09:02 |
noonedeadpunk | when we need to stop service, do actions, start service | 09:02 |
noonedeadpunk | So we can't really use systemd_service handlers at all | 09:02 |
noonedeadpunk | At least for neutron | 09:02 |
noonedeadpunk | For some services, like glance, we can just drop handlers from service role | 09:03 |
jrosser | noonedeadpunk: so that means that the statefulness/logic can only really be in one place? | 09:16 |
jrosser | i.e we must either pass a flag into systemd_service to tell it to restart (like my example), or perhaps opposite that systemd_service should allow the restart to be suppressed but instead set a flag (or call an external handler?) | 09:17 |
jrosser | i was thinking that there were some parallels here with the change we did to the pki role pass a custom handler name | 09:18 |
noonedeadpunk | Well, the thing is that systemd_service is suppressing restart as of today | 09:18 |
jrosser | right - but it knows the unit file was updated, for example | 09:18 |
noonedeadpunk | Yeah, I started doing that as well until I've realized we have complex handlers where we need to stop things and start in some time | 09:18 |
noonedeadpunk | Well, we can listen for that like we do with uwsgi | 09:19 |
noonedeadpunk | But then problem is, that we need to apply complex logic in service roles to understand if it _really_ should be even touched | 09:20 |
noonedeadpunk | basically that https://opendev.org/openstack/ansible-role-systemd_service/src/branch/master/handlers/main.yml#L34 | 09:20 |
noonedeadpunk | But it's less trivial I guess... | 09:21 |
noonedeadpunk | But yeah, it can be done in both directions | 09:21 |
noonedeadpunk | I'm less frustrated today, so will just see how awfull condition I will end up with... | 09:21 |
jrosser | for neutron we don't even use `enabled` | 09:23 |
jrosser | isnt it that everthing that ends up in `filtered_neutron_services` is by definition enabled | 09:24 |
jrosser | so imho there is possibly a reduction in complexity possible here for this case | 09:25 |
noonedeadpunk | No? | 09:25 |
jrosser | what do i miss? :) | 09:25 |
noonedeadpunk | Well, then if there will be some override we won't be able to disable service | 09:26 |
noonedeadpunk | As it will be filtered out before passed to systemd_service | 09:26 |
jrosser | what kind of override? | 09:27 |
noonedeadpunk | Ok, in neutron we indeed filter this out.... | 09:28 |
jrosser | yeah - i really was just trying to look for simplifications | 09:30 |
noonedeadpunk | But for example here https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/defaults/main.yml#L357 | 09:30 |
noonedeadpunk | I think we want to be able to disble cinder-backup service when we say it should be disabled... | 09:30 |
noonedeadpunk | But we again do filter it out.... | 09:31 |
noonedeadpunk | But my point is that we likely should not :) | 09:31 |
noonedeadpunk | Eventually how I found that out - I tried to disable/enable uwsgi for glance and realised service is not restarted when unit file is changed. But daemon is reloaded | 09:33 |
noonedeadpunk | Also, this kind of makes me think if we do double restart of neutron agents - if it's not one of the issues that strike us during upgrades.... | 09:33 |
jrosser | its not great that the roles all take a slightly different approach here | 09:35 |
noonedeadpunk | this is another part/angle of the issue.... | 09:35 |
noonedeadpunk | BUt yeah, I think we'd need to harmonize that with same approach | 09:36 |
jrosser | i'm just a bit concerned that we do huge complex thing, when some adjustment around the edges of things might make it overall simpler | 09:36 |
jrosser | kind of like the haproxy stuff | 09:36 |
jrosser | right now i don't know what that adjustment might be though | 09:37 |
noonedeadpunk | Yeah, let me prepare a change, I'll ping you | 09:37 |
jrosser | from a conceptual p.o.v there is something to say "systemd_service role is responsible for everything to do with services" | 09:37 |
jrosser | and we need to hook role handlers into it for these <stop> <do stuff> <start> cases | 09:38 |
noonedeadpunk | But I think that we should 1. use service roles to restart services 2. do not use systemd_service to restart them, except sockets and timers 3. Leave functionality of the role systemd_service for standalone usage | 09:38 |
admin1 | during yoga -> zed upgrade, this is coming up which leads to nova being down now .. https://gist.githubusercontent.com/a1git/6f25cfb53feb2cb3b6d122da5664b462/raw/5bb0563f46c05bcea47fbc2a04f607f1f045f4ab/gistfile1.txt | 09:42 |
admin1 | and is it possible to just use nova on yoga while everything else is already zed | 09:43 |
noonedeadpunk | um, yes, it's possible, but issue is weird | 09:45 |
noonedeadpunk | As Y->Z should be jsut fine, and it's also in output that minimum compute version is Y | 09:45 |
noonedeadpunk | Are you sure that all your computes are on Y? | 09:45 |
admin1 | yes | 09:53 |
admin1 | they are .. because i did w-> y a few days back | 09:53 |
admin1 | and validated all were working via canary deployment | 09:53 |
admin1 | is it possible to just have nova on old version | 09:53 |
admin1 | i # nova playbook on setup-openstack and completed the rest to zed .. . | 09:54 |
noonedeadpunk | Um, you can just supply SHA from Yoga for nova in variables to leave it intact | 09:56 |
noonedeadpunk | I kind of wonder if you might have such override olreaddy that has slipped during upgrade? | 09:58 |
noonedeadpunk | `nova_git_install_branch` | 09:58 |
admin1 | this will be the first time in almost 8 years that i will be overriding something that came out of the box :D | 09:59 |
admin1 | i just work with tags and stick to user_config and user_variables | 09:59 |
admin1 | never override anything else | 09:59 |
noonedeadpunk | yeah, so you can define nova_git_install_branch to user_variables | 10:00 |
noonedeadpunk | To be stable/yoga or any SHA or tag that corresponds to Y | 10:01 |
noonedeadpunk | But again - error you got should not happen and is really weird | 10:01 |
admin1 | so change playbooks/defaults/repo_packages/openstack_services.yml nova_git_install_branch from 9bca7f34a0d1e546bff8e226cbe45ccbc520c878 (26.0.1) to e0723453eaff0346630419413d5b6bff55aa3791 (25.3.0) | 10:01 |
admin1 | is that the only thing i need to do ? | 10:01 |
noonedeadpunk | you can place nova_git_install_branch: e0723453eaff0346630419413d5b6bff55aa3791 in user_variables | 10:02 |
admin1 | got it | 10:02 |
noonedeadpunk | no need to change /openstack_services.yml | 10:02 |
admin1 | ok | 10:02 |
noonedeadpunk | You might also want to set `nova_venv_tag` to prior OSA version to re-use old venv | 10:03 |
admin1 | do i have to setup bootstrap again ? | 10:03 |
noonedeadpunk | Oh! And one more thing | 10:03 |
noonedeadpunk | set `nova_upper_constraints_url: https://releases.openstack.org/constraints/upper/yoga` (instead of yoga you can use SHA for `requirements_git_install_branch` from prior osa version) | 10:04 |
noonedeadpunk | no need to bootstrap | 10:04 |
admin1 | the 25.3.0 venv is in all srevers .. so i need nova_git_install_branch: e0723453eaff0346630419413d5b6bff55aa3791, nova_venv_tag = 25.3.0 and also nova_upper_constraints_url -- 3 total new variables ? | 10:05 |
noonedeadpunk | nova_upper_constraints_url: https://releases.openstack.org/constraints/upper/707da594d13b7f1ed47800660aaa2a8a804689df | 10:07 |
noonedeadpunk | Yeah, I think this should be enough | 10:07 |
admin1 | nova_git_install_branch: e0723453eaff0346630419413d5b6bff55aa3791, nova_upper_constraints_url: https://releases.openstack.org/constraints/upper/707da594d13b7f1ed47800660aaa2a8a804689df , nova_venv_tag: 25.3.0 | 10:08 |
noonedeadpunk | admin1: can you check 1 thing before execute that? | 10:09 |
admin1 | sure | 10:09 |
jrosser | btw I never saw anything like this in Y-Z upgrades | 10:09 |
admin1 | this means i need to somehow seregate all the cpu types here, and try to selectively upgrade one from y -z | 10:10 |
noonedeadpunk | what's the output of `mysql -e "select version from nova.services where deleted = 0;" ? | 10:10 |
noonedeadpunk | It's not about cpu types but about rpc api compatabilityy I believe? | 10:11 |
noonedeadpunk | As the only failures are regarding `Check: Older than N-1 computes` | 10:12 |
noonedeadpunk | output of mysql should be all 60 iirc | 10:12 |
admin1 | https://gist.githubusercontent.com/a1git/13ceb2e181dab9532a5b229b2915b478/raw/eee560226905d21159808f36a038c9201d8e2d23/gistfile1.txt | 10:13 |
noonedeadpunk | or well | 10:13 |
admin1 | hmm.. | 10:13 |
noonedeadpunk | let me check | 10:13 |
admin1 | this is strange .. | 10:13 |
noonedeadpunk | so, the ones that are 60 - are reason of failure. And they don't seem to be upgraded to Y | 10:13 |
noonedeadpunk | You can do hacky thing:) | 10:13 |
noonedeadpunk | Given that you've checked, that nova-compute is running from Zed venv on these computes and pip freeze | grep nova from venv says that nova is 26.*.*.dev*, you can jsut update DB :p | 10:15 |
noonedeadpunk | eventually - that could be in case these computes are down | 10:15 |
admin1 | they are down | 10:15 |
noonedeadpunk | As then they won't report new version to rpc | 10:15 |
noonedeadpunk | If there's a reason to leave them - update DB. If not - delete them from services | 10:16 |
admin1 | now i think of it, the version 60 that is in the db are all computes that are temporarily down | 10:18 |
admin1 | i can live without them for now | 10:18 |
noonedeadpunk | ok, then if they're temp down - update db | 10:18 |
noonedeadpunk | if perm down - no reason to keep them kind of | 10:18 |
admin1 | yes but right now even the api layer is down when i run something like hypervisor list, so i do not think i can use the api to temporarily delete them | 10:20 |
admin1 | for forcing db upgrade, i have to be in any nova controller, inside the venv and then use "nova_manage db sync" | 10:21 |
noonedeadpunk | It will auto-heal once you update db ;) | 10:22 |
noonedeadpunk | Um, not really, I think you will need smth like `update nova.services set version = 61 where version = 60 and deleted = 0;` | 10:23 |
admin1 | would me doing delete from nova.services where version=60 .. or update nova.services set version=61 where version-60 also help ? | 10:23 |
admin1 | i was thinking the same | 10:23 |
noonedeadpunk | And yes, that will jsut fix api and computes and everything | 10:24 |
noonedeadpunk | We did that when were jumping through releases | 10:24 |
noonedeadpunk | jrosser: seems https://review.opendev.org/c/openstack/openstack-ansible/+/876160 not really happy on metal ;( | 10:25 |
admin1 | running nova playbooks again with zed | 10:26 |
admin1 | btw, i was able to make magnum work nicely with osa .. k8s deploys nicely .. only issue is in the ingress :) | 10:32 |
admin1 | but i was able to mitigate that using octavia + nodeports .. and today testing metallb with vyos as bgp | 10:32 |
jrosser | noonedeadpunk: hmm o probably have to make AIO for that, won’t be today | 10:41 |
jrosser | huh that is odd | 10:42 |
jrosser | anyway !working today….. | 10:42 |
admin1 | "<bauzas> admin1: again, that's by design that we don't allow non-upgraded compute to be left registered" ( from openstack-nova ) .. so i think in my case, some servers were offline where wallaby -> yoga did not complain .. but when came time for yoga - zed, the db upgrade part got stuck while the venvs were already in the compute | 10:45 |
admin1 | while the new venvs* | 10:45 |
admin1 | if this works ( current run ) , then i am good to go . | 10:45 |
noonedeadpunk | oh, yes, sorry, I've completely forgotten today is friday jrosser | 10:45 |
noonedeadpunk | admin1: yeah-yeah-yeah, I know :) | 10:46 |
admin1 | but thank you for teaching how to bypass a specific version for a specific component noonedeadpunk | 10:47 |
admin1 | beer from me next time i see you :) | 10:47 |
noonedeadpunk | no worries:) | 10:47 |
admin1 | to jrosser as well . all guys here .. as you have been all helpful | 10:47 |
*** odyssey4me is now known as odyssey4me__ | 10:58 | |
*** odyssey4me__ is now known as odyssey4me | 10:58 | |
opendevreview | Dmitriy Rabotyagov proposed openstack/ansible-role-systemd_service master: Reduce output by leveraging loop labels https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/876302 | 11:22 |
opendevreview | Dmitriy Rabotyagov proposed openstack/ansible-role-systemd_service master: Reduce output by leveraging loop labels https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/876302 | 11:36 |
opendevreview | Dmitriy Rabotyagov proposed openstack/ansible-role-systemd_service master: Fix tags usage for included tasks https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/876321 | 11:39 |
*** odyssey4me is now known as odyssey4me__ | 12:02 | |
admin1 | upgrade successful .. | 12:22 |
noonedeadpunk | sweet | 12:32 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_glance master: Ensure service is restarted on unit file changes https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/876328 | 12:59 |
noonedeadpunk | jrosser: smth like that regarding systemd ^ | 12:59 |
noonedeadpunk | I was thinking about variables in handlers name for a while, and I'm not sure if we need them in this case. Likely not | 13:00 |
noonedeadpunk | I don't think we do include systemd_service more then once without flushing handlers in between - we always supply just list of services to configure in one round | 13:01 |
noonedeadpunk | So I'd keep that simple. Also there's a caution to use vars in handlers names as you should be careful when doing that. Having vars in listen would be helpful, but it's not possible | 13:01 |
admin1 | from wallaby -> zed, it created a file called user_neutron_migration.yml .. and now my hosts give ERROR neutron.agent.l2.extensions.fdb_population [None req-944ff309-b58b-4551-954c-e0f180097607 - - - - - -] Invalid configuration provided for FDB extension: no physical devices | 13:13 |
admin1 | sorry .. wallay -> yoga complained that file was not there, but went ahead | 13:13 |
admin1 | yoga -> zed populated that file | 13:13 |
admin1 | which has neutron_ml2_drivers_type: flat,vlan,vxlan,local in line, and also neutron_plugin_base: -- routing and metering | 13:13 |
admin1 | and nothing else | 13:13 |
admin1 | in /etc/neutron/plugins, there is a FDB line added, but there shared_physical_device_mappings = ( is blank) | 13:15 |
admin1 | would it cause the ovs-agent to go down ? | 13:15 |
admin1 | do we have a new variable in zed that is suppose to populate FDB ? error in the log is "Invalid configuration provided for FDB extension: no physical devices" | 13:18 |
noonedeadpunk | admin1: you're having sriov? | 13:22 |
noonedeadpunk | https://opendev.org/openstack/openstack-ansible/src/branch/master/releasenotes/notes/network_sriov_mappings-7e4c9bcb164625c3.yaml | 13:23 |
admin1 | yeah | 13:25 |
admin1 | https://gist.githubusercontent.com/a1git/a4d850497b6d5fe9805448d8467bd46c/raw/7a34db9d858e93418e3e062ab5f52d7f143face2/gistfile1.txt | 13:25 |
admin1 | that is how the config is now | 13:25 |
noonedeadpunk | I don't know much about sriov on neutron, so no idea how it must be filled in properly, but idea is to define network_sriov_mappings in neutron_provider_networks or openstack_user_config | 13:34 |
noonedeadpunk | I believe jamesdenton might know more about that | 13:34 |
admin1 | oh | 13:35 |
admin1 | i see | 13:35 |
admin1 | i get it | 13:35 |
admin1 | thanks for the hint noonedeadpunk , i know how to set it up now | 13:37 |
noonedeadpunk | ok, great :D | 13:38 |
admin1 | i think a lot of changes were introduced/enforced from w - z , as I stayed in y for just a day to notice | 13:38 |
admin1 | is osa also tested with trove and ironic ? | 13:39 |
noonedeadpunk | Well. Kind of. I've played with trove during... V? and it worked nicely | 13:41 |
noonedeadpunk | It was polished and taken care of back then | 13:41 |
noonedeadpunk | There's also doc pushed on how to deploy a separate rabbit cluster for trove that you defenitely want to have | 13:42 |
admin1 | i only want rabbit and postgres :) | 13:42 |
noonedeadpunk | I think the only issue we had with it were ipv6 support | 13:42 |
noonedeadpunk | But it was trove thing rather then deployment (though we should think about ipv6 to be frank as well) | 13:43 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/xena: Install erlang from packagecloud for RHEL https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/875782 | 14:06 |
jamesdenton | good morning | 14:13 |
prometheanfire | noonedeadpunk: it's literally the first thing I could get working, largely coppied from neutron | 14:28 |
admin1 | \o | 14:29 |
damiandabrowski | hi! | 14:44 |
noonedeadpunk | Yeah, I'd rename some variables I guess, but will see | 14:49 |
prometheanfire | the cert names seemed to be right at least | 15:35 |
prometheanfire | I mean, as saved on the cert host | 15:36 |
noonedeadpunk | ++ | 15:41 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/xena: Install erlang from packagecloud for RHEL https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/875782 | 15:43 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/xena: Update rabbitmq to 3.9.28 https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/875782 | 16:01 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/xena: Update rabbitmq to 3.9.28 https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/875782 | 16:02 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/yoga: Update rabbitmq to 3.10.7 https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/876398 | 16:09 |
prometheanfire | noonedeadpunk: it's probably a problem with my env, but I can't get a OVN LB to work, creates but IP doesn't respond to arp | 17:02 |
prometheanfire | probably my fault though | 17:02 |
noonedeadpunk | prometheanfire: you'd better pinged jamesdenton :D | 17:03 |
noonedeadpunk | I'm not on OVN yet - trying to get there | 17:03 |
noonedeadpunk | And never even played with octavia and ovn... | 17:03 |
prometheanfire | heh, jamesdenton so.... how you doin'? | 17:04 |
jrosser | oh i see `Proxy 'base-front-2': unable to find required default_backend: 'horizon-back'` | 17:55 |
jrosser | no horizon on metal jobs so that needs to be optional | 17:55 |
noonedeadpunk | jrosser: damiandabrowski has spotted more nasty thing with dynamic groups | 17:59 |
noonedeadpunk | Which is that haproxy is added as backend to all services | 17:59 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-haproxy_server master: Allow default_backend to be specified https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/876157 | 18:00 |
noonedeadpunk | Since it's part of <service>_all, and we have backends based on that group | 18:00 |
noonedeadpunk | (as we add it to <service>_all) | 18:00 |
noonedeadpunk | So looks like we have to rollback to using loops and playbook | 18:01 |
noonedeadpunk | As I have zero idea how to overcome this behaviour | 18:01 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible master: Split haproxy horizon config into 'base' frontend and 'horizon' backend https://review.opendev.org/c/openstack/openstack-ansible/+/876160 | 18:02 |
jrosser | i see, yes | 18:04 |
jrosser | noonedeadpunk: well really i still stand by my comment in the blueprint that moving all the group vars around is out of scope of a change moving the execution of haproxy for services into the service playbooks..... | 18:19 |
jamesdenton | hey prometheanfire | 18:22 |
jamesdenton | i'd need to circle back on the OVN LB piece - but tied up one some other stuff | 18:23 |
noonedeadpunk | jrosser: it\s quite small/relatively trivial addition to overall mess | 18:24 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Enable TLS frontend for repo_server by default https://review.opendev.org/c/openstack/openstack-ansible/+/876426 | 19:24 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Prepare main repo for separated haproxy config https://review.opendev.org/c/openstack/openstack-ansible/+/871189 | 19:28 |
jrosser | noonedeadpunk: this is unexpected https://paste.opendev.org/show/bowVjtmvnimW2myXdEsI/ | 19:30 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Prepare main repo for separated haproxy config https://review.opendev.org/c/openstack/openstack-ansible/+/871189 | 19:30 |
jrosser | and appears to work around the issue of having haproxy_all added into glance_all | 19:30 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-repo_server master: Add TLS support to repo_server backends https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/876429 | 19:35 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Enable TLS frontend for repo_server by default https://review.opendev.org/c/openstack/openstack-ansible/+/876426 | 19:36 |
jrosser | i think there is something broken for rabbitmq on master now | 19:42 |
jrosser | `No package erlang-25.2-1.el8.x86_64 available.` | 19:43 |
damiandabrowski | jrosser: does it require to explicitly pass a list of variables? if so, it would be tricky as *_haproxy_services sometimes refer to other variables from the foreign group | 19:56 |
jrosser | yes it does require them to be listed | 19:57 |
damiandabrowski | btw. i tested it for horizon on my AIO with applied haproxy-separated-service-config and it doesn't work | 19:57 |
damiandabrowski | https://paste.openstack.org/show/bXt96UjC85w4ibRNV8Q5/ | 19:57 |
damiandabrowski | i have no idea why it complains about 'ansible_architecture' is undefined :| | 19:58 |
jrosser | hmm well maybe my example is not realistic enough to see how limited it might be | 19:59 |
noonedeadpunk | jrosser: wait, what.... | 20:01 |
damiandabrowski | anyway guys, i need to leave in a moment. As I mentioned before, I'm leaving for a vacation so see you on 14th March | 20:01 |
damiandabrowski | just in case: i'll be partially available tomorrow | 20:01 |
damiandabrowski | thanks for your support! | 20:02 |
noonedeadpunk | regarding rabbit - yup, I don't see such package indeed... I wonder if we indeed should just bump series for erlang | 20:02 |
noonedeadpunk | like damiandabrowski did for Xena | 20:02 |
jrosser | i didnt like the '8' in the url for centos9 either | 20:02 |
jrosser | though that might be situation-normal here :) | 20:03 |
noonedeadpunk | yeah, it's as proposed by vendor /o\ | 20:03 |
noonedeadpunk | *maintainer | 20:03 |
noonedeadpunk | Hm, do you recall how we were using fixed versions of tempest plugins before Y.... | 20:06 |
noonedeadpunk | just in aio template or smth... | 20:06 |
jrosser | you mean this sort of thing? https://github.com/openstack/openstack-ansible-os_tempest/blob/stable/xena/defaults/main.yml#L118 | 20:09 |
noonedeadpunk | Aha, yes | 20:09 |
noonedeadpunk | But I'm afraid we're having circular dependency on X now... | 20:09 |
jrosser | iirc every so often we have to go and pin one of those on stable branch when tempest master no longer works | 20:09 |
noonedeadpunk | Well, it's fixed now kind of. But not on X | 20:10 |
noonedeadpunk | Y has that https://opendev.org/openstack/openstack-ansible/src/branch/stable/yoga/playbooks/defaults/repo_packages/openstack_testing.yml | 20:10 |
jrosser | maybe we make a mistake on X by not re-introducing that file? | 20:13 |
noonedeadpunk | I wonder if just using Y will work fine | 20:14 |
jrosser | there are instructions somehere about when that file should be present / updated / deleted as part of a release | 20:14 |
noonedeadpunk | Let's try that | 20:14 |
noonedeadpunk | Yeah, we tend to delete it before doing release | 20:14 |
noonedeadpunk | as it should be supplied by u-c or smth | 20:15 |
noonedeadpunk | But it's not as of today | 20:15 |
prometheanfire | jamesdenton: ack | 20:15 |
noonedeadpunk | And I'm not doing that anymore as we're having too much troubles with that approach | 20:16 |
noonedeadpunk | And tempest is not in u-c anymore at all, so | 20:16 |
prometheanfire | jamesdenton: the lb is created in the container (ovs shows it with the right IPs and ports), but I don't know how to figure out why or where it should be sending arp | 20:18 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/xena: Backport openstack_testing from Yoga https://review.opendev.org/c/openstack/openstack-ansible/+/876434 | 20:23 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/yoga: Update rabbitmq to 3.10.7 https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/876398 | 20:24 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/yoga: Update rabbitmq to 3.10.7 https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/876398 | 20:30 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/xena: Update rabbitmq to 3.9.28 https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/875782 | 20:33 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/xena: Update rabbitmq to 3.9.28 https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/875782 | 20:33 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/yoga: Update rabbitmq to 3.10.7 https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/876398 | 20:34 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/yoga: Update rabbitmq to 3.10.7 https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/876398 | 20:34 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Prepare main repo for separated haproxy config https://review.opendev.org/c/openstack/openstack-ansible/+/871189 | 20:34 |
noonedeadpunk | damn, I need to stop doing anything... | 20:34 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/yoga: Update rabbitmq to 3.10.7 https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/876398 | 20:40 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server master: Update erlang to 25.2.3 and rabbit to 3.11.10 https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/876436 | 20:43 |
noonedeadpunk | damiandabrowski: it complains about 'ansible_architecture' is undefined because of gather_facts: false | 20:49 |
noonedeadpunk | But that is super weird that I do also see haproxy_all being added as backend, but example really does work | 20:50 |
noonedeadpunk | hm, it works differently for some reason... | 20:54 |
noonedeadpunk | https://paste.openstack.org/show/bOU8Qi6CC3sYkL6C9ELp/ | 20:55 |
noonedeadpunk | So what's the difference with https://paste.openstack.org/show/bXt96UjC85w4ibRNV8Q5/ | 20:55 |
noonedeadpunk | localhost vs horizon_all | 20:56 |
jrosser | the add_host task is vey different | 20:56 |
jrosser | *very | 20:56 |
jrosser | no group for example | 20:56 |
noonedeadpunk | oh, right | 20:57 |
jrosser | i am not sure i totally understand it | 20:57 |
jrosser | but i was looking more at the examples in the add_host documentation | 20:58 |
jrosser | it's almost like it augments the existing haproxy host with a var from the other group | 20:58 |
jrosser | var or vars, whatever you list there | 20:59 |
jrosser | but that is perhaps quite fragile | 20:59 |
noonedeadpunk | we'd kind of need to name all vars in group_vars as haproxy_service_configs I assume | 21:03 |
jrosser | i was thinking ahead to where we would put things like `glance_backend_tls` and how that would get through | 21:06 |
noonedeadpunk | Yeah, that kind of works | 21:07 |
noonedeadpunk | https://paste.openstack.org/show/bgoBDI2Gux7H3Zuhp5Tm/ | 21:08 |
noonedeadpunk | We likely even can skip reloading inventory this way as haproxy_service_configs will be overriden regardless | 21:09 |
jrosser | we probably have other variables too that are used in the haproxy service config | 21:11 |
jrosser | that might be tricky | 21:11 |
noonedeadpunk | they all seemed to resolve | 21:11 |
noonedeadpunk | hm, or let me check... | 21:12 |
noonedeadpunk | I have picked too easy service | 21:12 |
noonedeadpunk | yes, they do resolve | 21:14 |
noonedeadpunk | I assume that add_host does fully load variable and pass it as resolved one | 21:15 |
noonedeadpunk | That is neat | 21:15 |
noonedeadpunk | I'm not sure I like accessing hostvars though but might be fine after all... | 21:15 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!