Friday, 2023-03-03

opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible master: Prepare main repo for separated haproxy config  https://review.opendev.org/c/openstack/openstack-ansible/+/871189 00:51
noonedeadpunkprometheanfire: that looks waaay better, thank you!08:59
noonedeadpunkI will try to have some better review shortly 09:00
noonedeadpunkjrosser: we could just `notify: systemd service restart` from any service role. The problem comes here: https://opendev.org/openstack/openstack-ansible-os_neutron/src/branch/master/handlers/main.yml 09:02
noonedeadpunkwhen we need to stop service, do actions, start service09:02
noonedeadpunkSo we can't really use systemd_service handlers at all09:02
noonedeadpunkAt least for neutron09:02
noonedeadpunkFor some services, like glance, we can just drop handlers from service role09:03
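For context, a minimal sketch (not the actual role code) of the stop/act/start handler chain being discussed, in the style of the neutron handlers linked above; task and variable names such as `filtered_neutron_services` are illustrative:

```yaml
# Illustrative only: chained handlers that all react to one notification,
# so extra work can happen while the services are stopped.
- name: Stop services
  systemd:
    name: "{{ item.service_name }}"
    state: stopped
    daemon_reload: true
  loop: "{{ filtered_neutron_services }}"
  listen: Restart neutron services

- name: Do work that requires the services to be down
  debug:
    msg: "placeholder for the in-between actions"
  listen: Restart neutron services

- name: Start services
  systemd:
    name: "{{ item.service_name }}"
    state: started
  loop: "{{ filtered_neutron_services }}"
  listen: Restart neutron services
```

This is why a generic restart handler in systemd_service is not enough here: the restart cannot be collapsed into a single `state: restarted` call.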
jrossernoonedeadpunk: so that means that the statefulness/logic can only really be in one place?09:16
jrosseri.e. we must either pass a flag into systemd_service to tell it to restart (like my example), or perhaps the opposite: systemd_service should allow the restart to be suppressed and instead set a flag (or call an external handler?)09:17
jrosseri was thinking that there were some parallels here with the change we did to the pki role to pass a custom handler name09:18
noonedeadpunkWell, the thing is that systemd_service is suppressing restart as of today09:18
jrosserright - but it knows the unit file was updated, for example09:18
noonedeadpunkYeah, I started doing that as well until I realized we have complex handlers where we need to stop things and start them again some time later09:18
noonedeadpunkWell, we can listen for that like we do with uwsgi09:19
noonedeadpunkBut then the problem is that we need to apply complex logic in service roles to understand if it _really_ should even be touched09:20
noonedeadpunkbasically that https://opendev.org/openstack/ansible-role-systemd_service/src/branch/master/handlers/main.yml#L34 09:20
noonedeadpunkBut it's less trivial I guess...09:21
noonedeadpunkBut yeah, it can be done in both directions09:21
noonedeadpunkI'm less frustrated today, so will just see how awful a condition I end up with...09:21
jrosserfor neutron we don't even use `enabled`09:23
jrosserisn't it that everything that ends up in `filtered_neutron_services` is by definition enabled09:24
jrosserso imho there is possibly a reduction in complexity possible here for this case09:25
noonedeadpunkNo?09:25
jrosserwhat do i miss? :)09:25
noonedeadpunkWell, then if there's an override we won't be able to disable the service09:26
noonedeadpunkAs it will be filtered out before being passed to systemd_service09:26
jrosserwhat kind of override?09:27
noonedeadpunkOk, in neutron we indeed filter this out....09:28
jrosseryeah - i really was just trying to look for simplifications09:30
noonedeadpunkBut for example here https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/defaults/main.yml#L357 09:30
noonedeadpunkI think we want to be able to disable the cinder-backup service when we say it should be disabled...09:30
noonedeadpunkBut we again do filter it out....09:31
noonedeadpunkBut my point is that we likely should not :)09:31
noonedeadpunkIncidentally, how I found that out: I tried to disable/enable uwsgi for glance and realised the service is not restarted when the unit file is changed, but the daemon is reloaded09:33
noonedeadpunkAlso, this kind of makes me wonder whether we do a double restart of neutron agents - and if it's not one of the issues that hit us during upgrades....09:33
jrosserit's not great that the roles all take a slightly different approach here09:35
noonedeadpunkthis is another part/angle of the issue....09:35
noonedeadpunkBut yeah, I think we'd need to harmonize that with the same approach09:36
jrosseri'm just a bit concerned that we do a huge complex thing, when some adjustment around the edges might make it overall simpler09:36
jrosserkind of like the haproxy stuff09:36
jrosserright now i don't know what that adjustment might be though09:37
noonedeadpunkYeah, let me prepare a change, I'll ping you09:37
jrosserfrom a conceptual p.o.v. there is something to be said for "the systemd_service role is responsible for everything to do with services"09:37
jrosserand we need to hook role handlers into it for these <stop> <do stuff> <start> cases09:38
noonedeadpunkBut I think that we should 1. use service roles to restart services 2. do not use systemd_service to restart them, except sockets and timers 3. Leave functionality of the role systemd_service for standalone usage09:38
admin1during yoga -> zed upgrade, this is coming up which leads to nova being down now .. https://gist.githubusercontent.com/a1git/6f25cfb53feb2cb3b6d122da5664b462/raw/5bb0563f46c05bcea47fbc2a04f607f1f045f4ab/gistfile1.txt  09:42
admin1and is it possible to just use nova on yoga while everything else is already zed 09:43
noonedeadpunkum, yes, it's possible, but issue is weird09:45
noonedeadpunkAs Y->Z should be just fine, and it's also in the output that the minimum compute version is Y09:45
noonedeadpunkAre you sure that all your computes are on Y?09:45
admin1yes 09:53
admin1they are .. because i did w-> y a few days back 09:53
admin1and validated all were working via canary deployment09:53
admin1is it possible to just have nova on old version 09:53
admin1i commented out (#) the nova playbook in setup-openstack and completed the rest to zed ..  . 09:54
noonedeadpunkUm, you can just supply SHA from Yoga for nova in variables to leave it intact09:56
noonedeadpunkI kind of wonder if you might already have such an override that slipped through during the upgrade?09:58
noonedeadpunk`nova_git_install_branch`09:58
admin1this will be the first time in almost 8 years that i will be overriding something that came out of the box :D 09:59
admin1i just work with tags and stick to user_config and user_variables 09:59
admin1never override anything else 09:59
noonedeadpunkyeah, so you can define nova_git_install_branch to user_variables 10:00
noonedeadpunkTo be stable/yoga or any SHA or tag that corresponds to Y10:01
noonedeadpunkBut again - the error you got should not happen and is really weird10:01
admin1so change  playbooks/defaults/repo_packages/openstack_services.yml   nova_git_install_branch from 9bca7f34a0d1e546bff8e226cbe45ccbc520c878 (26.0.1)  to   e0723453eaff0346630419413d5b6bff55aa3791 (25.3.0)10:01
admin1is that the only thing i need to do ? 10:01
noonedeadpunkyou can place nova_git_install_branch: e0723453eaff0346630419413d5b6bff55aa3791 in user_variables10:02
admin1got it 10:02
noonedeadpunkno need to change /openstack_services.yml10:02
admin1ok10:02
noonedeadpunkYou might also want to set `nova_venv_tag` to prior OSA version to re-use old venv10:03
admin1do i have to setup bootstrap again ? 10:03
noonedeadpunkOh! And one more thing10:03
noonedeadpunkset `nova_upper_constraints_url: https://releases.openstack.org/constraints/upper/yoga` (instead of yoga you can use SHA for `requirements_git_install_branch` from prior osa version)10:04
noonedeadpunkno need to bootstrap10:04
admin1the 25.3.0 venv is in all servers .. so   i need nova_git_install_branch: e0723453eaff0346630419413d5b6bff55aa3791, nova_venv_tag = 25.3.0  and also  nova_upper_constraints_url  -- 3 total new variables ? 10:05
noonedeadpunknova_upper_constraints_url: https://releases.openstack.org/constraints/upper/707da594d13b7f1ed47800660aaa2a8a804689df 10:07
noonedeadpunkYeah, I think this should be enough10:07
admin1nova_git_install_branch: e0723453eaff0346630419413d5b6bff55aa3791, nova_upper_constraints_url: https://releases.openstack.org/constraints/upper/707da594d13b7f1ed47800660aaa2a8a804689df , nova_venv_tag:  25.3.0 10:08
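Putting the three overrides together, the user_variables.yml snippet discussed above would look roughly like this (the SHAs and tag are the ones quoted in this conversation; adjust them to your own Yoga/OSA versions):

```yaml
# /etc/openstack_deploy/user_variables.yml - hold nova back on Yoga while
# the rest of the environment runs Zed. Values are taken from this chat.
nova_git_install_branch: e0723453eaff0346630419413d5b6bff55aa3791   # nova SHA from OSA 25.3.0 (Yoga)
nova_venv_tag: 25.3.0                                               # re-use the existing Yoga venv
nova_upper_constraints_url: "https://releases.openstack.org/constraints/upper/707da594d13b7f1ed47800660aaa2a8a804689df"
```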
noonedeadpunkadmin1: can you check 1 thing before executing that?10:09
admin1sure 10:09
jrosserbtw I never saw anything like this in Y-Z upgrades10:09
admin1this means i need to somehow segregate all the cpu types here, and try to selectively upgrade one from y -z 10:10
noonedeadpunkwhat's the output of `mysql -e "select version from nova.services where deleted = 0;"` ? 10:10
noonedeadpunkIt's not about cpu types but about rpc api compatibility I believe?10:11
noonedeadpunkAs the only failures are regarding `Check: Older than N-1 computes`10:12
noonedeadpunkoutput of mysql should be all 60 iirc10:12
admin1https://gist.githubusercontent.com/a1git/13ceb2e181dab9532a5b229b2915b478/raw/eee560226905d21159808f36a038c9201d8e2d23/gistfile1.txt 10:13
noonedeadpunkor well10:13
admin1hmm.. 10:13
noonedeadpunklet me check10:13
admin1this is strange .. 10:13
noonedeadpunkso, the ones that are 60 are the reason for the failure. And they don't seem to have been upgraded to Y10:13
noonedeadpunkYou can do a hacky thing :)10:13
noonedeadpunkGiven that you've checked that nova-compute is running from the Zed venv on these computes and pip freeze | grep nova from the venv says that nova is 26.*.*.dev*, you can just update the DB :p10:15
noonedeadpunkactually - that could be the case if these computes are down 10:15
admin1they are down 10:15
noonedeadpunkAs then they won't report new version to rpc10:15
noonedeadpunkIf there's a reason to leave them - update DB. If not - delete them from services10:16
admin1now i think of it, the version 60 that is in the db are all computes that are temporarily down 10:18
admin1i can live without them for now10:18
noonedeadpunkok, then if they're temp down - update db10:18
noonedeadpunkif perm down - no reason to keep them kind of10:18
admin1yes but right now even the api layer is down when i run something like hypervisor list, so  i do not think i can use the api to temporarily delete them 10:20
admin1for forcing db upgrade, i have to be in any nova controller, inside the venv and then use "nova-manage db sync" 10:21
noonedeadpunkIt will auto-heal once you update db ;)10:22
noonedeadpunkUm, not really, I think you will need smth like `update nova.services set version = 61 where version = 60 and deleted = 0;`10:23
admin1would me doing delete from nova.services where version=60  .. or   update nova.services set version=61 where version=60 also help ? 10:23
admin1i was thinking the same 10:23
noonedeadpunkAnd yes, that will just fix the api and computes and everything10:24
noonedeadpunkWe did that when we were jumping through releases10:24
noonedeadpunkjrosser: seems https://review.opendev.org/c/openstack/openstack-ansible/+/876160 not really happy on metal ;(10:25
admin1running nova playbooks again with zed 10:26
admin1btw, i was able to make  magnum work nicely with osa ..   k8s deploys nicely .. only issue is in the ingress :) 10:32
admin1but i was able to mitigate that using octavia + nodeports .. and today testing metallb with vyos as bgp 10:32
jrossernoonedeadpunk: hmm I probably have to make an AIO for that, won't be today10:41
jrosserhuh that is odd10:42
jrosseranyway !working today…..10:42
admin1"<bauzas> admin1: again, that's by design that we don't allow non-upgraded compute to be left registered" ( from openstack-nova ) .. so i think in my case, some servers were offline where wallaby -> yoga did not complain .. but when came time for yoga - zed,  the db upgrade part got stuck while the venvs were already in the compute 10:45
admin1while the new venvs*10:45
admin1if this works ( current run ) , then i am good to go . 10:45
noonedeadpunkoh, yes, sorry, I've completely forgotten today is friday jrosser10:45
noonedeadpunkadmin1: yeah-yeah-yeah, I know :)10:46
admin1but thank you for teaching how to bypass a specific version for a specific component noonedeadpunk 10:47
admin1beer from me next time i see you :) 10:47
noonedeadpunkno worries:)10:47
admin1to jrosser as well . all guys here ..  as you have been all helpful 10:47
*** odyssey4me is now known as odyssey4me__10:58
*** odyssey4me__ is now known as odyssey4me10:58
*** odyssey4me is now known as odyssey4me__11:06
*** odyssey4me__ is now known as odyssey4me11:06
opendevreviewDmitriy Rabotyagov proposed openstack/ansible-role-systemd_service master: Reduce output by leveraging loop labels  https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/876302 11:22
opendevreviewDmitriy Rabotyagov proposed openstack/ansible-role-systemd_service master: Reduce output by leveraging loop labels  https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/876302 11:36
opendevreviewDmitriy Rabotyagov proposed openstack/ansible-role-systemd_service master: Fix tags usage for included tasks  https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/876321 11:39
*** odyssey4me is now known as odyssey4me__12:02
admin1upgrade successful .. 12:22
noonedeadpunksweet12:32
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_glance master: Ensure service is restarted on unit file changes  https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/876328 12:59
noonedeadpunkjrosser: smth like that regarding systemd ^12:59
noonedeadpunkI was thinking about variables in handler names for a while, and I'm not sure if we need them in this case. Likely not13:00
noonedeadpunkI don't think we include systemd_service more than once without flushing handlers in between - we always supply just a list of services to configure in one round13:01
noonedeadpunkSo I'd keep that simple. Also there's a caveat with using vars in handler names - you have to be careful when doing that. Having vars in listen would be helpful, but it's not possible13:01
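A rough sketch of the pattern being described, where the service role keeps a plainly named handler and hooks it to systemd_service via `listen`; handler and notification names below are illustrative, not necessarily what the proposed glance change uses:

```yaml
# handlers/main.yml in the service role (illustrative names)
- name: Restart glance services
  systemd:
    name: "{{ item.service_name }}"
    state: restarted
    daemon_reload: true
  loop: "{{ filtered_glance_services }}"   # hypothetical filtered service list
  listen: "systemd service changed"        # generic notification a shared role could emit
```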
admin1from wallaby -> zed, it created a file  called  user_neutron_migration.yml .. and now my hosts give  ERROR neutron.agent.l2.extensions.fdb_population [None req-944ff309-b58b-4551-954c-e0f180097607 - - - - - -] Invalid configuration provided for FDB extension: no physical devices13:13
admin1sorry .. wallaby -> yoga complained that file was not there, but went ahead13:13
admin1yoga -> zed populated that file13:13
admin1which has a line with neutron_ml2_drivers_type: flat,vlan,vxlan,local, and also neutron_plugin_base with routing and metering13:13
admin1and nothing else13:13
admin1in /etc/neutron/plugins, there is an FDB line added, but shared_physical_device_mappings =      is blank there13:15
admin1would it cause the ovs-agent to go down ? 13:15
admin1do we have a new variable in zed that is supposed to populate  FDB ?   error in the log is "Invalid configuration provided for FDB extension: no physical devices" 13:18
noonedeadpunkadmin1: are you using sriov?13:22
noonedeadpunkhttps://opendev.org/openstack/openstack-ansible/src/branch/master/releasenotes/notes/network_sriov_mappings-7e4c9bcb164625c3.yaml 13:23
admin1yeah13:25
admin1https://gist.githubusercontent.com/a1git/a4d850497b6d5fe9805448d8467bd46c/raw/7a34db9d858e93418e3e062ab5f52d7f143face2/gistfile1.txt 13:25
admin1that is how the config is now 13:25
noonedeadpunkI don't know much about sriov on neutron, so no idea how it must be filled in properly, but the idea is to define network_sriov_mappings in neutron_provider_networks or openstack_user_config13:34
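For reference, the release note linked above introduces `network_sriov_mappings` on provider networks; a hedged sketch of how that might look in openstack_user_config.yml, with placeholder physnet, interface and group names:

```yaml
# openstack_user_config.yml fragment - illustrative values only
global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-provider"
        container_type: "veth"
        type: "vlan"
        range: "100:200"
        net_name: "physnet1"
        network_interface: "bond1"
        network_sriov_mappings: "physnet1:ens1f0"   # assumed physnet:device format
        group_binds:
          - neutron_openvswitch_agent
          - neutron_sriov_nic_agent
```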
noonedeadpunkI believe jamesdenton might know more about that13:34
admin1oh 13:35
admin1i see 13:35
admin1i get it13:35
admin1thanks for the hint noonedeadpunk , i know how to set it up now 13:37
noonedeadpunkok, great :D13:38
admin1i think a lot of changes were introduced/enforced from w - z , as I stayed on y for just a day so didn't notice 13:38
admin1is osa also tested with trove and ironic ? 13:39
noonedeadpunkWell. Kind of. I've played with trove during... V? and it worked nicely13:41
noonedeadpunkIt was polished and taken care of back then13:41
noonedeadpunkThere's also a doc pushed on how to deploy a separate rabbit cluster for trove, which you definitely want to have13:42
admin1 i only want rabbit and postgres :) 13:42
noonedeadpunkI think the only issue we had with it was ipv6 support13:42
noonedeadpunkBut it was a trove thing rather than a deployment one (though we should think about ipv6 as well, to be frank)13:43
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/xena: Install erlang from packagecloud for RHEL  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/875782 14:06
jamesdentongood morning14:13
prometheanfirenoonedeadpunk: it's literally the first thing I could get working, largely copied from neutron14:28
admin1\o14:29
damiandabrowskihi!14:44
noonedeadpunkYeah, I'd rename some variables I guess, but will see14:49
prometheanfirethe cert names seemed to be right at least15:35
prometheanfireI mean, as saved on the cert host15:36
noonedeadpunk++15:41
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/xena: Install erlang from packagecloud for RHEL  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/875782 15:43
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/xena: Update rabbitmq to 3.9.28  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/875782 16:01
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/xena: Update rabbitmq to 3.9.28  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/875782 16:02
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/yoga: Update rabbitmq to 3.10.7  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/876398 16:09
prometheanfirenoonedeadpunk: it's probably a problem with my env, but I can't get an OVN LB to work - it creates, but the IP doesn't respond to arp17:02
prometheanfireprobably my fault though17:02
noonedeadpunkprometheanfire: you'd better ping jamesdenton :D17:03
noonedeadpunkI'm not on OVN yet - trying to get there17:03
noonedeadpunkAnd never even played with octavia and ovn...17:03
prometheanfireheh, jamesdenton so.... how you doin'?17:04
jrosseroh i see `Proxy 'base-front-2': unable to find required default_backend: 'horizon-back'`17:55
jrosserno horizon on metal jobs so that needs to be optional17:55
noonedeadpunkjrosser: damiandabrowski has spotted another nasty thing with dynamic groups17:59
noonedeadpunkWhich is that haproxy is added as a backend to all services17:59
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-haproxy_server master: Allow default_backend to be specified  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/876157 18:00
noonedeadpunkSince it's part of <service>_all, and we have backends based on that group18:00
noonedeadpunk(as we add it to <service>_all)18:00
noonedeadpunkSo it looks like we have to roll back to using loops and a playbook18:01
noonedeadpunkAs I have zero idea how to overcome this behaviour18:01
opendevreviewJonathan Rosser proposed openstack/openstack-ansible master: Split haproxy horizon config into 'base' frontend and 'horizon' backend  https://review.opendev.org/c/openstack/openstack-ansible/+/876160 18:02
jrosseri see, yes18:04
jrossernoonedeadpunk: well really i still stand by my comment in the blueprint that moving all the group vars around is out of scope of a change moving the execution of haproxy for services into the service playbooks.....18:19
jamesdentonhey prometheanfire 18:22
jamesdentoni'd need to circle back on the OVN LB piece - but tied up on some other stuff18:23
noonedeadpunkjrosser: it's quite a small/relatively trivial addition to the overall mess18:24
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible master: Enable TLS frontend for repo_server by default  https://review.opendev.org/c/openstack/openstack-ansible/+/876426 19:24
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible master: Prepare main repo for separated haproxy config  https://review.opendev.org/c/openstack/openstack-ansible/+/871189 19:28
jrossernoonedeadpunk: this is unexpected https://paste.opendev.org/show/bowVjtmvnimW2myXdEsI/ 19:30
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible master: Prepare main repo for separated haproxy config  https://review.opendev.org/c/openstack/openstack-ansible/+/871189 19:30
jrosserand appears to work around the issue of having haproxy_all added into glance_all19:30
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-repo_server master: Add TLS support to repo_server backends  https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/876429 19:35
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible master: Enable TLS frontend for repo_server by default  https://review.opendev.org/c/openstack/openstack-ansible/+/876426 19:36
jrosseri think there is something broken for rabbitmq on master now19:42
jrosser`No package erlang-25.2-1.el8.x86_64 available.`19:43
damiandabrowskijrosser: does it require explicitly passing a list of variables? if so, it would be tricky as *_haproxy_services sometimes refer to other variables from the foreign group19:56
jrosseryes it does require them to be listed19:57
damiandabrowskibtw. i tested it for horizon on my AIO with applied haproxy-separated-service-config and it doesn't work19:57
damiandabrowskihttps://paste.openstack.org/show/bXt96UjC85w4ibRNV8Q5/ 19:57
damiandabrowskii have no idea why it complains about 'ansible_architecture' is undefined :|19:58
jrosserhmm well maybe my example is not realistic enough to see how limited it might be19:59
noonedeadpunkjrosser: wait, what....20:01
damiandabrowskianyway guys, i need to leave in a moment. As I mentioned before, I'm leaving for a vacation so see you on 14th March20:01
damiandabrowskijust in case: i'll be partially available tomorrow20:01
damiandabrowskithanks for your support!20:02
noonedeadpunkregarding rabbit - yup, I don't see such package indeed... I wonder if we indeed should just bump series for erlang20:02
noonedeadpunklike damiandabrowski did for Xena20:02
jrosseri didn't like the '8' in the url for centos9 either20:02
jrosserthough that might be situation-normal here :)20:03
noonedeadpunkyeah, it's as proposed by vendor /o\20:03
noonedeadpunk*maintainer20:03
noonedeadpunkHm, do you recall how we were using fixed versions of tempest plugins before Y....20:06
noonedeadpunkjust in aio template or smth...20:06
jrosseryou mean this sort of thing? https://github.com/openstack/openstack-ansible-os_tempest/blob/stable/xena/defaults/main.yml#L118 20:09
noonedeadpunkAha, yes20:09
noonedeadpunkBut I'm afraid we're having circular dependency on X now...20:09
jrosseriirc every so often we have to go and pin one of those on stable branch when tempest master no longer works20:09
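Such a pin usually means overriding the plugin entry from the linked defaults in user_variables; a sketch, assuming the `tempest_plugins` list structure shown in the stable/xena defaults (the plugin name, repo and ref below are placeholders, not a recommendation):

```yaml
# user_variables.yml fragment - pin a tempest plugin to a known-good ref;
# name, repo and branch/SHA here are purely illustrative
tempest_plugins:
  - name: neutron-tempest-plugin
    repo: https://opendev.org/openstack/neutron-tempest-plugin
    branch: "1.6.0"   # tag or SHA known to work with this stable branch
```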
noonedeadpunkWell, it's fixed now kind of. But not on X20:10
noonedeadpunkY has that https://opendev.org/openstack/openstack-ansible/src/branch/stable/yoga/playbooks/defaults/repo_packages/openstack_testing.yml 20:10
jrossermaybe we made a mistake on X by not re-introducing that file?20:13
noonedeadpunkI wonder if just using Y will work fine20:14
jrosserthere are instructions somewhere about when that file should be present / updated / deleted as part of a release20:14
noonedeadpunkLet's try that20:14
noonedeadpunkYeah, we tend to delete it before doing release20:14
noonedeadpunkas it should be supplied by u-c or smth20:15
noonedeadpunkBut it's not as of today20:15
prometheanfirejamesdenton: ack20:15
noonedeadpunkAnd I'm not doing that anymore as we're having too much trouble with that approach20:16
noonedeadpunkAnd tempest is not in u-c anymore at all, so20:16
prometheanfirejamesdenton: the lb is created in the container (ovs shows it with the right IPs and ports), but I don't know how to figure out why or where it should be sending arp20:18
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible stable/xena: Backport openstack_testing from Yoga  https://review.opendev.org/c/openstack/openstack-ansible/+/876434 20:23
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/yoga: Update rabbitmq to 3.10.7  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/876398 20:24
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/yoga: Update rabbitmq to 3.10.7  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/876398 20:30
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/xena: Update rabbitmq to 3.9.28  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/875782 20:33
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/xena: Update rabbitmq to 3.9.28  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/875782 20:33
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/yoga: Update rabbitmq to 3.10.7  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/876398 20:34
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/yoga: Update rabbitmq to 3.10.7  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/876398 20:34
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible master: Prepare main repo for separated haproxy config  https://review.opendev.org/c/openstack/openstack-ansible/+/871189 20:34
noonedeadpunkdamn, I need to stop doing anything...20:34
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/yoga: Update rabbitmq to 3.10.7  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/876398 20:40
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server master: Update erlang to 25.2.3 and rabbit to 3.11.10  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/876436 20:43
noonedeadpunkdamiandabrowski: it complains that 'ansible_architecture' is undefined because of gather_facts: false20:49
noonedeadpunkBut it is super weird that I also see haproxy_all being added as a backend, yet the example really does work20:50
noonedeadpunkhm, it works differently for some reason...20:54
noonedeadpunkhttps://paste.openstack.org/show/bOU8Qi6CC3sYkL6C9ELp/ 20:55
noonedeadpunkSo what's the difference with https://paste.openstack.org/show/bXt96UjC85w4ibRNV8Q5/ 20:55
noonedeadpunklocalhost vs horizon_all20:56
jrosserthe add_host task is very different20:56
jrosserno group for example20:56
noonedeadpunkoh, right20:57
jrosseri am not sure i totally understand it20:57
jrosserbut i was looking more at the examples in the add_host documentation20:58
jrosserit's almost like it augments the existing haproxy host with a var from the other group20:58
jrosservar or vars, whatever you list there20:59
jrosserbut that is perhaps quite fragile20:59
noonedeadpunkwe'd kind of need to name all vars in group_vars as haproxy_service_configs I assume21:03
jrosseri was thinking ahead to where we would put things like `glance_backend_tls` and how that would get through21:06
noonedeadpunkYeah, that kind of works21:07
noonedeadpunkhttps://paste.openstack.org/show/bgoBDI2Gux7H3Zuhp5Tm/ 21:08
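A hedged reconstruction of the add_host idea being discussed (the pastes themselves are not preserved in the log): the service play augments the existing haproxy hosts with an already-resolved variable, so the haproxy play can consume it later. Task, group and variable names here are assumptions, not the final implementation:

```yaml
# Illustrative only - run from a service play (e.g. glance) to hand its
# haproxy configuration over to the haproxy hosts without putting
# haproxy_all into the service group.
- name: Register haproxy config for this service on the haproxy hosts
  add_host:
    name: "{{ item }}"
    haproxy_service_configs: "{{ glance_haproxy_services }}"   # hypothetical per-service var
  loop: "{{ groups['haproxy_all'] }}"
  changed_when: false
```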
noonedeadpunkWe can likely even skip reloading the inventory this way as haproxy_service_configs will be overridden regardless21:09
jrosserwe probably have other variables too that are used in the haproxy service config21:11
jrosserthat might be tricky21:11
noonedeadpunkthey all seemed to resolve21:11
noonedeadpunkhm, or let me check...21:12
noonedeadpunkI had picked too easy a service21:12
noonedeadpunkyes, they do resolve21:14
noonedeadpunkI assume that add_host fully loads the variable and passes it as a resolved one21:15
noonedeadpunkThat is neat21:15
noonedeadpunkI'm not sure I like accessing hostvars though but might be fine after all...21:15
