Thursday, 2021-02-25

*** tosky has quit IRC00:00
*** macz_ has quit IRC01:11
*** Underknowledge has quit IRC01:23
*** Underknowledge has joined #openstack-ansible01:24
*** macz_ has joined #openstack-ansible01:26
*** Carcer has quit IRC01:26
*** Carcer has joined #openstack-ansible01:27
*** macz_ has quit IRC01:31
*** cp- has quit IRC01:58
*** cp- has joined #openstack-ansible02:04
*** macz_ has joined #openstack-ansible02:07
*** macz_ has quit IRC02:11
*** LowKey has quit IRC02:20
openstackgerritMerged openstack/openstack-ansible master: Switch back to willshersystems ansible-sshd role  https://review.opendev.org/c/openstack/openstack-ansible/+/77696502:45
*** cp- has quit IRC02:45
*** cp- has joined #openstack-ansible02:46
*** spatel has joined #openstack-ansible03:35
*** gyee has quit IRC05:20
*** macz_ has joined #openstack-ansible05:27
*** macz_ has quit IRC05:32
*** evrardjp has quit IRC05:33
*** evrardjp has joined #openstack-ansible05:33
*** luksky has joined #openstack-ansible06:13
*** spatel has quit IRC06:14
*** ierdem has joined #openstack-ansible06:28
*** ioni has joined #openstack-ansible06:56
*** macz_ has joined #openstack-ansible07:08
*** macz_ has quit IRC07:13
*** pto has joined #openstack-ansible07:16
*** miloa has joined #openstack-ansible07:21
*** pto has quit IRC07:22
*** macz_ has joined #openstack-ansible07:24
*** macz_ has quit IRC07:29
*** miloa has quit IRC07:34
*** rpittau|afk is now known as rpittau07:57
ierdemHi everyone, I am facing an "EHOSTUNREACH" error in my nova container journals. When I checked the network with tcpdump I saw this log: http://paste.openstack.org/show/802989/. The IP addresses in this log are not in use now, but they were used for containers before. I have since cleaned my inventory and re-installed OpenStack, and I do not know where these IPs come from08:06
ierdemor how my containers find these IPs. Can you help me, please?08:06
*** andrewbonney has joined #openstack-ansible08:13
*** maharg101 has joined #openstack-ansible08:28
*** tosky has joined #openstack-ansible08:38
*** mathlin_ has quit IRC08:41
kleiniierdem: can you just grep for those IPs in all files of the nova container? especially /etc/nova, /etc/hosts08:50
ierdemI grepped all files and there is nothing about those IPs in any of them. I also searched the other containers, servers, etc.08:52
*** pto has joined #openstack-ansible08:56
*** pto has joined #openstack-ansible08:57
*** pto has quit IRC08:58
*** macz_ has joined #openstack-ansible08:58
*** macz_ has quit IRC09:03
kleinioh, you have that error message in the container journals. from which process are these errors?09:08
*** Underknowledge has quit IRC09:37
*** Underknowledge1 has joined #openstack-ansible09:37
*** Underknowledge1 is now known as Underknowledge09:37
*** macz_ has joined #openstack-ansible09:46
jrossernoonedeadpunk: do you think we have a good reason for setting "ANSIBLE_CACHE_PLUGIN_TIMEOUT": "86400"09:47
jrosserit means that i come back to the AIO i was using > 24hrs ago and weird things break09:47
jrosserlike run os-nova-install and it breaks on python_venv_build because the facts for the repo server are now out of date09:48
noonedeadpunkI think in all playbooks we've added fact gathering on launch09:49
jrosseryes, but only for the hosts in the play09:49
jrosserso i have fresh facts for nova_all, but not for repo_all09:49
noonedeadpunkah...09:49
jrosserleads to really weird breakage because the files all look good in /etc/openstack_deploy/ansible_facts09:50
noonedeadpunkit's default value actually https://docs.ansible.com/ansible/latest/reference_appendices/config.html#cache-plugin-timeout09:50
*** macz_ has quit IRC09:51
jrosseroh! interesting09:51
admin0ierdem, still having the same issues ?09:51
admin0\o09:51
jrosserwonder how this is not causing more issues already then09:51
noonedeadpunkwell, I just run all the fact gathering before running something with tags09:51
noonedeadpunkas eventually it's only tags that are breaking I guess09:51
noonedeadpunksince then gather_facts is not performed on role startup09:52
noonedeadpunkor we should have always here https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/os-nova-install.yml#L42-L43 (and everywhere in playbooks)09:52
jrosserwell i think that still won't make old repo_all facts become valid when running the nova playbook, will it?09:56
*** jhesketh_ has joined #openstack-ansible09:57
jrosseros-nova-install.yml has a dependency on repo_all facts because it uses python_venv_build09:57
noonedeadpunkso it was kind of working with ansible_ but not with ansible_facts, right?09:59
jrosserwell no i'm not sure it can09:59
*** brad- has joined #openstack-ansible10:00
jrosserok: [localhost] => {10:00
jrosser    "hostvars['aio1_repo_container-d8de7ef9']['ansible_facts']": {}10:00
*** arxcruz has quit IRC10:00
*** arxcruz has joined #openstack-ansible10:01
*** macz_ has joined #openstack-ansible10:02
*** Carcer has quit IRC10:03
*** jhesketh has quit IRC10:03
*** brad[] has quit IRC10:03
*** kleini has quit IRC10:03
*** irclogbot_2 has quit IRC10:03
*** mcarden has quit IRC10:03
jrossernoonedeadpunk: http://paste.openstack.org/show/802991/10:03
noonedeadpunkhm10:04
noonedeadpunkHow has it even been working, indeed?10:05
jrosseri guess its always been less than 24 hours since running something that gathers the needed facts10:05
*** macz_ has quit IRC10:06
jrosserthis has just come up i think because i pretty much didnt touch my AIO for a whole day then came back to it half way through the deployment10:06
noonedeadpunkand I was running only nova-config tags...?10:06
*** irclogbot_2 has joined #openstack-ansible10:06
jrosserif that doesnt do any of the venv stuff it's probably not going to occur10:06
noonedeadpunkwell my aio on U has been working for half a year or so... And I think I can just re-run playbooks and it should keep working...10:07
jrosserit was just accident i saw the timeout value in a dump of hostvars10:07
jrossersetting the timeout to 0 seems to do what i'd expect http://paste.openstack.org/show/802992/10:09
noonedeadpunkI re-ran glance and it failed, but only because lxc-dnsmasq had been killed by OOM so it couldn't update apt. But it hasn't failed here: http://paste.openstack.org/show/802993/10:11
noonedeadpunkoh, hm https://opendev.org/openstack/ansible-role-python_venv_build/src/branch/master/tasks/python_venv_wheel_build.yml#L16-L2110:12
noonedeadpunkmaybe we should move it to main?10:13
jrosserheres mine at a similar point for nova http://paste.openstack.org/show/802994/10:14
noonedeadpunkI think setting the cache timeout to 0 is also an option; it would just affect OS upgrades, where you'd need to go and clean up the facts manually10:14
noonedeadpunkmaybe we've broken it with https://opendev.org/openstack/ansible-role-python_venv_build/commit/ab68e944256648cb6d714e02dbe88d228a2c04a5 ?10:15
noonedeadpunkas I don't have it yet in my aio10:16
jrosseri think that patch is legitimate10:17
jrosserbetter fix is to set the cache time to 0, as I think we do gather facts correctly in each playbook when things are deployed10:18
jrosserit's more about when one thing wants facts from somewhere out of the scope of that playbook that we have a problem10:18
openstackgerritDmitriy Rabotyagov proposed openstack/ansible-role-python_venv_build master: Gather facts for repo containers  https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/77755910:20
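A rough sketch of the approach in that review (gathering facts for the repo/build host from inside python_venv_build); the variable name venv_build_host and the exact condition are assumptions here, not the merged change:

    # gather facts for the wheel build host if nothing usable is cached,
    # delegating both the task and the resulting facts to that host
    - name: Gather facts for the repo/build host
      setup:
        gather_subset: '!all'
      delegate_to: "{{ venv_build_host }}"
      delegate_facts: true
      when: hostvars[venv_build_host]['ansible_facts'] | default({}) | length == 0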
noonedeadpunkwell, we can disable the facts cache timeout. But even then the only way to get them updated is to run repo_all, which is not really intuitive either10:22
jrosseri'm going to revisit a lot of this with the ansible_facts[] stuff10:23
noonedeadpunkand I think we might miss proper facts gathering somewhere in terms of tags usage10:24
jrosserspecifically for nova i think we should add repo_all to the group here https://github.com/openstack/openstack-ansible/blob/master/playbooks/common-playbooks/nova.yml#L16-L2010:25
jrosseryeah, gathering facts for hosts: "{{ nova_hosts }},repo_all" in common_playbooks/nova.yml is the proper way and robust10:30
jrosserjust checked and that works fine10:31
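For reference, a minimal sketch of the play shape being described, assuming common-playbooks/nova.yml opens with a fact-gathering play (illustrative; see the linked file for the real contents):

    - name: Gather nova facts
      hosts: "{{ nova_hosts }},repo_all"
      gather_facts: "{{ osa_gather_facts | default(True) }}"
      tags:
        - always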
ierdemadmin0 yes same problem .. :/10:43
masterpeHi, I am trying to upgrade from Train to Ussuri and I'm following https://docs.openstack.org/openstack-ansible/ussuri/admin/upgrades/major-upgrades.html for this. But as I understand it, Ceph also needs to be upgraded from Nautilus to Octopus. For this I need to run openstack-ansible /opt/openstack-ansible/playbooks/ceph-install.yml with the rolling_update=true option.10:52
masterpeBut I hit the following issue: https://github.com/ceph/ceph-ansible/issues/540010:52
masterpeAm I correct that a ceph upgrade is required?10:53
*** pto has joined #openstack-ansible10:59
*** pto has quit IRC11:04
noonedeadpunkwell, I'd say that doing that in python_venv_build is more proper because that's the role that is failing... But I agree that this will mean multiplying the fact gathering for each role. On the other hand, each playbook would have this included anyway...11:06
noonedeadpunkIt's just less work to patch python_venv_build like this https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/77755911:06
*** sshnaidm|afk is now known as sshnaidm|pto11:22
noonedeadpunkmasterpe: yes, we use ceph-ansible 5 in Ussuri11:25
noonedeadpunkso generally you need a ceph upgrade, or you can just omit the ceph-install playbook, leave ceph as-is and upgrade it later. OpenStack can work nicely with Nautilus as well11:26
noonedeadpunkbut yes, it's better to upgrade anyway11:27
noonedeadpunkjrosser: is ansible_facts['env'] some object so `ansible_facts['env']['HOME']` just doesn't work?11:48
noonedeadpunk(works for me)11:50
jrossernoonedeadpunk: works for me too - did i miss changing a .HOME somewhere?11:52
CeeMacierdem: did you try tcpdump against the rabbit port to see what nova is trying to talk to?  Regarding your tcpdump, is this network dedicated to your OSA install, or do you have non-OSA things running on it too? The container could just be seeing arp requests from other devices on the same subnet11:52
noonedeadpunkyeah, it's in several roles in several places :) So I wasn't sure it works :)11:53
jrosseroh yes i see the emails now11:53
jrosserthis can all be fixed11:54
jrosseri just added the spaces to make it easier to see what the code was meaning11:54
jrosseri had to space it out here to understand what the nesting was to change it, which felt like somehow the expression was a bit too complex11:55
noonedeadpunkok, got it. just not used to spaces inside list brackets11:56
noonedeadpunkmaybe it's good to have them...11:56
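For anyone following along, the two equivalent forms being discussed look roughly like this (the variable names are placeholders for illustration):

    example_home_old: "{{ ansible_env.HOME }}"                # injected fact variable (old style)
    example_home_new: "{{ ansible_facts['env']['HOME'] }}"    # namespaced ansible_facts dict (new style)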
noonedeadpunkI have comments for https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/777179 that are probably relevant for all roles12:05
*** macz_ has joined #openstack-ansible12:05
noonedeadpunkwhat do you think about it? Shouldn't we clean up a bit once we're revising that?12:05
jonherhi, i'm trying to deploy an AIO on focal with stable/victoria, the default AIO setup with the addition of the adjutant role, but i'm getting this: http://paste.openstack.org/show/802997/   any ideas on what i could check to help troubleshoot this? it seems like the ci jobs deploy this fine against master: https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/77202012:06
*** macz_ has quit IRC12:09
noonedeadpunkoops, looks like some bug12:12
noonedeadpunkyeah, we use new resolver on master12:12
openstackgerritMerged openstack/openstack-ansible-lxc_container_create master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/77695812:15
noonedeadpunkI have a meeting right now, will be able to check right after that12:15
jonhertyvm :)12:16
*** macz_ has joined #openstack-ansible12:21
openstackgerritJonathan Rosser proposed openstack/ansible-role-python_venv_build master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/77706812:24
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-openstack_openrc master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/openstack-ansible-openstack_openrc/+/77707012:25
*** macz_ has quit IRC12:25
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-openstack_hosts master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/77696012:26
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-galera_server master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/77706012:27
jrossernoonedeadpunk: would you prefer me to combine cleanup stuff into the ansible_facts[] patches?12:27
jrosseri'e tried not to change anything else, like the centos-8 ternary stuff or suse/zypper12:28
jrosserwas going to take another pass over the whole thing for that12:28
noonedeadpunkwell, it would be awesome to have another pass, but it's just a matter of time and whether you have it... I'd also love to do such a pass but I'm not sure when I'll be able to get to it12:41
noonedeadpunkso if we don't need to use a ternary somewhere at all - why do we even replace it?12:42
noonedeadpunknot sure though...12:42
noonedeadpunkIt's not strong opinion anyway12:42
noonedeadpunkwas just asking how do you think. I'm ok to merge this as is actually12:43
jrosseri think atm i prefer to change just one thing12:43
noonedeadpunkok then12:43
jrossertrying out vscode rather than vim and these things are much easier across * repos12:43
jrosservscode remote into AIO = win12:44
noonedeadpunkyeah, vscode is super nice :)12:44
*** macz_ has joined #openstack-ansible12:52
*** macz_ has quit IRC12:56
openstackgerritMerged openstack/ansible-role-uwsgi master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/77738613:04
openstackgerritMerged openstack/openstack-ansible-rabbitmq_server master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/77707113:14
openstackgerritMerged openstack/openstack-ansible-memcached_server master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/openstack-ansible-memcached_server/+/77706913:15
*** kleini_ has joined #openstack-ansible13:18
openstackgerritMerged openstack/ansible-role-systemd_networkd master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/ansible-role-systemd_networkd/+/77695513:30
noonedeadpunkok, so, adjutant13:36
openstackgerritMerged openstack/openstack-ansible-repo_server master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/77707313:40
*** jamesdenton has joined #openstack-ansible13:40
*** gshippey has joined #openstack-ansible13:57
*** spatel has joined #openstack-ansible13:59
noonedeadpunkjonher: ok, I found it. what if you set `adjutant_venv_tag: "{{ venv_tag | default('untagged') }}"` in your user_variables and re-run it?14:04
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_adjutant master: Use global service variables  https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/77759614:07
jonheri put that into /etc/openstack_deploy/user_variables.yml and ran: openstack-ansible os-adjutant-install.yml    but it looks like it's hitting the same issue, do i need to set venv rebuild to true as well?14:08
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_adjutant master: Use venv_tag  https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/77759714:09
noonedeadpunkcan you pls copy that new issue?14:11
noonedeadpunkweird thing is - `You are using pip version 20.2.3; however, version 21.0.1 is available.` so it's pip version that should have old resolver, right?14:11
jrosseri don't think we merged the patch yet to actually change the resolver14:11
jonherhttp://paste.openstack.org/show/803003/14:11
jrosserbecause os_neutron is br0k14:12
noonedeadpunkoh, indeed14:12
noonedeadpunkjonher: can I also ask you to check `/var/log/python_wheel_build.log` inside repo_all[0] container?14:13
noonedeadpunkyeah, this one doesn't contain the reason why it failed :( http://paste.openstack.org/show/803005/14:16
noonedeadpunkso we really need to check that log file14:18
jonherah sorry, i emptied the log to make a fresh one, re-ran the playbook, and then it complained about adjutant-22.0.2.dev7-constraints.txt not existing; that is where i'm at right now14:19
noonedeadpunkI thought it should be `adjutant-22.0.2.dev7-source-constraints.txt` hm14:25
jonheradjutant-22.0.2.dev7-global-constraints.txt  adjutant-22.0.2.dev7-requirements.txt  adjutant-22.0.2.dev7-source-constraints.txt  adjutant-untagged-global-constraints.txt  adjutant-untagged-requirements.txt  adjutant-untagged-source-constraints.txt  exists in repo14:30
noonedeadpunkbut it complains about `adjutant-22.0.2.dev7-constraints.txt`?14:31
jonheryep: FAILED! => {"changed": false, "msg": "file not found: /var/www/repo/os-releases/22.0.2.dev7/adjutant-22.0.2.dev7-constraints.txt"}14:31
noonedeadpunkyou can try passing venv_rebuild=true in case you're on stable/victoria and have this patch applied https://opendev.org/openstack/ansible-role-python_venv_build/commit/9df14bdd0ba8221abd43ce26190f13d48ec8c08f. This should help14:35
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible master: DNM Test workaround octavia centos bug  https://review.opendev.org/c/openstack/openstack-ansible/+/77760114:39
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_octavia master: [reno] Stop publishing release notes  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/77204214:39
andrewbonneynoonedeadpunk: you might need https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/772559 for the octavia workaround too14:45
andrewbonneyOtherwise I think it'll fail looking for openstacksdk on the deploy host14:45
noonedeadpunkyeah, you're right14:46
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_octavia master: [reno] Stop publishing release notes  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/77204214:46
mgariepynoonedeadpunk, quick one about the sha bump. how is the new tag generated ?14:47
noonedeadpunkthere's tooling for it, but lately I just write it up by hand since it's faster. An x.x.Y release is a bugfix, x.Y.x is a feature release and Y.x.x is a major release14:49
noonedeadpunkso if there are feature release notes it should be a feature release, more or less14:49
mgariepyok, and the tag is added where ?14:51
mgariepyonce the bump is merged, you simply add the tag to the repo ?14:51
jonheredited the task removing the line that is in the commit, then tried: openstack-ansible os-adjutant-install.yml --extra-vars "venv_rebuild=true" and got: http://paste.openstack.org/show/803008/14:53
noonedeadpunkmgariepy: once the bump is merged I push a commit to the releases repo with a change to the deliverables https://opendev.org/openstack/releases/14:58
mgariepyha, less black magic :) haha14:59
noonedeadpunkit's actually pretty straightforward :p14:59
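The deliverables change itself is a small YAML addition in the openstack/releases repo, roughly this shape (a hedged sketch; the version number and sha are placeholders):

    # deliverables/<series>/openstack-ansible.yaml
    releases:
      - version: 22.1.0                 # x.Y.x because feature release notes exist
        projects:
          - repo: openstack/openstack-ansible
            hash: <sha being released>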
noonedeadpunkjonher:14:59
noonedeadpunkhuh.14:59
noonedeadpunkdo we even need mysqlclient there?15:00
*** pto has joined #openstack-ansible15:00
mgariepythat's what i was thinking but i didn't know where it was done ;) haha15:00
noonedeadpunkhm https://opendev.org/openstack/adjutant/src/branch/master/requirements.txt#L1915:01
jonheryeah was just looking at that15:01
*** spatel has quit IRC15:02
jonherit has its database in galera/mysql unless you run a local sqlite15:02
noonedeadpunkwell, every service we run has its own mysql database (well, almost every one)15:02
noonedeadpunkbut none use mysqlclient I guess15:03
noonedeadpunkok, let's install it then15:05
noonedeadpunksec15:05
noonedeadpunkcan you add override like that? http://paste.openstack.org/show/803009/15:06
noonedeadpunkconsidering it's ubuntu15:06
*** pto has quit IRC15:06
jonhernow it gets past the initial error but can't find libmariadb.so.3 when trying to import the module: http://paste.openstack.org/show/803010/15:12
noonedeadpunkdoh15:13
noonedeadpunkI hate mysqlclient....15:13
noonedeadpunkwhy not to use PyMySQL like all other projects...15:13
jonherwhy does this work in ci against master but not on stable/victoria? i'm still lost why that is the case15:14
noonedeadpunkI think because in CI we use metal jobs where libmariadbclient-dev gets installed as part of another role15:15
noonedeadpunkor with utility15:15
jonheroh15:15
noonedeadpunkif you manually install that package inside container, does it solve the issue?15:15
noonedeadpunk(inside adjutant container)15:15
noonedeadpunkI will patch this if this helps15:16
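In the meantime, a user_variables.yml override along these lines is the sort of workaround being discussed; the variable name below is a guess at the role's convention, so check the os_adjutant defaults (or the eventual patch) for the real one:

    # hypothetical override: install the MariaDB client packages in the
    # adjutant container so the mysqlclient module can build and import
    adjutant_distro_packages:
      - libmariadbclient-dev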
*** spatel has joined #openstack-ansible15:16
*** macz_ has joined #openstack-ansible15:22
jonheryep now os-adjutant-install.yml is OK15:22
noonedeadpunkwell, we did say it's experimental support :p15:22
noonedeadpunkbtw, I think you would love to have horizon integration as well?:)15:23
noonedeadpunkas I never had the time/motivation to make it15:23
jonherhehe yep, that's next step to see how it would be installed15:23
jonheri looked for the same for cloudkitty etc.. and saw some similar references in the horizon role15:23
noonedeadpunkwell, at this point it won't15:24
noonedeadpunkfor cloudkitty we have a half-baked solution that was never integrated properly15:24
admin0anyone using victoria + AD integration + multi domain auth and has their horizon not broken ?15:24
noonedeadpunkbut it shouldn't be too much work. we just never had enough interested people, and for us it's hard to make something nobody uses15:25
*** macz_ has quit IRC15:26
jonheryeah i see, do you know off the top of your head an os-ansible project that also publishes dashboards to horizon?15:26
jonherand thanks very much for the help in the adjutant case, i thought i did something wrong there at first or had a broken env15:27
noonedeadpunkI can take a look at horizon and adjutant dashboard tomorrow if you will be ready to test that :)15:28
noonedeadpunkkolla-ansible?15:28
noonedeadpunkwell, we added experimental support of adjutant in victoria so it's really new15:28
*** rh-jelabarre has joined #openstack-ansible15:29
jonheryeah sure, i would like to deploy adjutant with os-ansible instead of manually on the side like we used to, if it is not too much work, which is why i'm now testing how much of an effort it would be :)15:29
*** rh-jelabarre has quit IRC15:29
*** rh-jelabarre has joined #openstack-ansible15:30
noonedeadpunkwell I had a brief look at the adjutant dashboard and it should be pretty straightforward to add to the role15:30
noonedeadpunkbut without having a use case it's super hard to tell if you're doing it right, and there are other things that need to be done asap15:30
jonherwe have been using adjutant for 3+ years for inviting users / managing quota / requesting new projects / password resets and it works really well for that :)15:31
jonheri'll be available for testing, i'm not very good at ansible yet but i'll be happy to test15:31
noonedeadpunkyeah, agree. I was just using whmcs for that before and now we don't use even horizon, so...15:31
noonedeadpunkbut I always liked the project15:32
*** fanfi has joined #openstack-ansible15:33
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_adjutant master: Install mysql client libraries  https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/77760715:38
noonedeadpunkjonher: ^ that should be it15:38
*** macz_ has joined #openstack-ansible16:02
*** spatel has quit IRC16:12
openstackgerritMerged openstack/openstack-ansible stable/victoria: Add reno about barbican_simple_crypto_key  https://review.opendev.org/c/openstack/openstack-ansible/+/77585616:27
*** tsturm has joined #openstack-ansible16:32
tsturmWhen we update openstack via openstack-ansible, do we need to update the containers in the controllers as well?  It doesn't look like the update playbook covers updating them.16:34
*** spatel has joined #openstack-ansible16:51
noonedeadpunkwell yes, you do need to. You can look at what the upgrade script does https://opendev.org/openstack/openstack-ansible/src/branch/master/scripts/run-upgrade.sh#L170-L18916:53
noonedeadpunkbut I think it depends on what "updating the containers" is understood to include..16:53
noonedeadpunkall services in the containers should be updated, as well as the galera/rabbitmq versions, etc.16:54
jrossertsturm: not sure if it's clear that they are system containers not application containers, so unlike docker type things you don't need to "update the container" to get a new version of the service installed16:57
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_nova master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/77762716:58
tsturmWe are specifically asking about os packages within the system containers.17:00
noonedeadpunkyes, you need to run setup-openstack.yml as a part of upgrade to get new os service venvs/packages and db upgrades17:01
*** fresta has quit IRC17:13
*** fresta has joined #openstack-ansible17:14
*** fresta_ has joined #openstack-ansible17:21
*** fresta has quit IRC17:21
*** rpittau is now known as rpittau|afk17:34
*** maharg101 has quit IRC17:38
*** fresta_ has quit IRC17:52
*** fresta has joined #openstack-ansible17:54
*** luksky has quit IRC18:01
*** luksky has joined #openstack-ansible18:01
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_nova master: Do not use service_facts  https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/77764418:09
djhankbtsturm: If you are talking about os packages as in "operating system" and not "openstack" then you can use ansible to handle apt updates in the case of debian:    ansible all_containers -m apt -a "upgrade=yes update_cache=yes cache_valid_time=86400" --become18:14
tsturmThat's what we were looking for.  Thanks!18:15
djhankbyou would substitute the appropriate group you would like to target; my example would include all containers. You should be careful to upgrade the galera mysql packages beforehand, as this command could bring down all galera nodes at the same time, forcing you to re-bootstrap the cluster18:15
mgariepywe should add support for ansible vault  in the pwgent script :D18:16
djhankbyou can use this command to list out your groups (run this from the /opt/openstack-ansible directory):  ansible localhost -m debug -a 'var=groups'18:16
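The same apt upgrade as a small playbook, run one host at a time so that an entire galera cluster is not restarted at once (a sketch, not something shipped with OSA):

    - name: Upgrade distro packages inside the containers
      hosts: all_containers
      serial: 1                    # one container at a time; important for galera
      become: true
      tasks:
        - name: Upgrade apt packages
          apt:
            upgrade: safe          # equivalent to upgrade=yes in the ad-hoc example
            update_cache: true
            cache_valid_time: 86400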
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_neutron master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/77765018:42
spatelCan i use barbican as a password vault ?18:45
spateladmin0 i am not using AD but i used FreeIPA LDAP. Ldap is very standard so it should work with AD also https://satishdotpatel.github.io//openstack-ldap-integration/18:47
*** andrewbonney has quit IRC18:49
*** ierdem has quit IRC18:50
admin0i have it running on ussuri .. i upgraded to vic and horizon crashed18:51
admin0so trying to figure it out why18:51
admin0luckily in a lab env18:51
spatel+118:53
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_horizon master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/77765118:59
*** ioni has quit IRC19:01
*** maharg101 has joined #openstack-ansible19:34
mgariepyadmin0, when horizon fails, it's usually easier to just wipe the container and start from scratch19:39
*** maharg101 has quit IRC19:39
admin0mgariepy, i do that for everything except galera :)19:40
admin0i find it faster to redo than to troubleshoot19:40
mgariepyfor upgrades, if i know it can happen, i usually just delete one at the beginning, then running the upgrade will deploy a new one.19:40
mgariepythen wipe the remaining ones if they fail (they can be disabled in haproxy before deploying)19:41
mgariepynoonedeadpunk, wow. wtf for the 127.0.0.1 in /etc/hosts20:01
mgariepyall that because of the extra nodes in inventory ?20:02
mgariepyhha. nop i was mistaken i guess :D20:20
*** spatel has quit IRC20:27
*** ioni has joined #openstack-ansible20:28
*** logan- has quit IRC20:35
*** fresta_ has joined #openstack-ansible20:35
*** tinwood has quit IRC20:35
*** tinwood has joined #openstack-ansible20:36
*** mugsie_ has joined #openstack-ansible20:36
*** persia_ has joined #openstack-ansible20:36
*** bverschueren_ has joined #openstack-ansible20:36
*** grabes has quit IRC20:36
*** bverschueren has quit IRC20:36
*** zigo has quit IRC20:36
*** gary_perkins has quit IRC20:36
*** persia has quit IRC20:36
*** melwitt has quit IRC20:36
*** snadge has quit IRC20:36
*** gregwork has quit IRC20:36
*** mugsie has quit IRC20:36
*** mmercer has quit IRC20:36
*** melwitt has joined #openstack-ansible20:36
*** grabes has joined #openstack-ansible20:37
*** rh-jlabarre has joined #openstack-ansible20:37
*** mmercer has joined #openstack-ansible20:37
*** jmccrory_ has joined #openstack-ansible20:37
*** tosky_ has joined #openstack-ansible20:37
*** rh-jlabarre has quit IRC20:37
*** MrClayPole_ has joined #openstack-ansible20:37
*** fresta_ has quit IRC20:37
*** fresta_ has joined #openstack-ansible20:37
*** rh-jlabarre has joined #openstack-ansible20:37
*** jmccrory has quit IRC20:37
*** jmccrory_ is now known as jmccrory20:38
*** bradm has quit IRC20:38
*** logan- has joined #openstack-ansible20:38
*** evrardjp_ has joined #openstack-ansible20:38
*** heikkine has quit IRC20:38
*** fresta has quit IRC20:38
*** cp- has quit IRC20:38
*** mubix has quit IRC20:38
*** poopcat1 has joined #openstack-ansible20:38
*** aedan1 has joined #openstack-ansible20:39
*** jrosser has quit IRC20:40
*** rh-jelabarre has quit IRC20:40
*** csmart has quit IRC20:40
*** tosky has quit IRC20:40
*** noonedeadpunk has quit IRC20:40
*** trident has quit IRC20:40
*** fanfi has quit IRC20:40
*** evrardjp has quit IRC20:40
*** jrosser has joined #openstack-ansible20:40
*** poopcat has quit IRC20:40
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible master: Workaround nova bug  https://review.opendev.org/c/openstack/openstack-ansible/+/77760120:41
*** tosky_ is now known as tosky20:42
*** mubix has joined #openstack-ansible20:42
*** masterpe has quit IRC20:42
*** manti has quit IRC20:42
*** trident has joined #openstack-ansible20:42
*** fridtjof[m] has quit IRC20:42
*** janno has quit IRC20:43
*** MrClayPole has quit IRC20:43
*** dasp has quit IRC20:43
*** ioni has quit IRC20:43
*** cp- has joined #openstack-ansible20:43
*** ioni has joined #openstack-ansible20:43
*** janno has joined #openstack-ansible20:44
*** dasp has joined #openstack-ansible20:45
mgariepyhmm. placement issue when upgrading from U to V.20:57
mgariepyanyone seen it ? python_venv_build failed20:58
mgariepyon 2 out of 3 nodes.20:58
*** ioni has quit IRC21:01
*** ajg20 has joined #openstack-ansible21:02
*** csmart has joined #openstack-ansible21:04
mgariepyanyone seen this before? http://paste.openstack.org/show/803022/21:10
mgariepyone did install correctly but the other 2 failed with that21:10
*** manti has joined #openstack-ansible21:24
jrosserwhich host is that log from?21:24
mgariepyit was the placement container on the 3 infra hosts21:25
mgariepyin the first run only the first infra host passed..21:25
jrosserit should not be trying to build the placement code on the placement container21:26
jrosserfor some reason it’s not got it from the repo server21:26
mgariepythen after trying various things the second one did pass.21:26
mgariepyhmm.21:26
jrossercould be lsync fail on the repo containers21:26
jrosser1-in-n usually points to loadbalancer  induced trouble21:27
*** fridtjof[m] has joined #openstack-ansible21:29
mgariepyrestarted lsyncd..21:31
mgariepyhmm. same issue.21:31
jrosseryou need to look through more of the venv build log and see it try to get the placement wheel from the repo server, and why that hasn't happened21:34
mgariepyafter rerunning another time, the download seems to have passed. i'll debug the repo container tomorrow.21:37
mgariepythanks for your help jrosser21:37
mgariepyit was working great until that :) haha21:37
jrosserhmm really has the feel of inconsistent content in the repo servers somehow21:38
mgariepyit was running so smoothly until that haha21:38
mgariepyanyway, i'll check it tomorrow21:39
mgariepyand keep you updated if i find something.21:39
*** pto has joined #openstack-ansible21:46
mgariepyhmm. is there a way to force re-download of the file ?21:48
mgariepy-e venv_rebuild=true ?21:48
*** pto has quit IRC21:50
*** jbadiapa has quit IRC21:52
openstackgerritDavid Moreau Simard proposed openstack/openstack-ansible master: scripts-library: simplify ara setup  https://review.opendev.org/c/openstack/openstack-ansible/+/77769622:26
openstackgerritDavid Moreau Simard proposed openstack/openstack-ansible-tests master: test-ansible-env-prep: simplify ara setup  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/77769722:27
*** luksky has quit IRC22:35
*** bradm has joined #openstack-ansible22:45
jonhermgariepy: --extra-vars "venv_rebuild=true" worked for me; noonedeadpunk and I spoke about an issue earlier today on victoria: https://opendev.org/openstack/ansible-role-python_venv_build/commit/9df14bdd0ba8221abd43ce26190f13d48ec8c08f this might be what you are seeing22:51
*** gshippey has quit IRC23:33
*** mathlin has joined #openstack-ansible23:33
*** aedan1 is now known as aedan23:34
