Friday, 2018-10-19

*** markvoelker has quit IRC00:02
*** ThiagoCMC has joined #openstack-ansible00:13
*** cshen has joined #openstack-ansible00:14
ThiagoCMCcloudnull, hey man, I'm running the setup-hosts playbook now but I thought that it would be using systemd-nspawn everywhere! However, I'm still seeing that "lxc-ls -f" works on my servers! Is this expected?00:14
ThiagoCMCI thought that there would be no more "*lxc*" processes running...00:15
cloudnullcorrect, you should not see any lxc when running nspawn00:15
ThiagoCMCOh00:16
ThiagoCMCSomething is wrong...00:16
cloudnull:'(00:16
ThiagoCMC:-(00:16
ThiagoCMCI have the line "container_tech: nspawn" under "global_overrides:"00:16
ThiagoCMCMaybe I should declare it before "global_overrides:" (i.e., side by side with it, and "used_ips"... ?00:17
cloudnull you can destroy the containers, `openstack-ansible lxc-containers-destroy.yml`, and purge all things lxc everywhere: `ansible -m shell -a 'apt-get remove --purge lxc* lxd* snap*' hosts`00:17
ThiagoCMCNice! No need to re-deploy everything lol00:18
cloudnullthe global_overrides should work, but maybe we need to use a group var instead.00:18
*** cshen has quit IRC00:18
ThiagoCMCIt isn't working.00:19
cloudnullonce you purge all the things I'd also recommend running something like `ansible -m shell -a 'ip link del lxcbr0' hosts`00:19
cloudnulland `ansible -m shell -a 'systemctl stop lxc-dnsmasq' hosts`00:20
cloudnulljust to be sure ;)00:20
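[Editor's note: taken together, cloudnull's cleanup advice above amounts to the following sequence. This is a sketch only; the `hosts` pattern and package globs are the ones quoted in the conversation, and the commands assume a deployment host with the OSA inventory in place.]

```shell
# Destroy the existing containers, then purge LXC/LXD from every host.
cd /opt/openstack-ansible/playbooks
openstack-ansible lxc-containers-destroy.yml
ansible -m shell -a 'apt-get remove --purge -y lxc* lxd* snap*' hosts

# Remove the leftover LXC bridge and stop the LXC dnsmasq, "just to be sure".
ansible -m shell -a 'ip link del lxcbr0' hosts
ansible -m shell -a 'systemctl stop lxc-dnsmasq' hosts
```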
cloudnullbummer on the global_overrides, that should work, however I'll have to dig into why it's not.00:20
cloudnullthat said you could add the option to a file /etc/openstack_deploy/group_vars/all.yml00:21
cloudnullwhich should do much of the same thing00:21
cloudnullI'd be curious if that goes.00:21
cloudnullonce you have the variable in place you can test it with `openstack-ansible containers-nspawn-deploy.yml --list-hosts`00:21
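[Editor's note: the fallback cloudnull describes, a deployer group var applying to every host, would look like this. A sketch; the path and variable name are the ones given above.]

```yaml
# /etc/openstack_deploy/group_vars/all.yml
container_tech: nspawn
```

With the variable in place, `openstack-ansible containers-nspawn-deploy.yml --list-hosts` listing hosts and containers confirms it is being picked up.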
*** rgogunskiy has joined #openstack-ansible00:22
cloudnullif you see hosts and containers in the list it's working00:22
ThiagoCMCThat's cool! I'll try again in a few minutes.00:22
ThiagoCMCI'm also using Ubuntu MaaS, so, if I can't clean it up using your suggestions, I'll re-deploy.00:23
ThiagoCMCBTW, the "openstack-ansible containers-nspawn-deploy.yml --list-hosts" should work right after "setup-hosts" playbook, right?00:23
cloudnullyes00:23
cloudnullthe list will work no matter what00:23
ThiagoCMCok00:23
cloudnullthat just returns a list of known host items00:24
cloudnullif it's empty, the variable isn't working00:24
ThiagoCMCOk, I'll wait for the setup-hosts to finish, then, I'll try it00:24
cloudnullthe output looks like so http://paste.openstack.org/show/732453/00:26
ThiagoCMCThank you!00:27
cloudnullin that environment i have 1 host with nspawn enabled "utility1"00:27
ThiagoCMCAwesome00:28
cloudnullthe "all_nspawn_containers" list is empty because it's generated when the playbook is executed. the important part of that output is the "nspawn_host" section00:29
cloudnullif that has host entries then nspawn is in fact enabled.00:29
ThiagoCMCOk00:30
*** mmercer has quit IRC00:47
*** rgogunskiy has quit IRC00:56
*** markvoelker has joined #openstack-ansible00:58
*** ThiagoCMC has quit IRC01:18
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: clean up readme  https://review.openstack.org/61166801:24
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Additional playbook cleanup and use stable release  https://review.openstack.org/61166101:25
*** markvoelker has quit IRC01:32
openstackgerritMerged openstack/openstack-ansible-ops master: Cleanup the osquery role  https://review.openstack.org/61164101:39
*** jmccrory has joined #openstack-ansible02:02
*** jonher has quit IRC02:05
*** jonher has joined #openstack-ansible02:05
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Additional cleanup and simplification  https://review.openstack.org/61174102:17
*** ThiagoCMC has joined #openstack-ansible02:18
openstackgerritMerged openstack/openstack-ansible-ops master: clean up readme  https://review.openstack.org/61166802:24
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Additional playbook cleanup and use stable release  https://review.openstack.org/61166102:29
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Additional cleanup and simplification  https://review.openstack.org/61174102:29
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Additional cleanup and simplification  https://review.openstack.org/61174102:30
*** rgogunskiy has joined #openstack-ansible02:54
cloudnullgreatgatsby yes, that should work. however I know there have been issues with cloud-init and networking specifically, so a lot of folks use config-drive and glean (https://docs.openstack.org/infra/glean/) to do a lot of that.03:08
* cloudnull is no expert on glean or cloud-init but odyssey4me and/or prometheanfire might be able to speak more to the advantages/disadvantages of either solution.03:09
openstackgerritMerged openstack/openstack-ansible-os_gnocchi stable/rocky: use include_tasks instead of include  https://review.openstack.org/60929703:10
openstackgerritjacky06 proposed openstack/openstack-ansible-os_gnocchi stable/rocky: Replace Chinese punctuation with English punctuation  https://review.openstack.org/61175403:10
openstackgerritjacky06 proposed openstack/openstack-ansible-galera_client stable/rocky: Replace Chinese punctuation with English punctuation  https://review.openstack.org/61175503:10
openstackgerritjacky06 proposed openstack/openstack-ansible-os_horizon stable/rocky: Replace Chinese punctuation with English punctuation  https://review.openstack.org/61175603:10
openstackgerritjacky06 proposed openstack/openstack-ansible-os_panko stable/rocky: Replace Chinese punctuation with English punctuation  https://review.openstack.org/61175703:10
openstackgerritjacky06 proposed openstack/openstack-ansible-ops stable/rocky: Replace Chinese punctuation with English punctuation  https://review.openstack.org/61175803:10
openstackgerritjacky06 proposed openstack/openstack-ansible-plugins stable/rocky: Replace Chinese punctuation with English punctuation  https://review.openstack.org/61175903:10
openstackgerritjacky06 proposed openstack/openstack-ansible-os_tempest stable/rocky: Replace Chinese punctuation with English punctuation  https://review.openstack.org/61176003:11
*** ThiagoCMC has quit IRC03:19
*** rgogunskiy has quit IRC03:26
openstackgerritMerged openstack/openstack-ansible-ops master: Additional playbook cleanup and use stable release  https://review.openstack.org/61166103:37
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Additional cleanup and simplification  https://review.openstack.org/61174103:53
openstackgerritJimmy McCrory proposed openstack/openstack-ansible-plugins master: Fix connection plugin for Ansible 2.6  https://review.openstack.org/61176503:58
openstackgerritJimmy McCrory proposed openstack/openstack-ansible master: Update ansible to latest stable 2.6.x  https://review.openstack.org/58116604:04
*** faizy98 has quit IRC04:06
openstackgerritJimmy McCrory proposed openstack/openstack-ansible master: Use loop_var name in when clause  https://review.openstack.org/61176604:09
openstackgerritJimmy McCrory proposed openstack/openstack-ansible master: Pin argparse for idempotent pip installs  https://review.openstack.org/61176704:11
openstackgerritJimmy McCrory proposed openstack/openstack-ansible master: Idempotent system_crontab_coordination  https://review.openstack.org/61176804:12
openstackgerritJimmy McCrory proposed openstack/openstack-ansible-os_keystone master: Remove keystone service user  https://review.openstack.org/61176904:13
*** cshen has joined #openstack-ansible04:15
openstackgerritJimmy McCrory proposed openstack/ansible-role-python_venv_build master: Mark build task changed only when wheels are built  https://review.openstack.org/61177004:16
*** cshen has quit IRC04:19
*** faizy98 has joined #openstack-ansible04:23
openstackgerritJimmy McCrory proposed openstack/openstack-ansible master: Update ansible to latest stable 2.6.x  https://review.openstack.org/58116604:47
openstackgerritMerged openstack/openstack-ansible-ops master: Additional cleanup and simplification  https://review.openstack.org/61174104:47
*** cshen has joined #openstack-ansible05:01
*** cshen has quit IRC05:05
*** cshen has joined #openstack-ansible05:09
openstackgerritjacky06 proposed openstack/openstack-ansible-os_horizon master: Add watcher dashboard into horizon  https://review.openstack.org/60315605:12
*** pcaruana has joined #openstack-ansible06:14
*** fghaas has joined #openstack-ansible07:04
*** pcaruana has quit IRC07:09
*** shardy has joined #openstack-ansible07:10
*** pcaruana has joined #openstack-ansible07:28
*** pcaruana is now known as pcaruana|elisa|07:30
openstackgerritMarkos Chandras (hwoarang) proposed openstack/openstack-ansible-os_neutron master: tasks: neutron_install: Fix group for neutron hosts  https://review.openstack.org/61161307:36
*** tosky has joined #openstack-ansible07:36
*** jbadiapa has quit IRC07:53
*** electrofelix has joined #openstack-ansible07:56
*** suggestable has joined #openstack-ansible07:58
*** FuzzyFerric has joined #openstack-ansible07:58
*** thuydang has joined #openstack-ansible08:00
*** DanyC has joined #openstack-ansible08:00
*** thuydang has quit IRC08:04
*** DanyC has quit IRC08:05
*** jbadiapa has joined #openstack-ansible08:08
*** jbadiapa has quit IRC08:18
odyssey4meI do love it when jmccrory suddenly jumps in and does a bunch of work - and the work to unblock ansible 2.6 is especially useful!08:30
*** cshen has quit IRC08:31
odyssey4meI wonder if logan- and evrardjp could look at https://review.openstack.org/#/c/61176508:31
odyssey4meand hwoarang given you've done some plugin work too08:32
hwoarangfun stuff08:34
*** jbadiapa has joined #openstack-ansible08:36
evrardjplooks cleaner but upgrades will be challenging08:42
suggestableHey folks!08:43
suggestableI asked yesterday afternoon, but there appear to be more people in the channel now, so...08:43
suggestableDeploying stable/rocky from distro packages on Bionic. Running setup_hosts and setup_infrastructure go through just fine, but healthcheck_infrastructure fails on the second repo check (which I believe it should skip). Is this a bug?08:45
odyssey4mesuggestable: that work's not complete yet, and the healthcheck needs modification to work with it - if that's within your interest to use, perhaps you could help get it working?08:54
*** cshen has joined #openstack-ansible08:58
*** DanyC has joined #openstack-ansible09:01
suggestableodyssey4me: I would love to, but I'm currently not permitted to contribute code to outside projects (yay for politics!).09:02
*** cshen has quit IRC09:03
noonedeadpunkmorning everyone.09:03
noonedeadpunkfolks, what do you think about https://review.openstack.org/#/c/611585/ ? It's probably not a very important patch for everyone, but it just fixes some cases like mine)09:05
*** DanyC has quit IRC09:06
*** strobelight has joined #openstack-ansible09:07
odyssey4mesuggestable: hmm, well, would it be ok if you submitted bug reports - and provided the solution in the bug report?09:11
benkohlHi! Because 18.0.0 was released, I tried to deploy openstack again. Now I get this error with setup-openstack.yml: https://snag.gy/RUILQx.jpg Any idea?09:12
*** strobelight has quit IRC09:12
odyssey4mebenkohl: that appears to be a stale fact cache, or an unsupported distro09:13
*** strobelight has joined #openstack-ansible09:13
suggestableodyssey4me: I like your style ;-)09:14
odyssey4mesuggestable: also, if you're able - there are many bugs already registered that could do with triaging - and it's a great way to get to know OSA more09:14
suggestableodyssey4me: Also, running setup-openstack I get "'keystone_oslomsg_rpc_password' is undefined". The pw_token_gen script was then run (with --regen) against an empty file to see if any more passwords were missing, and couldn't find any reference to the missing password. Perhaps the pw-token-gen script is bugged?09:15
benkohlodyssey4me: yeah, I really get many problems with the fact cache and linux screen. I always delete the fact cache folder content before running a playbook. Is it possible to fill the fact cache before running the other playbooks?09:16
odyssey4mesuggestable: it's definitely there: https://github.com/openstack/openstack-ansible/blob/stable/rocky/etc/openstack_deploy/user_secrets.yml#L3609:17
odyssey4mesuggestable: did you perhaps bootstrap on something older?09:17
odyssey4mebenkohl: you can run: ansible -m setup all09:17
benkohlodyssey4me: Thank you :)09:18
suggestableodyssey4me: I bootstrapped after grabbing the rocky files, over the top of the queens files/bootstrap.09:19
odyssey4mebenkohl: unfortunately the 'gather_facts' in the playbook works a little differently from using the setup module in a task - the second does a proper update, the first tries to be smart and often doesn't work all that nicely09:19
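[Editor's note: benkohl's manual workflow, clearing the cache and regathering with the setup module, can be sketched as follows. The fact cache path is the default OSA location and is an assumption here.]

```shell
# Remove cached facts, then do a full regather; `ansible -m setup` always
# refreshes, unlike the play-level gather_facts odyssey4me mentions.
rm -f /etc/openstack_deploy/ansible_facts/*
ansible -m setup all
```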
suggestableIs there a way to force it to re-bootstrap from scratch?09:19
odyssey4mesuggestable: use the lxc-containers-destroy play to delete the containers and data, then wipe out your /etc/openstack_deploy directory (or move it to another path), then rebootstrap from scratch09:21
odyssey4mealthough personally I prefer using disposable VM's09:21
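[Editor's note: odyssey4me's from-scratch steps, as a sketch. The backup path is illustrative, and further configuration is still needed after the bootstrap.]

```shell
# Delete the containers and their data.
cd /opt/openstack-ansible/playbooks
openstack-ansible lxc-containers-destroy.yml

# Move the user-space config aside rather than deleting it outright.
mv /etc/openstack_deploy /etc/openstack_deploy.bak

# Re-bootstrap the deployment host from scratch.
cd /opt/openstack-ansible
scripts/bootstrap-ansible.sh
```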
*** hamzaachi has joined #openstack-ansible09:30
suggestableodyssey4me: When I grabbed the rocky base from GitHub, I moved /opt/openstack-ansible to a different path, then re-grabbed it, followed by running the Rocky bootstrap. Was this not sufficient?09:31
odyssey4mesuggestable: nope, all your user-space config is still in place - and the issue is that the user-space config is missing that var09:32
odyssey4mespecifically the /etc/openstack_deploy/user_secrets.yml file09:32
odyssey4methere is one of the upgrade plays that will add anything that's missing09:32
*** greatgatsby has quit IRC09:34
*** greatgatsby has joined #openstack-ansible09:34
suggestableodyssey4me: OK, I'm trying to deploy Rocky onto Bionic, on wiped-and-reloaded machines, using my Queens configs (with the line added to user_variables to set it to install from distro repos). The only machine that wasn't wiped and reloaded was the deployment host.09:35
suggestableodyssey4me: How do I trigger the config upgrade script(s)?09:35
odyssey4mesuggestable: so if you had a running queens install, then you can just use the scripts/run-upgrade.sh script to do the upgrade unattended09:44
odyssey4meor you can use the manual steps if you prefer https://docs.openstack.org/openstack-ansible/rocky/admin/upgrades/major-upgrades.html09:44
suggestableodyssey4me: I *did* have a working test Queens deployment, but we blew it away for the upgrade, as we felt it was best to do a clean install on Bionic for longer-term support of the platform.09:52
*** faizy98 has quit IRC09:59
*** faizy98 has joined #openstack-ansible10:00
noonedeadpunkodyssey4me: Am I right that lxc containers are created with the OS which is installed on the node? So in case of an OS upgrade from xenial to bionic, I may destroy and create containers one by one? And that's the way to upgrade?10:01
odyssey4mesuggestable: the upgrade doesn't actually need any running environment - just the old config and inventory will do I think10:02
*** DanyC has joined #openstack-ansible10:02
odyssey4menoonedeadpunk: yep, we have some notes here - please add to them based on your tests and experience: https://etherpad.openstack.org/p/osa-rocky-bionic-upgrade10:02
suggestableodyssey4me: Thanks.10:02
odyssey4melook through the whole etherpad before doing anything, because there are two sets of testing and results there10:03
noonedeadpunkodyssey4me: thanks, that will be useful. I'll update it if I find something interesting. Once I get R working after the upgrade from Q, I'll start the OS upgrade.10:05
noonedeadpunkAs something strange happened with nova-api containers10:06
*** DanyC has quit IRC10:06
*** faizy_ has joined #openstack-ansible10:13
*** faizy98 has quit IRC10:17
*** cshen has joined #openstack-ansible10:25
*** hamzaachi has quit IRC10:29
*** DanyC has joined #openstack-ansible10:39
*** faizy_ has quit IRC10:40
*** faizy_ has joined #openstack-ansible10:40
*** DanyC has quit IRC10:43
openstackgerritMarkos Chandras (hwoarang) proposed openstack/openstack-ansible master: zuul: SUSE: Fix scenario for ceph jobs  https://review.openstack.org/61184310:52
*** dave-mccowan has joined #openstack-ansible10:55
benkohlodyssey4me: your hint worked... and now that :( https://snag.gy/xOqMXR.jpg sorry, I'm definitely annoying.11:01
benkohland that: https://snag.gy/RzDMtr.jpg11:03
odyssey4mebenkohl: was there perhaps a failure earlier in the tasks - because that handler failure is due to a missing service11:04
*** DanyC has joined #openstack-ansible11:06
benkohlodyssey4me: that makes sense... The buffer of gnu screen seems to be too short, so I must run the task again with a larger buffer.11:06
odyssey4mebenkohl: yep, I use tmux with a buffer of 9999 - but note that there is a log file too in https://github.com/openstack/openstack-ansible/blob/master/scripts/openstack-ansible.rc#L1911:08
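[Editor's note: for reference, the tmux setting odyssey4me mentions, plus following the log file that openstack-ansible.rc configures. The log path shown is an assumption taken from that file's usual contents.]

```shell
# Larger scrollback than screen's default, as suggested above.
tmux set-option -g history-limit 9999

# Follow the playbook log that ANSIBLE_LOG_PATH points at (path assumed).
tail -f /openstack/log/ansible-logging/ansible.log
```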
noonedeadpunkthe only question about the upgrade is how to complete the config conversion from lxc2 to lxc3. Am I right that it should be completed manually per these release notes? https://discuss.linuxcontainers.org/t/lxc-2-1-has-been-released/48711:11
noonedeadpunkAs without this conversion, containers won't come up after the upgrade. Or is this the first option, which may be skipped if the second one is used? I.e. upgrade 2 nodes, work from only one of them, and don't mind the non-working containers after the upgrade at all11:15
*** DanyC_ has joined #openstack-ansible11:20
*** DanyC has quit IRC11:23
*** jonher has quit IRC11:26
*** jonher has joined #openstack-ansible11:27
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible master: Make system_crontab_coordination idempotent  https://review.openstack.org/61176811:28
odyssey4menoonedeadpunk: the conversion should be automatic, we've put translations into the lxc-container-create role11:35
*** vollman has quit IRC11:37
*** cshen has quit IRC11:41
*** cshen has joined #openstack-ansible11:47
canori01hey guys, I've been having trouble getting designate-sink to work with openstack-ansible. I've configured the notification topic and the neutron handler. However, designate-sink just sits there doing nothing. Has anyone run into this?11:47
hwoarangcores + mbuil: any luck helping me get https://review.openstack.org/#/c/611613/ in, to fix a neutron race condition?12:06
*** rgogunskiy has joined #openstack-ansible12:07
*** rgogunskiy has quit IRC12:11
noonedeadpunkFolks, I have problems after the Q->R upgrade. It started with the VNC console in the nova-api container. Then I decided to re-create these containers, after which nova-api was refusing to start because of missing .so libraries. I decided to re-create the repo container as well, but while building wheels I'm receiving the following error: http://paste.openstack.org/show/732488/12:14
noonedeadpunkAnd in /var/log/repo/wheel_build.log on this host I see following problem http://paste.openstack.org/show/732489/12:15
odyssey4menoonedeadpunk: did you look at https://gist.github.com/cloudnull/cb87440c8221104ed2b857e67289905f12:16
odyssey4methat note helps resolve the missing so files in the venvs12:16
noonedeadpunkbut it is still on xenial12:16
odyssey4meoh really? that's interesting12:16
noonedeadpunkand during repo_build role run12:16
odyssey4meoh, it's from gnocchi - of course :/12:17
*** cshen has quit IRC12:18
noonedeadpunkyeah, and it's using ceph as a backend12:18
odyssey4meah, interesting - ok it seems that cython is not pinned anywhere - nice to find that gap12:18
odyssey4meI'll push up a patch for that12:19
suggestableHey again...12:20
suggestableI've managed to get it a lot further this time around, but hitting a dependency issue with Neutron-Linuxbridge-Agent.12:20
suggestableDepends: neutron-linuxbridge-agent (= 2:13.0.0~b2-0ubuntu1~cloud0) but 2:13.0.1-0ubuntu1~cloud0 is to be installed12:20
suggestable(stable/rocky on Bionic from distro)12:21
*** cshen has joined #openstack-ansible12:21
noonedeadpunkodyssey4me: cython is present to be honest, but it's too new for cradox12:21
odyssey4menoonedeadpunk: yep, I'll help you out for an override now - just preparing a patch first12:21
*** faizy98 has joined #openstack-ansible12:22
noonedeadpunkyep, sure.12:22
odyssey4menoonedeadpunk: can you try adding 'Cython<0.28' into https://github.com/openstack/openstack-ansible-os_gnocchi/blob/master/defaults/main.yml#L177-L184 and see if doing a repo rebuild works?12:24
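[Editor's note: the edit odyssey4me proposes would look roughly like this in the os_gnocchi role defaults. List members other than the Cython pin are abbreviated/illustrative; only the pin is the actual change being discussed.]

```yaml
# defaults/main.yml (openstack-ansible-os_gnocchi), sketch:
gnocchi_pip_packages:
  - "Cython<0.28"            # upper pin so cradox can still build
  - gnocchi[mysql,keystone]  # illustrative; the existing entries stay as-is
  # ...
```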
*** faizy_ has quit IRC12:24
noonedeadpunksure, give me several minutes to check this out12:25
*** vollman has joined #openstack-ansible12:26
*** priteau has joined #openstack-ansible12:34
*** gkadam has joined #openstack-ansible12:52
*** strattao has joined #openstack-ansible12:52
mbuilhwoarang: One question regarding your patch: if I understand correctly, you detected that apparmor is sometimes not applied on a host because neutron_apparmor_hosts points to a neutron container which is not part of the inventory. To fix that, you want neutron_apparmor_hosts to point to all possible neutron containers, right? Then I wonder about the change, because doesn't 'neutron_role_project_group' group fewer containers than 'all'?12:56
mbuilhwoarang: in fact, I thought 'neutron_role_project_group' is a subgroup of 'all'12:56
odyssey4meneutron_role_project_group is only applied by the repo build - it has absolutely nothing to do with anything else13:00
hwoarangmbuil: the all subgroup contains containers which have nothing to do with neutron13:02
hwoarangso it was filtering the wrong thing13:02
hwoarangi want to only filter the neutron containers and find the physical hosts from them13:03
hwoaranganyway, what we want is to iterate through the entire list of neutron containers, find all the physical hosts for them, and then fix apparmor13:03
noonedeadpunkodyssey4me: I've tried to remove Cython from the repo container, but version 0.29 was installed again. It was installed by "Install pip packages (from repo)". So probably I should re-create the container?13:04
*** faizy98 has quit IRC13:05
*** faizy98 has joined #openstack-ansible13:06
odyssey4menoonedeadpunk: removing cython won't work - did the edit of the gnocchi role work? If not - I have a patch prepared which will work - I was just hoping for a less intrusive one.13:07
noonedeadpunkodyssey4me: I've appended to gnocchi_pip_packages "Cython<0.28" (and "Cython<0.29"), but 0.29 was still installed at repo container13:09
noonedeadpunkand the second thought is that the ansible pip module accepts the version as a separate argument, and I never tried to set a package version inside the package name...13:10
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible master: Set an upper pin for Cython to please gnocchi  https://review.openstack.org/61186413:10
odyssey4menoonedeadpunk: try that patch13:10
mbuilhwoarang: ok, understood, thanks13:12
odyssey4menoonedeadpunk: you may need to force a wheel & venv rebuild using the flags13:13
odyssey4menoonedeadpunk: actually, perhaps best to just remove /var/www/repo/os-releases/18.0.0 and all its contents13:14
noonedeadpunkodyssey4me: thanks, will try it now13:14
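[Editor's note: odyssey4me's "remove the release directory and rebuild" suggestion, sketched against the repo hosts. The `repo_all` group name is an assumption; the release path is the one quoted above.]

```shell
# Drop the built artifacts for the 18.0.0 release on the repo containers,
# then rerun the repo build so wheels and venvs are rebuilt with the new pin.
ansible repo_all -m shell -a 'rm -rf /var/www/repo/os-releases/18.0.0'
cd /opt/openstack-ansible/playbooks
openstack-ansible repo-build.yml
```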
*** lbragstad is now known as elbragstad13:23
openstackgerritMohammed Naser proposed openstack/openstack-ansible-os_neutron master: ovs: force to secure fail mode by default  https://review.openstack.org/61187413:37
noonedeadpunkodyssey4me: that worked for me13:43
*** faizy_ has joined #openstack-ansible13:45
*** faizy98 has quit IRC13:49
mgariepyhoping this is the last time i ask for this one: can i get a few votes : https://review.openstack.org/#/c/605789/13:51
odyssey4mecores - https://review.openstack.org/611864 needs to merge fairly urgently, and needs porting back to rocky when it is13:53
*** munimeha1 has joined #openstack-ansible13:53
*** rpittau has quit IRC13:56
*** rgogunskiy has joined #openstack-ansible13:57
noonedeadpunkCan you please also check this patch https://review.openstack.org/#/c/611585/ ?13:57
noonedeadpunkOk, I still have problem with nova-api after container re-creation. http://paste.openstack.org/show/732498/14:01
noonedeadpunkI thought that the problem was in the downloaded venv, but re-creating the repo container didn't do the trick14:03
*** rgogunskiy has quit IRC14:04
spotzodyssey4me: done14:05
mgariepyodyssey4me, done.14:06
*** rgogunskiy has joined #openstack-ansible14:10
suggestableHi guys... Still struggling with this. Deploying stable/rocky on Bionic using distro packages. During setup-infrastructure, I hit this:14:13
suggestablehttp://paste.openstack.org/show/RfYMFGF5mHjiK6gBgza4/14:13
suggestableTASK [os_neutron : Install neutron role packages]14:13
*** rgogunskiy has quit IRC14:15
*** ThiagoCMC has joined #openstack-ansible14:15
ThiagoCMCHey guys, the following playbook is failing right at the beginning: "openstack-ansible setup-openstack.yml", with error: FAILED! => {"msg": "'keystone_oslomsg_rpc_password' is undefined"}14:18
ThiagoCMCWhat is happening ?14:18
*** rgogunskiy has joined #openstack-ansible14:18
suggestableThiagoCMC: I had this same issue earlier today. You need to upgrade your configs from Queens to Rocky.14:19
*** thuydang has joined #openstack-ansible14:19
ThiagoCMCHow?14:19
suggestableThe upgrade scripts will add in all the required changes.14:19
suggestable/opt/openstack-ansible/scripts/run-upgrade.sh14:19
*** thuydang has quit IRC14:19
ThiagoCMCHmm... Let me try it now!14:20
noonedeadpunkodyssey4me: do you have any ideas what might be the root of this http://paste.openstack.org/show/732498/ ? I'd appreciate some start point for investigation14:22
ThiagoCMCSince I have old /etc/openstack_deploy subdirectories, but on a totally new and fresh env, I never thought that it was necessary to run an "upgrade", since there was no queens here. But, okay, trying it now...14:22
suggestableThiagoCMC: I had exactly the same situation earlier today. Glad I'm not the only one! :-)14:23
*** rgogunskiy has quit IRC14:23
ThiagoCMCCool! Small world!   :-D14:23
suggestableThiagoCMC: In case you're looking to deploy Rocky via distro packages, be aware that this appears to be currently broken on Ubuntu Bionic (as that's the situation I'm stuck on).14:24
ThiagoCMCOh, that's also exactly what I'm trying to do14:24
ThiagoCMCAfter run-upgrade.sh, do I need to run setup-hosts.yml and setup-infrastructure.yml again?14:25
suggestableIndeed, as the upgrade script will fail (after it's made the updates to your configs).14:25
ThiagoCMCDamn14:26
ThiagoCMCYou gave up on installing via distro?14:26
suggestableIf your Queens configs worked, then the upgrade script will patch them to work with Rocky, and you should be good to go with installing via the normal playbooks.14:27
suggestableI haven't yet, but leaning towards it...14:27
ThiagoCMCI see, thanks for pointing this out!14:27
suggestableNo worries.14:27
suggestableGlad I could help someone else!14:27
odyssey4meThiagoCMC suggestable - the distro install is only confirmed to be working for opensuse and centos at this point; see https://review.openstack.org/#/c/608930/ as an example of the last merge for the tests to confirm they're working14:29
odyssey4menotice that there's no 'distro' job for bionic yet14:29
odyssey4meoh, and centos doesn't have a distro job yet either14:29
odyssey4methe distro install support is brand new, and still needs work - if you're up for helping, that'd be great14:30
odyssey4meon top of that, a source-based build cannot be changed to a distro-based build14:30
suggestableodyssey4me: Can this please be made crystal clear on the main docs pages? This situation is a little chaotic...14:30
odyssey4mesuggestable: please register a bug for that14:31
ThiagoCMCThat sucks, I thought that it was stable enough, since it is released and documented but, okay, I'll re-deploy everything from scratch, using source.14:36
suggestableThiagoCMC: Same decision my boss and I just came to. #disappointing14:37
cloudnullmornings14:37
ThiagoCMCMorning!14:37
*** rgogunskiy has joined #openstack-ansible14:37
cloudnullThiagoCMC how goes your nspawn'ing  ?14:38
cloudnullany cores around want to give this a little love tap https://review.openstack.org/#/c/581166/ :)14:38
ThiagoCMCNot good, it was failing to start the systemd containers here and there, so I returned to LXC and the playbook (setup-hosts.yml) worked as before.14:38
cloudnullfair enough14:39
cloudnullwas that with rocky ?14:39
*** weezS has joined #openstack-ansible14:40
ThiagoCMCyep14:40
odyssey4mecloudnull: several fixes have gone into master and not rocky - please try to trace back what's missing and do the appropriate backports14:41
odyssey4mespecifically for the nspawn_hosts and nspawn_container_create I've seen some missing, and not sure where else14:41
odyssey4meI tried a few, but because they weren't ported back in time, there are just so many merge conflicts and I didn't have enough context to understand how to resolve them right.14:42
ThiagoCMCsuggestable, about the keystone_oslomsg_rpc_password fail, don't you think that the migrate_openstack_vars.py would do the trick, and not the run-upgrade.sh ?14:42
*** rgogunskiy has quit IRC14:42
suggestableThiagoCMC: There's a secrets script that's run as part of run-upgrade.sh that adds the missing credentials.14:43
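[Editor's note: the secrets step can also be approximated without the full upgrade script, by adding any new keys from the release's template and then generating values for the blanks. A sketch; pw-token-gen.py only fills keys whose values are empty, so missing keys must be appended first.]

```shell
cd /opt/openstack-ansible
# Show which secrets the new release's template has that the deployed
# config lacks, so they can be appended with empty values.
diff etc/openstack_deploy/user_secrets.yml /etc/openstack_deploy/user_secrets.yml
# After appending the missing keys, generate values for the blank ones.
scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
```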
noonedeadpunkfolks, I really need help with nova-api service in regular LXC on rocky (with xenial), as it's not willing to start even after container re-creation14:43
ThiagoCMCI see, okay!14:43
noonedeadpunkprobably I need to upgrade to bionic then?14:43
*** FuzzyFerric has quit IRC14:43
odyssey4menoonedeadpunk: nope, get it rigtht on xenial first14:43
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-os_swift master: Add variable for the ssh service and ensure its enabled  https://review.openstack.org/60780814:44
*** FuzzyFerric has joined #openstack-ansible14:44
*** rgogunskiy has joined #openstack-ansible14:44
*** rgogunskiy has quit IRC14:44
noonedeadpunkokay, I'll try to debug python then and fix it manually14:44
cloudnullodyssey4me good lookin' out. ill go see what the delta is14:45
ThiagoCMCLooks like the two cool things that I was looking forward to trying on Rocky, nspawn and distro packages, don't work yet...   :-(14:47
ThiagoCMCBut, okay!  :-P14:47
*** spatel has joined #openstack-ansible14:48
noonedeadpunkhm, I thought that nspawn is experimental on rocky, isn't it?14:48
odyssey4menoonedeadpunk: it is14:48
spatelToday when I rebooted one of my instances I got this error - anyone know what the heck that is? http://paste.openstack.org/show/732501/14:48
odyssey4meas is the distro package installs14:48
*** cshen has quit IRC14:50
ThiagoCMCGot it!  lol14:50
noonedeadpunkhm, os-nova-install fails on "Run nova-status upgrade check to validate a healthy configuration" as well :(14:50
ThiagoCMCI'll try again those two new in 6 months...   =)14:50
ThiagoCMCBTW, does Rocky support networking-ovn?14:51
*** jonher has quit IRC14:53
*** jonher has joined #openstack-ansible14:53
*** pcaruana|elisa| has quit IRC14:56
*** sawblade6 has joined #openstack-ansible14:57
jrosserjust bear in mind with distro installs you'll struggle to patch the actual openstack code14:57
jrosserthat may or may not matter to you14:57
*** pcaruana|elisa| has joined #openstack-ansible14:57
*** DanyC has joined #openstack-ansible14:58
openstackgerritMerged openstack/openstack-ansible-os_swift stable/rocky: releasenotes: oslo-messaging-separate-backends add project name  https://review.openstack.org/61162315:00
*** DanyC_ has quit IRC15:01
*** pcaruana|elisa| has quit IRC15:08
*** pcaruana has joined #openstack-ansible15:08
openstackgerritMerged openstack/openstack-ansible-os_neutron master: tasks: neutron_install: Fix group for neutron hosts  https://review.openstack.org/61161315:09
noonedeadpunkodyssey4me: I do not know how, but libpython2.7 was not installed inside the container. It had only libpython2.7-minimal and libpython2.7-stdlib, which wasn't enough for uwsgi to start.15:09
noonedeadpunkI don't even know whether I should patch something or not regarding this, as it looks pretty strange.15:12
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-nspawn_hosts stable/rocky: Combined backport from master to ensure nspawn functionality in rocky  https://review.openstack.org/61190515:18
*** gkadam has quit IRC15:18
*** cshen has joined #openstack-ansible15:19
cloudnullThiagoCMC maybe -cc jamesdenton - I know there was some work that happened w/ OVN im just not sure if its all in rocky15:24
jamesdentonit's not all there yet15:25
jamesdentonand the HA story isn't completely fleshed out15:25
ThiagoCMCOk, no worries!15:25
*** electrofelix has quit IRC15:25
*** pcaruana has quit IRC15:31
*** spatel has quit IRC15:33
openstackgerritJimmy McCrory proposed openstack/openstack-ansible-plugins master: Fix issues with and enable Python 3 job  https://review.openstack.org/61190915:35
openstackgerritJimmy McCrory proposed openstack/openstack-ansible-plugins master: Fix issues with and enable Python 3 job  https://review.openstack.org/61190915:36
*** rgogunskiy has joined #openstack-ansible15:49
*** vnogin has joined #openstack-ansible15:50
*** vnogin has quit IRC15:50
*** rgogunskiy has quit IRC15:54
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Added support for installing tempest from distro  https://review.openstack.org/59142416:05
*** mgariepy has quit IRC16:10
*** DanyC_ has joined #openstack-ansible16:13
ThiagoCMCcloudnull, are you using nspawn on top of ZFS (root)?16:14
*** mgariepy has joined #openstack-ansible16:15
*** DanyC has quit IRC16:16
*** FuzzyFerric has quit IRC16:23
*** suggestable has quit IRC16:23
*** openstackgerrit has quit IRC16:24
*** fghaas has quit IRC16:27
*** shardy has quit IRC16:30
cloudnullThiagoCMC no. i use nspawn w/ btrfs16:42
cloudnullI've done several setups, my general go to is LVM w/ a logical volume mounted at /var/lib/machines formatted BTRFS16:43
noonedeadpunkfolks, is there any switcher between python3 and python2 for venvs and deployment?16:43
noonedeadpunkI mean some variable :)16:44
ThiagoCMCcloudnull, you're really brave!  :-O16:44
cloudnullI've also done BTRFS as root (that's running on my laptop now) its been great16:44
ThiagoCMCNow I wanna do that too!  Lol16:44
cloudnullnah BTRFS w/ modern kernels seems to be pretty rock solid16:44
ThiagoCMCcloudnull, is btrfs good to host qcow2 ?16:44
ThiagoCMCI tried it in the past, it didn't work16:45
ThiagoCMCUnder load16:45
cloudnullgenerally I'd say no16:45
ThiagoCMCOh, ok16:45
cloudnullI had a whole sub thread on that16:45
cloudnullhttps://twitter.com/cloudnull/status/105094382711102668816:45
cloudnullvia sataII and sataIII w/ XFS and BTRFS there's no difference16:46
*** nsmeds__ has joined #openstack-ansible16:46
cloudnullin terms of IOPS16:46
cloudnullvia SAS XFS wins by a small margin16:46
cloudnullthose screenshots were fio results from within a VM16:47
noonedeadpunkcloudnull: really interesting research16:48
cloudnullyou can make BTRFS host qcow's by using the no-data-cow mount option, but even with that XFS still wins16:49
cloudnullin terms of IOPS16:49
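The no-data-cow option cloudnull mentions can be set for the whole volume at mount time, or per-directory; a sketch (the device path and mount point are examples, not from the log):

```
# /etc/fstab (sketch): mount a btrfs volume without copy-on-write,
# which avoids fragmenting qcow2 images under random-write load
/dev/mapper/vg0-images  /var/lib/nova/instances  btrfs  nodatacow,noatime  0  0

# Per-directory alternative: files created under a +C directory inherit
# the NOCOW attribute (this has no effect on already-existing files):
#   chattr +C /var/lib/nova/instances
```

Note that nodatacow also disables btrfs checksumming and compression for the affected data, which is part of why it narrows but does not close the gap with XFS.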
cloudnullBTRFS has snapshots, subvolumes, import / export tools, and other advanced management capabilities which other file systems may not have (except ZFS).16:50
cloudnullso its largely a matter of what you want, in terms of commodity cloud with local disks, I'm probably going to stick with XFS for qcow for now.16:50
cloudnullI also have my local NAS running BTRFS w/ ~14T of storage using the built in BTRFS RAID (using RAID10). My wife does a lot of over the network video editing on that and it's been great.16:52
cloudnullso needless to say, I have no trust issues with BTRFS :)16:52
noonedeadpunkdo we support python3 venvs at all?16:53
cloudnullnoonedeadpunk I think so? we did for a time.16:54
*** openstackgerrit has joined #openstack-ansible16:54
openstackgerritMerged openstack/openstack-ansible-os_masakari stable/rocky: use include_tasks instead of include  https://review.openstack.org/61008416:54
cloudnullI know there were issues with py3 in the rocky cycle so a lot of that work was disabled/reverted16:54
cloudnullodyssey4me ?16:54
ThiagoCMCcloudnull, wow! That sounds fun! lol16:55
ThiagoCMCWhat about hosting RAW images on btrfs, instead of qcow2?16:55
noonedeadpunkcloudnull: I mean that I'm missing libpython2.7 in the nova-api container, so I'm thinking about a patch, but I really don't know how to check that we're using python2 (since in the case of python3 we don't need it)16:56
cloudnullthat should be ok, though I've not done it myself .16:56
cloudnullnoonedeadpunk "libpython2.7" should be part of the base image,16:56
cloudnullmaybe we need to be more explicit about it?16:57
cloudnullyea I would assume that package would be installed with https://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/vars/ubuntu-18.04.yml#L4816:57
cloudnullhowever if it's not then I'd say yes, we should patch that array to have it16:57
noonedeadpunksurprisingly for me, but it's not installed in the nova-api container, which makes all uwsgi processes inside it fail on launch16:57
cloudnullin that case I'd say yes. lets get a patch in to make sure that doesn't happen to anyone else.16:58
cloudnullyou should be able to correct the issue across the environment with `ansible -m package -a 'name=libpython2.7' all`16:59
cloudnullif you're still seeing the problem16:59
noonedeadpunkcloudnull: I've re-created container several times, with the same result - libpython2.7 wasn't there...16:59
cloudnullnoonedeadpunk - probably need to rebuild the base image once we have that package in the list17:01
noonedeadpunkI'm not sure that I know how to do it. I've also re-created the distro container, but it seems that it's not about that17:02
noonedeadpunkmoreover, python2.7 doesn't have a dependency on libpython2.717:03
cloudnullinteresting.17:03
noonedeadpunkhttp://paste.openstack.org/show/732513/17:04
*** hamzaachi has joined #openstack-ansible17:04
noonedeadpunkUnfortunately I don't currently have the ability to test the same with bionic, as I'm only preparing the upgrade right now17:05
cloudnullok, so... add that package to the list within the role. then `machinectl remove ubuntu-bionic-amd64` (or whatever the base image name is). then purge the lxc cache `rm -rf /var/cache/lxc/downloads/*`. delete the container you want to rebuild `openstack-ansible lxc-containers-destroy.yml --limit nova_all`. then rerun `openstack-ansible containers-deploy.yml --limit lxc_hosts:nova_all`17:06
cloudnullthat would remove the old cache, nuke your old broken container, rebuild the cache, and recreate the container.17:07
noonedeadpunkok, I'll try now17:07
noonedeadpunkthanks!17:07
cloudnulldo you want to add that package to https://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/vars/ubuntu-18.04.yml#L33 for both bionic and xenial ?17:08
odyssey4mecloudnull: yeah, no py2 at all for rocky/master right now - we can explore py3 for stein17:10
noonedeadpunkcloudnull: actually I was thinking to add it somewhere here https://github.com/openstack/openstack-ansible-os_nova/blob/master/vars/ubuntu.yml#L23 but I think adding it explicitly is a good idea17:10
*** gyee has joined #openstack-ansible17:12
cloudnullnoonedeadpunk that would work, my only concern is that we might need that package in all of the containers that use uwsgi17:14
jungleboyjspotz:  Is there a way to configure the hostname used by AIO at deployment time?17:15
spotzI think I hacked it once jungleboyj. Let me go peek unless cloudnull knows off hand17:16
*** strattao has quit IRC17:17
*** tosky has quit IRC17:18
jungleboyjspotz:  Cool.  Thanks.  Going to try again on my new isolated node.17:19
openstackgerritMerged openstack/openstack-ansible-os_neutron master: ovs: force to secure fail mode by default  https://review.openstack.org/61187417:21
*** mmercer has joined #openstack-ansible17:21
*** strattao has joined #openstack-ansible17:21
noonedeadpunkcloudnull: it seemed enough just to purge the lxc cache and re-create container.17:31
cloudnullah cool!17:31
noonedeadpunkthanks for the tip!17:31
cloudnullcouldve something gone off the rails in cache creation.17:31
cloudnull**could've been17:32
cloudnullglad you got it going though :)17:32
noonedeadpunkyeah, now I may go further and try upgrade to bionic:)17:32
cloudnullI did that on my lab environment .17:35
cloudnullit went remarkably well :)17:35
spotzjungleboyj: roles/bootstrap-host/tasks/prepare_hostname.yml17:36
cloudnullgranted I was using nspawn so i didnt need to deal with the lxc container config conversion17:36
spotzunder the tests directory17:36
cloudnullodyssey4me https://review.openstack.org/#/c/611905/ - if you get a chance17:36
cloudnulla combined backport from master to rocky for nspawn hosts.17:36
jungleboyjspotz:  Ah, Thank you!17:37
jungleboyjspotz: Stupid question, but why are there tasks under tests that impact the outcome of deployment?17:39
spotzjungleboyj: I'm thinking it's not tests in the sense you're thinking of, but checks of its own work, from reading through the roles17:40
jungleboyjOk.  Just totally different from Cinder as nothing in 'tests' would impact how Cinder works when deployed.17:41
jungleboyjSo, I had seen other notes on looking at things in 'tests' and was very confused by that.17:41
jungleboyjJust a different environment I guess.17:41
ThiagoCMCI'm curious about OSA Rocky. I'm deploying it on Ubuntu 18.04, including MaaS. Point 1: MaaS isn't the gateway. So, OSA is creating the following file inside each container: "/etc/apt/apt.conf.d/90curtin-aptproxy" with: "Acquire::http::Proxy "http://192.168.4.10:8000/";" <- this is the MaaS PXE IP! Thing is, from the container, it can't reach that; as a workaround, I'm running via /etc/rc.local: "iptables -t nat -A POSTROUTING -o bond0 -j MASQUERADE". Then, the playbook worked!17:44
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-nspawn_container_create stable/rocky: Combined backport from master to ensure nspawn functionality in rocky  https://review.openstack.org/61192617:45
cloudnullThiagoCMC you might be able to use a static route in your containers to get to that address.17:47
cloudnullhowever if a local IPtables rule made it go, i'd probably just go with that17:47
ThiagoCMCYeah, 1 change per blade, not bad... I was just curious about why this happened.17:47
cloudnullI assume the containers are on a different subnet ?17:48
ThiagoCMCIs there a var to disable /etc/apt/apt.conf.d/90curtin-aptproxy from appearing in first place?17:48
cloudnulland there's no route from the eth0 interface in the container to that network17:48
ThiagoCMCYes, MaaS PXE is subnet 1; OSA has its own br-mgmt, br-vxlan etc., different subnets17:48
cloudnullthat apt config is likely something maas is doing?17:48
ThiagoCMCIt can't be MaaS!17:49
ThiagoCMCSince this file appears inside of OSA containers17:49
ThiagoCMCAfter MaaS long done17:49
cloudnullis that file on the host?17:49
ThiagoCMCNo, inside of each container!17:50
ThiagoCMCThat OSA creates17:50
cloudnullthe containers inherit config from the host maybe that's something we're just pulling in ?17:50
ThiagoCMCDuring setup-hosts.yml17:50
ThiagoCMCOh, I didn't know that...17:50
ThiagoCMCMaybe!  lol17:50
cloudnullif  /etc/apt/apt.conf.d/90curtin-aptproxy is on the host machine that might explain it17:50
cloudnullosa does create a proxy file for apt which points to the repo servers17:51
ThiagoCMCYep, it is also there.17:51
ThiagoCMCI didn't know that each container would inherit config like this... Interesting!17:51
ThiagoCMCGood to know.17:51
cloudnullhttps://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/vars/ubuntu-18.04.yml#L2417:52
cloudnullso you can change that by defining that list in your user_variables.yml file17:52
cloudnullhttp://paste.openstack.org/show/732518/17:53
cloudnullwould work17:53
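cloudnull's paste link has since expired; what follows is a hypothetical reconstruction of the override he describes, assuming the copy list at the linked vars-file line lives under `lxc_cache_map.copy_files` (verify the exact variable name against your checked-out lxc_hosts role before using this):

```yaml
# /etc/openstack_deploy/user_variables.yml (sketch)
# Redefine the host-to-container file copy list from the role's
# vars/ubuntu-18.04.yml WITHOUT the apt.conf.d directory, so the MaaS
# 90curtin-aptproxy file never lands in the containers. The entries
# below are illustrative; start from the role's own defaults and
# remove only the apt.conf.d line.
lxc_cache_map:
  distro: ubuntu
  arch: amd64
  release: bionic
  copy_files:
    - /etc/environment
    - /etc/localtime
    - /etc/protocols
    - /etc/resolv.conf
```

Because this wholesale-replaces the role's default list, any file you omit here stops being copied into new container caches, so diff carefully against the upstream vars file.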
*** strattao has quit IRC17:53
ThiagoCMCThat's awesome! Thank you!!!17:54
ThiagoCMC:-D17:54
cloudnull++17:54
ThiagoCMCNow I can see why I didn't see this problem with nspawn!  lol17:55
cloudnullnspawn has the same mechanism, though it's used less.17:56
*** fghaas has joined #openstack-ansible17:56
ThiagoCMCThe problem that I had with nspawn in OSA was that a few containers were (randomly) failing to start. I could see the container's subdir under /var/lib/machines but "systemctl start $container" failed. No idea why.17:57
*** strattao has joined #openstack-ansible17:57
ThiagoCMCIt was also missing from `machinectl list` output, maybe because it was down?17:57
ThiagoCMCIt's my first time with nspawn / machinectl  =P17:57
nsmeds__hmm ... is there a way to interact with `dynamic_inventory.py` and get IP addresses of containers? for example, running `ansible -i /opt/openstack-ansible/inventory/dynamic_inventory.py --list-hosts all` outputs all hostnames18:06
nsmeds__but I'd like to configure Prometheus' targets using the dynamic inventory, and therefore will need IP addresses18:07
cloudnullThiagoCMC I think the issue you were seeing was likely due to the missing nspawn patches in rocky18:08
nsmeds__(installing mysqld/rabbitmq-exporters inside the containers, but need Prometheus to target the exporters).18:08
cloudnullthe start / stop thing was likely due to a problem we had with systemd-escape18:09
ThiagoCMCI'm glad that I'm not alone!  lol18:09
ThiagoCMChow can I try the new version?18:09
cloudnullthe backport patches are now up. however you can checkout the master version of the roles to get all the same things18:10
nsmeds__I understand that setting up a Consul cluster would likely resolve this issue (can configure the containers to register services with consul and Prometheus to get lists of targets that way), but do not have Consul setup yet18:10
*** cshen has quit IRC18:10
cloudnullnsmeds__ lxc-ls -f will show you all of the container IPs on a local machine18:10
openstackgerritMichael Vollman proposed openstack/openstack-ansible-os_manila master: Iniital commit for some scaffolding  https://review.openstack.org/61192918:11
openstackgerritMichael Vollman proposed openstack/openstack-ansible-os_manila master: Converting os_cinder role to os_manila role  https://review.openstack.org/61193018:11
cloudnullotherwise, if you just want to interact with the inventory directly, you'd need to parse the json file.18:11
ThiagoCMCcloudnull, ok, thanks!18:11
ThiagoCMCnsmeds__, what about etcd? :-P18:11
openstackgerritMerged openstack/openstack-ansible master: Set an upper pin for Cython to please gnocchi  https://review.openstack.org/61186418:11
openstackgerritMerged openstack/openstack-ansible master: Use loop_var name in when clause  https://review.openstack.org/61176618:12
cloudnullthat said, I do like the consul idea, its something on my list of ops tools I aim to get implemented soon-ish18:12
cloudnullif you have something to work from I'd love to collaborate on it18:12
*** DanyC_ has quit IRC18:12
nsmeds__Yeah `lxc-ls -f` won't work because that means manually running around, collecting IPs, and then nothing gets updated when respinning containers/etc18:12
nsmeds__getting the JSON comes from ` sudo ./scripts/inventory-manage.py -e` yes ?18:13
cloudnullyes.18:13
cloudnullthe inventory json file is located at /etc/openstack_deploy/openstack_inventory.json18:14
cloudnullthat python file simply parses that file.18:14
*** cshen has joined #openstack-ansible18:14
cloudnull**that python script simply parses that json file.18:14
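For scripting against that JSON file directly (as nsmeds__ wants to do for Prometheus targets), a minimal Python sketch; the inventory data below is invented, but the `_meta.hostvars[...]["ansible_host"]` layout matches what OSA's dynamic inventory emits:

```python
import json

# Hypothetical miniature of /etc/openstack_deploy/openstack_inventory.json.
# The real file is much larger, but container IPs live under
# _meta.hostvars.<host>.ansible_host in the same way.
sample_inventory = json.loads("""
{
  "_meta": {
    "hostvars": {
      "infra1_galera_container-abc123": {"ansible_host": "172.29.239.11"},
      "infra2_galera_container-def456": {"ansible_host": "172.29.239.12"}
    }
  },
  "galera": {"hosts": ["infra1_galera_container-abc123",
                       "infra2_galera_container-def456"]}
}
""")

def group_ips(inventory, group, port=None):
    """Return the ansible_host address of every host in a group,
    optionally suffixed with :port (e.g. for Prometheus targets)."""
    hostvars = inventory["_meta"]["hostvars"]
    ips = sorted(hostvars[h]["ansible_host"] for h in inventory[group]["hosts"])
    return [f"{ip}:{port}" for ip in ips] if port else ips

print(group_ips(sample_inventory, "galera", port=9100))
# → ['172.29.239.11:9100', '172.29.239.12:9100']
```

In a real deployment you would `json.load(open("/etc/openstack_deploy/openstack_inventory.json"))` instead of the inline sample.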
nsmeds__I think I'm causing a few headaches for myself because we have our own playbooks for setting up Prometheus+Grafana which we don't use `openstack-ansible` wrapper to run18:14
*** sum12 has quit IRC18:14
nsmeds__but now I'm basically integrating the two, because Prometheus needs container IPs to target18:14
cloudnullif you want it to interact with the json file you could use something like https://github.com/openstack/openstack-ansible-ops/tree/master/bootstrap-embedded-ansible18:15
nsmeds__ok I'll check that out, much appreciated18:15
cloudnulljust plain-jane ansible that can talk to an osa inventory18:15
cloudnullwe're using that method to deploy elk and osquery18:15
openstackgerritMerged openstack/openstack-ansible-os_keystone master: Remove keystone service user  https://review.openstack.org/61176918:17
*** sum12 has joined #openstack-ansible18:17
*** noonedeadpunk has quit IRC18:21
openstackgerritTaseer Ahmed proposed openstack/openstack-ansible master: Integrate Blazar with OpenStack Ansible  https://review.openstack.org/54995618:27
openstackgerritMerged openstack/openstack-ansible master: Make system_crontab_coordination idempotent  https://review.openstack.org/61176818:27
*** sum12 has quit IRC18:27
*** sum12 has joined #openstack-ansible18:29
jungleboyjspotz:  Latest complaint ... the bootstrap option for data disk doesn't recognize 'vda' as a disk.  :-)18:33
spotzjungleboyj: Oh just don't do the export, I never do:)18:34
jungleboyjspotz:  I need to as I am using a smaller VM that I can keep clones of so I don't have to start from scratch.  :-p18:34
jungleboyjI got it working.  I needed to not try to pass through a disk from the host.  Needed to create an image and then share it like a SCSI device, not virtio.18:35
*** nsmeds__ has quit IRC18:38
*** DanyC has joined #openstack-ansible18:41
*** nsmeds__ has joined #openstack-ansible18:42
*** DanyC_ has joined #openstack-ansible18:43
openstackgerritMerged openstack/openstack-ansible-os_nova master: Update the pci config for nova.  https://review.openstack.org/60578918:46
*** DanyC has quit IRC18:46
jungleboyjspotz:  I think I have gotten dnsmasq issue resolved in the VM I am using.18:47
spotzcool18:47
*** sum12 has quit IRC18:49
*** sum12 has joined #openstack-ansible18:51
jungleboyjspotz:  I moved to Rocky by the way.19:00
spotzk19:00
nsmeds__cloudnull, just thought I'd share - got it working (grabbing container IP without using openstack-ansible wrapper) - didn't require the `bootstrap-embedded-ansible` either. To test, I edited the Prometheus' targets list as follows19:21
nsmeds__```19:21
nsmeds__   node-exporter:19:21
nsmeds__     - targets:19:21
nsmeds__-        "{{ groups['all'] | map('regex_replace', '$', ':9100') | list | sort }}"19:21
nsmeds__+        "{{ groups['galera'] | map('extract', hostvars, ['ansible_host']) | map('regex_replace', '$', ':9100') | list | sort }}"19:21
cloudnullcool!19:22
nsmeds__and then run it with (sorry for spam, should have used a paste)19:22
nsmeds__ansible-playbook -i pilot -i /opt/openstack-ansible/inventory/dynamic_inventory.py monitoring.yml19:22
*** openstacking_123 has joined #openstack-ansible19:22
nsmeds__thanks for help19:22
cloudnullanytime !19:22
cloudnullglad you made it go :)19:22
openstacking_123Hope everyone is well. Anyone with experience using magnum? I seem to have an issue after Queens update where Heat registers the stack deployment as complete but Magnum never updates and stays in state CREATE_IN_PROGRESS.19:24
*** fghaas has quit IRC19:25
*** fghaas has joined #openstack-ansible19:36
*** DanyC_ has quit IRC19:39
ThiagoCMCHey guys, the task "systemd_mount : Set the state of the mount" is failing inside of a glance_container. Ansible error output: http://paste.openstack.org/show/732523/ - "systemctl status var-lib-glance-images.mount" shows a problem but, "systemctl restart var-lib-glance-images.mount" works! Any idea? I can even mount the NFS manually...   :-/19:43
ThiagoCMCThis: `systemctl reload-or-restart $(systemd-escape -p --suffix="mount" "/var/lib/glance/images")` also fails19:44
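The `systemd-escape -p --suffix=mount` call above is just deriving the unit name from the mount path; for simple alphanumeric paths the rule can be reproduced in a few lines. A Python sketch (deliberately simplified; real systemd-escape also hex-escapes '-', '.' and non-ASCII bytes):

```python
def systemd_escape_path(path, suffix="mount"):
    """Minimal re-implementation of `systemd-escape -p --suffix=...`
    for plain alphanumeric paths: trim slashes at both ends, then turn
    the remaining '/' separators into '-'. Real systemd-escape also
    hex-escapes '-', '.' and non-ASCII bytes; that is omitted here."""
    trimmed = path.strip("/")
    return trimmed.replace("/", "-") + "." + suffix

print(systemd_escape_path("/var/lib/glance/images"))
# → var-lib-glance-images.mount
```

This is why the unit ThiagoCMC is poking at is named var-lib-glance-images.mount.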
ThiagoCMCFile: "/etc/systemd/system/var-lib-glance-images.mount" looks correct!19:47
ThiagoCMCsystemctl reload-or-restart var-lib-glance-images.mount <- fails19:47
ThiagoCMCsystemctl stop var-lib-glance-images.mount <- works19:47
ThiagoCMCsystemctl start var-lib-glance-images.mount <- works!19:48
ThiagoCMCNo idea why...   =/19:48
ThiagoCMCSystemd craziness? Error on glance_container: "mount.nfs: access denied by server while mounting 172.29.244.40:/glance" BUT "bin/mount 172.29.244.40:/glance /var/lib/glance/images -t nfs -o _netdev" just works on same container! lol19:51
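For reference, the unit ThiagoCMC is debugging looks roughly like this; the export path and server IP come from his error message above, while the option and install lines are assumptions about what the systemd_mount role renders:

```
# /etc/systemd/system/var-lib-glance-images.mount (sketch)
# The unit file name MUST be the escaped mount path, i.e. the output of:
#   systemd-escape -p --suffix=mount /var/lib/glance/images
[Unit]
Description=NFS mount for glance images

[Mount]
What=172.29.244.40:/glance
Where=/var/lib/glance/images
Type=nfs
Options=_netdev

[Install]
WantedBy=multi-user.target
```

When a `.mount` unit fails while a manual `mount` of the same export succeeds, comparing the unit's Options= against the manual mount flags (and running `systemctl daemon-reload` after editing the file) is usually the first thing to check.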
*** DanyC has joined #openstack-ansible19:53
*** dave-mccowan has quit IRC19:57
*** DanyC has quit IRC19:58
ThiagoCMCThat's very weird. The "reload-or-restart" ONLY works after manually mounting the NFS share and unmounting it! Otherwise it just fails over and over again...19:59
ThiagoCMCSo, I left it in a state that it would work on next run, and openstack-ansible is proceeding! I tricked it! lol20:02
openstackgerritMerged openstack/openstack-ansible-os_octavia stable/rocky: Only run OpenStack tasks once  https://review.openstack.org/60875720:08
openstackgerritMerged openstack/openstack-ansible-os_octavia stable/rocky: releasenotes: oslo-messaging-separate-backends add project name  https://review.openstack.org/61162520:08
*** strattao has quit IRC20:15
*** fghaas has left #openstack-ansible20:31
*** cshen has quit IRC20:35
openstackgerritMerged openstack/openstack-ansible-os_nova master: SUSE: Add support for openSUSE Leap 15  https://review.openstack.org/60408020:36
*** cshen has joined #openstack-ansible20:40
*** munimeha1 has quit IRC20:51
*** vnogin has joined #openstack-ansible20:52
*** ThiagoCMC has quit IRC20:53
openstacking_123running  /openstack/venvs/magnum-17.1.2/bin/python /openstack/venvs/magnum-17.1.2/bin/magnum-conductor in any magnum container sets the magnum status to complete20:55
*** vnogin has quit IRC20:56
*** ThiagoCMC has joined #openstack-ansible20:57
ThiagoCMCGuys, any idea about this error: "FAILED! => {"msg": "'nova_oslomsg_notify_host_group' is undefined"}" ?20:58
ThiagoCMCMaybe I need this: migrate_openstack_vars.py ?20:58
ThiagoCMCTASK: os_ceilometer : Copy ceilometer configuration files21:00
ThiagoCMCI can't see "nova_oslomsg_notify_host_group" under /etc/ansible/roles/os_ceilometer! This var only exists under roles/os_nova.21:03
*** dave-mccowan has joined #openstack-ansible21:05
*** dave-mccowan has quit IRC21:13
*** goldenfri has quit IRC21:22
openstacking_123I fixed the queens magnum issue in openstack-ansible by installing this version of eventlet referenced here https://github.com/eventlet/eventlet/issues/172#issuecomment-37942116521:24
openstacking_123specifically pip install https://github.com/eventlet/eventlet/archive/1d6d8924a9da6a0cb839b81e785f99b6ac219a0e.zip21:24
openstacking_123I will try to submit a bug fix21:25
openstacking_123Actually don't see a place in github to submit issues21:27
logan-openstacking_123: https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L142 is why your queens install didnt have the eventlet 0.23 release. queens upper-constraint for eventlet is 0.2021:29
logan-rocky is still pinned at 0.20 too21:31
openstacking_123Good find21:32
openstacking_123That was a tough issue to find. None of the processes died and I could not quite figure out where the acknowledgement was supposed to happen. The docs for magnum were very sparse.21:34
openstacking_123logan- you able to get that info to the correct people?21:34
logan-probably going to have to file a bug with magnum about this21:35
logan-i'm trying to figure out where their bug tracker is21:36
openstacking_123Oh its a magnum issue? Nice21:36
openstacking_123Their IRC is the worst21:36
openstacking_123They don't reply to anything21:36
logan-prometheanfire: seems like the eventlet in u-c for queens is breaking magnum ^ not sure if thats going to end up being a fix in requirements or magnum21:36
logan-and yeah openstacking_123, I have no idea where their bugs go.. the readme in their repo says to file them here https://bugs.launchpad.net/magnum/ but it seems to be blank21:38
openstacking_123I will ask their non-responsive IRC as well21:38
logan-¯\_(ツ)_/¯21:38
openstacking_123Thanks for pointing me in the right direction21:39
logan-no prob21:39
logan-openstacking_123: looks like they might have migrated magnums bug tracker to storyboard: https://storyboard.openstack.org/#!/project/openstack/magnum21:42
openstacking_123Aww good find21:42
prometheanfirelogan-: maybe reqs, depends on the exact error21:43
openstacking_123I better do a full deploy and confirm it works21:43
prometheanfireI doubt reqs though, since it's been that req for a LONG time21:44
openstacking_123deletes work correctly now. will just confirm deploys don't run into issues21:45
*** weezS has quit IRC21:51
*** DanyC has joined #openstack-ansible21:54
openstacking_123yup magnum updates correctly now22:02
openstacking_123Made this story https://storyboard.openstack.org/#!/story/2004130 No idea if that is the correct way to do it. Either way thanks for everyones help.22:05
*** openstacking_123 has quit IRC22:08
*** priteau has quit IRC22:12
*** ThiagoCMC has quit IRC22:19
*** hamzaachi has quit IRC22:46
*** cshen has quit IRC22:47
greatgatsbycloudnull, thanks for the glean info, looks very promising22:50
*** DanyC has quit IRC23:31
*** sawblade6 has quit IRC23:33
openstackgerritMerged openstack/openstack-ansible-ops stable/rocky: Replace Chinese punctuation with English punctuation  https://review.openstack.org/61175823:38
*** gyee has quit IRC23:50

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!