Friday, 2016-12-16

*** weezS has joined #openstack-ansible00:06
*** maeker has quit IRC00:07
*** maeker has joined #openstack-ansible00:09
*** maeker has quit IRC00:11
*** maeker has joined #openstack-ansible00:12
*** marst has quit IRC00:20
*** Jeffrey4l_ is now known as Jeffrey4l00:28
*** weezS has quit IRC00:28
*** gouthamr has joined #openstack-ansible00:30
*** maeker has quit IRC00:46
*** whiteveil_ has joined #openstack-ansible00:51
*** whiteveil has quit IRC00:51
*** whiteveil_ is now known as whiteveil00:51
*** jamielennox is now known as jamielennox|away00:57
*** May-meimei has quit IRC00:59
*** sdake_ has quit IRC01:00
*** jamielennox|away is now known as jamielennox01:01
*** asettle has joined #openstack-ansible01:08
*** woodard has quit IRC01:10
*** vnogin has quit IRC01:10
*** woodard has joined #openstack-ansible01:10
*** vnogin has joined #openstack-ansible01:11
*** woodard has quit IRC01:11
*** vnogin has quit IRC01:11
*** woodard has joined #openstack-ansible01:11
*** asettle has quit IRC01:13
*** markvoelker has joined #openstack-ansible01:14
*** sdake has joined #openstack-ansible01:14
*** markvoelker has quit IRC01:19
*** May-meimei has joined #openstack-ansible01:22
*** messy has quit IRC01:30
*** sdake has quit IRC01:35
*** sdake has joined #openstack-ansible01:39
*** jamielennox is now known as jamielennox|away01:41
*** hw_wutianwei has joined #openstack-ansible01:44
*** rmelero has quit IRC01:44
*** rmelero has joined #openstack-ansible01:44
*** jamielennox|away is now known as jamielennox01:47
*** rmelero has quit IRC01:48
*** v1k0d3n has quit IRC01:50
*** sdake has quit IRC02:00
*** Mahe has quit IRC02:04
*** Mahe has joined #openstack-ansible02:05
*** asettle has joined #openstack-ansible02:09
*** Mudpuppy has quit IRC02:12
*** vnogin has joined #openstack-ansible02:12
*** Mudpuppy has joined #openstack-ansible02:12
*** klamath has quit IRC02:13
*** asettle has quit IRC02:13
*** messy has joined #openstack-ansible02:16
*** Mudpuppy has quit IRC02:17
*** vnogin has quit IRC02:17
*** sdake has joined #openstack-ansible02:22
*** Jeffrey4l has quit IRC02:31
*** Jeffrey4l has joined #openstack-ansible02:31
*** markvoelker has joined #openstack-ansible02:31
*** Jeffrey4l has quit IRC02:33
*** Jeffrey4l has joined #openstack-ansible02:33
*** markvoelker has quit IRC02:36
*** sdake has quit IRC02:42
*** sdake has joined #openstack-ansible02:43
*** Mahe has quit IRC03:01
*** sdake has quit IRC03:03
*** Mahe has joined #openstack-ansible03:04
*** ngupta has joined #openstack-ansible03:09
*** ngupta has joined #openstack-ansible03:09
*** gouthamr has quit IRC03:10
*** asettle has joined #openstack-ansible03:10
*** asettle has quit IRC03:14
*** whiteveil has quit IRC03:15
*** sdake has joined #openstack-ansible03:16
*** sdake has quit IRC03:35
*** sdake has joined #openstack-ansible03:36
<openstackgerrit> Kyle L. Henderson proposed openstack/openstack-ansible-repo_server: Fix apt-cacher-ng file owners during rsync  https://review.openstack.org/410916  04:04
*** dxiri has joined #openstack-ansible04:05
*** dxiri has quit IRC04:09
*** asettle has joined #openstack-ansible04:10
*** maeker has joined #openstack-ansible04:12
*** vnogin has joined #openstack-ansible04:13
*** asettle has quit IRC04:15
*** v1k0d3n has joined #openstack-ansible04:15
*** vnogin has quit IRC04:18
*** thorst has joined #openstack-ansible04:18
*** ngupta has quit IRC04:34
*** maeker has quit IRC04:34
*** thorst has quit IRC04:45
*** chhavi has joined #openstack-ansible04:46
*** thorst has joined #openstack-ansible04:49
*** thorst has quit IRC04:57
*** qiliang27 has quit IRC04:57
*** qiliang27 has joined #openstack-ansible04:58
*** shausy has joined #openstack-ansible05:10
*** asettle has joined #openstack-ansible05:11
*** asettle has quit IRC05:15
*** sdake_ has joined #openstack-ansible05:24
*** sdake has quit IRC05:25
*** v1k0d3n has quit IRC05:35
*** ngupta has joined #openstack-ansible05:42
*** thorst has joined #openstack-ansible05:44
*** thorst has quit IRC05:52
*** vnogin has joined #openstack-ansible06:15
*** vnogin has quit IRC06:19
*** jimbaker has quit IRC06:22
*** jimbaker has joined #openstack-ansible06:26
*** ngupta has quit IRC06:27
*** ngupta has joined #openstack-ansible06:27
*** Jack_Iv has joined #openstack-ansible06:29
*** whiteveil has joined #openstack-ansible06:30
*** ngupta has quit IRC06:31
*** sdake_ has quit IRC06:32
*** sdake has joined #openstack-ansible06:34
*** thorst has joined #openstack-ansible06:39
*** thorst has quit IRC06:46
*** askb has quit IRC06:53
*** Oku_OS-away is now known as Oku_OS06:55
*** Mudpuppy has joined #openstack-ansible07:01
*** fxpester has joined #openstack-ansible07:02
*** sdake has quit IRC07:03
*** Mudpuppy has quit IRC07:06
*** asettle has joined #openstack-ansible07:12
*** asettle has quit IRC07:17
*** adreznec has quit IRC07:21
*** karimb has joined #openstack-ansible07:36
*** thorst has joined #openstack-ansible07:38
*** thorst has quit IRC07:44
*** karimb has quit IRC07:55
*** pcaruana has joined #openstack-ansible07:56
*** sacharya has quit IRC08:06
*** jamielennox is now known as jamielennox|away08:07
*** sacharya has joined #openstack-ansible08:08
*** adreznec has joined #openstack-ansible08:09
*** sacharya has quit IRC08:12
*** jamielennox|away is now known as jamielennox08:14
*** karimb has joined #openstack-ansible08:32
*** karimb has quit IRC08:32
*** asettle has joined #openstack-ansible08:34
*** chhavi has quit IRC08:37
*** asettle has quit IRC08:39
*** thorst has joined #openstack-ansible08:39
*** chhavi has joined #openstack-ansible08:43
*** thorst has quit IRC08:44
*** whiteveil has quit IRC08:58
*** sacharya has joined #openstack-ansible09:09
*** sacharya has quit IRC09:13
<andymccr> mornin  09:25
<winggundamth> morning  09:29
*** ngupta has joined #openstack-ansible09:30
*** asettle has joined #openstack-ansible09:33
*** ngupta has quit IRC09:34
*** qiliang27 has quit IRC09:35
*** thorst has joined #openstack-ansible09:39
*** thorst has quit IRC09:44
*** jamielennox is now known as jamielennox|away09:48
*** vnogin has joined #openstack-ansible09:49
*** fxpester has quit IRC09:54
*** jamielennox|away is now known as jamielennox09:55
<openstackgerrit> Andy McCrae proposed openstack/openstack-ansible-os_searchlight: [WIP] Import initial os_searchlight role.  https://review.openstack.org/404419  10:03
*** karimb has joined #openstack-ansible10:04
*** kylek3h has quit IRC10:05
*** chhavi has quit IRC10:09
*** Jack_iv_ has joined #openstack-ansible10:12
*** openstackgerrit has quit IRC10:18
*** vnogin has quit IRC10:24
*** vnogin has joined #openstack-ansible10:24
*** openstackgerrit has joined #openstack-ansible10:26
<openstackgerrit> Andy McCrae proposed openstack/openstack-ansible-tests: Remove Trusty support from tests repository  https://review.openstack.org/411744  10:26
*** vnogin has quit IRC10:27
*** ngupta has joined #openstack-ansible10:31
*** uthng has joined #openstack-ansible10:32
<uthng> hi all  10:32
<uthng> anyone can tell me if I can install magnum on AIO ? maybe odyssey4me ? because when I tried to run os-magnum-install.yml, it told me that no host matched  10:33
*** thorst has joined #openstack-ansible10:33
<openstackgerrit> Andy McCrae proposed openstack/openstack-ansible-os_tempest: Fix tempest for os_horizon role  https://review.openstack.org/411748  10:35
*** ngupta has quit IRC10:35
*** v1k0d3n has joined #openstack-ansible10:36
*** sliver has joined #openstack-ansible10:38
*** stuartgr has joined #openstack-ansible10:39
*** mcrafty has quit IRC10:40
<andymccr> uthng: you should have a magnum.yml file inside /etc/openstack_deploy/conf.d/magnum.yml and if so you should be able to run it  10:41
*** thorst has quit IRC10:41
*** v1k0d3n has quit IRC10:42
<uthng> andymccr: arff I dont have ! How is it possible ?  10:43
<uthng> I have only aodh.yml  ceilometer.yml  swift.yml !  10:43
<uthng> after upgrade !  10:44
<uthng> i have magnum.yml in env.d  10:45
<andymccr> uthng: if you did an upgrade perhaps you upgraded from a release that didnt have magnum - id recommend reading the docs on how to configure, but if its just an AIO you can copy over the AIO sample file from the repository (etc/openstack_deploy/conf.d/magnum.yml.aio) to your local etc directory (/etc/openstack_deploy/conf.d/magnum.yml) - also note to drop the .aio when you copy it over  10:45
<andymccr> and by didn't have magnum, i mean in the aio build  10:45
*** kylek3h has joined #openstack-ansible10:47
<uthng> andymccr: thats right that I am doing upgrade from Mitaka. But I donot understand that why other component *.yml like cinder, glance etc. are not in conf.d ? They do not need ?  10:48
*** philippe_ has joined #openstack-ansible10:49
*** philippe_ is now known as lemouchon210:49
<andymccr> uthng: the way we setup the AIO was slightly different in mitaka - so all of those are most likely in a single /etc/openstack_deploy/openstack_user_config.yml  10:53
<andymccr> which will still work fine in newton/master etc  10:54
<uthng> andymccr: ah ok. after copying magnum.yml in conf.d, I have to rerun setup-hosts and lxc-containers-create.yml ?  10:55
<andymccr> uthng: if its an aio probably just lxc-containers-create.yml since it should be setup  10:56
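A minimal sketch of the sequence andymccr describes above, assuming OSA is checked out at the usual /opt/openstack-ansible path (adjust to your deployment):
    # Copy the AIO sample config into place, dropping the .aio suffix
    cp /opt/openstack-ansible/etc/openstack_deploy/conf.d/magnum.yml.aio \
       /etc/openstack_deploy/conf.d/magnum.yml
    cd /opt/openstack-ansible/playbooks
    # Create the new magnum container(s), then run the service playbook
    openstack-ansible lxc-containers-create.yml
    openstack-ansible os-magnum-install.yml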
<uthng> and do I have to rerun other install playbook for other components ? Becasue I already upgraded all other existing components such as cinder, leystone, glance etc.  10:56
<uthng> andymccr: ok, I will try it. Thanks for your helps  10:57
<andymccr> no problem!  10:57
<uthng> andymccr: ah It seems that the script user-secrets-adjustment.yml does not do correctly the work  10:58
<uthng> I think it save the existing user_secrets.yml and add new secrets for magnum or sarah etc. in it  10:58
<uthng> But finally it overrides existing secrets with the new generated ones !  10:59
<andymccr> uthng: it should just rename some secrets that have changed, add in new secrets that didnt exist, and then generate secrets for the new ones  11:00
<uthng> I donot know if I misunderstood anything but I had to report all old passwords found in the components that are not upgraded yet  11:00
*** shausy has quit IRC11:02
<uthng> thats why when I upgraded glance and cinder, they cannot authenticate anymore at Keystone  11:03
*** bpetit has quit IRC11:04
<uthng> I had to use curl api, as odyssey4me told me, to set existing cinder or glance passwords in keystone to the new ones generated in user_secrets.yml  11:04
*** bpetit has joined #openstack-ansible11:04
<andymccr> uthng: ok - im not 100% sure, i've not run into that error before - what odyssey4me told you would fix it so you can continue, but that isnt an expected behaviour, if its consistently happening and you could file a bug for it that'd be really good :)  11:06
*** sacharya has joined #openstack-ansible11:06
<uthng> yes, just to talk to you before if you knew anything about that. I will try to run this script againts on another server. If it happens again, I will open a bug then.  11:09
*** sacharya has quit IRC11:11
*** hw_wutianwei has quit IRC11:13
*** fxpester has joined #openstack-ansible11:17
*** dgonzalez has quit IRC11:22
*** dgonzalez has joined #openstack-ansible11:24
*** thorst has joined #openstack-ansible11:29
*** thorst has quit IRC11:36
*** chhavi has joined #openstack-ansible11:40
<openstackgerrit> Andy McCrae proposed openstack/openstack-ansible-os_ceilometer: Add required packages for Ceilometer  https://review.openstack.org/411775  12:01
<openstackgerrit> Logan V proposed openstack/openstack-ansible: WIP: ceph-ansible integration  https://review.openstack.org/409407  12:02
<odyssey4me> uthng FYI for future purposes you may wish to check into using the openstack CLI instead of curl: http://docs.openstack.org/developer/python-openstackclient/command-objects/user.html#user-set  12:06
<odyssey4me> but the end result is the same :)  12:06
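For reference, the CLI equivalent of the curl approach is roughly the following, run where admin credentials are available; the /root/openrc path and the $NEW_CINDER_PASSWORD placeholder are assumptions, not part of the original exchange:
    # Load admin credentials, then reset the service user's password to the
    # value regenerated in user_secrets.yml
    source /root/openrc
    openstack user set --password "$NEW_CINDER_PASSWORD" cinder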
*** Andrew_jedi has joined #openstack-ansible12:09
<Andrew_jedi> Hello Folks, Is there a command to find out all the overrides available?  12:09
<odyssey4me> Andrew_jedi each role's defaults/main.yml files are the documented overrides  12:10
<odyssey4me> some of which are used in group_vars in the integrated repo  12:10
*** deadnull has joined #openstack-ansible12:13
<Andrew_jedi> odyssey4me: Ohh, ok. Could you please also let me know whether its possible to override this :https://github.com/openstack/openstack-ansible-os_heat/blob/master/tasks/heat_domain_setup.yml#L72  12:14
<odyssey4me> Andrew_jedi It's not using a variable, so not at this time.  12:15
<odyssey4me> That does seem like an oversight.  12:15
<odyssey4me> although, the 'admin' role name is special - it's the super admin that allows changes to anything and everything  12:16
<odyssey4me> why would you want to set that to something different?  12:16
<odyssey4me> so it looks like the keystone role has 'keystone_role_name' as a var to set which defines the role name for the admin  12:18
<andymccr> odyssey4me: Andrew_jedi's moving from another platform that has admin as like "Admin" or something similar - so yeah we should just use that var  12:18
<odyssey4me> aha, makes sense  12:18
<andymccr> i said the same last time hahah :D  12:18
<odyssey4me> so the var 'keystone_role_name' was created quite recently - Newton I think  12:18
<andymccr> thats a bug we can backport that fix too  12:18
<odyssey4me> the heat role doesn't get much love - so I expect that we may need to survey the other roles and implement the use of that var if it's set  12:19
<odyssey4me> I can do a sample if you like?  12:19
<Andrew_jedi> odyssey4me andymccr : Yes, i am trying to migrate Juju db to OSA. And juju uses "Admin" role instead of "admin".  12:19
<odyssey4me> Andrew_jedi I hope you're keeping notes on your experience and will share them :)  12:20
<odyssey4me> ok, I'll put a quick patch together for os_heat  12:20
<Andrew_jedi> odyssey4me: Notes written in blood.  12:20
<openstackgerrit> Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-os_heat: Allow admin role name to be changed  https://review.openstack.org/411780  12:23
<odyssey4me> andymccr ^ seems the simplest mechanism  12:23
<odyssey4me> we haven't really gone through all the roles to namespace all var usage, and to some degree I think that may be overcomplicating things  12:23
<Andrew_jedi> odyssey4me: Please correct me if i am wrong but if i want to change this https://github.com/openstack/openstack-ansible-os_heat/blob/master/defaults/main.yml#L100 then i have to use "heat_default_yaml_overrides" ?  12:23
<odyssey4me> Andrew_jedi The 'heat_service_role_name' variable? Nope - just add it to your user_variables.yml file with the value you want  12:24
*** thorst has joined #openstack-ansible12:25
<Andrew_jedi> odyssey4me: oh ... so  you mean any variable which is in roles/default/main.yam can be overridden by just adding it to user_variables.yaml ?  12:26
<Andrew_jedi> odyssey4me: Any chance i can use this patch in Liberty :P https://review.openstack.org/411780  12:27
<odyssey4me> Andrew_jedi yes, that's correct  12:27
<odyssey4me> Andrew_jedi perhaps, it's a safe patch which just uses the var if it exists  12:28
<odyssey4me> Our liberty branch is gone though as it's EOL, so you'll have to fork and patch yourself.  12:28
<Andrew_jedi> odyssey4me: Gotcha! Thanks!  12:29
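A minimal sketch of the override mechanism odyssey4me describes, using the heat variable from this thread as the example value (the playbook path assumes a standard /opt/openstack-ansible checkout):
    # Any role default can be overridden by declaring it in user_variables.yml;
    # it takes effect on the next playbook run
    echo 'heat_service_role_name: "Admin"' >> /etc/openstack_deploy/user_variables.yml
    cd /opt/openstack-ansible/playbooks && openstack-ansible os-heat-install.yml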
<odyssey4me> ok, I don't see any other roles which have a similar pattern  12:30
<odyssey4me> so it looks like that'll do it  12:30
<odyssey4me> if you find any other you need, ping us :)  12:30
<Andrew_jedi> odyssey4me: roger that :)  12:31
<openstackgerrit> Merged openstack/openstack-ansible-pip_install: Remove Trusty support from pip_install role  https://review.openstack.org/411367  12:31
<andymccr> odyssey4me: agree with that  12:32
*** ngupta has joined #openstack-ansible12:32
<odyssey4me> andymccr I think we should actually work through the roles and implement similar defaulting for all those keystone vars and any other vars outside the role  12:32
<odyssey4me> hmm, let me register some bugs for that  12:32
<odyssey4me> it's a nice low-hanging-fruit item  12:33
<andymccr> odyssey4me: yeah i thinkso - but do the others not have it?  12:33
<andymccr> if Andrew_jedi only ran into this issue on heat im guessing the other major projects have it atleast  12:33
*** thorst has quit IRC12:33
<odyssey4me> well, there is the use of keystone_* vars in almost all the service roles... those aren't present in the defaults and aren't defaulted either  12:34
<odyssey4me> it's not *necessary*, but it is a bit of tidying up that would be nice  12:34
<openstackgerrit> Merged openstack/openstack-ansible-plugins: Remove Trusty support from plugins role  https://review.openstack.org/411376  12:36
*** ngupta has quit IRC12:36
<openstackgerrit> Merged openstack/openstack-ansible-rsyslog_client: Remove Trusty support from rsyslog_client role  https://review.openstack.org/411386  12:37
<openstackgerrit> Merged openstack/openstack-ansible-rsyslog_server: Remove Trusty support from rsyslog_server role  https://review.openstack.org/411390  12:40
<openstackgerrit> Merged openstack/openstack-ansible-repo_server: Remove Trusty Support from repo_server role  https://review.openstack.org/411385  12:40
<openstackgerrit> Merged openstack/openstack-ansible-os_magnum: Remove Trusty support from os_magnum role  https://review.openstack.org/411278  12:44
<openstackgerrit> Andy McCrae proposed openstack/openstack-ansible-repo_build: Add libxml2-dev package required for lxml install  https://review.openstack.org/411787  12:44
*** markvoelker has joined #openstack-ansible12:45
<openstackgerrit> Andy McCrae proposed openstack/openstack-ansible-os_ceilometer: Add required packages for Ceilometer  https://review.openstack.org/411775  12:45
<odyssey4me> andymccr I see you're in the thick of things there. Need any help with anything?  12:46
*** gouthamr has joined #openstack-ansible12:46
<andymccr> odyssey4me: think im done now! found out why the 3 are failing, and once the appropriate fixes merge it should be good to go  12:47
<andymccr> im tempted to set ceilometer to build in the integrated build, ignoring requirements  12:47
<andymccr> for now  12:47
*** jamesdenton has joined #openstack-ansible12:47
<andymccr> so we can ensure it stays up to date and catch issues sooner  12:47
<andymccr> and then before actual release we remove that  12:47
<odyssey4me> andymccr sure, the trick will be to remember to do that!  12:49
<andymccr> haha yeah i know...  12:49
<andymccr> maybe i'll leave it a bit longer  12:49
<odyssey4me> you could create a blueprint for the Ocata release and add a note to it  12:49
<andymccr> there is working to fix that stuff  12:49
*** thorst has joined #openstack-ansible12:49
<andymccr> there is work happening, rather :P  12:49
<odyssey4me> it's very likely that more and more projects will start doing this  12:49
<andymccr> ignoring reqs?  12:49
<odyssey4me> yeah, the whole co-installable thing is not something everyone subscribes to  12:50
*** jamesden_ has joined #openstack-ansible12:50
<odyssey4me> but yeah, that is a current community contract which ceilometer really should be working within the community to resolve  12:50
<odyssey4me> I mean if monasca is the hold back, then perhaps monasca should solve it or step out of the co-installable promise  12:51
*** jamesdenton has quit IRC12:52
<andymccr> odyssey4me: yeah it looks like oslo is gonna update and ceilometer will consume oslo rather than its own thing  12:53
<andymccr> but yeah  12:53
<openstackgerrit> Merged openstack/openstack-ansible-os_trove: Remove Trusty support from os_trove role  https://review.openstack.org/411366  12:53
<odyssey4me> ah ok nice  12:54
<odyssey4me> will they be able to achieve that in Ocata?  12:54
<openstackgerrit> Merged openstack/openstack-ansible-repo_build: Remove Trusty support from repo_build role  https://review.openstack.org/411380  12:55
*** thorst has quit IRC12:56
*** thorst has joined #openstack-ansible12:57
<andymccr> odyssey4me: i thinkso - we arent the only ones who will have issues  12:58
<andymccr> i imagine packagers will have bigger issues  12:58
<odyssey4me> yeah, much  12:58
<odyssey4me> that's exactly who the co-installability promise is designed for  12:58
<andymccr> we can get around it (at a cost)  13:00
<andymccr> but then i might request upper constraints in their reqs file  13:00
<andymccr> to ensure consistency  13:01
*** Mudpuppy has joined #openstack-ansible13:04
*** thorst has quit IRC13:05
*** johnmilton has joined #openstack-ansible13:06
<odyssey4me> yes, if they're going to ignore the requirements process, then they need to publish their own properly tested upper bounds in their own repo  13:07
*** sacharya has joined #openstack-ansible13:07
<odyssey4me> that then gives us something to work with as a consistent set which guarantees that what was tested is what is deployed  13:07
*** Mudpuppy has quit IRC13:08
*** v1k0d3n has joined #openstack-ansible13:10
*** woodard has quit IRC13:11
*** woodard has joined #openstack-ansible13:11
*** sacharya has quit IRC13:11
*** jamesden_ has quit IRC13:12
<openstackgerrit> Merged openstack/openstack-ansible-os_tempest: Remove Trusty support from os_tempest role  https://review.openstack.org/411365  13:16
<mgariepy> good morning everyone  13:19
<openstackgerrit> Kyle L. Henderson proposed openstack/openstack-ansible: Add retries to apt update in cache proxy task  https://review.openstack.org/411805  13:35
<openstackgerrit> Merged openstack/openstack-ansible: Revert role SHA pin for Ocata-3 development  https://review.openstack.org/411188  13:35
<mgariepy> anyone would have an idea why http://logs.openstack.org/58/411358/1/check/gate-openstack-ansible-os_nova-ansible-func-centos-7-nv/c7a6100/console.html#_2016-12-15_20_39_01_452233 fails and this one doesn't http://logs.openstack.org/58/411358/1/check/gate-openstack-ansible-os_nova-ansible-func-centos-7-nv/c7a6100/console.html#_2016-12-15_20_11_28_816156  13:37
*** retreved has joined #openstack-ansible13:45
<kylek3h> mgariepy: what version of ansible are these runs using?  perhaps 1.9.x?  13:46
<mgariepy> 2.2.something  13:47
<kylek3h> hmm....intersting.  we saw that error alot with 1.9.x  13:47
<mgariepy> when I run the test on my vm it passes correctly.  13:48
<kylek3h> there were no internal reties with apt in 1.9.x we saw that about 10% of the time.  we patched ansible 1.9 to add retries and it went away.  13:48
<mgariepy> this is centos ;)  13:49
<kylek3h> ah...  13:49
<kylek3h> well...maybe similar issue?  13:49
<mgariepy> there is retry but still the cert doesn't validate.  13:50
<mgariepy> and it tries on github and bootstrap.pypa.io  13:50
<kylek3h> right...it was really flaky with apt....and a simple retry worked.  13:50
<odyssey4me> mgariepy hmm, that sounds like the usual SNI issues  13:53
<odyssey4me> we had that badly with trusty  13:53
<odyssey4me> any python version < 2.7.9 does not support SNI SSL certs natively  13:53
<mgariepy> why does it work the first time ?  13:53
<odyssey4me> we have patched in a bunch of extra packages and python bits to make that work, so it should work... but yeah  13:54
<mgariepy> and it's consistent on the gate job :) haha  13:54
*** fxpester has quit IRC13:54
<odyssey4me> that said, yes - I have seen sometimes it works on the host but not in containers  13:54
<odyssey4me> in that case it's better to look for networking shenanigans  13:55
<odyssey4me> race conditions and such  13:55
<mgariepy> when i run it on my openstack installation it works fine.  13:55
*** ngupta has joined #openstack-ansible13:55
<odyssey4me> hmm, I wonder if there may not be something gate specific involved then  13:55
<mgariepy> can we stat get-pip.py to see if it's there an skip the download if it's not older than a day ?  13:56
<mgariepy> might speed up the gate check a tiny bit also.  13:56
<odyssey4me> well, so that was the point of implementing the fetch of get-pip to the repo server, then using it from there for all other hosts/containers  13:57
<odyssey4me> it means it would skip automatically and not have to reach out to the internet  13:57
<odyssey4me> but obviously you're looking at a role test here where there is no repo server  13:57
<mgariepy> i saw errors in the past with github certs but those where linked to mtu issue (1500 in container but 1450 on the hosts)  13:58
<mgariepy> i don't think this kind of issue would come up in the gate job.  13:59
<odyssey4me> it could? it all depends on how we're setting things up  13:59
<odyssey4me> but there is the fact that it did work earlier in the sequence, then doesn't work later  13:59
<mgariepy> it wouldn't work the first time for the container ;)  13:59
<odyssey4me> so if it is that, then something's changing the setup  14:00
<mgariepy> hmm.  14:00
*** shausy has joined #openstack-ansible14:01
<mgariepy> dmsimard, how can i enable the ARA logging on that patch ?  14:01
<dmsimard> mgariepy: which one  14:01
<mgariepy> https://review.openstack.org/#/c/411358  14:01
<odyssey4me> I wonder if it would work if we add 'creates: /opt/get-pip.py' to that task, but make it default to omit that... and only activate that for the role tests  14:01
<odyssey4me> lemme work up a patch to try that out  14:02
<dmsimard> mgariepy: I'm not sure how the gate is set up. Maybe a Depends-On https://review.openstack.org/#/c/396324/ ?  14:02
<dmsimard> mgariepy: or https://review.openstack.org/#/c/396565/  14:03
<dmsimard> I'm not sure which one you need  14:03
<mgariepy> depends don't pull patches AFAIK for openstack-ansible.  14:04
<odyssey4me> no, not yet - we still need to work out how to do it  14:06
<mgariepy> odyssey4me, could we have a check-experimental-with-ara ?  14:07
<odyssey4me> mgariepy what are you hoping to achieve?  14:07
<mgariepy> having a easy way to see what's happening on the server  14:07
<dmsimard> FWIW I feel https://review.openstack.org/#/c/396565/ is good to merge if you guys are satisfied with it  14:07
<odyssey4me> a way to do it would be to change https://review.openstack.org/#/c/396565/ so that it's opt-in rather than enabled by default  14:08
<dmsimard> However, https://review.openstack.org/#/c/396324/ should definitely not merge until ARA is optimized for that amount of content  14:08
<odyssey4me> then in any patch you want to use it you could just active it  14:08
<odyssey4me> *activate it  14:08
<mgariepy> just like check experimental is ?  14:08
*** marc_ab has joined #openstack-ansible14:09
<odyssey4me> mgariepy check experimental would require a lot more moving parts  14:10
<odyssey4me> andymccr thoughts on setting up ARA as an opt-in item via an env var?  14:10
<uthng> odyssey4me: yeah thanks for tuto. It is simpler :D but I did not find it out hehe :)  14:10
<odyssey4me> that way it would be easy to adjust the env var temporarily in a patch to use it, but not always have it enabled by default  14:10
*** shausy has quit IRC14:11
*** mennie has quit IRC14:11
*** shausy has joined #openstack-ansible14:11
<dmsimard> odyssey4me, mgariepy, andymccr: I added comments to the ARA reviews: https://review.openstack.org/#/q/topic:osa-ara  14:13
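A rough sketch of how ARA could be enabled opt-in for a local run at the time of this discussion; the callback path derivation and the ara-manage command are assumptions based on ARA 0.x packaging and worth verifying against its docs:
    pip install ara
    # Point Ansible at ARA's callback plugin directory so runs are recorded
    export ANSIBLE_CALLBACK_PLUGINS="$(python -c 'import os, ara; print(os.path.join(os.path.dirname(ara.__file__), "plugins", "callbacks"))')"
    openstack-ansible setup-hosts.yml
    ara-manage runserver   # serve the recorded results locally (ARA 0.x)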
*** mennie has joined #openstack-ansible14:18
*** whiteveil has joined #openstack-ansible14:21
*** fguillot has joined #openstack-ansible14:22
*** woodard has quit IRC14:23
*** woodard has joined #openstack-ansible14:23
*** shausy has quit IRC14:26
*** jamesdenton has joined #openstack-ansible14:28
*** shausy has joined #openstack-ansible14:32
<odyssey4me> mgariepy so, interestingly the get_url module is meant to actually ignore the task if the file is already there - or at least the docs say so  14:33
*** Andrew_jedi has quit IRC14:33
<odyssey4me> ah, we have 'force: yes'  14:33
<mgariepy> we might want it to get updated once in a while ?  14:35
<mgariepy> if we remove the force, we might loose updating it on upgrades  14:37
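A rough illustration of the behaviour under discussion, using an ad-hoc get_url call; the actual patch proposed below turns the force flag into a role variable rather than hard-coding it:
    # force=yes always re-downloads get-pip.py; force=no skips the fetch
    # when /opt/get-pip.py already exists on the target
    ansible localhost -c local -m get_url \
      -a "url=https://bootstrap.pypa.io/get-pip.py dest=/opt/get-pip.py force=no"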
*** klamath has joined #openstack-ansible14:38
*** ngupta has quit IRC14:38
*** klamath has quit IRC14:39
*** ngupta has joined #openstack-ansible14:39
*** klamath has joined #openstack-ansible14:40
*** thorst_ has joined #openstack-ansible14:41
*** marst has joined #openstack-ansible14:41
*** shausy has quit IRC14:41
*** shausy has joined #openstack-ansible14:42
*** TxGirlGeek has joined #openstack-ansible14:43
*** thorst_ has quit IRC14:46
*** shausy has quit IRC14:49
*** shausy has joined #openstack-ansible14:50
*** thorst_ has joined #openstack-ansible14:51
*** shausy has quit IRC14:55
*** chhavi has quit IRC14:58
*** dxiri has joined #openstack-ansible14:58
*** Andrew_jedi has joined #openstack-ansible15:00
<openstackgerrit> Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-pip_install: Provide toggle for get-pip.py get_url task force option  https://review.openstack.org/411854  15:04
<odyssey4me> mgariepy ^  15:04
*** whiteveil has quit IRC15:08
*** sacharya has joined #openstack-ansible15:08
<mgariepy> odyssey4me, thanks a lot :)  15:08
<openstackgerrit> Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-pip_install: Provide toggle for get-pip.py get_url task force option  https://review.openstack.org/411854  15:08
<openstackgerrit> Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-tests: Prevent repeated get-pip.py fetches in role tests  https://review.openstack.org/411855  15:09
<odyssey4me> mgariepy ^ that combination should improve the role test results  15:09
<openstackgerrit> Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-pip_install: Provide toggle for get-pip.py get_url task force option  https://review.openstack.org/411854  15:11
*** sacharya has quit IRC15:13
*** thorst_ has quit IRC15:14
*** dxiri has quit IRC15:18
*** dxiri has joined #openstack-ansible15:19
*** Disova has quit IRC15:23
*** deadnull has quit IRC15:23
<mgariepy> thanks a lot for this odyssey4me.  15:24
<odyssey4me> mgariepy well, andymccr may correct me here but the original patch to force it to fetch all the time was because of Ansible 1.9  15:25
<odyssey4me> I think that patch implements enough of a safeguard, but will really help improve the role test execution.  15:26
* mgariepy hope that os_nova centos will not fail a bit later after this :D  15:27
<odyssey4me> I don't know if any cores are around to vote that one through.  15:28
<odyssey4me> but I expect that it will go through at some point  15:29
*** thorst_ has joined #openstack-ansible15:35
<openstackgerrit> Merged openstack/openstack-ansible-pip_install: Provide toggle for get-pip.py get_url task force option  https://review.openstack.org/411854  15:37
<dmsimard> I'm going to ask a crazy question  15:45
<dmsimard> Will there ever be binary deployments in OSAD, either through UCA or RDO ?  15:46
*** thorst_ has quit IRC15:46
<dmsimard> If someone interested enough contributes it  15:46
*** jmckind has joined #openstack-ansible15:49
*** whiteveil has joined #openstack-ansible15:51
*** maeker has joined #openstack-ansible15:55
*** thorst_ has joined #openstack-ansible15:57
*** weezS has joined #openstack-ansible15:58
<odyssey4me> dmsimard that's the key thing right there  15:59
<odyssey4me> if someone is prepared to contribute it and maintain it, then all it takes is a spec  15:59
<dmsimard> odyssey4me: okay -- because I haven't looked at the scope of this but I am assuming there are a lot of assumptions that everything runs off of source and it'd probably be a huge undertaking  16:00
<odyssey4me> yep, I expect it wouldn't be too hard - but it would be a fair body of work  16:01
*** thorst_ has quit IRC16:01
<odyssey4me> and the trouble we've had in the past when working on chef-based deployments (where we did use packages only) was that each distro made its own assumptions  16:01
<odyssey4me> so we'd have to skirt the line between those - perhaps we could help normalise them a bit  16:02
<dmsimard> meh, like different paths and so on ?  16:02
<odyssey4me> different paths are fine - I think we have that covered now  16:02
<odyssey4me> but different conf files was what I remember  16:02
<odyssey4me> some do multiple, some do a single file  16:02
<odyssey4me> not terrible to overcome - I think we have a fair framework in place to cater for distro differences  16:03
*** sdake has joined #openstack-ansible16:03
<odyssey4me> a good start for anyone wanting to do that would be for them to get involved in what we already have  16:03
<dmsimard> yeah  16:03
*** marc_ab has quit IRC16:03
<dmsimard> hey, whatever happened to that config templating module in Ansible ? Did that get merged ?  16:03
<odyssey4me> gain community confidence, understand where every current stakeholder is coming from  16:04
<odyssey4me> nah, Ansible doesn't want to merge it - they feel that using filters you can achieve the same thing  16:04
<odyssey4me> regardless of performance differences  16:04
<odyssey4me> so we know that the ceph-ansible crew vendor it  16:04
<odyssey4me> and we'll continue to use it  16:04
<odyssey4me> and extend it as we go  16:04
<odyssey4me> here's a new extension https://review.openstack.org/382109  16:05
*** chhavi has joined #openstack-ansible16:06
*** chhavi has quit IRC16:06
<odyssey4me> andymccr any chance for a vote on https://review.openstack.org/411855?  16:06
*** chhavi has joined #openstack-ansible16:06
*** rmelero has joined #openstack-ansible16:08
<odyssey4me> thx  16:08
*** maeker has quit IRC16:16
*** maeker has joined #openstack-ansible16:16
*** uthng has quit IRC16:19
*** marc_ab has joined #openstack-ansible16:20
*** pcaruana has quit IRC16:25
<openstackgerrit> Merged openstack/openstack-ansible-tests: Prevent repeated get-pip.py fetches in role tests  https://review.openstack.org/411855  16:28
*** adrian_otto has joined #openstack-ansible16:32
*** whiteveil has quit IRC16:32
*** Oku_OS is now known as Oku_OS-away16:32
*** whiteveil has joined #openstack-ansible16:33
*** whiteveil has quit IRC16:34
*** karimb has quit IRC16:39
*** karimb has joined #openstack-ansible16:40
*** dxiri has quit IRC16:41
*** kstev has joined #openstack-ansible16:44
*** dxiri has joined #openstack-ansible16:46
<openstackgerrit> Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-tests: Setup ARA reporting for central tests repository  https://review.openstack.org/396565  16:48
*** maeker has quit IRC16:55
*** woodard_ has joined #openstack-ansible17:01
*** woodard has quit IRC17:01
*** whiteveil has joined #openstack-ansible17:01
*** ngupta has quit IRC17:03
*** dxiri has quit IRC17:04
*** dxiri has joined #openstack-ansible17:05
*** adrian_otto has quit IRC17:08
*** sacharya has joined #openstack-ansible17:09
*** chhavi has quit IRC17:14
*** maeker has joined #openstack-ansible17:19
*** lemouchon2 has quit IRC17:20
*** lemouchon2 has joined #openstack-ansible17:20
*** maeker has quit IRC17:23
*** maeker_ has joined #openstack-ansible17:24
<andymccr> ok i think im done for today! Have a great weekend everybody  17:26
*** sdake has quit IRC17:28
*** maeker_ has quit IRC17:29
*** maeker has joined #openstack-ansible17:30
*** maeker has quit IRC17:33
*** chhavi has joined #openstack-ansible17:34
*** adrian_otto has joined #openstack-ansible17:38
*** Jack_Iv__ has joined #openstack-ansible17:42
*** maeker has joined #openstack-ansible17:45
*** maeker has quit IRC17:48
*** ngupta has joined #openstack-ansible17:53
<palendae> You too andymccr  17:54
<openstackgerrit> Merged openstack/openstack-ansible-os_tempest: Fix tempest for os_horizon role  https://review.openstack.org/411748  17:55
*** wadeholler has joined #openstack-ansible17:57
*** maeker has joined #openstack-ansible17:59
*** Jack_Iv__ has quit IRC18:00
*** asettle has quit IRC18:01
*** Jack_Iv__ has joined #openstack-ansible18:01
*** asettle has joined #openstack-ansible18:01
*** whiteveil has quit IRC18:03
*** Jack_Iv__ has quit IRC18:05
*** asettle has quit IRC18:06
*** ngupta has quit IRC18:11
*** ngupta has joined #openstack-ansible18:11
*** ngupta has quit IRC18:13
*** ngupta has joined #openstack-ansible18:13
*** Jack_Iv__ has joined #openstack-ansible18:15
*** Andrew_jedi has quit IRC18:16
*** Jack_Iv__ has quit IRC18:18
*** Mudpuppy has joined #openstack-ansible18:18
*** Jack_Iv__ has joined #openstack-ansible18:19
*** Jack_Iv__ has quit IRC18:24
<palendae> logan-, cloudnull: So I dug a bit deeper on potential group_var overlaps; in these two tests, I couldn't find conflicts. https://gist.github.com/nrb/179053241c3035f57bd9a8d89b0f8410  18:24
*** weezS has quit IRC18:24
*** kelv has joined #openstack-ansible18:26
*** weezS has joined #openstack-ansible18:29
*** ianychoi has quit IRC18:30
*** adrian_otto has quit IRC18:30
*** Mudpuppy has quit IRC18:30
*** Mudpuppy has joined #openstack-ansible18:31
*** asettle has joined #openstack-ansible18:31
*** asettle has quit IRC18:32
*** sdake has joined #openstack-ansible18:33
*** Mudpuppy has quit IRC18:35
<logan-> agreed palendae. that has been my experience with ansible's base inventory outside of OSA.  18:36
<palendae> https://github.com/ansible/ansible/issues/17559; https://github.com/ansible/ansible/issues/17243  18:36
<palendae> I did not try the scenario where a host was directly in 2 groups, or trying --limit  18:36
<palendae> Will try it after lunch  18:36
<palendae> My 2 parents thing was a *group* in 2 different groups  18:37
<palendae> Hmmmmmmmmm https://github.com/ansible/ansible/pull/15942  18:37
<palendae> ^ that may be why it's working actually  18:37
<palendae> Oh, that links back to 17243, nvm  18:38
<palendae> Kind of spelunking so I can compare OSA's approach, and at least write down some whys  18:39
<palendae> Even if they're historical and we can move  18:39
*** galstrom_zzz is now known as galstrom18:39
<logan-> seems like that PR would make overlaps a lot safer  18:40
<palendae> Yeah, the claim is that https://github.com/ansible/ansible/issues/17243 fixed it  18:40
<palendae> Which might have gotten into 2.2, which my test was using  18:40
*** Jack_Iv__ has joined #openstack-ansible18:41
<logan-> right. it looks like it is inthe stable-2.2 branch but not any tags yet if im parsing github correctly.  18:41
<palendae> I think the first issue https://github.com/ansible/ansible/issues/17559, where --limit behaves unexpectedly, may still be a problem though  18:41
<logan-> yeah  18:42
<palendae> --limit is *just* for deciding where to execute tasks, not for processing variable information  18:42
<palendae> So, more experiments to do this afternnon  18:42
<palendae> But I'm hungry :)  18:42
<logan-> i think that is actually the correct way to do it. --limiting execution shouldn't change how inventory is compiled imo  18:43
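A minimal way to reproduce the kind of overlap test palendae describes, with throwaway file names and values (all illustrative):
    cat > /tmp/overlap_inventory.ini <<'EOF'
    [groupa]
    testhost
    [groupb]
    testhost
    [groupa:vars]
    foo=from_groupa
    [groupb:vars]
    foo=from_groupb
    EOF
    # Check which group's value wins for a host that sits in both groups
    ansible -i /tmp/overlap_inventory.ini testhost -c local -m debug -a "var=foo"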
<wadeholler> hi all I have a 16.04 upgrade issue that I need little help with: doing first node of 3 node infra group; re-pxe'd / reinstalled host, ran setup-hosts,setup-infrastructure,setup-openstack which relative to the problem im looking at ran clean.  however, new services don't start with a pip problem.  18:43
<wadeholler> http://pastebin.com/E6DvWdHy  18:43
<wadeholler> I bet it's something simple but I'm new.  Can someone point me in a direction ?  18:44
<palendae> logan-: Yeah; as long as that's understood, I'm fine, but I'm still trying to dig out the why of our inventory structure so I can write that down :)  18:44
<logan-> wadeholler: this is going to be interesting to dig into. I take it you had a trusty env, upgraded the trusty env to newton, and now you are working on upgrading your newton+trusty install to newton+xenial one controller at a time?  18:46
<wadeholler> logan: you got it  18:47
*** Jack_Iv__ has quit IRC18:48
*** maeker has quit IRC18:48
*** Jack_Iv__ has joined #openstack-ansible18:48
<logan-> ok first question... can you check which host the repo-build.yml playbook is running the builds against? i'm guessing your wheels/venvs are getting built on a trusty controller, but can you confirm?  18:49
*** stuartgr has left #openstack-ansible18:50
*** maeker has joined #openstack-ansible18:50
<wadeholler> will do if you tel me how ot do that ..... ?  18:51
*** maeker has quit IRC18:52
*** maeker_ has joined #openstack-ansible18:52
<openstackgerrit> Merged openstack/openstack-ansible-os_horizon: Remove Trusty support from os_horizon role  https://review.openstack.org/411270  18:52
*** Jack_Iv__ has quit IRC18:53
<wadeholler> login: p.s. "pip list" on the container before I run /openstack/venv/glance...../bin/activate runs OK  18:53
*** sdake has quit IRC18:54
<logan-> if you run openstack-ansible repo-build.yml, just curious which release the host running tasks like "Create OpenStack-Ansible requirement wheels" has  18:54
<wadeholler> logan: ok - will run that and report back.  18:55
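One quick way to answer logan-'s question without a full repo-build run, assuming the standard repo_all inventory group and that the command is run from the playbooks directory so the OSA inventory is picked up:
    cd /opt/openstack-ansible/playbooks
    # Print the Ubuntu release detected inside each repo container
    ansible repo_all -m setup -a 'filter=ansible_distribution_release'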
<logan-> also did you remove the ansible facts cache on your deployment host after reinstalling the cpa00003 server?  18:55
*** maeker_ has quit IRC18:55
<wadeholler> logan: no.....  18:55
<logan-> it should be in /etc/openstack_deploy/ansible_facts  18:55
<logan-> ok  18:56
*** maeker has joined #openstack-ansible18:56
*** sdake has joined #openstack-ansible18:56
<logan-> well lets check on the repo_build, then i'm curious if we delete the glance container you're having problems with, delete the facts cache, and re-build the container if it still has the same failure.  18:57
<logan-> i'm guessing it will but its worth checking  18:57
*** Jack_Iv__ has joined #openstack-ansible18:59
*** BjoernT has joined #openstack-ansible19:00
*** jmckind_ has joined #openstack-ansible19:00
*** jmckind has quit IRC19:01
*** poopcat has joined #openstack-ansible19:02
wadeholleryes "Create requirement wheels" ran on a trusty backed host - at least so far in the repo-build.xml execution19:04
wadehollerlogan: regardless of where "repo-build" built the wheels - go ahead and delete container, facts cache, rebuild ...?19:05
wadehollerlogan: as long as I don't dork with openstack_inventory the new container should get the same IP right?19:06
logan-ok. thanks. if you could lxc-stop and lxc-destroy that glance container, delete /etc/openstack_deploy/ansible_facts, and then run setup-hosts and setup-openstack again that would be the next thing i'd test19:06
wadehollerwill do.19:06
logan-yes the facts will be re-gathered when you run ansible again. the /etc/openstack_deploy/ansible_facts is just a cache that is refreshed. I think it is a 1 day cache time by default19:07
logan-yep, it'll get built with the same IP as long as the inventory does not change19:07
wadehollerok. thank you19:07
<logan-> I think we're running into the issue here https://bugs.launchpad.net/openstack-ansible/+bug/1641131 where wheels built in the trusty repo containers aren't compatible with xenial hosts, and vice versa if your repo_build was occurring on a xenial container  19:09
<openstack> Launchpad bug 1641131 in openstack-ansible "In mixed OS environments, the libvirt-python wheel does not work" [High,Confirmed]  19:09
<wadeholler> logan: delete ansible_facts or just files in that directory for glance container ?  19:09
<logan-> it should be safe to delete the entire facts directory, and I would recommend that since you rebuilt the entire host and it probably has old facts cached for all of the containers on that host  19:10
*** poopcat has quit IRC19:10
<wadeholler> ok  19:10
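The rebuild procedure logan- outlines, as a sketch; the container name below is hypothetical and should be taken from lxc-ls on the affected infra host:
    # On the infra host: stop and destroy the broken glance container
    lxc-stop -n controller01_glance_container-1a2b3c4d
    lxc-destroy -n controller01_glance_container-1a2b3c4d
    # On the deploy host: drop the cached facts, then rebuild
    rm -rf /etc/openstack_deploy/ansible_facts
    cd /opt/openstack-ansible/playbooks
    openstack-ansible setup-hosts.yml
    openstack-ansible setup-openstack.yml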
*** poopcat has joined #openstack-ansible19:11
*** phalmos has joined #openstack-ansible19:11
*** whiteveil has joined #openstack-ansible19:13
*** sdake has quit IRC19:16
*** weezS has quit IRC19:17
*** sdake has joined #openstack-ansible19:17
*** whiteveil has quit IRC19:20
*** thorst_ has joined #openstack-ansible19:20
*** Jack_I___ has joined #openstack-ansible19:21
*** whiteveil has joined #openstack-ansible19:23
*** Jack_Iv__ has quit IRC19:23
*** Jack_iv_ has quit IRC19:24
*** Jack_Iv has quit IRC19:24
*** phalmos has quit IRC19:25
*** Jack_Iv has joined #openstack-ansible19:25
*** chhavi has quit IRC19:25
*** thorst_ has quit IRC19:27
*** asettle has joined #openstack-ansible19:33
*** asettle has quit IRC19:36
*** asettle has joined #openstack-ansible19:36
*** electrofelix has quit IRC19:37
*** sdake has quit IRC19:37
*** asettle has quit IRC19:39
*** sdake has joined #openstack-ansible19:42
*** jmckind_ has quit IRC19:51
*** jamesdenton has quit IRC19:52
*** jamesdenton has joined #openstack-ansible19:53
*** ngupta has quit IRC19:54
*** ngupta has joined #openstack-ansible19:54
*** jamesdenton has quit IRC19:58
*** ngupta has quit IRC19:59
*** jmckind has joined #openstack-ansible20:00
*** galstrom is now known as galstrom_zzz20:00
*** sdake has quit IRC20:02
*** Jack_I___ has quit IRC20:03
*** Jack_Iv has quit IRC20:03
*** Jack_Iv has joined #openstack-ansible20:04
*** sdake has joined #openstack-ansible20:04
*** Jack_Iv_ has joined #openstack-ansible20:06
*** galstrom_zzz is now known as galstrom20:06
*** Jack_Iv has quit IRC20:09
*** Jack_Iv has joined #openstack-ansible20:11
*** galstrom is now known as galstrom_zzz20:12
*** johnmilton has quit IRC20:20
<wadeholler> logan: ran procedure you advised - delete glance container, delete ansible_facts, setup-hosts, setup-openstack  20:22
<wadeholler> logan: same error  20:22
*** maeker has quit IRC20:22
*** sdake has quit IRC20:24
<wadeholler> logan: other thoughts - delete repo containers - force to xenial based physical only, repo-build ?  20:25
<wadeholler> seems heavy handed and troublesome to other nodes not yet upgraded to xenial  20:26
*** sdake has joined #openstack-ansible20:26
*** galstrom_zzz is now known as galstrom20:27
*** maeker_ has joined #openstack-ansible20:27
*** maeker_ has quit IRC20:32
*** Andrew_jedi has joined #openstack-ansible20:32
*** gouthamr has quit IRC20:44
*** sdake has quit IRC20:46
*** gouthamr has joined #openstack-ansible20:48
*** sdake has joined #openstack-ansible20:49
*** asettle has joined #openstack-ansible20:59
*** asettle has quit IRC21:00
*** galstrom is now known as galstrom_zzz21:03
*** askb has joined #openstack-ansible21:05
*** sdake has quit IRC21:09
*** sdake has joined #openstack-ansible21:10
*** Andrew_jedi has quit IRC21:11
*** BjoernT has quit IRC21:12
*** jwitko has joined #openstack-ansible21:14
*** Jack_Iv_ has quit IRC21:16
*** Jack_Iv has quit IRC21:17
*** ngupta has joined #openstack-ansible21:25
*** gouthamr has quit IRC21:25
*** wadeholler has quit IRC21:25
*** gouthamr has joined #openstack-ansible21:26
*** thorst_ has joined #openstack-ansible21:26
*** gouthamr has quit IRC21:27
*** Mudpuppy has joined #openstack-ansible21:29
*** uthng has joined #openstack-ansible21:30
<uthng> hello all  21:30
*** sdake has quit IRC21:30
<uthng> while running os-magnum-install.yml, I got this error : systemctl status rsyslog.service rsyslog.service   Loaded: error (Reason: No such file or directory)   Active: inactive (dead)  21:31
<uthng> anyone can tell me why please ?  21:31
*** Mudpuppy has quit IRC21:32
*** Mudpuppy has joined #openstack-ansible21:32
*** sdake has joined #openstack-ansible21:32
*** Jeffrey4l has quit IRC21:35
*** whiteveil has quit IRC21:35
*** retreved has quit IRC21:35
*** thorst_ has quit IRC21:35
*** Jeffrey4l has joined #openstack-ansible21:36
*** thorst_ has joined #openstack-ansible21:36
*** fguillot has quit IRC21:41
*** adrian_otto has joined #openstack-ansible21:42
*** adrian_otto has quit IRC21:49
*** Mudpuppy has quit IRC21:53
*** marc_ab has quit IRC21:56
*** jproulx has quit IRC22:04
*** aludwar has quit IRC22:06
*** aludwar has joined #openstack-ansible22:07
*** adrian_otto has joined #openstack-ansible22:12
*** Jack_Iv has joined #openstack-ansible22:13
*** smatzek has joined #openstack-ansible22:17
*** adrian_otto has quit IRC22:31
*** jamesdenton has joined #openstack-ansible22:34
*** jamesden_ has joined #openstack-ansible22:37
*** jamesdenton has quit IRC22:38
*** jmckind has quit IRC22:42
*** aludwar has quit IRC22:42
*** aludwar has joined #openstack-ansible22:43
*** Jack_Iv has quit IRC22:45
*** smatzek has quit IRC22:48
*** aludwar has quit IRC22:52
<uthng> noone here please ?  22:55
*** Jack_Iv has joined #openstack-ansible22:56
*** chris_hultin|AWA is now known as chris_hultin22:56
*** dxiri has quit IRC22:56
*** dxiri has joined #openstack-ansible22:57
*** aludwar has joined #openstack-ansible23:01
*** Jack_Iv has quit IRC23:02
*** Jack_Iv has joined #openstack-ansible23:02
*** Jack_Iv_ has joined #openstack-ansible23:03
*** Jack_Iv_ has quit IRC23:03
*** Jack_Iv_ has joined #openstack-ansible23:04
*** Jack_Iv__ has joined #openstack-ansible23:04
*** Jack_Iv_ has quit IRC23:08
*** TxGirlGeek has quit IRC23:12
*** adrian_otto has joined #openstack-ansible23:12
*** maeker has joined #openstack-ansible23:15
*** marst has quit IRC23:21
*** Mudpuppy has joined #openstack-ansible23:22
*** maeker has quit IRC23:26
*** ngupta has quit IRC23:27
*** ngupta has joined #openstack-ansible23:28
*** maeker has joined #openstack-ansible23:28
*** chris_hultin is now known as chris_hultin|AWA23:29
*** ngupta has quit IRC23:30
<uthng> hmm magnum & sahara container lxc use systemd  23:30
*** ngupta has joined #openstack-ansible23:30
<uthng> other containers still use upstart. Thats why they work  23:30
<uthng> why ansible_service_mgr on magnum and sahara detects systemd ? anyone has an idea about it ?  23:31
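A quick way to confirm what uthng is seeing, assuming the magnum_all and sahara_all group names exist in the inventory and the command is run from the playbooks directory:
    cd /opt/openstack-ansible/playbooks
    # Show which init system Ansible detects inside each container
    ansible magnum_all:sahara_all -m setup -a 'filter=ansible_service_mgr'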
*** retreved has joined #openstack-ansible23:31
*** retreved has quit IRC23:32
*** dxiri_ has joined #openstack-ansible23:34
*** dxiri has quit IRC23:34
<uthng> nobody awaked here ? odyssey4me ? andymccr ? :(  23:35
*** maeker has quit IRC23:37
*** maeker has joined #openstack-ansible23:40
*** dalees has quit IRC23:40
*** dalees has joined #openstack-ansible23:43
*** Jack_Iv__ has quit IRC23:45
<jmccrory> uthng : trusty with OSA 14.0.3? you're probably hitting this problem https://review.openstack.org/#/c/409312/  23:46
*** Jack_Iv_ has joined #openstack-ansible23:46
<uthng> jmccrory: for other components such as glance, cinder etc. I have trusty 14.04.4.  23:47
<uthng> But new components : magnum and sahara have 14.04.5  23:48
<jmccrory> right, and you're deploying newton, or the 14.0.3 tag of openstack-ansible?  23:48
<uthng> newton  23:48
*** dalees has quit IRC23:48
<jmccrory> ok, try installing the systemd-shim package in your containers  23:49
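A sketch of jmccrory's suggestion, using the same assumed group names as above:
    cd /opt/openstack-ansible/playbooks
    # Install systemd-shim in the affected trusty containers, per jmccrory's suggestion
    ansible magnum_all:sahara_all -m apt -a "name=systemd-shim state=present update_cache=yes"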
*** dalees has joined #openstack-ansible23:50
*** Jack_Iv_ has quit IRC23:51
<uthng> jmccrory: the problem you sais is due to /run/systemd/system. But when I check ansible/module_utils/facts.py, I saw that it does not check /run/systemd/system but check the path to systemctl  23:54
*** Jack_Iv has quit IRC23:54
<uthng> I dont know if by default, trusty 14.04.05 installs systemd or not. But there are both systemd (all related packages installed) and initctl  23:56
<uthng> I wonder if I can uninstall systemd in magnum and sahara containers  23:56
*** maeker has quit IRC23:57
