Thursday, 2018-05-03

*** cjloader has quit IRC00:01
*** lbragstad has quit IRC00:17
*** ashak has joined #openstack-ansible00:23
mmercer: ouch, that's what i get for not actually reading the script first... it forcefully wipes out existing partitions on a multi-disk system regardless of how the system is partitioned00:37
*** cjloader has joined #openstack-ansible00:46
*** chigang__ has joined #openstack-ansible00:55
*** sep__ has quit IRC01:03
*** spine55 has quit IRC01:03
*** sep__ has joined #openstack-ansible01:03
*** spine55 has joined #openstack-ansible01:03
*** threestrands has joined #openstack-ansible01:05
*** pmannidi has quit IRC01:11
*** vnogin has joined #openstack-ansible01:19
*** pmannidi has joined #openstack-ansible01:20
*** thedini1 has joined #openstack-ansible01:22
thedini1: has anyone been playing with ansible-hardening lately?01:23
*** vnogin has quit IRC01:23
cloudnull: thedini1: I use it all the time.01:33
*** spine55 has quit IRC01:37
thedini1: cloudnull: i have been playing around with it, and now that I am getting a little more serious I was wondering how to start contributing back01:39
cloudnull: ah, have you ever contributed to openstack before?01:40
cloudnull: here's the complete getting started guide if you've not01:41
cloudnull: https://docs.openstack.org/contributors/code-and-documentation/index.html01:41
thedini1: nope... haven't contributed anywhere. THANKS01:41
cloudnull: relevant parts are https://docs.openstack.org/contributors/common/accounts.html01:41
cloudnull: and https://docs.openstack.org/contributors/common/setup-gerrit.html01:41
cloudnull: once you have an account and have set up gerrit you're off to the races.01:42
cloudnull: the biggest change in your git workflow will be from `git push` to `git review`01:42
cloudnull: once you are off and running, and contributing, you'll see your changes here: https://review.openstack.org/#/q/project:openstack/ansible-hardening01:43
cloudnull: if you ever just want to review patches, we'd greatly appreciate it!01:43
thedini1: kk, I will start getting more involved01:44
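The workflow cloudnull describes differs from a plain GitHub flow mainly in the final upload step. A minimal local sketch (the repository here is a throwaway one; in a real contribution you would clone the project from git.openstack.org and the last step would be `git review`):

```shell
# Stand up a throwaway repo to walk through the commit side of the flow.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "you@example.com"
git config user.name "Example Contributor"

# Work on a local topic branch, as you would in a real checkout.
git checkout -qb fix-readme-typo
echo "fix" > README
git add README
git commit -qm "Fix a typo in the README"
git log --oneline -1

# In a real OpenStack checkout the upload step is:
#   git review        # instead of `git push`; sends the change to Gerrit
```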
hw_wutianwei_: cloudnull: odyssey4me logan-: hi, I hit an issue in TASK [lxc_host: Ensure that the LXC cache has been prepared] when deploying pike and queens. Here is the log: http://paste.openstack.org/show/720253/, could you tell me how to solve it?01:44
cloudnullhw_wutianwei_: I think that's related to https://bugs.launchpad.net/openstack-ansible/+bug/176859201:45
openstackLaunchpad bug 1768592 in openstack-ansible "/usr/local/bin/cache-prep-commands.sh failing to write to /etc/resolv.conf when resolvconf used on Xenial" [Undecided,New]01:45
hw_wutianwei_cloudnull:  have you fixed this bug?01:46
cloudnullI've not.01:47
cloudnullI just searched for it and found it :)01:47
hw_wutianwei_:)01:48
cloudnullto fix that I'd try changing https://github.com/openstack/openstack-ansible-lxc_hosts/blob/stable/pike/vars/ubuntu-16.04.yml#L65-L67 to just "rm /etc/resolv.conf"01:48
cloudnullwhich would just delete the file allowing the script to create the new one.01:49
cloudnullhw_wutianwei_: mind sharing the contents of "/var/log/lxc-cache-prep-commands.log" on the host ?01:49
hw_wutianwei_cloudnull:  ok, wait a moment01:50
cloudnullthis is what we do in master01:52
cloudnullhttps://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/templates/prep-scripts/_container_sys_setup.sh.j2#L17-L2101:52
cloudnullwhich is basically delete the file and create it :)01:52
* cloudnull goes to make a patch01:52
hw_wutianwei_cloudnull: + mkdir -p /etc/ansible/facts.d/01:53
hw_wutianwei_+ '[' -a /etc/resolv.conf ']'01:53
hw_wutianwei_+ echo 'nameserver 10.0.3.1'01:53
hw_wutianwei_/usr/local/bin/cache-prep-commands.sh: line 8: /etc/resolv.conf: No such file or directory01:53
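The trace above matches what happens when /etc/resolv.conf inside the image is a resolvconf-style symlink into /run, whose target directory does not exist inside the cache chroot. A small reproduction of that failure mode (the paths are illustrative, not the real image layout):

```shell
workdir=$(mktemp -d)

# Simulate the image: resolv.conf is a symlink into a /run tree that
# does not exist inside the chroot, i.e. a dangling symlink.
ln -s "$workdir/run/resolvconf/resolv.conf" "$workdir/resolv.conf"

# Writing through the dangling symlink fails: the open() behind '>' cannot
# create the target because its parent directory is missing (ENOENT).
if echo 'nameserver 10.0.3.1' > "$workdir/resolv.conf" 2>/dev/null; then
    first_write=succeeded
else
    first_write=failed
fi
echo "write through symlink: $first_write"

# The fix discussed above: remove the file first, then write a fresh
# regular file, which the prep script can populate normally.
rm "$workdir/resolv.conf"
echo 'nameserver 10.0.3.1' > "$workdir/resolv.conf"
cat "$workdir/resolv.conf"
```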
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts stable/queens: Fix lxc cache prep resolvers  https://review.openstack.org/56593301:57
cloudnullhw_wutianwei_: ^01:58
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts stable/queens: Fix lxc cache prep resolvers  https://review.openstack.org/56593301:58
cloudnullso with that change it should do the right things02:00
cloudnulllooks like recent releases of the lxc image from upstream are breaking things02:00
*** cjloader has quit IRC02:07
*** cjloader has joined #openstack-ansible02:08
*** cjloader has quit IRC02:12
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible master: Convert rsyslog to an include_task with group_vars  https://review.openstack.org/55600202:23
cloudnullhw_wutianwei_: did that work for you ?02:23
*** lbragstad has joined #openstack-ansible02:24
hw_wutianwei_: cloudnull: I am testing. When I rm resolv.conf manually, it works.02:25
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible master: Convert rsyslog to an include_task with group_vars  https://review.openstack.org/55600202:25
cloudnullhw_wutianwei_: cool!02:26
*** mwarad has joined #openstack-ansible02:28
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-rabbitmq_server master: Tune-up the rabbitmq role for efficiency  https://review.openstack.org/52402802:43
*** thedini2 has joined #openstack-ansible02:44
*** thedini1 has quit IRC02:46
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-openstack_hosts master: Add IPv6 kernel module  https://review.openstack.org/56594002:49
*** thedini2 has quit IRC02:53
*** spsurya has joined #openstack-ansible02:54
*** dxiri has quit IRC03:09
*** mma has joined #openstack-ansible03:09
*** thedini2 has joined #openstack-ansible03:10
*** mma has quit IRC03:13
*** cjloader has joined #openstack-ansible03:14
*** cjloader has quit IRC03:18
*** thedini3 has joined #openstack-ansible03:21
*** nicolasbock has quit IRC03:21
*** thedini2 has quit IRC03:23
*** hamzy has joined #openstack-ansible03:34
*** udesale has joined #openstack-ansible03:36
*** cjloader has joined #openstack-ansible03:55
*** cjloader has quit IRC03:59
*** poopcat has quit IRC04:03
*** cjloader has joined #openstack-ansible04:13
*** cjloader has quit IRC04:17
*** vnogin has joined #openstack-ansible04:19
*** vnogin has quit IRC04:23
*** thedini3 has quit IRC04:25
*** lhinds has quit IRC04:29
*** portante has quit IRC04:29
*** lhinds has joined #openstack-ansible04:31
*** pabelanger has quit IRC04:31
*** portante has joined #openstack-ansible04:31
*** pabelanger has joined #openstack-ansible04:32
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible master: Automatically prune the inventory backup  https://review.openstack.org/56595004:35
*** Taseer has joined #openstack-ansible04:37
*** ianychoi_ has joined #openstack-ansible04:38
*** ianychoi has quit IRC04:41
*** gyee has quit IRC04:41
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible master: Properly configure cinder-volume containers  https://review.openstack.org/56595104:44
*** radeks_ has joined #openstack-ansible04:49
*** hachi has quit IRC04:59
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible stable/ocata: Backport nova placement api healthcheck from pike  https://review.openstack.org/56595505:01
openstackgerritMerged openstack/openstack-ansible-os_tempest master: Install os-testr  https://review.openstack.org/56477605:02
evrardjpgood morning05:04
olivierb-: hello everyone, has anything changed in pike recently that could explain my AIO deployment no longer working due to ansible-hardening? The first error I got was because I only have 1 DNS server configured on my system (which I worked around very easily), and now I have a second failure due to the NOPASSWD in my sudoers :-(. Looking at the latest changes I cannot see anything that would explain why it was working last week. Any clue?05:04
evrardjpmmm05:05
olivierb-morning evrardjp05:05
evrardjp: could you repeat the issue for me? It's early in the morning and I am not fully awake.05:06
evrardjp: everything in hardening can be skipped on a case-by-case basis05:06
evrardjp: NOPASSWD seems indeed like something hardening would trip on05:06
olivierb-: same for me, I think I'll go grab a cup of coffee soon ;-)05:06
evrardjp: right now bed is more tempting.05:07
olivierb-: the problem is that with the same config (1 DNS + NOPASSWD) everything was deploying perfectly last week05:07
olivierb-: today these 2 errors occurred without me being able to tell why05:07
evrardjpcould you paste the issue?05:07
olivierb-sure05:08
evrardjpdid you update your roles?05:08
olivierb-not that I know of05:08
evrardjpok05:08
*** markvoelker has quit IRC05:08
evrardjp: so you're thinking of an idempotency failure?05:08
olivierb-: no, because I even restarted my deployment from machines rebuilt from scratch :-(05:08
olivierb-: brand new installation05:09
olivierb-: and applying all the steps which were working previously, untouched05:09
olivierb-: I think I somehow inherited some change in conf/env/... from who knows where05:10
olivierb-DNS error seems related to ansible-hardening/tasks/rhel7stig/misc.yml:05:11
olivierb-DEBUG: [V-72281 - For systems using DNS resolution, at least two name servers must be configured.]05:11
olivierb-Two or more nameservers must be configured in /etc/resolv.conf.05:11
olivierb-Nameservers found: 105:11
evrardjpolivierb-: check the user variables.05:11
olivierb-can be easily tricked using a commented dummy line ;-)05:11
evrardjpin /etc/openstack_deploy/user_*.yml05:11
evrardjpthat's bad05:11
evrardjpyou can skip V-72281 for your case05:12
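As evrardjp notes, an individual STIG check can be skipped; since the ansible-hardening tasks carry their check IDs as tags, one way is a skip-tags run (a hedged sketch; the playbook name is illustrative and depends on how your deployment invokes the role):

```shell
openstack-ansible playbooks/security-hardening.yml --skip-tags V-72281
```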
olivierb-nothing weird/different in my user variables as far as I can tell. Did not see any difference from last week05:13
olivierb-anything I should grep for in particular05:13
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-os_nova master: Add option to define the qemu security driver  https://review.openstack.org/56595805:13
olivierb-: going for coffee right now, hopefully my mind will clear up a bit after this05:15
evrardjpolivierb-: then it's probably the re-roll of the machine05:18
openstackgerritKevin Carter (cloudnull) proposed openstack/ansible-hardening master: Add option to skip sudoers NOPASSWD check  https://review.openstack.org/56595905:22
evrardjpcloudnull: thanks for that patch05:27
cloudnull?05:27
evrardjpI will check if the docs gets changed05:27
evrardjpfor the skip sudoers with a var05:27
cloudnulloh, im just churning through the open bugs list and saw that .05:27
cloudnullis that an issue we're seeing elsewhere?05:27
*** armaan has joined #openstack-ansible05:28
evrardjp: someone this morning wanted to bypass it, just a few lines above.05:29
evrardjp:)05:29
evrardjpI thought this was reactive to the conversation :D05:29
olivierb-: evrardjp I do not think so, because I am using the same snapshots as last week to re-roll the "machines", so I suspect something else05:29
evrardjpare you using the same openstack-ansible version?05:30
olivierb-and yes many thanks cloudnull for the patch, will try it right in a few moments05:30
evrardjpwhich sha is that?05:30
evrardjpwell no cloudnull 's patch just makes it convenient for you to override05:31
evrardjpthere is a regression somewhere we need to find out05:31
evrardjp(instead of using the skip tags)05:31
evrardjpolivierb-: what SHA are you using?05:31
olivierb-: the openstack-ansible version is patched for pike, as 2.3.3.0 has an issue when running in reduced-connectivity mode05:31
olivierb-https://github.com/obourdon/ansible/commits/2.3-opennext-osa-pike05:32
evrardjpI am not sure what you mean05:33
openstackgerritChristian Zunker proposed openstack/ansible-hardening master: Use absolute path for aide binary in cronjob  https://review.openstack.org/56596005:33
evrardjpolivierb-: which version of openstack-ansible?05:33
olivierb-ansible-playbook 2.3.3.005:33
evrardjp: that's not answering my question05:34
olivierb-: again, as this is fixed, it has not changed since last week05:34
evrardjpyou're using ansible-hardening right?05:34
evrardjpwhich code version of ansible-hardening is that?05:34
evrardjpthere is no magic, something must have drifted05:35
olivierb-: frankly, if I use it it is not by choice; I inherit it from the instructions I use to deploy the AIO05:35
evrardjpeither it's in our code, or it's in your environment :)05:35
evrardjpolivierb-: hehe :)05:35
evrardjpit's an AIO with pike?05:36
olivierb-yes05:36
evrardjpis Pike restricted to a specific version?05:36
olivierb-of ansible yes 2.3.305:36
evrardjpor is that stable/pike of openstack-ansible ?05:36
evrardjpolivierb-: I don't care about ansible --version for now :)05:36
evrardjplet's do this:05:36
evrardjpcould you do a git show inside /opt/openstack-ansible ?05:37
olivierb-sure05:37
evrardjpor alternatively show me your instructions :)05:37
olivierb-https://gist.github.com/obourdon/aabd34a08acfed6c51cb7026191b086905:38
*** evin has quit IRC05:39
olivierb-https://gist.github.com/obourdon/a75cbd2a2bbbd98de30005557ae7d88605:39
olivierb-AFAIK nothing changed from this side since last week when everything was working fine05:40
olivierb-grabbing another cup of coffee, seems like I need it this morning sorry05:40
evrardjpolivierb-: haha05:42
evrardjpolivierb-: where do you fetch your  /opt/openstack-ansible/ ?05:43
evrardjpin your instructions?05:43
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible master: Configure cors for glance for additional usability  https://review.openstack.org/56596105:47
olivierb- /usr/local/bin/openstack-ansible and it should come from the scripts/bootstrap-ansible.sh05:50
openstackgerritKevin Carter (cloudnull) proposed openstack/ansible-hardening master: Add option to skip sudoers NOPASSWD check  https://review.openstack.org/56595905:51
cloudnullok, im off. take care all05:52
evrardjpcloudnull: have a good night :)05:52
evrardjpolivierb-: that's true but that's not answering my question05:52
olivierb-and I just ran find on my system, this is the only one05:52
evrardjpI am not searching for the binary named openstack-ansible05:52
evrardjp(script I mean)05:52
evrardjpI am looking for where the scripts/bootstrap-ansible.sh is05:53
evrardjpit's in a folder, that folder has some code in it05:53
evrardjpthat code is versioned.05:53
evrardjp: that's the openstack-ansible version we are discussing05:53
olivierb- /opt/openstack-ansible/scripts/bootstrap-ansible.sh from stable/pike + the slight modification to grab patched ansible 2.3.3 from the repo I pasted above: aka05:56
olivierb-https://raw.githubusercontent.com/obourdon/openstack-ansible/stable/pike/scripts/bootstrap-ansible.sh05:56
olivierb-should be synched with latest pike from official openstack-ansible repo05:56
*** mma has joined #openstack-ansible05:57
evrardjpolivierb-: ok so you're using a moving target stable/pike.05:58
evrardjpnot a fixed sha05:58
evrardjpthat could be the reason05:58
evrardjpplease check what you had before, because the last sha bump didn't change anything for sudoers.05:59
evrardjplatest sha bump of ansible-hardening inside openstack-ansible05:59
evrardjpI have to go05:59
*** DanyC has joined #openstack-ansible06:08
*** markvoelker has joined #openstack-ansible06:09
*** ppetit has joined #openstack-ansible06:11
*** cjloader has joined #openstack-ansible06:12
*** evin has joined #openstack-ansible06:12
*** cjloader has quit IRC06:17
*** DanyC has quit IRC06:24
openstackgerritlu.li proposed openstack/ansible-hardening master: Update the homepage url  https://review.openstack.org/56597506:26
*** markvoelker has quit IRC06:44
*** chigang__ has quit IRC06:44
*** eumel8 has joined #openstack-ansible06:52
*** pcaruana has joined #openstack-ansible06:53
*** threestrands has quit IRC06:58
*** ianychoi_ is now known as ianychoi07:00
*** radeks_ has quit IRC07:03
*** chigang__ has joined #openstack-ansible07:03
olivierb-can someone please explain this CI error to me ? http://logs.openstack.org/62/565762/1/check/openstack-ansible-deploy-aio_lxc-ubuntu-xenial/cee8e6d/logs/host/lxc-cache-prep-commands.log07:04
*** pmannidi has quit IRC07:04
olivierb-do I need to do something on my side ?07:04
olivierb-: from what I was told yesterday, it should have been a transient issue, but it has persisted overnight07:05
olivierb-thx07:05
openstackgerritMerged openstack/openstack-ansible-os_trove master: Deprecate auth_uri option  https://review.openstack.org/55837407:10
*** jbadiapa has joined #openstack-ansible07:16
*** epalper has joined #openstack-ansible07:17
*** mbuil has joined #openstack-ansible07:17
*** tosky has joined #openstack-ansible07:29
evrardjplooks like there is no resolv.conf file?07:34
*** gkadam has joined #openstack-ansible07:39
*** markvoelker has joined #openstack-ansible07:41
*** radeks has joined #openstack-ansible07:42
*** electrofelix has joined #openstack-ansible07:57
*** cjloader has joined #openstack-ansible07:58
*** DanyC has joined #openstack-ansible08:01
*** cjloader has quit IRC08:03
*** shardy has joined #openstack-ansible08:08
*** vnogin has joined #openstack-ansible08:13
*** markvoelker has quit IRC08:13
hwoarang: morning08:17
hwoarang: who is up for some reviews? :)08:22
*** radeks has quit IRC08:29
*** jwitko has quit IRC08:30
andymccr: hwoarang: what have you got?08:34
hwoarang: maaany things08:35
hwoarangthis one for example https://review.openstack.org/#/c/565347/08:35
hwoarangand this https://review.openstack.org/#/c/565754/08:36
hwoarangand a more interesting one https://review.openstack.org/#/c/562606/08:37
andymccri'll try get through those this morning08:38
hwoaranggracias08:39
odyssey4me: mmercer: yeah, gate-check-commit is brutal because it assumes you don't care about preserving anything on the host - if you do, better to walk yourself through the steps in the AIO guide08:41
evrardjpor the deploy guide even!08:44
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-os_tempest stable/queens: Install os-testr  https://review.openstack.org/56599908:46
evrardjphwoarang: can you clarify https://review.openstack.org/#/c/562606/16/vars/ubuntu-16.04.yml comment on setuptools?08:46
olivierb-: evrardjp yep, very strange, this "disappearing" of resolv.conf. Especially as some other patchset tests have passed successfully08:47
hwoarangevrardjp: on ubuntu, setuptools and pkg-resources are two different packages08:48
evrardjpand you're doing that temporarily until everything is onto the host as proper distro packages08:49
*** mwarad has quit IRC08:49
hwoarangin other distros it's one. so when you use setuptools from ubuntu, and you use pip, then pip tries to install pkg-resources from pip and there is no such wheel in our index. it's what you see here http://logs.openstack.org/06/562606/12/check/openstack-ansible-functional-ubuntu-xenial/8f2c9d7/job-output.txt.gz#_2018-05-02_12_30_40_50799008:49
hwoarangevrardjp: tempest is a bit special. not all distros have tempest packaged and the plugins make it harder to do it properly. i only want to minimize the stuff we install with pip08:50
evrardjp_tempest_requires_pip_packages that's for pip packages not for distro, so it should be the same?08:50
hwoarangpip packages on the host. these are only needed to prepare the host for tempest like creating networks etc. you dont need to use the pip packages for that08:51
evrardjpok maybe my brain is deficient here. Didn't we say do everything in venv for tempest, and all the required resources to setup with ansible should be using ansible_python_interpreter to that venv?08:51
*** vnogin has quit IRC08:51
evrardjpis that patch too big maybe?08:52
hwoarangit's too invasive to do it at once. plus venv creation happens in the middle of the role. so somehow you need to switch from host ansible to venv ansible08:53
odyssey4methe reason we're installing this stuff is to do the resource creation tasks... we could also just build a venv and have those tasks use the venv.... or another approach would be to use the ansible-runtime venv and delegate the tasks to localhost08:53
hwoarangin reality, all i want ot happen in that patch is to use python-shade instead of pip shade for opensuse08:53
evrardjpok.08:53
evrardjpcall a cat a cat :p08:53
hwoarangwell not only shade. anything that depends on 'cryptography'08:54
evrardjpoh yeah08:54
evrardjpI see08:54
evrardjpthat's a good first step08:54
hwoarangbecause distro cryptography and pip cryptography can't co-exist. so anything that brings pip cryptography in has to either be in venv or switch to distro package08:55
evrardjpodyssey4me: that's what I meant by "all the required resources" :)08:55
*** cjloader has joined #openstack-ansible08:55
evrardjpbut I think what you're proposing, including the delegation, would basically require setting up the ansible_python_interpreter08:55
evrardjpI am surprised they can't co-exist08:56
evrardjpbut let's talk about that later, cryptography has been a pain all along08:56
*** mbuil has quit IRC08:56
evrardjpI like that.08:56
hwoarangit's only for suse. they are built using different openssl versions and some symbols dont like each other08:56
odyssey4mejamespage Is there some idea of when we can expect to see UCA for Rocky show up? It looks like we're needing python-shade >= 1.9.0 and UCA for Queens is only giving us 1.7.008:56
hwoarangwe had a bug about that a few months ago08:56
evrardjp: it's just that it seems a distraction from the simplification effort08:56
evrardjpodyssey4me: good call :)08:57
evrardjpI guess everything is in 18.04 :p08:57
evrardjpomg so many things to do :/08:57
hwoarang: great, i was about to ask that...08:57
evrardjp: we are stretched so thin.08:57
hwoarang: about shade on ubuntu, i mean...08:57
odyssey4me18.04 is Queens too, so it's unlikely to make a difference08:57
evrardjpthat's true.08:58
jrosseri did a patch to put shade on the deploy host if that helps any? might be another axis to decouple that from whats going on with the target nodes08:58
evrardjpdistro packages everywhere and our life will be easier for those things :)08:59
odyssey4me: hwoarang I'm curious - where does the shade requirement come from... and more especially, v1.9.0 - upper constraints only has 1.27.1: https://github.com/openstack/requirements/blob/master/upper-constraints.txt#L25608:59
*** cjloader has quit IRC08:59
odyssey4mealso, 1.28.0 is the latest version... where does 1.8.0 come from?09:00
openstackgerritMerged openstack/openstack-ansible-openstack_hosts master: vars: ubuntu: Explicitly add virtualenv package  https://review.openstack.org/56534709:00
odyssey4meah, 28>8.... wow, u-c has a really, really old version then09:00
odyssey4meagh, not u-c... UCA....09:01
* odyssey4me gets more coffee09:01
* odyssey4me wonders how the distributions decide which version they package... given that u-c for shade is 1.27.109:02
odyssey4me(in queens)09:02
*** radeks has joined #openstack-ansible09:02
*** markvoelker has joined #openstack-ansible09:10
hwoarangodyssey4me: exactly09:11
hwoarangso on xenial you can't use the os_* modules :(09:11
hwoarangi wonder how they workaround that09:12
andymccr: could we not install python-shade from the uca repo?09:16
andymccrand/or get the uca team to update the version in the repo?09:16
andymccrok ignore me :P09:17
* andymccr gets another coffee09:19
odyssey4me: hwoarang yeah, so my personal preference is actually to get rid of our own module usage and use shade more... and also to use delegation to localhost and run_once for that resource stuff... we shouldn't need to install so much on the host09:20
odyssey4me: but I get that you're just trying to switch to packages here, rather than change how it works09:21
hwoarangodyssey4me: what do you mean 'our own module' ?09:21
odyssey4mehwoarang I mean that we're using these modules: https://github.com/openstack/openstack-ansible-plugins/tree/master/library09:21
odyssey4meand those are bringing in the extra requirements (glance client, keystone client, etc)09:21
*** vnogin has joined #openstack-ansible09:22
odyssey4meif we can switch to using the ansible shade-based modules, then we potentially don't need glance client, etc09:22
*** cedlerouge has joined #openstack-ansible09:22
odyssey4mealso, if we delegate the resource creation tasks to the deploy host then we won't need any of these packages installed09:25
odyssey4mesome might need to be added to the ansible runtime though09:25
hw_wutianwei_: odyssey4me: I hit an issue where some hosts' lxc containers didn't have python installed, so it would fail at "TASK [lxc_container_create : Drop container network file (interfaces)]"09:29
hwoarang: i didn't realize we had our own modules09:30
odyssey4me: hw_wutianwei_ that is very odd; given that python is installed during lxc host preparation, I'm not sure how you would have gotten into that situation09:31
nsinghevrardjp: can you help me with the configuration file for container and host setup.09:33
nsingh: How should i configure the /opt/openstack-ansible/inventory/env.d/masakary.yml file so that the masakarimonitor services get installed on the compute nodes?09:33
nsinghhttp://paste.openstack.org/show/720274/09:33
openstackgerritMerged openstack/openstack-ansible-tests master: common-tasks: Do not sync preconfigured /etc/pip.conf file  https://review.openstack.org/56575409:33
*** chigang__ has quit IRC09:34
openstackgerritOpenStack Proposal Bot proposed openstack/openstack-ansible-os_masakari master: Updated from OpenStack Ansible Tests  https://review.openstack.org/56600709:38
evin: Why are there no gnocchi_git_* entries in /playbooks/defaults/repo_packages/openstack_services.yml pinning gnocchi to a particular commit?09:42
Tahvok: cloudnull: I'm not sure how your review is fixing this problem: https://bugs.launchpad.net/openstack-ansible/+bug/176663609:42
openstackLaunchpad bug 1766636 in openstack-ansible "No need to restart rabbitmq if there is no version upgrade" [High,In progress] - Assigned to Kevin Carter (kevin-carter)09:42
Tahvok: Perhaps you put the wrong bug number?09:42
odyssey4me: evin: we pin the versions of the packages used so that a tagged release delivers a consistent result09:42
odyssey4me: those pins get updated for each release, generally around twice per month09:43
odyssey4me: you can override a pin to something else if you want to - see https://gist.github.com/odyssey4me/fc69b1eb68f250e37815246e37fd13f2 for an example of how to do it09:43
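The linked gist shows the general pattern: the pins live in variables, so user configuration can override them. A hedged sketch, writing to a scratch file here rather than the real /etc/openstack_deploy/user_variables.yml, and using a `nova_git_*` variable as the example since gnocchi has no pin of its own (verify the exact variable names against your release):

```shell
# Scratch stand-in for /etc/openstack_deploy/user_variables.yml
conf=$(mktemp)

cat >> "$conf" <<'EOF'
# Track a branch instead of the released SHA pin (example variable name
# following the <service>_git_* convention used in openstack_services.yml).
nova_git_install_branch: stable/queens
EOF

grep 'nova_git_install_branch' "$conf"
```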
*** markvoelker has quit IRC09:44
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-os_masakari master: Remove tests-repo-clone.sh  https://review.openstack.org/56601209:47
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-os_masakari master: Remove tests-repo-clone.sh  https://review.openstack.org/56601209:48
*** yolanda_ has joined #openstack-ansible09:49
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-os_masakari master: Only replace python when re-initializing the venv  https://review.openstack.org/56601509:50
*** yolanda has quit IRC09:51
*** cjloader has joined #openstack-ansible09:51
openstackgerritMerged openstack/openstack-ansible master: Update the output for "openstack floating" command  https://review.openstack.org/56572709:51
openstackgerritMerged openstack/openstack-ansible-openstack_hosts master: Remove jinja templating delimiters  https://review.openstack.org/56485609:54
*** yolanda has joined #openstack-ansible09:56
*** yolanda_ has quit IRC09:59
evrardjp: nsingh: I can help you tomorrow; today is quite busy... I am stuck with bad hardware, which is taking 100% of my time.10:00
nsinghevrardjp: Ohh ok no problem. thank you. Will ping you tomorrow. :)10:01
evrardjpthanks :)10:01
evrardjpso the role is working now?10:01
evrardjpand you're now integrating it?10:01
evrardjpI haven't seen patches though10:01
odyssey4meevrardjp are the base jobs up for review for the masakari role?10:02
*** yolanda has quit IRC10:02
evrardjpmmm good question10:02
evrardjpI guess it's waiting for me?10:02
evrardjp:p10:02
odyssey4meI specifically mean the jobs set out in project-config?10:02
odyssey4meI ask because I'm not seeing any jobs fire.10:02
evrardjpoh wait let me check taht10:02
odyssey4meI can do the zuul jobs for it, but I would have thought you'd have done the base jobs in project-config with the role creation?10:03
nsinghevrardjp: yes role is working now. I am working on few things.10:05
nsinghi will update the repo when all set from my side.10:07
openstackgerritAlbert Mikaelyan proposed openstack/openstack-ansible-rabbitmq_server master: Do not restart rabbitmq when no version is changed  https://review.openstack.org/56601710:08
*** yolanda has joined #openstack-ansible10:15
nsingh: evrardjp: oh, i can see https://github.com/openstack/openstack-ansible-os_masakari now. Thank you :) i didn't get any mail regarding this. BTW thank you so much10:15
odyssey4me: nsingh: no, thank YOU :)10:22
*** yolanda has quit IRC10:25
*** nicolasbock has joined #openstack-ansible10:31
*** yolanda has joined #openstack-ansible10:39
*** markvoelker has joined #openstack-ansible10:41
evrardjpodyssey4me: https://review.openstack.org/#/c/562619/10:41
evrardjpnsingh: it's because it's not done yet10:41
evrardjpI will talk with infra to make this happen today10:41
*** cjloader has quit IRC10:51
*** cjloader has joined #openstack-ansible10:52
hw_wutianwei_https://www.irccloud.com/pastebin/ooVBJwPz/10:53
hw_wutianwei_odyssey4me:  I am not sure10:54
*** cjloader has quit IRC10:56
*** ppetit has quit IRC11:01
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Move radosgw keystone config tasks to their own playbook  https://review.openstack.org/56570111:03
*** persia has joined #openstack-ansible11:06
*** vnogin has quit IRC11:08
*** mbuil has joined #openstack-ansible11:12
*** radeks has quit IRC11:14
*** markvoelker has quit IRC11:14
*** blinkiz has quit IRC11:19
*** geb has quit IRC11:22
olivierb-BTW evrardjp you also have the same CI issue I am having (see https://review.openstack.org/565029) therefore I conclude that I'll wait for CI to be fixed ;-)11:22
*** vnogin has joined #openstack-ansible11:24
evrardjpthanks for noticing olivierb- !11:26
*** geb has joined #openstack-ansible11:26
olivierb-: no pb, I was worried my fixes had broken something, but I could not relate them to the errors11:27
*** srihas has quit IRC11:28
*** srihas has joined #openstack-ansible11:40
evrardjpdid you find the root cause?11:41
evrardjpI haven't got the chance to look at it yet11:41
olivierb-: nope. I remember cloudnull telling me yesterday they were transient errors, but this "transient" seems quite recurrent now11:44
openstackgerritMatt Thompson proposed openstack/openstack-ansible-os_tempest stable/queens: Install os-testr  https://review.openstack.org/56599911:45
*** blinkiz has joined #openstack-ansible11:45
olivierb-I started creating a full test env on my dev system but currently have no time to go on with this  unfortunately11:46
blinkiz: On page https://docs.openstack.org/project-deploy-guide/openstack-ansible/queens/targethosts.html it says I should install the ntp package when preparing my hosts. But if I recall correctly, chronyd is installed later. Is the ntp package really needed?11:47
evrardjpblinkiz: you caught a documentation bug :)11:48
evrardjpI think this is fixed in master and may require backport11:48
evrardjpif not,let's make a patch to fix that :)11:49
*** ansmith has quit IRC11:52
blinkiz: evrardjp, where should I create/submit this patch? Point me in the right direction and I can probably find the solution :)11:52
hwoarangevrardjp: olivierb- maybe this https://review.openstack.org/#/c/565933/ fixes it?11:58
*** ppetit has joined #openstack-ansible11:59
olivierb-hwoarang seems like a good candidate indeed12:00
*** armaan has quit IRC12:00
olivierb-thx for pointing it out12:01
jrosserblinkiz: you probably want to patch this file https://github.com/openstack/openstack-ansible/blob/master/deploy-guide/source/targethosts-prepare.rst12:01
blinkizok!12:01
*** armaan has joined #openstack-ansible12:01
jrosserblinkiz: are you all set up for pushing patches to review.openstack.org?12:02
blinkizNo :P12:02
jrosserok :) start with this https://docs.openstack.org/infra/manual/developers.html12:03
olivierb-odyssey4me hwoarang seems like it also requires a backport from queens to pike  https://review.openstack.org/56572512:04
hwoarangrequires?12:04
hwoarangwhat requires that?12:05
*** jillr has quit IRC12:05
*** thedini3 has joined #openstack-ansible12:09
*** markvoelker has joined #openstack-ansible12:11
*** cjloader has joined #openstack-ansible12:12
olivierb-the fact that the same CI error occurs on the stable/pike branch as I suppose that openstack-ansible and openstack-ansible-lxc_hosts branches should be in sync12:13
olivierb-just a wild guess12:13
hwoarangolivierb-: because the url you posted above is for a different thing12:15
olivierb-yes I have submitted 2 backports of a master fix for queens and pike and both are failing in CI for the same reason12:16
openstackgerritMerged openstack/openstack-ansible-lxc_hosts stable/queens: Fix lxc cache prep resolvers  https://review.openstack.org/56593312:16
olivierb-the url above and https://review.openstack.org/#/c/565762/ are the 2 backports12:17
*** cjloader has quit IRC12:17
*** markvoelker has quit IRC12:20
*** markvoelker has joined #openstack-ansible12:20
olivierb-therefore my question about the backport of the fix for resolv.conf into pike12:22
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-lxc_hosts stable/pike: Fix lxc cache prep resolvers  https://review.openstack.org/56604612:23
odyssey4mehwoarang olivierb- ^ that's a backport to pike for the same issue, I guess there may also need to be one for Ocata, but I'm a little tied up at the moment12:24
olivierb-thx odyssey4me will try to submit for ocata during my forthcoming meeting if time permits12:25
odyssey4megreat, thanks olivierb-12:25
openstackgerritMarkos Chandras (hwoarang) proposed openstack/openstack-ansible-os_keystone master: Add support for using distribution packages for OpenStack services  https://review.openstack.org/56030812:31
openstackgerritOlivier Bourdon proposed openstack/openstack-ansible-lxc_hosts stable/ocata: Fix lxc cache prep resolvers  https://review.openstack.org/56605412:36
olivierb-odyssey4me hwoarang done https://review.openstack.org/56605412:37
odyssey4meawesome, thanks olivierb- :)12:38
olivierb-yavw12:39
olivierb-thx to you for the original patch12:39
olivierb-reviews and merge12:39
odyssey4mewell, to cloudnull actually :)12:39
*** ansmith has joined #openstack-ansible12:49
olivierb-on another subject, and besides the fact that AIO is a demo/poc/... environment, any good reason why bootstrap-aio.yml and more specifically tests/roles/bootstrap-host/tasks/prepare_data_disk.yml formats only in ext4 (no xfs or lvm), and furthermore, if /openstack and/or /var/lib/liblxc are/is already mounted, does not take it into account ?12:52
olivierb-Just wondering if it's worth submitting a patch for this12:53
olivierb-(or several patches more likely)12:53
*** armaan has quit IRC12:54
olivierb-odyssey4me yep, cloudnull will find out later when back from his well deserved night ;-)12:54
*** nicolasbock has quit IRC12:57
*** yolanda has quit IRC12:57
mnaserhmm12:58
mnaserit looks like ocata pip_install doesn't have the pinnings12:58
mnaserso it deploys pip 10.0.112:58
mnaserwhich fails the deployment/test of pip_install12:59
odyssey4memnaser oh bother - any chance you can figure out a patch based on the stuff done to pike onwards? I would guess that pip_install and repo_build might need some work.12:59
odyssey4mePerhaps also the ansible bootstrap.12:59
mnaserodyssey4me: do you know where the patches are that pinned us to 9.x ?12:59
mnaserhttp://logs.openstack.org/54/566054/1/check/openstack-ansible-functional-centos-7/ac19a2e/job-output.txt.gz#_2018-05-03_12_45_44_50811813:00
mnaserthis is the task that seems to upgrade it13:00
mnaseror actually its already at 10 at that point13:00
odyssey4meis that a role test?13:00
mnaseryes13:00
mnaserfor pip_install13:00
mnaser(for stable/ocata)13:01
mnaserhttps://review.openstack.org/#/c/566054/13:01
odyssey4merole tests use the constraints set here https://github.com/openstack/openstack-ansible-tests/blob/stable/ocata/test-vars.yml#L453 - and the global pins are used there13:01
mnaserinteresting, http://git.openstack.org/cgit/openstack/openstack-ansible/plain/global-requirement-pins.txt?h=stable/ocata shows it pinned at 9.0.113:02
mnaserthe thing is13:03
mnaser"Found existing installation: pip 10.0.1"13:03
odyssey4mehmm, and https://review.openstack.org/#/q/I32603fd34b60183607c6bd9653c36432cbe6b07a was merged too13:03
mnaserso when it reaches the install pip task13:03
mnaserit's already at pip 1013:03
odyssey4meyes, it will start at the later version because it's already on the host13:03
odyssey4mewe downgrade it13:03
mnaseroh i see13:03
openstackgerritMerged openstack/openstack-ansible-tests master: Do not clone role being tested as a dep outside openstack-ci  https://review.openstack.org/56525613:04
odyssey4melooks like https://review.openstack.org/#/q/If1b68fb21e0eb8f2f8c33a6bec952c2972e3e5e3 didn't go to ocata though13:04
mnaserodyssey4me: the parent commit of this is not the one that includes that fix13:04
mnaserdo we maybe we need to rebase?13:04
mgariepygood morning everyone13:05
mnasero/ mgariepy13:05
odyssey4memnaser eh? I'm not sure I understand what you mean13:06
mnaserhttps://review.openstack.org/#/c/566054/ -- if you click on (gitweb) next to parent, you notice that the parent is the zuul remove project patch13:06
mnaserwhich is a commit from 4 months ago13:06
mnaserwhich feels like the patch was pushed without a local rebase13:06
mnaserso all the stuff we did about pinning isn't being applied, because the patch is being tested on top of that commit only rather than tip of stable/ocata13:07
mnaserlet me try rebasing the patch to tip of stable/ocata and see what happens13:07
odyssey4memnaser but that's in a different role - that role test will use the tip of stable/ocata for pip_install13:07
mnaserodyssey4me: this is a patch for pip_install being tested and i assume what is being tested is the checked out version by zuul, correct?13:08
odyssey4memnaser but https://review.openstack.org/#/c/566054/ is a lxc_hosts role change, it's not a pip_install patch13:08
*** vakuznet has joined #openstack-ansible13:08
mnaserok13:09
mnaseri need to get to my coffee13:09
odyssey4meit will consume pip_install from the tip of stable/ocata which seems to have the required fix13:09
mnasersorry for that13:09
mnaseri thought both were pip_install fixes13:09
odyssey4meah, https://review.openstack.org/#/q/If1b68fb21e0eb8f2f8c33a6bec952c2972e3e5e3 isn't needed in stable/ocata because it's an identical tree - so I guess the ocata branch already has that patched merged into another review13:10
mnaserhttps://review.openstack.org/#/c/561577/ - patch from 2 weeks ago to the same branch with the same issue so this might have been lingering around for a while13:10
odyssey4meyeah, this may be a new issue - not sure13:10
*** thedini3 has quit IRC13:10
*** yolanda has joined #openstack-ansible13:11
*** cjloader has joined #openstack-ansible13:13
*** nicolasbock has joined #openstack-ansible13:14
*** cjloader has quit IRC13:17
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Move radosgw keystone config tasks to their own playbook  https://review.openstack.org/56570113:20
*** vakuznet has quit IRC13:21
*** ansmith has quit IRC13:26
*** vakuznet has joined #openstack-ansible13:27
*** eumel8 has quit IRC13:28
*** jwitko has joined #openstack-ansible13:34
openstackgerritMerged openstack/openstack-ansible master: Tidy registered variable names in rgw install  https://review.openstack.org/56545213:37
olivierb-odyssey4me mnaser I am a bit confused by what you wrote above, especially as the backport of the resolv.conf fix to ocata failed to pass CI13:37
vakuznetis this known issue: "repo-container-2aa1521e nginx[65763]: nginx: configuration file /etc/nginx/nginx.conf test failed" ?13:39
*** cmart has joined #openstack-ansible13:46
*** jwitko has quit IRC13:48
*** jwitko has joined #openstack-ansible13:48
openstackgerritMerged openstack/openstack-ansible-lxc_hosts stable/pike: Fix lxc cache prep resolvers  https://review.openstack.org/56604613:50
*** cjloader has joined #openstack-ansible13:58
*** cjloader has quit IRC13:59
*** cjloader has joined #openstack-ansible13:59
mnaservakuznet: i think i noticed that too when running repo-build14:02
mnasera re-run somehow cleaned it up..14:02
*** kstev has joined #openstack-ansible14:03
*** evin has quit IRC14:03
openstackgerritNicolas Bock proposed openstack/openstack-ansible-os_nova master: Define lxd.pool in nova.conf based on lxd_storage_pool  https://review.openstack.org/56589114:04
*** throwsb1 has joined #openstack-ansible14:04
vakuznetmnaser,  it was self inflicted issue. resolved.14:06
*** esberglu has joined #openstack-ansible14:06
idlemindgrr i broke my cloud and can't seem to get os-nova to complete ... keeps blowing up on conductor containers. i'll delete the container and re-run, then it fails on something else. i had to pin the galera_minor_distribution to 32, and now the newest container just failed the pip_install (install distro packages) step14:08
idlemind(stable/pike)14:08
openstackgerritJean-Philippe Evrard proposed openstack/openstack-ansible stable/queens: Update all SHAs for 17.0.4  https://review.openstack.org/56502914:09
idlemindhorizon kinda works but i can't browse to most of the pages. all the problems started when i moved from a single hosted vip to a multi-node cluster w/haproxy and keepalived. horizon is still trying to hit the old vip ip, probably because something in os_nova or another playbook further down the line in setup-everything has to complete to update the URLs it tries to hit, so i'm basically dead in the water14:09
idlemindbut that seems ancillary to not being able to rebuild the necessary containers to reliably get past os-nova14:10
idlemindis there a "base" container that i'd need to update or anything?14:15
idlemindthat get's cloned or something? it seems a new container is getting the wrong proxy IP after creation in YUM14:16
idlemindit's the ip of the old internal_lb_vip_addr14:17
*** yolanda has quit IRC14:19
*** evin has joined #openstack-ansible14:21
*** esberglu has quit IRC14:23
*** kstev1 has joined #openstack-ansible14:25
*** kstev has quit IRC14:27
idlemindi don't have that configured anywhere but it keeps popping up to bite at my heels14:28
*** dxiri has joined #openstack-ansible14:34
*** esberglu has joined #openstack-ansible14:34
*** vnogin has quit IRC14:34
*** pabelanger has quit IRC14:35
*** pabelanger has joined #openstack-ansible14:35
*** vnogin has joined #openstack-ansible14:35
*** esberglu_ has joined #openstack-ansible14:36
*** epalper has quit IRC14:38
*** vnogin has quit IRC14:40
*** esberglu has quit IRC14:40
*** esberglu_ is now known as esberglu14:40
*** _d34dh0r53_ is now known as d34dh0r5314:46
idlemindk, repo_cache is set up correctly on 99% of my stuff. if i delete a container and recreate it, it comes back to life w/the old internal_lb_vip_addr, not the new "right" one ... running setup-everything.yml w/a --limit of the new container doesn't do anything, just fails at pip_install install distro packages because it can't fetch packages (yum update at the command line fails too inside the container)14:47
*** spine55 has joined #openstack-ansible14:48
*** yolanda has joined #openstack-ansible14:53
*** klamath has joined #openstack-ansible14:57
openstackgerritMarkos Chandras (hwoarang) proposed openstack/openstack-ansible-os_glance master: defaults: Allow uwsgi to autoload required plugins.  https://review.openstack.org/56609214:58
openstackgerritMarkos Chandras (hwoarang) proposed openstack/openstack-ansible-os_glance master: Add support for using distribution packages for OpenStack services  https://review.openstack.org/56609314:58
*** vnogin has joined #openstack-ansible14:59
*** vnogin has quit IRC15:00
*** ppetit has quit IRC15:03
*** DanyC has quit IRC15:03
cloudnullTahvok: oops, yup added the wrong bug number there.15:05
evrardjpidlemind: hey, do you have an override?15:05
cloudnullgood catch :)15:05
evrardjpI mean a wrong override?15:05
idlemindevrardjp in user_variables you mean for repo_cache_proxy_url (or something close to that)15:05
idlemindevrardjp no not before i started, i've since added one but i can't seem to find which play applies that to an existing container15:05
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-openstack_hosts master: Add IPv6 kernel module  https://review.openstack.org/56594015:05
idlemind^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ IPv6!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! cloudnull i knew i loved you15:06
evrardjpwe do have a base image that we use for creating containers15:06
cloudnullidlemind: :)15:06
idlemindevrardjp is that the /var/lib/machines/<$$>15:06
cloudnullI pay that one day we'll have an IPv6 only cloud !15:07
cloudnull:D15:07
cloudnull**I pray that...15:07
*** ppetit has joined #openstack-ansible15:07
idlemindmine (when it's not broken) gives public ipv6 networks to ppl and routes via bgp15:08
evrardjpyou pay? :D15:08
evrardjpI think if that was really requested we could do it15:08
idlemindevrardjp it'd be something i'd be keen to work on ... i'm an ipv6 crazy15:08
evrardjpanother good way to get rid of the mac/ip generation in the inventory :D15:09
idlemindso lxc_container_create works and completes w/o issue on the trouble container ...15:09
evrardjpidlemind: so what's the issue?15:09
idlemindthe container's yum proxy is set to an old internal_lb_vip_addr15:09
idlemindeven after a delete / recreate15:09
evrardjpyou said basically changed the endpoint url (changing internal_lb_vip_address)15:09
idlemindand i can't see how or why that happens15:10
evrardjpyes so15:10
idlemindi can't run common-tasks/package-cache-proxy.yml it looks like it needs to be imported by another play?15:10
evrardjpif you changed the internal_lb_vip_address, you'd have to rebuild the repo server15:10
idlemindahhhhhhhhhhhhhhhhhhhhhhh15:10
evrardjpand then run all the playbooks15:10
idlemindso that's probably what i missed15:10
evrardjpdon't forget haproxy15:10
idlemindis that a delete / rebuild of the repo server?15:10
idlemindya haproxy completed15:11
idlemindi didn't think i'd have to redo the repo server15:11
evrardjpwait a sec15:11
evrardjpmaybe I am tired there15:11
evrardjpthe IP of the container shouldn't change in itself15:11
evrardjpI'd re-run the repo-server play15:11
idlemindya just internal_lb_vip_addr but maybe that depends on something15:11
*** weezS has joined #openstack-ansible15:11
evrardjpnot deleting the container15:12
evrardjpwell, I'd rerun all the plays15:12
evrardjpcheck if your inventory is alright before starting15:12
evrardjp(open your /etc/openstack_deploy/inventory.json , search for your old internal_lb_vip_address)15:12
idlemindahh k15:12
evrardjpjust in case15:13
evrardjpI am cautious :)15:13
evrardjpdon't edit it15:13
evrardjpbut if you see something there, then there is an issue :p15:13
cloudnullevrardjp: mind giving https://review.openstack.org/#/c/565950/ a review when you can ?15:13
idlemindk i don't have inventory.json in that path on my deployment host15:14
nicolasbockHow does this work with running a gerrit change set through CI?15:15
evrardjpcloudnull: when we tar we put a whole inventory into the file?15:15
nicolasbockDo I need to manually trigger a CI run?15:15
cloudnullevrardjp: yup15:15
cloudnullthe entire inventory file is added to the file15:15
evrardjpyeah I see15:15
evrardjpmmm15:15
evrardjplet me review real quick15:16
cloudnullso instead of saving all inventories forever, we can save the last 1515:16
odyssey4menicolasbock nope - what's the issue? you can look at http://zuul.openstack.org/status.html to see the status of any patches busy being tested15:16
openstackgerritGerman Eichberger proposed openstack/openstack-ansible-os_octavia stable/queens: Adds certificate generation  https://review.openstack.org/56584515:16
*** RandomTech has joined #openstack-ansible15:17
nicolasbockAh odyssey4me thanks!15:17
nicolasbockI see the change set in there15:17
nicolasbockI was only looking at gerrit15:17
RandomTechHello, when i ran the playbooks it created CEPH partitioned drives but didnt add them to the CEPH cluster. Any ideas on why this could happen or how to fix it15:17
odyssey4mecloudnull evrardjp perhaps we should rather create a timestamped tarfile and leave the pruning to the operator?15:18
cloudnullodyssey4me: all of the inventories within the tar file are timestamped.15:19
cloudnullso maybe we should just outline how to prune the inventory ?15:19
idlemindk, just for good measure i dropped and am now recreating the container using lxc_container_delete / create from the doc's i'll triple check the lb addr -> yum proxy url is still incorrectly set and that's not happening further down the line15:21
odyssey4mecloudnull tbh I didn't know the tarbal contained history... we should probably tell people about this somewhere ;)15:22
odyssey4me*tarball15:22
idlemindwhere is this amazing tarball we're speaking of?15:22
cloudnull# /etc/openstack_deploy/backup_openstack_inventory.tar15:22
cloudnullits a running backup of inventory15:23
idlemindthx15:23
cloudnullodyssey4me: I thought it was in the docs somewhere, maybe not...15:23
cloudnullconfirmed, its not...15:24
openstackgerritJean-Philippe Evrard proposed openstack/openstack-ansible master: Automatically prune the inventory backup  https://review.openstack.org/56595015:24
cloudnullwe should add a blurb about this in the deployer's guide.15:25
evrardjpcloudnull: it's crazy the tarfile module doesn't have a delete thing15:26
cloudnullyea, i was frustrated by that...15:26
*** vnogin has joined #openstack-ansible15:26
evrardjpyou basically have to unarchive and from tarinfo rebuild your archive... how insane is that? :p15:26
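The rebuild dance evrardjp describes — Python's tarfile module has no delete, so pruning means re-reading the members you want to keep and rewriting the archive from scratch — can be sketched in a few lines (function name and keep-count are illustrative, not the actual inventory code):

```python
import io
import tarfile


def prune_tar(path, keep=15):
    # tarfile has no delete operation, so read the members we want to
    # keep (the newest ones, at the end of the archive) into memory...
    with tarfile.open(path) as archive:
        survivors = [(m, archive.extractfile(m).read())
                     for m in archive.getmembers()[-keep:]]
    # ...then rewrite the archive from scratch ("w" truncates the file).
    with tarfile.open(path, "w") as archive:
        for member, data in survivors:
            archive.addfile(member, io.BytesIO(data))
```

Note the window between reading and rewriting: a second writer arriving mid-rebuild is exactly the race discussed below.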
odyssey4mecloudnull perhaps better in the operator's guide?15:26
evrardjpcloudnull: however I am afraid of what this does, as you have a race condition in code15:27
cloudnull?15:28
cloudnullI'm not seeing the race condition.15:31
cloudnullits a basic for loop, and the backup is called serially.15:31
evrardjpyes but you close later, and add to something already open15:32
cloudnullyou mean someone else could be writing to the inventory file at the same time we're pruning it ?15:33
cloudnull**inventory backup file15:33
cloudnull**someone else with a different shell15:34
cloudnullthe context manager only keeps the archive open for as long as it takes to prune and save the backup, so that shouldn't be possible within the inventory application15:35
cloudnullthough it could be possible if different users invoke the inventory at the exact same time.15:36
odyssey4mecloudnull perhaps there should be a lock or something to protect against that - I vaguely remember that there was a bug about multiple executions breaking the archive15:36
odyssey4menot sure how to properly handle it though - what do we do? if there's a lock, skip the archive, or wait?15:37
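One hedged answer to the skip-or-wait question: take a non-blocking flock before touching the shared archive and simply skip the prune when another run already holds it — the next invocation prunes instead. A minimal sketch (lock path and names are made up for illustration, and flock is Unix-only):

```python
import fcntl
import os


def prune_if_unlocked(lock_path, prune):
    # Try to take an exclusive lock before touching the shared archive.
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR)
    try:
        try:
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            return False  # another run holds the lock; skip this prune
        prune()
        return True
    finally:
        os.close(fd)  # closing the fd releases the lock
```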
odyssey4methis is kinda why I think that perhaps we should just do a timestamped archive and never modify the archive again15:37
odyssey4methat way we leave the clean up to the deployer, and don't have to deal with any races or locks or anything15:38
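odyssey4me's alternative — never mutate a shared archive, just drop a fresh timestamped file each time — could look something like this (the directory and filename pattern are assumptions, not what OSA actually does):

```python
import json
import os
import pathlib
import time


def backup_inventory(inventory, backup_dir="/etc/openstack_deploy/backups"):
    # Each run writes its own timestamped file (pid appended so two runs
    # in the same second can't collide); concurrent runs can't corrupt
    # each other, and pruning old files is left to the deployer.
    target = pathlib.Path(backup_dir)
    target.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    path = target / "openstack_inventory-{}-{}.json".format(stamp, os.getpid())
    path.write_text(json.dumps(inventory, indent=2))
    return path
```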
cloudnullodyssey4me: where should we put those archives ?15:42
olivierb-sorry to bother with the same matter again but this morning with evrardjp I tried to understand what had changed since last week for deploying AIO and dug a bit more during the day without any more success.15:42
olivierb-I have been using https://gist.github.com/obourdon/a75cbd2a2bbbd98de30005557ae7d886 for several weeks without any problem15:42
olivierb-but since I retried this week I get https://gist.github.com/obourdon/3242923baddc94313b881ebe457f447a15:44
openstackgerritMerged openstack/openstack-ansible-rabbitmq_server master: Do not restart rabbitmq when no version is changed  https://review.openstack.org/56601715:44
olivierb-which seems like stuff coming from openstack-ansible-hardening15:44
olivierb-right now I do not have a single clue on why this is now active whereas it was not last week and looking at the various commits in various repo did not help but I must have missed something15:45
*** spine55 has quit IRC15:45
idlemindok, so dropped my trouble container and recreated it with lxc-container-create and it doesn't have any proxy url's for yum configured at all15:46
olivierb-thanks for any idea/pointer/help15:46
idlemindthat must come from a later playbook?15:46
*** mma has quit IRC15:46
olivierb-and no I checked my user_*.yml stuff and everything else did not change either15:46
*** RandomTech has quit IRC15:47
olivierb-note that I am also using the stable/pike release which did not change much since last week either, hence my very puzzled state15:47
idlemindolivierb is there an actual failure above that?15:47
*** gyee has joined #openstack-ansible15:47
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible master: Add information about restoring inventory from backup  https://review.openstack.org/56609915:48
olivierb-idlemind stupid me, I thought these were the actual errors but it's related to the resolv.conf issue fixed by cloudnull earlier today. I should start reading logs more carefully or stop using -vvv in the 1st place, many thanks15:50
*** pcaruana has quit IRC15:50
*** hamzy has quit IRC15:56
evrardjpcloudnull: I meant the context manager is supposed to be doing things for the context, so opening and closing files, but here you have a subshell to manipulate the tarfile at the same time16:03
evrardjpwhich is kinda weird16:03
*** vnogin has quit IRC16:04
cloudnullright but its deleting not writing, so there should be no race there.16:04
evrardjpbut you are writing in the same context :)16:04
evrardjpthat's the thing16:04
cloudnullafter the delete has finished16:05
evrardjpyou want me to just update it real quick?16:05
cloudnullsure.16:05
*** yolanda has quit IRC16:06
odyssey4mecloudnull well, yeah - where indeed16:10
odyssey4meit strikes me that perhaps we're trying too hard to do too many things, and perhaps we should let operators do their own inventory backups16:11
rschulmanGiven OSA's use of tagged VLANs, do people generally use general mode for port switches?16:12
rschulmanEr... s/port switches/switch ports/16:13
jrosserodyssey4me: + on that, i have /etc/openstack-deploy symlinked out to its own git repo, and then manage "disaster" of that myself16:13
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible master: Add information about restoring inventory from backup  https://review.openstack.org/56609916:17
evrardjpodyssey4me: they will, when using static inventory :p16:19
evrardjpno choice!16:19
cloudnullrschulman: its totally up to you. top of rack is a choose your own adventure :)16:19
*** spine55 has joined #openstack-ansible16:19
odyssey4mejrosser yep, I wouldn't do it any other way16:19
rschulmancloudnull: Yeah, of course, but I'm seeking wizened advice. :)16:20
cloudnullhahaha16:20
evrardjpolivierb-: I am sure it's the image that changed :) I am pretty sure we haven't changed anything in ansible-hardening.16:20
jrosserodyssey4me: would be nice if there were an env var to point at the config dir rather than needing a symlink16:21
cloudnullrschulman: one sec .16:21
evrardjprschulman: osa is flexible, you just have to configure it for what you want :)16:21
evrardjpvlans work :p16:21
idlemindugh stranger still. in stable/pike i run os-nova-install on the newly created conductor container ... it sets the general proxy in yum.conf to the right ip (172.29.236.8) which is the new internal_lb_vip_address ... in /etc/yum.repos.d/CentOS-Base.repo it has the old IP (172.29.236.11) ... no clue where it's getting that16:21
olivierb-evrardjp nevermind, I must read logs more carefully, these are debug statements and as written at the very end I thought the error came from there but it is the resolv.conf issue which struck again16:22
olivierb-and I also must stop using -vvv as my default ;-)16:22
olivierb-too much info kills the info16:22
odyssey4meidlemind the centos repo was copied directly from the host16:22
idlemindahh look at you16:23
idlemindit's incorrectly set there16:23
idlemindmanually editing that is probably not suggested ... what task would normally update that in stable/pike? openstack-hosts?16:23
olivierb-anyways it does not impact the fact that some checks (like the number of nameserver entries in /etc/resolv.conf) could be a little bit more "strict" in the sense that you cannot fool them with a commented-out dummy entry #nameserver XXXXX for instance ;-)16:23
idlemindor lxc-hosts-setup?16:24
rschulmanevrardjp: Yeah, I know. Almost TOO flexible for a first timer. :)16:25
odyssey4meidlemind it's only copied in when the base cache is created - I don't think there's anything touching it from then on - so you can fix it using ansible with something like: ansible -m lineinfile -a "...insert args here..." all16:25
*** mma has joined #openstack-ansible16:25
odyssey4meor write a little playbook or something16:25
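For the record, the ad-hoc lineinfile fix odyssey4me suggests amounts to a regex substitution over each repo file; in plain Python the edit looks roughly like this (the two IPs are the ones from the discussion, the proxy port is an assumption):

```python
import re


def fix_proxy(repo_text, new_vip="172.29.236.8"):
    # Rewrite any existing proxy= line to point at the new
    # internal_lb_vip_address -- roughly what an ad-hoc
    # `ansible -m lineinfile` run against the containers would do.
    return re.sub(r"(?m)^proxy=http://\S+",
                  "proxy=http://{}:3142".format(new_vip),
                  repo_text)
```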
idlemindodyssey4me so i "could" drop the host completely, reimage it w/centos and start building it back up from there ... or just edit the repo files manually (with ansible or vi)16:27
*** ppetit has quit IRC16:30
odyssey4meidlemind yeah, or you could wipe the cache to force it being rebuilt - but then you'd need to delete the containers and rebuild them16:31
idlemindwipe the cache?16:33
idlemindyou mean drop the repo server container and rebuil it?16:34
idlemindand then drop and rebuild all of my containers?16:34
odyssey4meidlemind no, the lxc base cache - where/what it is depends on the lxc back-end you're using16:34
idlemindahh16:34
idlemindcat /var/lib/machines/centos-7-amd64/etc/yum.repos.d/CentOS-Base.repo ... for me confirms the bad proxy URL16:36
idlemind[root@hc2 ~]# cat /etc/yum.repos.d/CentOS-Base.repo ... also has the bad proxy url ... but as we said the only way to fix that is manually (automated via custom ansible) or format the host and start fresh right?16:37
idlemindno play runs and updates repo files on a base host after the initial pass?16:38
*** DanyC has joined #openstack-ansible16:38
odyssey4meyep, OSA can't fix what you broke ;)16:38
odyssey4methe centos base repo on the host doesn't get touched by OSA as far as I know16:39
* SamYaple says nothing16:39
odyssey4meI haven't really worked with CentOS much.16:39
SamYapleim gonna use that line odyssey4me16:39
*** DanyC_ has joined #openstack-ansible16:39
SamYaplei lied i said something16:39
odyssey4melol16:39
Tahvokodyssey4me, cloudnull: that was fast. Thanks for reviewing this16:40
TahvokIs it possible to submit reviews for ocata/pike/queens branches for this bug? https://bugs.launchpad.net/openstack-ansible/+bug/176663616:41
openstackLaunchpad bug 1766636 in openstack-ansible "No need to restart rabbitmq if there is no version upgrade" [High,Fix released] - Assigned to Albert Mikaelyan (tahvok)16:41
evrardjpSamYaple: :)16:41
evrardjpTahvok: fine for me to bp16:41
idlemindodyssey4me it must at some point ... i didn't go into it and set it to point to the proxy :( :( if anything it seems to set it and never revisit it ... so if you ever change your internal_lb_vip addr this happens16:42
*** DanyC has quit IRC16:42
TahvokPlease remind me how do I submit reviews for other branches?16:43
idlemindit seems to update yum.conf just not the yum.repos.d stuff16:43
idlemindi'm trying to see where that would be added now but i'm probably not as fast as you vets16:44
odyssey4meTahvok the same way as any other patches - checkout the branch, prepare the patch, then git review... if it's a backport, then use 'git review -X <review #>' to cherry-pick it16:45
rschulmancloudnull: Did you ever follow up from that "one sec"? I don't see it.16:48
TahvokSo I first do my changes, then use git review -X I523be647b5e82e6f088428bf2db24dc4cd2cfb53 ?16:48
odyssey4meTahvok nope, git review -X <review number, not change-id> will cherry-pick the change... all changes should go into master first, so they should usually be cherry-picked down16:50
odyssey4meif it needs changes, then do them after the cherry-pick16:50
*** Taseer has quit IRC16:51
odyssey4mefor that change id, the review number is 56601716:51
*** Taseer has joined #openstack-ansible16:51
odyssey4meyou can also just use the gerrit interface and click the cherry-pick button, then select the branch16:51
odyssey4methat assumes a clean pick is possible though16:52
odyssey4mewhich is very likely in this case16:52
TahvokLearning new stuff16:52
TahvokWill try both actually :)16:52
*** yolanda has joined #openstack-ansible16:53
openstackgerritAlbert Mikaelyan proposed openstack/openstack-ansible-rabbitmq_server stable/queens: Do not restart rabbitmq when no version is changed  https://review.openstack.org/56611616:55
Tahvokodyssey4me: should I just checkout the remotes/origin/stable/ocata branch, or actually create my own out of that branch?16:59
*** yolanda_ has joined #openstack-ansible17:01
idlemindis it possible that the yum.repos.d gets edited here? https://github.com/openstack/openstack-ansible-openstack_hosts/blob/stable/pike/tasks/openstack_host_install_yum.yml ... i don't see anything about setting the proxy specifically but maybe it's a knock-on effect?17:02
*** mbuil has quit IRC17:02
*** gkadam has quit IRC17:02
*** shardy has quit IRC17:03
odyssey4meTahvok I typically do something like: git checkout origin/stable/ocata17:03
xgerman_wonder if we have seen that already: http://logs.openstack.org/45/565845/2/check/openstack-ansible-functional-ubuntu-xenial/1fb54e5/job-output.txt.gz#_2018-05-03_16_09_15_95519017:03
*** hamzy has joined #openstack-ansible17:03
odyssey4mewell, typically I do this: git fetch --all; git checkout origin/stable/ocata17:03
*** yolanda has quit IRC17:04
SamYaple`git pull` on the stable/branch and fork. easy17:05
*** yolanda has joined #openstack-ansible17:05
TahvokSo after git review -X - I still need to upload my review with git review?17:05
odyssey4meidlemind it looks like it's designed to only add repositories that are missing, but I might be wrong - you'd have to check the playbook execution output or look at it more carefully... I'm a little tied up right now, so can't really think it through properly17:05
odyssey4meTahvok yep, git review -X just cherry-picks to your local repo - git review uploads it17:06
idlemindya as a test i dropped the proxy line from the centos-base.repo file and i'm rerunning setup-everything limited to that host to see if it shows back up ... maybe it's a knock-on effect of one of the yum commands17:06
TahvokOk, so that's what I was missing, and it was not a wrong branch :)17:06
idlemindi also might just drop this hypervisor and start fresh to see if it was a fluke from an older code set that brought those proxy lines into the yum.repos.d/* stuff17:06
openstackgerritAlbert Mikaelyan proposed openstack/openstack-ansible-rabbitmq_server stable/pike: Do not restart rabbitmq when no version is changed  https://review.openstack.org/56611917:07
TahvokNice, worked :)17:07
idlemindok that was fast, my first test was openstack-ansible openstack-hosts-setup.yml --limit hc217:07
idlemindthe repo's i deleted the proxy from17:07
idlemindcame back with the correct line17:07
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible master: Provide an example for switch port configurations  https://review.openstack.org/56612017:07
idlemind(updated to the new proxy ip)17:07
idlemindthe ones i left ... stayed as the old (wrong) ip17:07
*** yolanda_ has quit IRC17:08
idlemindso something we're doing in that play is causing the proxy ip to be set in the repo only if it doesn't exist17:08
cloudnullrschulman: rschulman ^17:08
openstackgerritAlbert Mikaelyan proposed openstack/openstack-ansible-rabbitmq_server stable/ocata: Do not restart rabbitmq when no version is changed  https://review.openstack.org/56612117:08
idlemindand not updating it if it does exist17:08
Tahvokodyssey4me: thanks a lot! all reviews uploaded17:08
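The backport flow odyssey4me and Tahvok just walked through can be sketched end to end. A real backport targets Gerrit at review.openstack.org; the demo below fakes the remote with a throwaway local repo so the git steps can actually run, and shows the git-review steps only as comments:

```shell
# Sketch of the stable-branch backport workflow from the discussion above.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=a@b -c user.name=a \
    commit -q --allow-empty -m "initial"
git -C "$tmp/upstream" branch stable/ocata
git clone -q "$tmp/upstream" "$tmp/work"
cd "$tmp/work"
# 1. Refresh every remote ref, then base your work on the stable branch:
git fetch --all -q
git checkout -q origin/stable/ocata
# 2. With git-review installed you would then run (not executed here):
#      git review -X <change-number>   # cherry-pick the change locally
#      git review                      # upload the backport to Gerrit
git log --oneline -1
```

Note the split Tahvok hit: `git review -X` only cherry-picks into the local repo; a second plain `git review` is what uploads the result.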
idlemindcloudnull how can i view that i'd be curious to review on it17:09
idlemind(do i have to cherry-pick the patch or cna i see it right in gerrit)17:09
odyssey4meTahvok great, thank you :)17:10
*** DanyC_ has quit IRC17:11
odyssey4meidlemind what branch are you using?17:11
odyssey4mexgerman_ the failure is further up: http://logs.openstack.org/45/565845/2/check/openstack-ansible-functional-ubuntu-xenial/1fb54e5/job-output.txt.gz#_2018-05-03_16_09_15_48382217:12
xgerman_queens17:12
odyssey4mexgerman_ also, you might find looking at the ara-report easier to diagnose issues with: http://logs.openstack.org/45/565845/2/check/openstack-ansible-functional-ubuntu-xenial/1fb54e5/logs/ara-report/17:13
xgerman_thx17:15
rschulmancloudnull: I will study with great interest. Thank you!17:18
*** mma has quit IRC17:24
*** mma has joined #openstack-ansible17:24
*** mma has quit IRC17:29
*** DanyC has joined #openstack-ansible17:30
*** mma has joined #openstack-ansible17:33
mnaseris there a way for repo build to build one specific venv?17:39
mnaserex: changed the cinder hash commit and dont want to wait ages17:39
mnaser:P17:39
*** mma has quit IRC17:41
*** mma has joined #openstack-ansible17:42
*** spsurya has quit IRC17:42
*** mma has quit IRC17:46
idlemindodyssey4me pike17:47
*** udesale has quit IRC17:50
idlemindso cloudnull your switches ... is that to cisco switches that support cross chassis port channels (or a switch stack)?17:55
*** poopcat has joined #openstack-ansible17:59
idlemindcloudnull https://imgur.com/a/ovC9oSa here is basically what i did. i made my hosts redundant with spanning-tree (#ghettobutworksreliably) this kept me away from a vpc or stackable reliant position without having to go routed completely. i am thinking of routed racks which was presented at one of the conferences by the scaling group to create multiple spanning-tree domains so a spanning-tree event won't take an18:04
idlemindentire dc down. a start of that was bgp based automatic provider networks for me18:04
jrosserhmm i would be careful with using stp like that for pseudo HA18:09
idlemindjrosser why? you no trust stp?18:09
jrosseri have no issues with nxos/vpc in my control plane18:09
jrosserhand run l3 routed for my racks of compute18:09
*** vakuznet has quit IRC18:10
jrosserif you want l3 with vlan tenant networks look at neutron segmented networks18:10
idlemindjrosser vpc related issues and single point of failure during upgrades in stackable models is why i'd avoid those technologies ... i've seen vpc fail and drop connections by shutting the newly crowned master's port-channels too often because its state is inconsistent for some oddball reason ... stp is trusty rusty for me18:10
*** vakuznet has joined #openstack-ansible18:10
idlemindshe might be old and seem kinda ugly but it's simple, supported everywhere and just works18:11
jrosserotherwise do vxlan and l3 underlay net18:11
TahvokHow can I check what nova version I actually use when using osa?18:12
idlemindno specifically for routed lans, the customer networks would be vxlan'd but you bring up a good point, if i want those provider side networks, vxlan on an underlay is probably the better long term18:12
idlemindbut then you get into the undercloud to run your overcloud discussion18:12
jrosser?18:12
jrosseroverlay network18:12
jrossernot cloud18:12
idlemindjrosser right i meant the undercloud to provide the vxlan to the overcloud unless you wanted to implement and manage vxlan manually (or another way, aci, etc)18:13
jrosserim lost now :) anyway can talk more later if you like18:14
idlemindjust spit ballin'18:14
idlemindgood stuff to talk about but probably way better to do over a glass of beer and a napkin if i can ever get out to one of the conferences18:14
idlemindonto other things ... odyssey4me the item of interest seems to be "yum-config-manager" it lists the "proxy" value for repos when they don't have it in the configuration file (because it's learned from yum.conf) so when yum-config-manager steps through all repos it updates it from yum.conf but only if the repo does not already have a setting18:15
idlemindin places with an incorrect "proxy" value we'd have to check for and update it18:16
idlemindso maybe something like: when: proxyfromyum-config-manager-gather-repos != internal_lb_vip_addr ... set new proxy w/yum-config-manager ...18:17
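A minimal sketch of the fix idlemind is describing: force-set `proxy=` rather than only adding it when absent. The repo file and both proxy addresses below are made up for the demo, which runs against a scratch file; the real task would loop over `/etc/yum.repos.d/*.repo`:

```shell
# Demo: replace an existing proxy= line, or append one when missing --
# instead of yum-config-manager's add-only-if-absent behaviour.
set -e
tmp=$(mktemp -d)
cat > "$tmp/demo.repo" <<'EOF'
[base]
name=Demo repo
baseurl=http://mirror.example.com/centos/7/os/x86_64/
proxy=http://10.0.0.1:3128
EOF
new_proxy="http://172.29.236.100:3128"   # e.g. derived from internal_lb_vip_address
if grep -q '^proxy=' "$tmp/demo.repo"; then
    sed -i "s|^proxy=.*|proxy=${new_proxy}|" "$tmp/demo.repo"
else
    printf 'proxy=%s\n' "$new_proxy" >> "$tmp/demo.repo"
fi
grep '^proxy=' "$tmp/demo.repo"
```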
openstackgerritMerged openstack/openstack-ansible-rabbitmq_server stable/queens: Do not restart rabbitmq when no version is changed  https://review.openstack.org/56611618:19
idlemindodyssey4me https://github.com/openstack/openstack-ansible-openstack_hosts/blob/stable/pike/tasks/openstack_host_install_yum.yml#L84-L8518:19
*** hachi has joined #openstack-ansible18:20
idlemindthat's how it seems to be getting "all" repo's18:20
*** yolanda has quit IRC18:20
odyssey4memnaser if you only change one requirement it'll only build the new things - it will take time though, but not as much time as the first... unfortunately pip is not very fast at processing through files even if it's already got them18:22
idlemindhttp://paste.openstack.org/show/720314/18:22
idlemindnot sure how we'd parse the proxy out of that most correctly for comparison / updating18:22
odyssey4meTahvok you can activate the venv and do 'pip freeze' - or just look at which wheels are in the os-releases/<your tag> folder18:23
idlemindalso, probably not the best to do it in the "openstack-hosts" play if it effectively causes all other containers to need to be rebuilt to be fixed18:23
openstackgerritGerman Eichberger proposed openstack/openstack-ansible-os_octavia stable/queens: Adds certificate generation  https://review.openstack.org/56584518:24
odyssey4meidlemind tbh how centos works and best practises there are a mystery to me... mhayden used to be a key maintainer there, but perhaps mnaser can pick up where he left off to have an intelligent discussion with you about it... :)18:24
Tahvokodyssey4me: it says 15.1.1.dev4318:24
odyssey4meTahvok and there you have it18:24
idlemindodyssey4me thx major left osa?18:24
idlemind#cry18:24
* mnaser will read buffer in a second18:24
Tahvokodyssey4me: but that does not mean 15.1.1 as we found at #openstack-nova channel...18:25
TahvokFirst, 15.1.1 was released only yesterday18:25
idlemindoh hey he works for redhat now lol18:25
TahvokAnd secondly, I do not have the patches that were released with it...18:25
odyssey4meTahvok no, because we build from git source and pin at a SHA, the version will come from pbr and be determined as 15.1.1 + 43 commits on top of that18:26
odyssey4meTahvok just because it was released yesterday, doesn't mean the sha they're using is not quite old ;)18:26
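Why the version reads `15.1.1.dev43`: pbr derives versions from git, counting commits past the most recent tag. A rough stand-in using plain `git describe` on a throwaway repo (pbr's exact numbering scheme differs, but the tag-plus-N-commits idea is the same):

```shell
# Build a scratch repo: tag 15.1.1, then add 3 empty commits on top of it.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
commit() { git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "$1"; }
commit "release"
git tag 15.1.1
commit "fix 1"; commit "fix 2"; commit "fix 3"
# Reports 3 commits past the 15.1.1 tag -- analogous to pbr's 15.1.1.dev3:
git describe --tags
```

So a dev version tells you which tag the pinned SHA sits after, not which release's fixes it contains.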
TahvokI'm using :)18:26
TahvokAnyway, can I request a sha bump?18:26
idlemindk chow time bb in a bit odyssey4me and mnaser18:26
TahvokIt fixes at least 2 important bugs with 'no bootable device' issue and 'live migration not working with scsi devices'18:27
*** mma has joined #openstack-ansible18:27
*** hachi has quit IRC18:27
odyssey4meTahvok if you need a later SHA you can override it yourself temporarily - unless the fix you need/want would be useful to most others - then perhaps we should do a general bump... bumping a single SHA doesn't always work so nicely because of changing stuff in upper constraints18:27
Tahvokodyssey4me: the bugs are probably known to other ceph users...18:28
TahvokSo I would say it would be useful to others..18:28
mnaseridlemind: let me know what you're looking to accomplish in centos world and id be happy to help18:29
TahvokI can bump the sha for me, but I think others might benefit from this release18:29
odyssey4meah ok... then see if just bumping the one sha in your test env works - if it does, push a patch up18:29
TahvokDid we have any other plans for bumping shas in the coming weeks?18:29
odyssey4megiven that we just bumped them, nope18:29
odyssey4meit's always around mid month and end of month18:30
odyssey4meTahvok see https://docs.openstack.org/openstack-ansible/latest/contributor/periodic-work.html#releasing18:30
odyssey4medon't use that dependency update script though, it'll bump everything18:31
TahvokSo there should be a release in around 2 weeks?18:31
TahvokOr I don't understand something...18:31
*** mma has quit IRC18:32
odyssey4meTahvok yes, the current head of the branch will be released every two weeks...18:32
odyssey4mebetween each general bump we can do small bumps if we find crucial bugs, like the ones you're mentioning18:33
*** yolanda has joined #openstack-ansible18:35
TahvokSo just to make sure. I update the file  playbooks/defaults/repo_packages/openstack_services.yml with the new sha I need18:36
TahvokTest it, and then submit a review, or request here at the channel?18:36
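The override Tahvok is describing would look roughly like this in `playbooks/defaults/repo_packages/openstack_services.yml` (variable names per the Pike-era layout; the SHA is a placeholder, not a real commit):

```yaml
nova_git_repo: https://git.openstack.org/openstack/nova
nova_git_install_branch: <desired-commit-sha>  # replace with the SHA you tested
nova_git_project_group: nova_all
```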
jrosserif theres any ceph folks around this i'd appreciate some feedback on this https://review.openstack.org/#/c/565701/18:38
*** mma has joined #openstack-ansible18:39
Tahvokjrosser: is it the ceph installation role? I'm not familiar with this...18:40
TahvokWe use only the ceph client in our case, and install ceph with ceph-ansible18:41
jrosserok so i'm doing the same thing18:41
jrosseri've been working towards getting the osa playbooks to consume a list of externally provisioned rgw18:42
jrosserand only do the keystone and haproxy parts, rather than actually deploy the rgw itself18:42
odyssey4meTahvok yes, you can submit the review - describe in the commit msg why it's important, preferably with the appropriate bugs/reviews referenced18:43
Tahvokodyssey4me: I'll have a production deploy on sunday with this sha bump, so I'll test it then and submit a review18:44
openstackgerritMarkos Chandras (hwoarang) proposed openstack/openstack-ansible-os_keystone master: Add support for using distribution packages for OpenStack services  https://review.openstack.org/56030818:45
odyssey4mecores - can I get some reviews for https://review.openstack.org/565999 please to help finalise the fixes to tempest18:53
odyssey4meit's a backport of the master patch, so easy peasey18:53
cloudnullidlemind: sorry was off eating :)18:57
cloudnullthat config was from what we used to use in the OSIC18:58
cloudnullthose were all cisco 9k switches.18:58
mgariepyodyssey4me, done.19:00
*** mma has quit IRC19:01
*** radeks has joined #openstack-ansible19:03
*** openstackgerrit has quit IRC19:05
*** mma has joined #openstack-ansible19:05
idlemindcloudnull so likely vpc then19:09
*** hamza21 has joined #openstack-ansible19:18
*** mma has quit IRC19:23
idlemindcloudnull no food for you! jk i saw you posting changes at like 1am, maybe a nap too?19:26
*** aludwar1 has joined #openstack-ansible19:50
*** mma has joined #openstack-ansible19:53
cloudnullidlemind: yup, a nap would be nice :)19:54
*** aludwar1 has quit IRC19:57
*** mma has quit IRC19:58
*** hamza21 has quit IRC20:14
*** radeks has quit IRC20:14
throwsb1I am running a fresh install of OSA with 3 infra, 2 storage with LVM, and 2 computes.  I am running into issues of vm creation failing due to storage creation timing out.  It looks like it round robins between storage nodes.  This seems to cause errors from hitting cinder timeouts.20:15
throwsb1Has anyone run an environment with 2 storage nodes and had the same issues?  Is it better to not have 2 storage nodes?  If so, is there an easy or recommended way to remove one?20:16
throwsb1Seems like on creation, it times out due to cluster communication.20:17
*** aludwar1 has joined #openstack-ansible20:22
*** aludwar1 has quit IRC20:23
*** mma has joined #openstack-ansible20:39
*** evin has quit IRC20:42
idlemindthrowsb1 yes w/lvm you'll want to adjust timers20:44
idlemindi have a bug on it i think w/the correct adjustments20:44
idlemindthrowsb1 http://paste.openstack.org/show/720323/20:47
idlemind^^ in user_variables.yml20:47
idlemindyou'll see all deploys for disks take at least 30 seconds in horizon20:48
idlemindw/that config20:48
idlemindand it will try 120 + 1 times20:48
idlemindi did such a long window because i had the 11gb or so windows image ... that always took forever to clone 3 or 4 of 'em20:49
idlemindw/lvm20:49
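For reference, the paste above most likely contains overrides of this shape in `user_variables.yml`. The option names are nova's real block-device retry knobs; the values here are inferred from the discussion (30 s interval, 120 retries), not copied from the paste:

```yaml
nova_nova_conf_overrides:
  DEFAULT:
    # poll every 30 s, up to 120 times, while cinder clones the volume
    block_device_allocate_retries: 120
    block_device_allocate_retries_interval: 30
```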
*** vakuznet has quit IRC20:49
*** hamzy has quit IRC20:53
*** openstackgerrit has joined #openstack-ansible21:00
openstackgerritMarkos Chandras (hwoarang) proposed openstack/openstack-ansible-os_glance master: Add support for using distribution packages for OpenStack services  https://review.openstack.org/56609321:00
*** kstev1 has quit IRC21:10
*** kstev has joined #openstack-ansible21:16
*** esberglu has quit IRC21:26
throwsb1idlemind: Thanks! I will give it a try.21:26
*** jwitko_ has joined #openstack-ansible21:32
*** jwitko has quit IRC21:36
*** jwitko_ has quit IRC21:37
*** esberglu has joined #openstack-ansible21:39
*** kstev1 has joined #openstack-ansible21:51
*** kstev has quit IRC21:52
throwsb1idlemind: I put it in place and still failing.  I only pushed to the compute nodes, does it need to go to others as well?21:57
throwsb1I did restart the nova services on compute nodes21:58
*** jwitko has joined #openstack-ansible22:01
idlemindthrowsb1 you'll have to re-run the os-nova and possibly os-cinder plays at least22:07
idlemind(w/osa you shouldn't be editing config files direct)22:08
throwsb1I just deployed the user vars and deployed os-nova-install.yml22:08
idlemindya run os-cinder and possibly os-horizon as well just to be safe22:09
throwsb1ok22:09
idlemindit's been a minute since i've hit it22:09
throwsb1idlemind: is it bad to just rerun the 3 playbooks, setup-hosts, setup-infrastructure, and setup-openstack?  I am still a nub and still learning.22:12
idlemindthrowsb1 no it "shouldn't" hurt anything at all22:13
idlemindi think recommendation from the wizards is just to cyclically run them for the hell of it22:14
throwsb1ok.  thanks.22:14
idlemind* is just to NOT22:14
idlemindif you got a reason go for it tho22:14
idlemindok, so yum aside i cannot get past os-nova w/this error. i have 2 compute nodes and each one fails when doing a cell_v2 discover. it seems to be trying to ssh into the conductor not local to itself and dies. if i go to the console i'm able to ssh.22:34
idlemindhttp://paste.openstack.org/show/720325/22:34
idleminddeleting and recreating the conductor containers has not helped22:36
dmsimardcloudnull, evrardjp, odyssey4me, mnaser: oi, ara 0.15.0 has been released -- keep an eye out and let me know if there's anything weird ?22:39
*** kstev1 has quit IRC22:43
*** throwsb1 has quit IRC22:48
idlemindok are my nova compute nodes (metal) supposed to have database sections in the nova.conf?22:48
idlemindcuz mine don't22:48
idlemindi add them in and the command succeeds instead of fails. if the playbook doesn't reset my nova.conf i suspect it will clear this stage /openstack/venvs/nova-16.0.10/bin/nova-manage cell_v2 discover_hosts --verbose22:49
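What idlemind added by hand is presumably something like the nova.conf fragment below (connection strings are placeholders). `nova-manage cell_v2 discover_hosts` reads these sections, which OSA deliberately omits from compute-host nova.conf; the command is normally meant to run from a conductor container that does have the credentials:

```ini
[api_database]
connection = mysql+pymysql://nova_api:<password>@<internal_lb_vip>/nova_api

[database]
connection = mysql+pymysql://nova:<password>@<internal_lb_vip>/nova
```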
*** threestrands has joined #openstack-ansible22:59
*** klamath has quit IRC23:17
evrardjpdmsimard: yup will bump it -- adding this to my super long todo list.23:21
*** masber has joined #openstack-ansible23:23
idlemindhttp://paste.openstack.org/show/720326/23:33
idlemindit would seem my cells are mapped statically that way23:33
*** weezS has quit IRC23:53

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!