Monday, 2017-10-02

*** vp has joined #openstack-ansible00:00
*** vp is now known as Guest2835300:00
*** Guest28353 has quit IRC00:09
*** markvoelker has joined #openstack-ansible00:12
*** dave-mccowan has joined #openstack-ansible00:15
*** germs has joined #openstack-ansible00:18
*** germs has quit IRC00:33
*** woodard has joined #openstack-ansible00:44
*** markvoelker has quit IRC00:45
*** dave-mccowan has quit IRC01:30
*** dave-mccowan has joined #openstack-ansible01:32
*** markvoelker has joined #openstack-ansible01:42
*** dave-mcc_ has joined #openstack-ansible01:49
*** dave-mccowan has quit IRC01:51
*** dave-mccowan has joined #openstack-ansible01:57
*** dave-mcc_ has quit IRC01:59
*** markvoelker has quit IRC02:16
*** dave-mcc_ has joined #openstack-ansible02:17
*** dave-mccowan has quit IRC02:18
*** esberglu has joined #openstack-ansible02:26
*** esberglu has quit IRC02:31
*** captaindave has quit IRC02:54
*** captaindave has joined #openstack-ansible02:55
*** dave-mcc_ has quit IRC02:55
*** markvoelker has joined #openstack-ansible03:13
*** markvoelker has quit IRC03:46
*** gouthamr has joined #openstack-ansible03:55
*** esberglu has joined #openstack-ansible04:14
*** esberglu has quit IRC04:19
*** gouthamr has quit IRC04:30
*** markvoelker has joined #openstack-ansible04:43
*** ecelik has joined #openstack-ansible05:15
*** markvoelker has quit IRC05:17
*** logan- has quit IRC05:18
*** logan- has joined #openstack-ansible05:21
*** thorst has joined #openstack-ansible05:41
*** logan- has quit IRC05:48
*** thorst has quit IRC05:51
*** thorst has joined #openstack-ansible05:51
*** thorst has quit IRC05:52
*** logan- has joined #openstack-ansible05:53
*** cshen has joined #openstack-ansible06:02
*** esberglu has joined #openstack-ansible06:02
*** ivve has quit IRC06:05
*** esberglu has quit IRC06:07
*** jvidal has joined #openstack-ansible06:10
*** Oku_OS-away is now known as Oku_OS06:10
*** markvoelker has joined #openstack-ansible06:14
*** cshen has quit IRC06:20
*** cshen has joined #openstack-ansible06:21
*** eumel8 has joined #openstack-ansible06:24
*** ivve has joined #openstack-ansible06:29
*** ansibler has quit IRC06:30
*** shardy has joined #openstack-ansible06:37
*** drifterza has joined #openstack-ansible06:46
*** markvoelker has quit IRC06:47
hwoaranggood morning06:52
*** chas has joined #openstack-ansible06:54
*** ecelik has left #openstack-ansible06:58
*** pcaruana has joined #openstack-ansible07:08
*** mbuil has joined #openstack-ansible07:10
evrardjpgood morning07:11
*** Taseer has quit IRC07:12
*** taseer2 has joined #openstack-ansible07:13
*** electrofelix has joined #openstack-ansible07:18
*** mrch has joined #openstack-ansible07:29
openstackgerritJean-Philippe Evrard proposed openstack/openstack-ansible-os_neutron stable/pike: Update paste, policy and rootwrap configurations 2017-09-28  https://review.openstack.org/508139 07:37
*** markvoelker has joined #openstack-ansible07:44
*** esberglu has joined #openstack-ansible07:51
*** pbandark has joined #openstack-ansible07:51
*** esberglu has quit IRC07:56
*** taseer2 is now known as Taseer08:14
*** markvoelker has quit IRC08:18
hwoarangjust to confirm, can values in user_secrets be overridden by openstack_user_config?08:32
*** gkadam has joined #openstack-ansible08:33
andymccrhwoarang: i don't think so - user_secrets would be passed using -e i think, so that would take precedence.08:34
hwoarangi can't find exactly how user_secrets is used :/08:36
odyssey4meivve use the container destroy playbook to destroy the containers, then remove the containers from the inventory using inventory-manage, then remove the host group/host mapping in your openstack_user_config/conf.d08:47
evrardjpandymccr: welcome back!08:47
odyssey4mehwoarang user_* files are used as extra-vars, so they have highest precedence08:47
odyssey4meopenstack_user_config is in inventory, so it's almost the lowest precedence08:47
hwoarangodyssey4me: ok thank you. but where exactly in the code is the user_secrets used?08:48
hwoarangjust looking at the details because i am curious :)08:48
evrardjphwoarang: -e08:55
*** TomMc has joined #openstack-ansible08:56
evrardjphwoarang: let me point you to the code08:56
evrardjphttps://github.com/openstack/openstack-ansible/blob/master/scripts/bootstrap-ansible.sh#L245 08:57
evrardjpthe wrapper gets that08:57
*** armaan has joined #openstack-ansible08:58
hwoaranghmm i wonder if it's possible to add passwords in user_variables.yml instead08:58
hwoarangit should be possible since both user_* files are treated the same09:01
hwoarangthanks evrardjp09:01
evrardjphwoarang: yes it is09:01
evrardjpbut because of the way they are listed, don't forget about the precedence ;)09:02
evrardjpbut it's technically possible to have user_blabla.yml or user_zzz.yml09:02
hwoarangyep09:03
hwoarangsneaky09:03
evrardjpnot really :p09:09
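
Sketching what was just described: every /etc/openstack_deploy/user_*.yml file is handed to ansible-playbook as an extra-vars file by the wrapper linked above, so a secret defined in user_variables.yml is picked up the same way as one in user_secrets.yml, with the listing order deciding the winner if both set it. The variable name below is only an illustration of one that normally lives in user_secrets.yml:

    # /etc/openstack_deploy/user_variables.yml -- hedged example only
    keystone_auth_admin_password: "not-a-real-secret"
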
*** markvoelker has joined #openstack-ansible09:15
neithhello, Is it possible to safely skip repo related tasks in setup-infrastructure.yml using tags?09:29
neithopenstack-ansible setup-infrastructure.yml --skip-tags repo-build,repo-server09:30
neith?09:30
*** epalper has joined #openstack-ansible09:31
epalperhi, i am trying to run nova role with openstack-ansible command: openstack-ansible -i /tmp/.xci-deploy-env/openstack-ansible/playbooks/inventory /tmp/.xci-deploy-env/openstack-ansible/playbooks/os-nova-install.yml -vvv. But it fails with following error: https://paste.ubuntu.com/25659673/09:35
epalpercan you please look into it ?09:35
*** esberglu has joined #openstack-ansible09:39
neithevrardjp: odyssey4me  any idea on my question about skipping repo related tasks ?09:40
evrardjpneith: why not using the playbooks instead?09:41
odyssey4meneith they will auto-skip if requirements haven't changed in any recent version09:41
neithevrardjp: cause repo tasks are soooooo long09:41
odyssey4mebut yes, look at the playbooks - they should have tags09:41
neithok09:41
evrardjpneith: I meant running the playbooks, and skipping playbooks you don't need09:42
odyssey4meneith all recent versions of newton onwards will not run any of the build tasks if requirements haven't changed09:42
evrardjpneith: a necessary evil sometimes :)09:42
evrardjpand odyssey4me has done a tough job on improving this, and cracking it into small pieces09:42
evrardjp:D09:42
neithI completely crashed my rabbit cluster and I think my last option is to rebuild all the containers09:42
evrardjpjust rebuild rabbit09:43
neithall rabbit containers of course09:43
evrardjpif you haven't changed credentials, why change the rest?09:43
evrardjpyeah, faster than recovery :p09:43
evrardjpask jmccrory :p09:43
*** esberglu has quit IRC09:43
neithodyssey4me: including 14.2.8 ?09:45
odyssey4meneith yes09:45
neithIs it possible to run openstack-ansible setup-infrastructure.yml --limit rabbitmq_all ?09:46
*** markvoelker has quit IRC09:48
*** overskylab has joined #openstack-ansible10:03
overskylabdo you have a roadmap to deploy panko in this project10:04
*** ejb1123 has quit IRC10:04
admin0\o10:13
admin0question on aio .. if my server has 2 interfaces, how to tell aio to use eth1 instead of eth010:13
*** overskylab has quit IRC10:14
evrardjpneith: use the playbooks instead10:15
evrardjpeasier10:15
evrardjpno extra tasks for repo and all that jazz10:15
evrardjpadmin0: I think that would require a patch10:17
*** ricardoas has quit IRC10:23
admin0evrardjp, ok ..  thanks10:23
admin0in pike, with the cells, is it possible to move an existing running cluster to the new cells (pod)  ?10:24
admin0from documentation, the default is treated as pod1, and from the config, can add 2nd pod etc ?10:25
epalperhi, has anyone had a chance to look at my issue ?10:33
*** markvoelker has joined #openstack-ansible10:45
eumel8epalper: line 106. you skipped resolvconf_enabled so all inventory variables are empty and the task to install resolv.conf will fail10:52
*** DanyC has joined #openstack-ansible11:04
*** pbandark has quit IRC11:06
*** pbandark1 has joined #openstack-ansible11:06
*** dave-mccowan has joined #openstack-ansible11:07
*** pbandark1 is now known as pbandark11:08
*** markvoelker has quit IRC11:19
admin0evrardjp, i upgraded from ocata -> pike .. can you tell me the line to override one cinder server to have is_metal: true from the config file11:33
odyssey4meadmin0 I believe that evrardjp gave you the configs last week. I'd suggest going through the eavesdrop logs.11:36
evrardjpadmin0: you can now use hostvars directly11:36
evrardjpif you upgraded you have new features that allow us simplicity, but you're not forced to migrate to hostvars11:36
evrardjpyou can keep it as it.11:36
admin0"you have new features that allow  simplicity"  -- what are these ?11:37
*** jamesdenton has quit IRC11:39
admin0so in pike, is the big config being broken down to individual service.yml inside conf.d ?11:44
*** smatzek has joined #openstack-ansible11:44
admin0in the example, some yml start with --- , some don't .. is it a documentation mistake or does it work both ways .. and if it works both ways, should i include --- or remove --- ?11:46
epalpereumel8: what do you recommend to run resolvconf_enabled ?11:52
epalperline 106. which file are you talking about ?11:52
eumel8epalper: your paste file11:56
*** jamesdenton has joined #openstack-ansible11:58
*** jamesdenton has quit IRC12:02
*** jamesdenton has joined #openstack-ansible12:06
*** markvoelker has joined #openstack-ansible12:06
epalpereumel8: ok, do you want to have resolvconf_enabled: "{{ groups['unbound'] is defined and groups['unbound'] | length > 0 }}" to be run before executing a role ?12:06
eumel8epalper: I think you definitely want to use resolv.conf12:09
neithevrardjp: I really need to dig to understand which playbooks are run in setup host/infra/openstack.12:11
evrardjpneith: that should be very simple12:12
evrardjpthey are very explicit :p open your setup-infrastructure.yml or setup-openstack.yml you'll understand :)12:13
evrardjpneith: don't forget you can run openstack-ansible playbook1 playbook2 playbook3 (thanks odyssey4me for the tip that I completely forgot!)12:13
neithevrardjp: It probably is, but my ass is between openstack and Kubernetes currently... I need to fight to get some extra time on OS12:13
epalpereumel8: yes, it works after setting resolvconf_enabled to true12:16
epalperthanks12:16
eumel8:)12:17
*** captaindave has quit IRC12:19
evrardjpneith: )12:33
*** woodard has quit IRC12:37
*** woodard has joined #openstack-ansible12:38
*** smatzek has quit IRC12:43
*** smatzek has joined #openstack-ansible12:44
odyssey4meepalper eumel8 resolvconf_enabled only applies if you're using unbound containers in the environment12:45
odyssey4meit will automatically enable/disable based on the presence of unbound_hosts in your environment12:46
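
To make the auto enable/disable behaviour concrete: the flag epalper flipped derives itself from the inventory, using the expression epalper quoted earlier at 12:06. A minimal sketch of that variable as it would sit in a variables file; it is true only when an "unbound" group exists and is non-empty, i.e. when unbound_hosts are defined:

    resolvconf_enabled: "{{ groups['unbound'] is defined and groups['unbound'] | length > 0 }}"
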
odyssey4meadmin0 for the interface to use for the public interface (eth0 or eth1) just make sure your default route uses eth112:48
*** smatzek has quit IRC12:48
odyssey4methe public interface is determined by the default route, or can be set: https://github.com/openstack/openstack-ansible/blob/master/tests/roles/bootstrap-host/defaults/main.yml#L235-L238 12:48
admin0odyssey4me: thanks .. got it now12:49
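
For the archive, a sketch of the override being pointed at. The exact variable name lives in the linked bootstrap-host defaults file; the name used below is an assumption made for illustration, so check that file before relying on it:

    # user_variables.yml (or -e) for the AIO bootstrap run; variable name is
    # assumed, verify against tests/roles/bootstrap-host/defaults/main.yml:
    bootstrap_host_public_interface: eth1
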
*** thorst has joined #openstack-ansible13:02
*** lbragstad has joined #openstack-ansible13:07
*** esberglu has joined #openstack-ansible13:09
*** smatzek has joined #openstack-ansible13:10
cloudnullmornings13:10
neithcloudnull: mornings13:12
jamesdentonmorning cloudnull13:13
cloudnullyo yo yo13:13
*** thorst has quit IRC13:17
admin0evrardjp, so from the paste and conversation, how do i make use of the host_vars folder13:20
admin0that seems a cleaner method . you just list the container/host and say whether it's on metal or not ?13:21
*** mrtenio has joined #openstack-ansible13:21
evrardjpthe paste was complete IIRC13:21
evrardjpcloudnull: hello!13:22
admin0going through it again: http://eavesdrop.openstack.org/irclogs/%23openstack-ansible/%23openstack-ansible.2017-09-29.log.html :)13:22
admin0i am not a hardcore dev like you guys :)13:22
*** smatzek has quit IRC13:25
*** smatzek has joined #openstack-ansible13:25
mrtenioGood Morning osa team. I have been having some problems when using osa master. I am using it as my development environment for Ironic. I think I am not supposed to do that. What do you advise if one is to use osa as the development environment? My intention is to have the master branches for other services and Ironic. Is there a way of using master for the services, but stable version for osa? Thank you.13:27
cloudnullmaster is the openstack master (head of queens).13:28
*** thorst has joined #openstack-ansible13:29
cloudnullso its not super stable, but is the dev environment I normally work from13:29
cloudnullif you need something more stable I'd recommend the stable/pike branch13:29
*** smatzek has quit IRC13:29
cloudnullgives a moving target in terms of backports and fixes getting into the various projects, but a more stable environment.13:29
*** mrch has quit IRC13:31
evrardjpmrtenio: that sounds great! an ironic developer using our stuff!13:31
evrardjpmrtenio: why not supposed? Other developers are doing it. It's just that ironic is maybe less tested13:32
*** mrch has joined #openstack-ansible13:32
evrardjpmrtenio: our deploy methods seem to be very much linked to the branches we deploy, so if you are deploying master, you should probably use master13:33
evrardjpbut it should be working13:33
*** mrch has quit IRC13:33
*** lucasxu has joined #openstack-ansible13:34
*** kylek3h has joined #openstack-ansible13:34
*** mrch has joined #openstack-ansible13:34
mrtenioevrardjp, osa is life.13:35
evrardjp:D13:35
evrardjptell me about it!13:35
mrteniocloudnull and evrardjp, I don't have too much contact with OpenStack Ansible other than using it to deploy OpenStack, so when there is a problem I kind of get stuck for a long time.13:35
evrardjpmmm but there shouldn't be a problem :D13:36
evrardjpwhat are you stuck on ?13:36
cloudnullmrtenio: you're always welcome to ping out. lots of folks happy to help.13:36
mrteniocloudnull, thanks.13:36
admin0went through the post https://evrard.me/group-and-host-variables-overriding-in-openstack-ansible.html  (pike) went through all the 3 files .. still do not get it :(13:37
evrardjpadmin0: so13:37
evrardjpyou are not forced to use pike method, moreover if you have updated13:37
evrardjpadmin0: ^13:37
admin0i have updated13:37
evrardjpyou can keep using your old method13:37
mrtenioevrardjp, I am having a problem with haproxy, it is getting nova_console_port undefined.13:37
admin0to pike13:37
admin0if the pike method is the way things in future will go, better to understand it now and maybe write better docs on it13:38
evrardjpmrtenio: do you have a log and a commit sha ?13:38
evrardjpopenstack ansible sha*13:38
evrardjpadmin0: well that's simple for pike:13:38
evrardjpyou create a hostvars folder13:39
evrardjpinto your /etc/openstack_deploy/13:39
evrardjphost_vars*13:39
admin0created :)13:39
evrardjpthen you use your ansible host name as the filename inside that13:39
evrardjpthat should be all13:39
evrardjpvery standard ansible.13:39
evrardjpcloudnull: did you see cinder errors recently?13:40
cloudnullno?13:40
evrardjpor anyone, not just you13:40
evrardjphttp://paste.openstack.org/show/622470/13:40
admin0my hostname that will be the isci server will be c3.. so i create c3.yml  .. but, c3 might serve iscsi on_metal, and maybe logs which is a container13:41
evrardjpwhat's its ansible hostname13:41
evrardjp?13:41
evrardjpthat's what you should use as filename13:41
evrardjplike any ansible host_vars :p13:42
odyssey4meadmin0 using host_vars or group_vars is explained here: http://docs.ansible.com/ansible/latest/intro_inventory.html#splitting-out-host-and-group-specific-data 13:42
odyssey4meadmin0 for OSA those can be implemented in /etc/openstack_deploy/{host_vars,group_vars}13:42
evrardjpmrtenio: I am concerned it should work out of the box13:42
evrardjpmrtenio: however we recently shuffled around group vars to reduce their scope13:43
evrardjpso maybe there is a group that's using it and has it undefined13:43
cloudnullevrardjp: I've not seen that stacktrace, ever13:43
odyssey4meevrardjp I think admin0 is getting stuck on exactly which var to set to enable/disable on_metal when using host_vars13:44
mrtenioI do some hardcode modifications before running it. When I install Ironic I need to install it baremetal. The error is: http://paste.openstack.org/show/622471/13:44
admin0odyssey4me, my cinder.yml has is_metal: false due to me using nfs/ceph ..  i need the override for just 1 storage host .. which is what i am trying to do ( and looking for examples )13:44
evrardjpmrtenio: https://github.com/openstack/openstack-ansible/commit/85501cbf263029cfd5be44b9274113e4e5e4b6f8#diff-0cf0ed34488dfe491c3f98f22bc6e391 13:45
odyssey4meadmin0 have you tried just setting is_metal: true using the host_vars for that one host?13:45
*** thorst has quit IRC13:45
evrardjpodyssey4me: the problem is that our usage of is_metal is very much inconsistent, sometimes it's a property sometimes it's a direct variable13:46
admin0odyssey4me, that brings me to that question .. this host will serve cinder on metal, and possibly rsyslog which will be a container .. will setting that at the host level say everything that goes on this host is on metal ?13:46
evrardjpadmin0: I'd say, configure with the proper ansible_hostname file, then use is_metal: True there13:46
odyssey4meis_metal is a group_var in master, then the dynamic inventory sets it as a host_var I think - anyway, if it's inconsistent we should change that13:47
evrardjpin the meantime, like I said, you can still use the env.d13:47
odyssey4meadmin0 good question - no idea13:47
odyssey4methis was all designed to be uniform, not to pick and choose13:47
*** thorst has joined #openstack-ansible13:47
evrardjpadmin0: that's why the name you are using is important13:47
*** thorst has quit IRC13:48
odyssey4meoh yeah, the name has to be the 'container' name right?13:48
evrardjpif you are using host_vars/myphysicalhostname.yml it would apply to everything that targets that host13:48
*** thorst has joined #openstack-ansible13:48
evrardjpso if you are using myphysicalhostname_cinder_12341632 or something like that you'd only apply to the proper thing13:48
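
Restating that naming rule as a sketch, since it decides the scope of every file under host_vars (the container name below is illustrative, copied from the example just given, and the variable shown is a placeholder rather than a recommendation):

    # /etc/openstack_deploy/host_vars/<ansible-inventory-name>.yml
    #
    # host_vars/c3.yml
    #   applies to the physical host c3 and to anything targeting it
    # host_vars/c3_cinder_12341632.yml
    #   applies only to that single container
    #
    # example content (placeholder):
    example_role_variable: example_value
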
mrtenioevrardjp and odyssey4me I set the property is_metal in the ironic.yml inside env.d. I also disabled nova-compute.13:49
evrardjpmrtenio: that is what's documented13:49
evrardjpand for now, the is_metal is a special case, and should probably be done that way13:49
evrardjpbecause of the inconsistencies in its usage right now13:49
evrardjpmrtenio: I think you have a variable scoping bug13:50
*** DanyC has quit IRC13:50
mrtenioevrardjp, what may I do to check that?13:51
evrardjpI'd add a few variables in group_vars/all/nova.yml : nova_console_port: "{% if nova_console_type == 'spice' %}{{ nova_spice_html5proxy_base_port }}{% elif nova_console_type == 'novnc' %}{{ nova_novncproxy_port }}{% else %}{{ nova_serialconsoleproxy_port }} {% endif %}"13:51
evrardjpwell13:51
evrardjphttps://github.com/openstack/openstack-ansible/commit/85501cbf263029cfd5be44b9274113e4e5e4b6f8#diff-0cf0ed34488dfe491c3f98f22bc6e391 13:52
evrardjpon the left side for group_vars/all/nova.yml L30 to L34.13:52
evrardjpif that fixes it for you, then we're good for a bug!13:52
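
For readability, the line evrardjp quotes at 13:51, laid out as it would sit in group_vars/all/nova.yml; this is the pre-removal content he points at on the left-hand side of the linked commit (L30 to L34), not new code:

    nova_console_port: "{% if nova_console_type == 'spice' %}{{ nova_spice_html5proxy_base_port }}{% elif nova_console_type == 'novnc' %}{{ nova_novncproxy_port }}{% else %}{{ nova_serialconsoleproxy_port }}{% endif %}"
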
evrardjplogan-: maybe we should discuss about this group vars change13:53
mrtenioevrardjp, I will try it, thanks.13:55
evrardjpwell, thanks for triaging a bug :)13:55
evrardjpand filing13:55
evrardjpsorry you ended up in that place then, and we should probably document better13:55
evrardjpor test better13:55
admin0evrardjp, odyssey4me : need help there: https://gist.github.com/anonymous/c58976fa202f0b62b31d034186481bf6  -- what to put in the storage-1 line and where13:55
evrardjpis storage-1 the only one dedicated to that?13:57
*** drifterza has quit IRC13:57
evrardjpthe different cinder backends, I mean13:57
admin0for now yes, but if resources are not used much, i plan to move rsyslog container there13:57
admin0the others are qnap, so cinder.yml has is_metal: false so that it's globally applied ..13:57
mrtenioevrardjp, no problem. Thank you again!13:58
admin0i want to use this server as an lvm host on metal .. but want to keep the possibility of moving workloads here in the future13:58
evrardjpsl13:59
evrardjpso*13:59
*** thorst has quit IRC13:59
evrardjpin container_vars you could apply is_metal: True13:59
*** smatzek has joined #openstack-ansible14:00
evrardjpit would apply to everything that runs on that host, and because it's storage you should be fine14:00
*** smatzek has quit IRC14:01
evrardjpbut if you want to adapt in the future you should do the following14:01
*** thorst has joined #openstack-ansible14:01
evrardjpedit your env.d/cinder.yml to have:14:01
*** smatzek has joined #openstack-ansible14:01
evrardjp(see previous paste)14:02
evrardjpthis way you can scope to only cinder14:02
*** hachi has joined #openstack-ansible14:02
evrardjpin the future, when is_metal will be uniform, you will be able to use host_vars easily for is_metal.14:02
evrardjpright now host_vars should be used for the rest.14:02
*** admin0 has left #openstack-ansible14:02
*** admin0 has joined #openstack-ansible14:02
*** smatzek_ has joined #openstack-ansible14:03
admin0so 1. edit cinder.yml and change the line to  is_metal: "{{ is_metal_per_host.get(inventory_hostname) | default(False) }}"14:03
mrtenioevrardjp, getting the topic about is_metal, how do I set the var is_metal?14:03
admin0and then, from where do I override the is_metal: true ?  from the container_var ?14:03
mrteniobesides doing it in env.d14:04
*** thorst has quit IRC14:04
admin0evrardjp, possible to write/blog a mockup sample file ? those help a lot14:04
evrardjpadmin0: after step 1, you must define in your user_variables a variable named14:04
evrardjpis_metal_per_host, which is a dict14:04
*** thorst has joined #openstack-ansible14:04
*** dxiri_ has joined #openstack-ansible14:05
evrardjpa dict in which a key could be the inventory_hostname of the node whose behavior you want to change14:05
evrardjpmrtenio: env.d for now14:05
evrardjpit's the best14:05
evrardjpin the future we can adapt this to be more clear14:05
evrardjpbut for now, let's count on that.14:05
admin0is_metal_per_host = ['c3','c33'] : True14:05
evrardjpand it will probably be for a while, because we take time14:06
evrardjpis_metal_per_host:14:06
*** smatzek has quit IRC14:06
evrardjp  c3:14:06
evrardjpc3: True I meant14:06
evrardjpI think that should work14:06
*** smatzek_ has quit IRC14:07
admin0ok .. .. but this will not allow me to move the rsyslog container here in future right ?14:07
evrardjpmrtenio: generally ppl don't override for one host, they override for many :p14:07
evrardjpadmin0: it's unlinked14:07
mrtenioevrardjp, that makes sense.14:07
evrardjpwe are not changing anything to rsyslog14:07
admin0ok14:08
evrardjpadmin0: don't forget: dict, not list.14:08
evrardjp {} not []14:08
*** dxiri has quit IRC14:08
evrardjpelse .get( won't work.14:08
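
Pulling the two halves of this workaround together as one sketch. The paths, the .get() expression and the c3 hostname all come from the conversation above; exactly where the is_metal line sits inside your env.d/cinder.yml depends on the file you already have:

    # /etc/openstack_deploy/env.d/cinder.yml, on the container properties:
    #   is_metal: "{{ is_metal_per_host.get(inventory_hostname) | default(False) }}"
    #
    # /etc/openstack_deploy/user_variables.yml -- a dict ({}), not a list ([]),
    # otherwise the .get() above fails as noted:
    is_metal_per_host:
      c3: True
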
*** armaan has quit IRC14:09
*** armaan has joined #openstack-ansible14:09
*** smatzek has joined #openstack-ansible14:11
admin0done ..14:15
admin0is there a way to run ansible to validate this ?14:15
*** smatzek has quit IRC14:16
*** smatzek_ has joined #openstack-ansible14:16
*** smatzek_ has quit IRC14:16
*** mrch has quit IRC14:19
*** mrch has joined #openstack-ansible14:20
*** dxiri_ has quit IRC14:23
*** dxiri has joined #openstack-ansible14:25
*** smatzek has joined #openstack-ansible14:29
*** smatzek_ has joined #openstack-ansible14:33
*** smatzek has quit IRC14:34
evrardjpadmin0: write your own playbook that checks the is_metal value14:37
evrardjpbackup your inventory first14:37
evrardjpand everything14:37
*** smatzek_ has quit IRC14:37
*** smatzek has joined #openstack-ansible14:37
admin0err... i am already running the playbooks :D14:37
evrardjpk14:38
admin0nothing failed, but it did not even touch c3 :)14:38
admin0play recap: only infra hosts14:38
admin0i will try to look into more details here14:38
admin0with limit, i get no hosts matched14:39
*** smatzek has quit IRC14:42
*** chyka has joined #openstack-ansible14:44
*** smatzek has joined #openstack-ansible14:44
*** smatzek_ has joined #openstack-ansible14:46
idleminddoes openstack-ansible have a preset configuration file that is similar in behavior to devstack already? seems like the "test" environment one still wants multiple nodes.14:46
*** armaan has quit IRC14:48
*** armaan has joined #openstack-ansible14:48
*** smatzek has quit IRC14:49
eumel8evrardjp, cloudnull: you want to repair the legacy zuul-v3 jobs or switch immediately to .zuul.yaml in osa repo?14:50
*** smatzek has joined #openstack-ansible14:51
*** smatzek_ has quit IRC14:51
*** dxiri_ has joined #openstack-ansible14:57
*** jvidal has quit IRC14:57
*** chyka has quit IRC14:57
*** chyka has joined #openstack-ansible14:58
*** dxiri has quit IRC15:01
*** captaindave has joined #openstack-ansible15:01
*** mrch has quit IRC15:02
*** Oku_OS is now known as Oku_OS-away15:05
evrardjpeumel8: that would be great if we switch in repo just after the release.15:09
evrardjpodyssey4me: and logan- also worked on this15:10
evrardjpodyssey4me: did you see that bug: https://bugs.launchpad.net/openstack-ansible/+bug/1716663 ?15:10
openstackLaunchpad bug 1716663 in openstack-ansible "repo_build : Initialize local facts fails when upgrading" [Undecided,New]15:10
cloudnullevrardjp: eumel8: we have https://review.openstack.org/#/c/508509 waiting to be merged.15:11
cloudnullonce that goes in I think our normal gates will work again.15:11
cloudnullbut it'd be great to get that converted to the new setup.15:12
odyssey4meevrardjp looks to me like the jmespath solution would be cleanest... although I'd cage "ansible_local | default({})" in () to make sure the filter application is dine right15:13
odyssey4me*done15:13
evrardjpyeah latest have to be tested15:13
evrardjpbut it was just to know if you have seen or heard that too15:13
evrardjpI had that with my on metal test15:13
odyssey4meit's very weird though because ansible_local should exist at that point - a fact is dropped in the lxc-containers-create playbook to register the container variant used15:14
evrardjpI'd rather not do the jmespath with stable branches15:14
evrardjpI don't run that :p15:14
evrardjpthat's why ;)15:14
odyssey4meI've seen that discussed in the channel - the last person had not updated their roles properly after changing to the new version ;)15:14
evrardjpthat could be a cause too15:14
evrardjpI'll do a dual implementation I think15:15
odyssey4mebut yeah, it'd be nice to make those conditionals easier to read15:15
evrardjpone with jmespath in master, the simple in stable/15:15
evrardjpwell the jmespath is kinda ugly too15:16
evrardjpbecause it will take the value15:16
evrardjpbut let me evaluate15:16
odyssey4meIIRC nolan did a review to add a filter which worked too - but I'd rather do jmespath than add more code we carry15:16
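
To make the "cage it in ()" remark concrete, a hedged illustration of the defensive form for the failing repo_build local-fact check; this is not the merged fix, just the shape of the guard, so a host with no /etc/ansible/facts.d yet does not raise KeyError on ansible_local:

    # illustrative task only
    - name: Example guarded task
      debug:
        msg: "local facts already initialised"
      when: "'openstack_ansible' in (ansible_local | default({}))"
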
admin0i see the pods example in pike .. and it has 3 infra nodes in 3 pods .. so each infra node now only has data for its own set of nodes ? is it a single point of failure if that node goes down ?15:18
admin0or is that example just very generic, to give an idea ?15:18
*** TomMc has quit IRC15:18
eumel8cloudnull: thx. Just wanted to hear how you think about the zuul migration.15:19
*** pcaruana has quit IRC15:27
*** cshen has quit IRC15:31
*** galstrom_zzz is now known as galstrom15:31
*** dave-mccowan has quit IRC15:32
*** dave-mcc_ has joined #openstack-ansible15:32
*** hachi has quit IRC15:35
*** epalper has quit IRC15:38
logan-o/15:40
*** acormier has joined #openstack-ansible15:41
*** armaan has quit IRC15:42
*** armaan has joined #openstack-ansible15:43
idlemindis openstack-ansible able to configure ceph along with cinder or does ceph have to be built manually w/custom playbooks outside of starting openstack-ansible?15:44
admin0logan-, \o15:45
admin0idlemind, work is ongoing/done for having an integrated build15:46
idlemindthx! k looks like i have a day full of playing ahead of me15:47
*** thorst has quit IRC15:49
*** weezS has joined #openstack-ansible15:52
*** galstrom is now known as galstrom_zzz15:53
*** gouthamr has joined #openstack-ansible15:54
*** rpittau_ has joined #openstack-ansible15:58
*** rpittau has quit IRC16:01
openstackgerritJean-Philippe Evrard proposed openstack/openstack-ansible-repo_build master: Fix typo on conditional  https://review.openstack.org/508928 16:07
idlemindam i able to run everything on br-mgmt? (not even create the other networks) for lab spin-ups16:08
*** thorst has joined #openstack-ansible16:10
*** armaan has quit IRC16:10
*** gmonteiro has quit IRC16:12
*** thorst has quit IRC16:13
admin0idlemind, you can use the aio16:14
admin0else other networks are listed as a requirements16:14
idlemindaio?16:14
idlemindall-in-one?16:14
admin0all-in-one16:14
dmsimardevrardjp, cloudnull, odyssey4me: fyi boxrick discovered an issue with ara in 2.4, it'll be fixed by https://github.com/ansible/ansible/pull/31200/commits/865d57aaf2fe47bc95d7becaac3a0cb182f71aa8 16:14
*** smatzek has quit IRC16:14
dmsimardI'll work on a patch to work around that issue until it's backported to 2.416:14
dmsimardit causes ara to use default configuration parameters all the time16:15
idlemindadmin0 what is the sample playbook for that one / command to kick it off16:15
dmsimardif using an ansible.cfg file.16:15
evrardjpdmsimard: oh16:15
*** smatzek has joined #openstack-ansible16:16
admin0idlemind, https://docs.openstack.org/openstack-ansible/pike/contributor/quickstart-aio.html#building-an-aio 16:16
*** smatzek has quit IRC16:16
evrardjpdmsimard: you think this will merge?16:16
*** chhavi has joined #openstack-ansible16:16
evrardjpwell thanks for the patch16:17
dmsimardevrardjp: I discussed the issue with bcoca so it should hopefully merge yes16:18
dmsimardI mentioned it because I think you guys are using get_config too16:19
*** smatzek has joined #openstack-ansible16:19
evrardjplet me double check but I don't think we do16:23
evrardjpdarn you're right16:24
evrardjpgood catch!16:24
idlemindthx admin0 reading it now16:24
logan-https://review.openstack.org/#/c/508931/ cc evrardjp odyssey4me cloudnull16:25
logan-our regular aio check job is non-voting atm https://review.openstack.org/#/c/508509/ heh16:26
odyssey4meheh, ouch16:28
*** thorst has joined #openstack-ansible16:29
*** thorst has quit IRC16:30
*** thorst has joined #openstack-ansible16:33
*** thorst has quit IRC16:33
evrardjpodyssey4me: vote vote vote :)16:34
evrardjpthanks!16:34
*** thorst has joined #openstack-ansible16:40
*** thorst has quit IRC16:44
*** dave-mcc_ has quit IRC16:46
*** woodard has quit IRC16:47
*** drifterza has joined #openstack-ansible16:50
*** weezS has quit IRC16:56
*** woodard has joined #openstack-ansible16:56
*** thorst has joined #openstack-ansible16:57
*** idlemind has quit IRC16:59
mrtenioevrardjp, I reported a bug, I don't know if I did it the right way https://bugs.launchpad.net/openstack-ansible/+bug/1720847 17:00
openstackLaunchpad bug 1720847 in openstack-ansible "OpenStack Ansible fails on haproxy setup with nova_console_port undefined" [Undecided,New]17:00
*** thorst has quit IRC17:01
*** gkadam has quit IRC17:01
mrtenioThe test you asked me to do solved the problem with nova_console_port undefined, but other errors happened later.17:01
evrardjplogan-: ^ the issue appearing there is due to the group var migration17:02
evrardjpwhat do you think of reverting? Or do we go fixing them one by one...17:02
evrardjpmrtenio: could you tell more about the next issue?17:03
logan-interesting17:03
evrardjpsorry for using you as a lab rat, if you allow me the expression (I don't mean to be rude or anything, just trying to be funny with my limited vocabulary!)17:03
logan-i wonder why that wouldnt be showing up in the gate17:04
evrardjpyeah17:04
evrardjpmaybe it appears in roles?17:05
mrtenioevrardjp, it is no offense. I am happy to help (and be helped :P)17:05
evrardjp:D17:06
*** acormier has quit IRC17:07
*** armaan has joined #openstack-ansible17:08
*** acormier has joined #openstack-ansible17:08
mrtenioThe other errors were related to nova as well. http://paste.openstack.org/show/622486/17:09
evrardjpthat is more worrying17:10
mrtenioI can't say too much about the errors, sorry.17:11
*** chyka_ has joined #openstack-ansible17:12
*** chyka has quit IRC17:14
*** chyka has joined #openstack-ansible17:16
*** chyka_ has quit IRC17:18
*** gouthamr has quit IRC17:21
*** chhavi has quit IRC17:21
*** chyka_ has joined #openstack-ansible17:21
*** chyka has quit IRC17:22
openstackgerritLogan V proposed openstack/openstack-ansible master: Load nova_console_port from a nova_console inventory item  https://review.openstack.org/508949 17:22
logan-mrtenio could you test ^?17:22
mrtenioYes, will take some time. I will tell when it finishes.17:23
*** thorst has joined #openstack-ansible17:23
*** idlemind has joined #openstack-ansible17:23
*** chyka has joined #openstack-ansible17:25
*** chyka_ has quit IRC17:29
*** mbuil has quit IRC17:33
*** dxiri has joined #openstack-ansible17:33
*** dxiri_ has quit IRC17:36
*** weezS has joined #openstack-ansible17:39
*** shardy has quit IRC17:52
*** electrofelix has quit IRC17:56
*** poopcat has joined #openstack-ansible17:56
*** gouthamr has joined #openstack-ansible17:57
*** thorst_ has joined #openstack-ansible18:01
*** thorst has quit IRC18:03
captaindaveHey all. I'm trying to set up an openstack lab on a sufficiently beefy single physical box where the hosts are simple kvm centos vms. I'm passing through the networks from the NICs without touching them, save to put them on the necessary bridges for the VMs. AFAICT, I have all the requisite connectivity to the VMs (as confirmed with tcpdump). I've run setup-hosts.yml with 0 errors, but I'm not18:04
captaindavegetting any values for ansible_host on the containers (I am, however, on the hosts). Here's the output from 'ip addr' on all the hosts: http://sprunge.us/eYKb and here's my openstack_user_config.yml (the prod example with different addressing and commented out host entries): http://sprunge.us/DdeP Anyone have any input on what I might be doing wrong?18:04
admin0captaindave: have you created the necessary bridges on the vm itself ?18:06
captaindaveYes.18:07
captaindaveThe hosts in http://sprunge.us/eYKb are the vms.18:07
admin0i have it documented here: http://www.openstackfaq.com/openstack-dev-server-setup-centos/18:07
admin0checking!18:07
admin0how does your config file looks like ?18:07
captaindavethe openstack_user_config.yml?18:08
admin0yes18:08
captaindavehttp://sprunge.us/DdeP 18:08
admin0:) why 2lines for used_ips ?     240.1, 240.63 can go in 1 line :)18:09
admin0ignore me ..18:09
captaindaveDifferent reasons for exclusion.18:09
*** hachi has joined #openstack-ansible18:10
captaindaveOne's for preexisting infrastructure and the other's for openstack's requirements in particular.18:10
captaindaveI'll eventually be modifying the second one for the actual (non-proof of concept) deployment.18:11
captaindaveThe repo container builds itself fine, but the haproxy install craps out when it doesn't have all the ansible_host values it needs.18:12
admin0are the vars in user_variables as well for haproxy ?18:13
captaindaveaproxy_keepalived_external_vip_cidr: "{{external_lb_vip_address}}/32"18:13
captaindavehaproxy_keepalived_internal_vip_cidr: "{{internal_lb_vip_address}}/32"18:13
captaindavehaproxy_keepalived_external_interface: eth018:13
captaindavehaproxy_keepalived_internal_interface: br-mgmt18:13
captaindavethere's an h on that first one; don't worry.18:13
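
The same four overrides, with the dropped "h" restored and the spacing tidied, as they would sit in /etc/openstack_deploy/user_variables.yml:

    haproxy_keepalived_external_vip_cidr: "{{ external_lb_vip_address }}/32"
    haproxy_keepalived_internal_vip_cidr: "{{ internal_lb_vip_address }}/32"
    haproxy_keepalived_external_interface: eth0
    haproxy_keepalived_internal_interface: br-mgmt
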
admin0maybe paste output from openstack-ansible haproxy-install.yml   run18:14
captaindaveSure. Give me a few.18:14
captaindaveI'll have to rebuild the repo first.18:15
captaindaveand unbound18:15
admin0no need18:15
captaindaveOh, OK.18:15
*** thorst_ has quit IRC18:15
admin0in fact, i run haproxy first before even running setup-infra or setup-openstack18:15
captaindaveInteresting!18:16
captaindaveHow verbose do you want it?18:16
admin0because if haproxy has errors, it fails on repo build .. so i ensure haproxy and the vips are there before i go to setup-infra18:16
admin0some18:16
captaindavetwo v's it is.18:16
captaindavehttp://sprunge.us/hKRX 18:19
*** chas has quit IRC18:23
*** jbadiapa_ has joined #openstack-ansible18:23
admin0i have never used an IP on an address field.. always used an address …. and always used more than 1 controller ..  so i am not sure why it's borking out .. "set _ = entry.append(\"backup\") %}\n    {{ entry | join(' ') }}\n{% endfor %}\n): unsupported operand type(s) for +: 'NoneType' and 'str'"}"18:25
admin0— maybe it needs a backup infra node as well //  stay here and someone else can comment on that18:25
*** jbadiapa has quit IRC18:25
admin0but to try out, also specify infra1 , some mockup address and see if that is the case18:26
captaindavean IP on an address field?18:27
captaindaveyou mean external_lb_vip_address?18:27
admin0  internal_lb_vip_address:  api-internal.mydomain.cloud type18:27
admin0this may not be the case18:27
admin0remove the # from balancer1 and rerun it once more18:28
captaindaveFrom both infra1 and balancer1?18:28
admin0only the haproxy-install.yml .. see how far it goes and if it borks at this same error18:28
admin0no no18:28
admin0just for haproxy_hosts18:28
captaindaveOK.18:28
captaindaveI need to spin up that VM again. Give me a few.18:29
admin0well, no need actually .. it might fail saying that vm is dead18:29
admin0but we are trying to check if the config needs a backup host18:29
captaindaveGotcha.18:29
admin0if its the backup host thats needed, then the config will pass for balancer018:29
*** drifterza has quit IRC18:33
*** goldenfri has joined #openstack-ansible18:34
captaindaveLooks like the same thing: http://sprunge.us/MTab 18:38
captaindavePlus an error about not being able to get facts from balancer118:39
admin0i do not know .. one final thing i can do is to bring up balancer1, change the ips to hostnames ( some mockups ) and see if that fixes it18:40
admin0else, above my head :)18:40
admin0i brb18:41
captaindaveThis will take a bit.18:45
*** thorst has joined #openstack-ansible18:46
*** thorst has quit IRC18:46
mrteniologan-, the scripts finished, the patch https://review.openstack.org/508949 solved the problem with nova_console_port undefined. But it stopped with the errors I showed here http://paste.openstack.org/show/622486/18:46
*** thorst has joined #openstack-ansible18:48
*** dave-mccowan has joined #openstack-ansible18:59
*** idlemind has quit IRC19:04
openstackgerritJean-Philippe Evrard proposed openstack/openstack-ansible master: [Docs] Clarify role development cycle  https://review.openstack.org/504279 19:05
*** idlemind has joined #openstack-ansible19:05
*** dxiri has quit IRC19:06
*** dxiri has joined #openstack-ansible19:06
openstackgerritJean-Philippe Evrard proposed openstack/openstack-ansible master: [DNM] Build everything on metal  https://review.openstack.org/504224 19:07
openstackgerritJean-Philippe Evrard proposed openstack/openstack-ansible master: [Docs] Add the bump of the upstream repo  https://review.openstack.org/502502 19:10
*** smatzek has quit IRC19:12
*** smatzek has joined #openstack-ansible19:13
*** smatzek has quit IRC19:17
*** smatzek has joined #openstack-ansible19:21
*** smatzek has quit IRC19:25
*** smatzek has joined #openstack-ansible19:27
*** smatzek has quit IRC19:27
*** smatzek has joined #openstack-ansible19:28
*** hachi has quit IRC19:29
*** hachi has joined #openstack-ansible19:29
*** chyka has quit IRC19:35
*** chyka has joined #openstack-ansible19:35
*** thorst has quit IRC19:41
*** hachi has quit IRC19:46
*** hachi has joined #openstack-ansible19:47
taskerduring a run of 'os-keystone-install', at the task "gather software version list", the underlying ansible is throwing a "keyerror: ansible_local" at line 396 of "ansible/plugins/filter/core.py". I'm not seeing any active bugs on launchpad. has anyone seen this before?19:54
*** thorst has joined #openstack-ansible19:56
taskerthis is ansible 14.2.819:56
tasker.. OSA 14.2.819:56
* tasker is exhuasted19:56
taskerpreviously I have been using a "--limit". going to try a run with no limit.19:57
*** thorst has quit IRC20:00
*** gouthamr has quit IRC20:06
cloudnullansible_local is fact related20:06
cloudnullits possible the limit is omitting some fact generation which is causing an issue20:07
taskersweet20:07
taskerthat seems reasonable with the recent run having not blown up yet.20:07
taskerno limit, playbook completes successfully.20:10
taskeris this a design limitation to ansible or openstack-ansible?20:10
cloudnullits an osa issue.20:12
cloudnullretry limits have been broken for a while, we normally just turn them off.20:12
tasker. (20:12
cloudnulland then use tags20:12
taskeri was hoping to take advantage of them to upgrade my control plane in thirds.20:13
cloudnullyou can limit using groups20:13
taskertell me more of these "groups"20:13
cloudnullsomething like keystone_all[0]20:13
cloudnullwhich would do the first node from that group20:13
*** dave-mcc_ has joined #openstack-ansible20:13
taskeris this something I can apply on the command-line?20:14
cloudnullyes.20:14
cloudnullopenstack-ansible os-keystone-install.yml --limit 'keystone_all[0]'20:14
taskeroh!20:14
cloudnullif you wanted to do one, then the others you can do something like `openstack-ansible os-keystone-install.yml --limit 'keystone_all[0]',keystone_all[0:]`20:14
cloudnullopps.20:15
cloudnull`openstack-ansible os-keystone-install.yml --limit 'keystone_all[0],keystone_all[0:]'`20:15
taskerso basic python list indexing?20:15
cloudnullyes.20:15
taskerneat.20:15
cloudnullall of our services follow that pattern20:15
cloudnull${service}_all20:15
taskerroger.20:16
cloudnullif you really want to hook into the groups you can pick through the inventory json20:16
cloudnullwhich has a complete list of all groups found in osa20:16
*** dave-mccowan has quit IRC20:16
*** dave-mcc_ is now known as dave-mccowan20:17
*** dxiri has quit IRC20:19
*** dxiri has joined #openstack-ansible20:20
*** smatzek has quit IRC20:22
*** chas has joined #openstack-ansible20:23
taskerthat worked, but the ordering of containers will be different between services. could the inventory be edited by hand to force an ordering of the containers?20:25
*** gouthamr has joined #openstack-ansible20:25
cloudnullmaybe?20:26
cloudnullits regenerated on the run which I think is sorted.20:26
cloudnullyou can try it?20:26
taskeryeah. looks like it's maintaining whatever order I put in the inventory.20:27
*** chas has quit IRC20:28
taskerhmm. looks like all of my services were sorted in 2, 3, 1 order.20:28
taskerso, as long as that order is actually maintained, I should be okay.20:30
admin0cloudnull:  how do pods work ?  .. when an ocata install is upgraded to pike, is it automatically on pod/cell-1 .. and later we can define more pods ?20:31
*** thorst has joined #openstack-ansible20:37
cloudnullyes you can define more cells as needed.20:43
*** esberglu has quit IRC20:43
cloudnullthough IMHO for the vast majority of deployments a single cell should be all that's required.20:43
cloudnulltasker: I want to say that wont change, unless you add more hosts to the environment.20:44
admin0i have one tenant that creates like 10 vms per week, and another that creates 100 per hour and then destroys them in the next 3-4 hours ( training ) .. so the active one sometimes creates problems for the other one as rabbit/neutron is on queue20:45
admin0so was thinking if cells can solve it20:45
taskercloudnull: I just discovered that using service_all[i] won't work when you get to complex groups that are made of sub-groups, such as nova.20:45
taskerin the case of nova, the full size of the list is 18.20:46
cloudnullyes you'd have to chain the groups together. like 'nova_all[0]:neutron_all[0]'20:46
taskerahh20:46
taskerok. that might work.20:46
*** esberglu has joined #openstack-ansible20:55
*** smatzek has joined #openstack-ansible20:57
*** chas has joined #openstack-ansible20:58
*** smatzek has quit IRC20:58
*** smatzek has joined #openstack-ansible20:59
*** smatzek has quit IRC20:59
*** pbandark1 has joined #openstack-ansible21:00
*** dave-mccowan has quit IRC21:01
*** pbandark has quit IRC21:01
*** pbandark1 is now known as pbandark21:01
*** hachi has quit IRC21:04
*** mbuil has joined #openstack-ansible21:11
captaindaveadmin0: I think I've found the problem. BBL.21:12
*** captaindave has quit IRC21:12
*** mbuil has quit IRC21:40
admin0cloudnull:  i have these variables for az in my config: https://gist.github.com/a1git/fa8abc77333cc91d58b69e46fbb758e3 .. but still, when i try to launch an instance, horizon shows nova as default .. what am i missing there ?21:41
admin0hmm.. region is not same as az21:41
admin0how to change the default AZ from nova to something else ?21:43
admin0nova_default_schedule_zone and cinder_storage_availability_zone  ??21:44
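
It is not certain that the two variable names admin0 guesses at exist as role defaults in this release, so a hedged alternative is the generic *_conf_overrides hook that the OSA roles expose. The option names below are plain nova.conf / cinder.conf options rather than anything OSA-specific, and the zone name is just an example:

    # /etc/openstack_deploy/user_variables.yml -- a sketch, not verified
    # against a Pike deployment:
    nova_nova_conf_overrides:
      DEFAULT:
        default_schedule_zone: myaz
    cinder_cinder_conf_overrides:
      DEFAULT:
        default_availability_zone: myaz
        storage_availability_zone: myaz
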
*** askb has joined #openstack-ansible21:44
*** lucasxu has quit IRC21:47
*** chyka_ has joined #openstack-ansible21:48
*** chyka has quit IRC21:52
*** masber has joined #openstack-ansible22:07
*** thorst has quit IRC22:15
*** chas has quit IRC22:16
*** thorst has joined #openstack-ansible22:18
*** chas has joined #openstack-ansible22:18
*** thorst has quit IRC22:22
*** pmannidi has joined #openstack-ansible22:23
*** chas has quit IRC22:24
*** armaan_ has joined #openstack-ansible22:26
*** armaan_ has quit IRC22:27
*** kylek3h has quit IRC22:28
*** armaan has quit IRC22:29
*** acormier has quit IRC22:29
*** acormier has joined #openstack-ansible22:30
*** thorst has joined #openstack-ansible22:31
*** acormier has quit IRC22:34
*** thorst has quit IRC22:35
*** lbragstad has quit IRC22:36
*** gouthamr has quit IRC22:38
*** pbandark has quit IRC22:42
*** chyka_ has quit IRC23:02
*** chyka has joined #openstack-ansible23:15
*** chhavi has joined #openstack-ansible23:17
*** esberglu has quit IRC23:21
*** chhavi has quit IRC23:22
*** acormier has joined #openstack-ansible23:26
*** acormier has quit IRC23:30
*** weezS has quit IRC23:32
*** germs has joined #openstack-ansible23:57
*** germs has quit IRC23:58
*** dxiri_ has joined #openstack-ansible23:59
