Tuesday, 2015-12-15

08:00 *** adac has joined #openstack-ansible
08:00 <adac> Morning folks
08:03 *** cemmason1 has joined #openstack-ansible
08:05 *** apuimedo has quit IRC
08:06 *** oneswig has joined #openstack-ansible
08:07 *** apuimedo has joined #openstack-ansible
08:13 *** egonzalez has joined #openstack-ansible
08:14 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible: Add neutron_ceilometer_enabled default  https://review.openstack.org/257268
08:16 *** apuimedo has quit IRC
08:17 *** agireud has joined #openstack-ansible
08:18 *** apuimedo has joined #openstack-ansible
08:22 *** cemmason1 has quit IRC
08:23 *** cemmason1 has joined #openstack-ansible
08:23 *** apuimedo has quit IRC
08:24 *** agireud has quit IRC
08:24 *** apuimedo has joined #openstack-ansible
08:25 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-security: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257728
08:27 *** sdake has quit IRC
08:29 *** admin0 has joined #openstack-ansible
08:31 *** agireud has joined #openstack-ansible
08:32 *** oneswig has quit IRC
08:33 *** woodard has joined #openstack-ansible
08:42 *** cemmason1 has quit IRC
08:44 *** oneswig has joined #openstack-ansible
08:45 *** woodard has quit IRC
08:47 *** karimb has joined #openstack-ansible
08:48 <odyssey4me> o/ adac
08:48 *** oneswig has quit IRC
08:55 *** markvoelker has joined #openstack-ansible
08:55 <adac> odyssey4me, I just rebooted my machine (installed AIO on a physical machine now) but I'm having some trouble accessing the keystone web interface with the correct credentials. It says: "An error occurred authenticating. Please try again later." Which log would provide more information in this case? The keystone logs are not very verbose about this incident, it seems
08:57 *** cemmason1 has joined #openstack-ansible
08:57 <odyssey4me> adac you can enable debug logging across the stack by adding 'debug: True' to /etc/openstack_deploy/user_variables.yml
08:57 <odyssey4me> adac are you sure that your database is running?
08:58 <adac> odyssey4me, not really. The processes are running; however, an ansible galera_container -m shell -a "cat /var/lib/mysql/grastate.dat" always just shows me status -1 on all nodes
08:59 <adac> ok, thanks
08:59 *** markvoelker has quit IRC
08:59 <adac> I guess modifying that requires a restart, right?
09:01 <odyssey4me> adac yeah, apparently that's a non-issue
09:02 <odyssey4me> adac running the applicable playbooks to effect the debug change will restart services, yes
09:02 <odyssey4me> adac you can just run setup-openstack.yml to do it for you
09:03 <adac> odyssey4me, ok, trying that. Do I need to restart the DB manually again afterwards?
09:03 <odyssey4me> adac no, you never need to restart the database for changes unless they are changes to the DB configuration
09:03 <adac> odyssey4me, but if the machine was rebooted, I have to, right?
09:05 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-rsyslog_client: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257536
09:06 <odyssey4me> adac yes, if the machine was rebooted then you need to bootstrap the galera cluster
09:06 <adac> odyssey4me, yes, I did that before; maybe an error occurred on that
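The grastate.dat symptom adac describes can be checked mechanically. A minimal sketch (illustrative only: it parses a sample file written inline; on a real galera node you would read /var/lib/mysql/grastate.dat itself): a seqno of -1 means the node has no clean shutdown position recorded, which is consistent with needing a cluster bootstrap after an unclean reboot.

```shell
# Write a sample grastate.dat like the one adac saw (illustrative data),
# then extract the seqno field. seqno: -1 => unclean state.
state=$(mktemp)
cat > "$state" <<'EOF'
# GALERA saved state
version: 2.1
uuid:    00000000-0000-0000-0000-000000000000
seqno:   -1
EOF

seqno=$(awk '/^seqno:/ {print $2}' "$state")
if [ "$seqno" = "-1" ]; then
    echo "seqno is -1: unclean state, cluster bootstrap may be required"
else
    echo "seqno is $seqno: node recorded a clean shutdown position"
fi
```

On a healthy, cleanly stopped node the seqno would instead be a non-negative transaction position.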
09:08 *** shausy has quit IRC
09:09 *** admin0 has quit IRC
09:09 *** shausy has joined #openstack-ansible
09:11 *** oneswig has joined #openstack-ansible
09:12 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-openstack_hosts: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257752
09:12 *** cemmason1 has quit IRC
09:15 *** cemmason1 has joined #openstack-ansible
09:16 *** karimb_ has joined #openstack-ansible
09:16 *** karimb has quit IRC
09:17 *** oneswig has quit IRC
09:17 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-galera_server: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257755
09:18 *** oneswig has joined #openstack-ansible
09:18 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-galera_server: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257755
09:20 *** admin0 has joined #openstack-ansible
09:23 *** oneswig has quit IRC
09:24 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-lxc_container_create: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257758
09:25 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-galera_server: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257755
09:26 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-openstack_hosts: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257752
09:27 *** tricksters has joined #openstack-ansible
09:29 *** eric_lopez has quit IRC
09:32 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-rsyslog_server: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257761
09:33 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-rsyslog_server: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257761
09:33 *** preeti_ has quit IRC
09:33 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-rsyslog_client: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257536
09:33 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-openstack_hosts: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257752
09:34 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-galera_server: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257755
09:34 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-lxc_container_create: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257758
09:34 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-apt_package_pinning: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257748
09:36 *** Prithiv has joined #openstack-ansible
09:37 *** shausy has quit IRC
09:38 <adac> I restarted all containers with openstack-ansible setup-hosts.yml but no luck; I still could not log in. I tried to reboot the machine afterwards again and restarted the cluster, but still I cannot log in. Checking the logs now for verbose output
09:38 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-memcached_server: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257767
09:38 *** shausy has joined #openstack-ansible
09:42 <adac> Seems I'm getting only this 'relevant' output https://gist.github.com/anonymous/ae0a682adeabdd973ca6
09:45 <adac> maybe the DB cluster is corrupted. Need to find out how to check that
09:46 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-repo_server: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257773
09:47 *** openstackgerrit has quit IRC
09:48 *** openstackgerrit has joined #openstack-ansible
09:49 *** agireud has quit IRC
09:50 <odyssey4me> adac setup-hosts doesn't restart the containers, nor do you need to restart the containers
09:51 <odyssey4me> adac from the utility container, if you execute:
09:51 <odyssey4me> source /root/openrc
09:51 <odyssey4me> then: openstack user list
09:51 <odyssey4me> does it work?
09:52 *** agireud has joined #openstack-ansible
09:53 <adac> root@aio1_utility_container-06420a58:/# openstack user list
09:53 <adac> An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-bb2de048-9ec6-4361-9dda-0a9bee2e37d3)
09:53 <openstackgerrit> Merged openstack/openstack-ansible: FIX: provider_networks module for multiple vlans  https://review.openstack.org/252658
09:54 <adac> odyssey4me, ok, I see now. I misunderstood your statement, sorry
09:54 <openstackgerrit> Merged openstack/openstack-ansible: Updating AIO docs for Ansible playbook  https://review.openstack.org/244720
09:54 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-galera_client: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257782
09:57 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-rabbitmq_server: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257788
09:59 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-lxc_hosts: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257791
09:59 *** agireud has quit IRC
09:59 <odyssey4me> adac ok, you should check in the keystone container - the logs there will tell you the issue
10:00 <odyssey4me> it is likely a problem with galera - but check the keystone logs to confirm
10:00 <odyssey4me> as keystone does the auth, it's often good to check whether it's working before moving on
10:06 *** sdake has joined #openstack-ansible
10:06 *** Prithiv has quit IRC
10:09 *** agireud has joined #openstack-ansible
10:09 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible: FIX: provider_networks module for multiple vlans  https://review.openstack.org/257797
10:10 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible: FIX: provider_networks module for multiple vlans  https://review.openstack.org/257798
10:12 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible: Updating AIO docs for Ansible playbook  https://review.openstack.org/257799
10:12 *** Prithiv has joined #openstack-ansible
10:17 *** Bofu2U has joined #openstack-ansible
10:17 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible: Updating AIO docs for Ansible playbook  https://review.openstack.org/257805
10:18 *** manous has joined #openstack-ansible
10:20 *** miguelgrinberg has quit IRC
10:22 *** agireud has quit IRC
10:25 <adac> odyssey4me, yes, you are right. An error appears and it seems to be DB/galera related: https://gist.github.com/anonymous/3a52bb29e7a0d52037ed
10:27 <odyssey4me> adac if you attach to one of the galera nodes, can you access mysql?
10:28 <odyssey4me> adac you can try: ansible galera_container -m shell -a "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'"
10:28 <odyssey4me> (from http://docs.openstack.org/developer/openstack-ansible/install-guide/ops-galera-recoverymulti.html )
10:28 *** agireud has joined #openstack-ansible
10:30 <adac> odyssey4me, it shows the following: https://gist.github.com/anonymous/3922b9ff4a9dc2360551
10:31 <adac> ok, reading through the page you sent me, thanks!
10:32 *** openstackgerrit has quit IRC
10:32 <adac> be back in about 30 mins
10:33 *** openstackgerrit has joined #openstack-ansible
10:34 <odyssey4me> adac ah, so it seems that your cluster is not properly bootstrapped
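For reference, a healthy reply to the wsrep_cluster query above would report wsrep_cluster_status as Primary, with wsrep_cluster_size matching the number of galera nodes. A sketch that checks a sample reply (the sample values are illustrative, not adac's actual gist output):

```shell
# Illustrative: a healthy-looking wsrep_cluster_* result, plus a quick
# sanity check. A broken or un-bootstrapped cluster reports "non-Primary"
# (or the mysql query fails outright).
sample=$(mktemp)
cat > "$sample" <<'EOF'
wsrep_cluster_conf_id	3
wsrep_cluster_size	3
wsrep_cluster_state_uuid	00000000-0000-0000-0000-000000000000
wsrep_cluster_status	Primary
EOF

status=$(awk '/^wsrep_cluster_status/ {print $2}' "$sample")
size=$(awk '/^wsrep_cluster_size/ {print $2}' "$sample")
echo "cluster status=$status size=$size"
if [ "$status" = "Primary" ]; then
    echo "cluster looks healthy"
else
    echo "cluster needs recovery (see the galera recovery guide linked above)"
fi
```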
10:34 *** agireud has quit IRC
10:35 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-lxc_hosts: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257791
10:37 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-repo_server: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257773
10:38 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-rsyslog_server: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257761
10:41 <Bofu2U> during haproxy-install.yml running -- "[ALERT] 348/104050 (22733) : parsing [/etc/haproxy/conf.d/nova_console_novnc:19] : Unknown host in 'None:6080'" -- looks like it's on every single service, not just nova. :-/ What'd I miss?
10:41 <Bofu2U> (brb)
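A likely reading of the 'None:6080' alert: a load-balancer VIP variable was left unset, so the template rendered the Python value None as a literal hostname. A tiny, hypothetical reproduction of that rendering (the idea that it is a VIP variable is an assumption based on the config discussion later in the log):

```shell
# Hypothetical reproduction: format a haproxy backend address with an
# unset (None) VIP, as a Python-based template would when the variable
# is missing. 6080 is the nova novnc console port from the alert.
rendered=$(python3 -c "vip = None; print('%s:%s' % (vip, 6080))")
echo "haproxy would be told to resolve host: $rendered"
```

Hence the "Unknown host in 'None:6080'" on every service that uses the same missing variable.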
10:42 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-py_from_git: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257817
10:44 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-pip_install: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257818
10:48 <openstackgerrit> Jesse Pretorius proposed openstack/openstack-ansible-pip_lock_down: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/257822
10:48 *** agireud has joined #openstack-ansible
10:55 *** markvoelker has joined #openstack-ansible
11:00 *** markvoelker has quit IRC
11:08 <odyssey4me> Bofu2U you mentioned earlier that you have a user_group_vars file in place... what else is in there?
11:09 <odyssey4me> fyi we typically encourage all user vars to go into user_variables.yml
11:10 *** cemmason1 has quit IRC
11:11 *** admin0 has quit IRC
11:12 *** admin0 has joined #openstack-ansible
11:16 <Bofu2U> odyssey4me: user_variables.yml and openstack_user_config.yml
11:20 *** ig0r_ has quit IRC
11:25 <odyssey4me> Bofu2U yep - optionally conf.d files can also be used to augment openstack_user_config stuff
11:27 <Bofu2U> rgr
11:27 <Bofu2U> any idea what could cause the IP to not be set for that run?
11:28 <odyssey4me> Bofu2U if you check the haproxy host conf.d entries, do they look complete?
11:28 <odyssey4me> is the ansible play failing, or are those just syslog alerts of some sort?
11:30 <Bofu2U> ansible play is - 1 sec
11:30 <odyssey4me> Bofu2U so the group members are added to the configs, and if there are no group members (i.e. no containers in that group) then it'll leave a conf line out: https://github.com/openstack/openstack-ansible/blob/12.0.2/playbooks/vars/configs/haproxy_config.yml#L30
11:30 <odyssey4me> but it seems that you somehow have something else going on
11:30 <odyssey4me> Bofu2U what version of ansible are you using?
11:31 <Bofu2U> 1.9.4, chances are I just forgot to set something somewhere
11:32 <Bofu2U> best way to handle assigning would be via the group_binds, yes?
11:32 *** adac has quit IRC
11:33 <odyssey4me> hmm, hang on a minute
11:34 <Bofu2U> I have haproxy_hosts set in my openstack_user_config.yml
11:34 <Bofu2U> but not sure if I have the right association to galera_all via the others, etc.
11:38 <odyssey4me> andymccr ping?
11:39 <odyssey4me> Bofu2U yeah, although to my casual eye your config of infra_hosts should work, as that contains shared-infra_hosts, os-infra_hosts, etc... perhaps that's the issue
11:39 <odyssey4me> in the AIO we break it out a lot more https://github.com/openstack/openstack-ansible/blob/12.0.2/etc/openstack_deploy/openstack_user_config.yml.aio
11:39 <Bofu2U> I don't mind putting more in there
11:39 <Bofu2U> heh
11:39 <Bofu2U> ex: I don't have the group_binds on my br-storage either
11:39 <Bofu2U> like in that one
11:40 <odyssey4me> to be honest, this is the part of the config which confuses me... maybe as a test add the 'shared-infra_hosts' group
11:41 <Bofu2U> let's give it a try
11:41 <Bofu2U> 1 sec
11:41 <odyssey4me> yeah, that'll hit you later - anything that needs to be able to talk over the storage network should have a group bind there afaik
11:42 <odyssey4me> for this particular issue, which is on your container/management network, your group binds look fine to me
11:43 <odyssey4me> Bofu2U what you can also do is query your inventory with scripts/inventory-manage.py (use the -g and -G options to see host/group membership)
11:43 <Bofu2U> heh
11:44 <Bofu2U> so you'll like this
11:44 <Bofu2U> on -G, there's galera
11:44 <Bofu2U> and galera_container
11:44 <Bofu2U> no galera_all
11:45 <odyssey4me> ah, so I suspect that we need some tweaks to the env.d files to fix that up
11:45 <odyssey4me> but I wouldn't do that in your environment right now - an AIO would be a better place to experiment there
11:46 <Bofu2U> well, these are all clean-slate servers
11:46 <Bofu2U> so w/e
11:46 <Bofu2U> I don't mind getting this to work, then wiping them and doing a start-to-finish run with a working workflow
11:47 <odyssey4me> essentially I mean, rather adjust the openstack_user_config for now to get it right
11:47 <odyssey4me> that said, if you could register that as a bug it'd be grand
11:47 <Bofu2U> I'll make a list as I chug along :P
11:47 <odyssey4me> then you, or anyone else, can follow up on it later
11:47 *** adac has joined #openstack-ansible
11:47 <odyssey4me> usually best to create the bugs as you go - they may be finished by the time you're done ;)
11:47 <Bofu2U> touché
11:48 <Bofu2U> best place to report the bug? github or ...?
11:48 <Bofu2U> nvm, got the launchpad
11:48 <odyssey4me> yup :)
11:48 * odyssey4me points at IRC channel topic
11:49 <Bofu2U> Probably would be good to put that in the github readme too ;)
11:50 *** thegmanagain has joined #openstack-ansible
11:52 <odyssey4me> hmm, I thought I had - it is indirectly linked: dev docs link -> contributor guidelines
11:52 <Bofu2U> yeah
11:52 <Bofu2U> I'm doing a quick PR for it to be put in the readme
11:52 <Bofu2U> just trying to make it easier heh
11:53 <odyssey4me> patches are always welcome :)
11:54 <Bofu2U> done
11:55 *** mattoliverau has quit IRC
11:56 *** matt6434 has joined #openstack-ansible
11:58 <odyssey4me> Bofu2U hmm, did you submit a review via gerrit - or a PR via github?
11:58 <Bofu2U> PR via github for the readme, finishing the gerrit now
11:59 <thegmanagain> Hi folks. I'm trying to create my first instance on openstack via an instance running ansible 2.1 and can't get it to work
11:59 <odyssey4me> Bofu2U http://docs.openstack.org/infra/manual/developers.html takes you through how to contribute to an openstack project
11:59 <thegmanagain> Mirantis OpenStack 5.1.1
11:59 <odyssey4me> unfortunately github will just auto-reject your PR
11:59 <Bofu2U> oh good lord
11:59 <Bofu2U> that's a lot of work for 1 line in the readme heh
12:00 <odyssey4me> Bofu2U it is, but once you're in you can contribute more :)
12:00 <thegmanagain> I have a cloud.yaml file and get a correct response when I run "openstack --os-cloud my_cloud server list"
12:00 <odyssey4me> thegmanagain I'm confused - ansible 2.1 has not been released
12:01 <thegmanagain> ansible --version: ansible 2.1.0, config file = , configured module search path = Default w/o overrides
12:01 <odyssey4me> thegmanagain also, this is a channel for developers and users of https://github.com/openstack/openstack-ansible - while we could try to assist you, you may be better off asking in #openstack or in #ansible
12:01 <Bofu2U> odyssey4me: https://bugs.launchpad.net/openstack-ansible/+bug/1526292 done
12:01 <openstack> Launchpad bug 1526292 in openstack-ansible "infra_hosts definition doesn't set galera_all, fails on haproxy_install.yml" [Undecided,New]
12:01 <thegmanagain> git clone git://github.com/ansible/ansible.git
12:02 <thegmanagain> Ok, thanks
12:02 <odyssey4me> afk for a bit
12:02 <odyssey4me> Bofu2U thanks :)
12:02 <Bofu2U> ofc
12:07 *** thegmanagain has left #openstack-ansible
12:12 <evrardjp> cloudnull, pong
12:13 <evrardjp> and hello everyone
12:13 <odyssey4me> o/ evrardjp
12:13 *** fawadkhaliq has quit IRC
12:14 *** cemmason1 has joined #openstack-ansible
12:14 <evrardjp> how is it going?
12:15 <evrardjp> there was someone interested in designate, should I help?
12:16 <evrardjp> also, could this merge? https://review.openstack.org/#/c/249227/
12:16 <evrardjp> this way I can focus on the implementation
12:17 <evrardjp> thanks odyssey4me for having commented/validated already :)
12:19 <odyssey4me> evrardjp yeah, it looks like swati is offline
12:19 <odyssey4me> someone else was interested in helping too, perhaps you could try to make contact with them to collaborate on the work?
12:20 <evrardjp> I'm on holiday, so it's not mandatory... just wanted to make sure everything was alright
12:21 *** fawadkhaliq has joined #openstack-ansible
12:23 *** fawadkhaliq has quit IRC
12:23 *** fawadkhaliq has joined #openstack-ansible
12:27 <Bofu2U> ha, got to setup-infra and it fails on the first run. :)
12:28 <Bofu2U> File "/usr/local/lib/python2.7/dist-packages/ansible/runner/connection_plugins/ssh.py", line 44, in __init__
12:28 <Bofu2U>     self.ipv6 = ':' in self.host
12:28 <Bofu2U> TypeError: argument of type 'NoneType' is not iterable
12:31 <odyssey4me> Bofu2U oh dear, I'm a bit confused about why it's trying to select the ipv6 address
12:31 <Bofu2U> heh
12:31 <Bofu2U> looks like it's a dupe of https://bugs.launchpad.net/openstack-ansible/+bug/1477175
12:31 <openstack> Launchpad bug 1477175 in openstack-ansible "Setup-infrastructure error with "NoneType" during Memcached install" [Undecided,Expired]
12:33 <Bofu2U> Confirmed what was said there, though - in the inventory the ansible_ssh_host is null
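Bofu2U's null ansible_ssh_host finding can be spotted quickly by scanning the generated inventory JSON for affected hosts. A sketch against made-up sample data (the hostnames below are invented; on a real deployment you would point this at the inventory file under /etc/openstack_deploy): a None host is exactly what makes ansible's ssh plugin fail on `':' in self.host`.

```shell
# Write a small sample inventory (illustrative), then list every host
# whose ansible_ssh_host is null.
inv=$(mktemp)
cat > "$inv" <<'EOF'
{"_meta": {"hostvars": {
  "aio1_utility_container-06420a58": {"ansible_ssh_host": "10.102.0.21"},
  "aio1_memcached_container-deadbeef": {"ansible_ssh_host": null}
}}}
EOF

bad=$(python3 - "$inv" <<'EOF'
import json, sys

# JSON null loads as Python None; report any host missing its address.
hostvars = json.load(open(sys.argv[1]))["_meta"]["hostvars"]
for host, hvars in sorted(hostvars.items()):
    if hvars.get("ansible_ssh_host") is None:
        print(host)
EOF
)
echo "hosts with null ansible_ssh_host: $bad"
```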
12:34 *** admin0 has quit IRC
12:35 <odyssey4me> hmm, but your ip_from_q is right - unless your inventory has evolved from a bad config before?
12:35 <Bofu2U> no, it should be fine
12:36 <Bofu2U> hm.
12:36 <Bofu2U> running ifconfig from within one of the containers shows 10.0.3.183 -- shouldn't it be on one of the CIDRs I set?
12:37 <odyssey4me> Bofu2U so there will be a default eth0 address which lxc assigns
12:37 <odyssey4me> the addresses from the networks you define should be extra
12:37 <Bofu2U> ah ok
12:37 <odyssey4me> if the values aren't in your inventory, then they wouldn't be set though
12:38 <odyssey4me> so use the inventory-manage script to check things, or open the inventory json file and verify
12:38 <Bofu2U> http://pastebin.com/zjZfyFVR
12:38 <Bofu2U> that's what I'm worried about - the ansible_ssh_host
12:39 <Bofu2U> container_address is also null
12:39 <odyssey4me> yeah, if that's busted then the deployment of those containers will be broken
12:39 <Bofu2U> hm. What segment of this should I be trying to troubleshoot to figure out why those aren't being set?
12:40 <odyssey4me> I can't see why it's busted from your config though
12:40 <odyssey4me> this is directly from the inventory
12:40 <Bofu2U> yeah
12:40 <odyssey4me> so the best in this case is probably to blow away the containers, then remove them from your inventory, then regenerate the inventory
12:40 <Bofu2U> k
12:40 <Bofu2U> any chance there's a playbook that does that? ;)
12:41 <odyssey4me> openstack-ansible lxc-containers-destroy.yml
12:41 <odyssey4me> that'll blow away the containers
12:41 *** markvoelker has joined #openstack-ansible
12:42 <odyssey4me> I would guess that nothing else is sacred, so once that's done you can probably blow away the inventory.json and fact cache
12:42 <Bofu2U> rgr
12:42 <Bofu2U> on it now
12:42 <Bofu2U> then run setup-hosts again, I'm assuming
12:43 <odyssey4me> you can then just execute: python playbooks/inventory/dynamic_inventory.py
12:43 <odyssey4me> that'll output the inventory so you can inspect it before going back down the rabbit hole
12:43 <odyssey4me> but yes, your playbooks would need to be executed from the start again
12:45 *** apuimedo has quit IRC
12:46 *** markvoelker has quit IRC
12:46 *** oneswig has joined #openstack-ansible
12:46 *** apuimedo has joined #openstack-ansible
12:47 <Bofu2U> hm
12:48 <Bofu2U> even the python dynamic inventory is showing a lot of null ansible_ssh_host values
12:48 <odyssey4me> in that case it's definitely a config issue
12:52 *** admin0 has joined #openstack-ansible
12:54 *** cemmason1 has quit IRC
12:55 *** oneswig has quit IRC
12:56 *** woodard has joined #openstack-ansible
12:56 *** woodard has quit IRC
12:57 *** woodard has joined #openstack-ansible
13:03 <Bofu2U> what about the is_metal flag?
13:03 <Bofu2U> https://github.com/openstack/openstack-ansible/blob/e51ceaa127c2639d39a798c6dc9ee41fa3635d24/playbooks/inventory/dynamic_inventory.py#L157
13:03 *** fawadkhaliq has quit IRC
13:04 <evrardjp> hello Bofu2U, I just saw your previous paste
13:04 *** markvoelker has joined #openstack-ansible
13:05 <evrardjp> are you sure you've correctly set up the networks?
13:05 <Bofu2U> As far as I know :P
13:05 <evrardjp> could you drop me your config somewhere? (just obfuscate the IPs if some are public...)
13:05 <Bofu2U> all private, no worries - 1 sec
13:06 <Bofu2U> http://pastebin.com/nzmZJsae
13:07 <odyssey4me> Bofu2U the is_metal flag defaults to false - it's there to allow you to have stuff deploy on the hardware instead of in a container
13:08 <odyssey4me> Bofu2U is_metal is set to true for cinder-volume: https://github.com/openstack/openstack-ansible/blob/12.0.2/etc/openstack_deploy/env.d/cinder.yml#L62
13:08 <odyssey4me> also nova-compute: https://github.com/openstack/openstack-ansible/blob/12.0.2/etc/openstack_deploy/env.d/nova.yml#L75
13:09 <Bofu2U> ahhhh got ya
13:09 <Bofu2U> makes sense
13:09 <Bofu2U> evrardjp let me know if you need anything else and I can grab it
13:09 <odyssey4me> and swift {object,container,storage} https://github.com/openstack/openstack-ansible/blob/12.0.2/etc/openstack_deploy/env.d/swift.yml#L55
13:09 <odyssey4me> but that flag in those files allows you to change things up if you want to
13:10 <odyssey4me> for example, if you're only using ceph/nfs for cinder then you may as well have cinder-volume in a container on the infra hosts
13:11 <Bofu2U> got ya, got ya
13:11 <Bofu2U> yeah, that makes sense
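The env.d mechanism odyssey4me links works roughly like this: a component's container_skel entry carries an is_metal property, and when it is true the dynamic inventory targets the host itself instead of an LXC container. The fragment below is an illustrative sketch modeled on the linked cinder.yml, not a verbatim copy; check the real file for the exact layout:

```shell
# Print an env.d-style fragment (cf. etc/openstack_deploy/env.d/cinder.yml).
frag=$(cat <<'EOF'
container_skel:
  cinder_volumes_container:
    properties:
      is_metal: true   # deploy cinder-volume on the host, not in a container
EOF
)
echo "$frag"
```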
13:17 <evrardjp> Bofu2U, provider_networks are part of the global_overrides
13:17 <Bofu2U> what
13:17 <Bofu2U> son of a
13:17 <evrardjp> also, you can enclose your used_ips entries in quotes
13:18 *** prometheanfire has quit IRC
13:18 <evrardjp> like - "10.102.0.11,10.102.0.13"
13:18 <evrardjp> so the p of your provider_networks should be at the same level as the t of tunnel_bridge
13:19 <evrardjp> and everything beneath it should follow
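evrardjp's indentation point, sketched as a skeleton: provider_networks must be nested inside global_overrides, at the same depth as keys like tunnel_bridge. All addresses, bridge names, and queue names below are made-up examples, not Bofu2U's real config:

```shell
# Print an illustrative openstack_user_config.yml skeleton showing where
# provider_networks belongs.
skeleton=$(cat <<'EOF'
global_overrides:
  internal_lb_vip_address: "10.102.0.10"
  tunnel_bridge: "br-vxlan"
  management_bridge: "br-mgmt"
  provider_networks:            # same indent level as tunnel_bridge
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
EOF
)
echo "$skeleton"
```

With provider_networks at the top level instead (as in Bofu2U's paste), the inventory never gets container addresses, which matches the null ansible_ssh_host values seen earlier.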
*** prometheanfire has joined #openstack-ansible13:19
evrardjpalso I wouldn't use infra_hosts as you already have the rest, but that's another topic13:20
evrardjp(you have shared-infra, repo-infra os-infra ...)13:20
Bofu2Uyeah, I just did that because of the crap I ran into with galera earlier13:20
evrardjpit's normally not needed13:21
evrardjplet's check if you inventory is fine now :)13:21
Bofu2Uthe dynamic inventory shows what I'm assuming will be the right IP's13:21
Bofu2UI"m running setup-hosts again and will see how that ends up.13:22
Bofu2U:)13:22
evrardjpcould you just redrop the config somewhere, to be sure?13:26
Bofu2Uyeah 1min - it's literally that exact one but with the provider & children indented to match13:30
Bofu2Uand running the dynamic_inventory file did show container ips13:30
evrardjpyou should check with the inventory manage -l13:32
evrardjpthere should be no blanks and all the components you're looking for13:32
evrardjp(it's easier on the eyes than pure json)13:33
odyssey4meBofu2U it's plausible that our splitting of the groups earlier was due to some bustedness in the config and wasn't necessary13:38
Bofu2Upossibly13:38
Bofu2Ulooks like the containers failed on "wait for ssh to be available"13:39
Bofu2UI guess now that the network config is technically working it means it's setup wrong ;)13:42
*** shausy has quit IRC13:47
*** tlian has joined #openstack-ansible13:48
Bofu2Uisn't the lxc-net-bridge supposed to have br-mgmt as a bridge_port?13:50
evrardjpnope13:52
evrardjplxc bridge is lxcbr013:52
evrardjpit's natted by default IIRC13:53
Bofu2Ugot ya13:53
Bofu2Uyeah upon running info on a container it has the standard 10.0.3.X IP and then an IP on the correct range after13:53
evrardjpit's not necessarily used, depends on your config13:53
Bofu2Ulocal master host can see those IP's of all containers on it13:53
Bofu2Ubut others can't13:53
Bofu2Uso something with the routing13:54
evrardjpwithout config/inventory.json I'll have a hard time helping you :p13:54
Bofu2U1 sec :P13:54
mhaydeni know it's a little OT, but does anyone have an opinion on zabbix?13:54
evrardjpwe have it mhayden13:54
Bofu2Umhayden: compared to something or just in general?13:54
evrardjpin production13:55
mhaydenBofu2U: in general13:55
evrardjpin general I don't like it13:55
mhaydenevrardjp: ah, okay -- the community version or paid?13:55
mhaydenhaha13:55
* mhayden has not yet found a monitoring product he loves13:55
evrardjpbut I'm not really the ops guy, so I didn't get to decide13:55
evrardjpmonitoring or event management/statistics aggregation/...?13:55
evrardjpwe are using the community, because we don't really see a point of the commercial right now13:56
Bofu2Uafter I grab this file for evrardjp ill rant for a min about my experiences with it :P13:56
cloudnullMorning13:56
evrardjpbut my colleague was at the latest zabbix conference if you want to talk with him13:56
Bofu2Uevrardjp: http://pastebin.com/QiQq3Cf713:56
evrardjphe could give you insights of the future of zabbix13:57
evrardjpo/ cloudnull13:57
Bofu2Umhayden: When I set it up it's nice for certain things and others just seemed like a pain to deal with13:57
Bofu2Uexample - I pretty much just use it for alert monitoring on CPU load, temperature13:57
evrardjpmhayden, we didn't find something that wasn't workaroundable13:57
evrardjp(with zabbix)13:57
Bofu2U^ is pretty much my experience with it13:57
Bofu2UIt may not work the first time, but with a few tweaks and possibly some research you'll have it running13:58
Bofu2Uand only specific subsets. The community has good jumping off points to get you started13:58
Bofu2Uex: it monitored my juniper SRX just fine, but the EX for some reason had some problems13:58
evrardjpyeah, but it could do far more complex scenarios for data gathering/aggregation/reporting13:58
Bofu2Uchanged a few ID's and it was completely fine after that13:58
cloudnullo/ evrardjp I wanted to ping you about haproxy. I'm doing the irr work and didn't want to touch our haproxy role if we have a better one we can move to in the nearish future.13:59
Bofu2Uand yeah, the rolling averages are nice.13:59
Bofu2UI use a combo of observium and zabbix tbh mhayden13:59
evrardjpcloudnull, I didn't get the chance to work on it. Like I said in the past, the haproxy role I have is working14:00
*** fawadkhaliq has joined #openstack-ansible14:00
evrardjpwe just need to add the convenience tool that is linked to the inventory (the paste you sent me)14:00
cloudnullOK. So we should be able to move to that role without out much fuss.14:01
evrardjpmmm, not that easily, because it's not really backwards compatible :p14:01
evrardjpsome wiring should be done14:01
evrardjpand docs14:01
cloudnullOk.14:01
evrardjpI can work on this14:01
cloudnullWell I'll leave your haproxy role in tree for now and circle back on it a bit later.14:02
cloudnullI don't want to pull it into its own repo iwe can get to the better role.14:02
cloudnull*if we14:02
cloudnullBut no worries or pressure. It's not a blocker.14:03
cloudnullI just wanted to ping you.14:03
cloudnullBecause I knew you had some bits in-flight.14:03
mhaydenBofu2U: thanks14:03
evrardjpcloudnull, yeah sure, no problem :p14:03
Bofu2Umhayden no problem14:03
evrardjpI'd rather that way :p14:03
Bofu2Uif your intention is to just have something to check overall health and watch it14:03
Bofu2Uobservium is nice14:03
Bofu2Uif you need triggers, alerts, thresholds and all of that14:04
Bofu2Uzabbix, etc.14:04
odyssey4mecloudnull I'm doing a tweak to the ldap/domains config for the keystone role, but I'm a bit stuck on something14:04
evrardjpmhayden, new relic is kinda nice :p14:04
cloudnullOdyssey4me What's going on?14:04
cloudnull+1 for newrelic ;)14:04
evrardjpmhayden, else at home, I'm using collectd for collection, influxdb for graphing, and I'll check on the new thingy that influxdb team as released for monitoring, but I don't really care about that :p14:05
odyssey4mecloudnull given http://pastebin.com/5SdDGdmF I'd like to use with_items to iterate over each list item, but I need the value of the item: eg list_item114:05
odyssey4meperhaps I should structure it slightly differently, I'm open to options14:05
evrardjpwith_list?14:05
odyssey4meof course I could also use key: value pairs all the way down14:05
odyssey4mebut I'm trying to keep it less verbose14:05
evrardjpif you could reorganise, it would be far more elegant14:06
cloudnullAdding multi domain support ?14:06
mhaydenevrardjp: ah, influx is on my list of "things to look at sometime later when i get that free time"14:06
odyssey4meevrardjp I need both the 'key' (ie list_item1) and the 'value' (ie the full dict of dict1)14:06
odyssey4mecloudnull yep - busy working on the ldap gate, and need this to make things sane14:06
odyssey4meusing LDAP for the default domain is dumb14:07
evrardjpodyssey4me, no issue14:07
*** sdake_ has joined #openstack-ansible14:07
cloudnull+114:07
*** sdake has quit IRC14:07
evrardjp+1 too :p14:07
cloudnullI think with_dict is going to be the best way14:07
evrardjpodyssey4me, I'd move dict1 dict2 UNDER list_items14:07
cloudnullAnd using a kvs is how to best achieve it.14:07
odyssey4mewe currently only provide the ability to implement it for the default domain, which sucks bad14:07
cloudnullI think we keep the mechanism to drop domain specific config but name the specific domains according to some value in the main dict.14:09
odyssey4meevrardjp cloudnull ie http://pastebin.com/92kb11VJ ?14:09
evrardjphttp://docs.ansible.com/ansible/playbooks_loops.html#looping-over-subelements14:09
odyssey4mecloudnull yep, that's what I'm doing14:09
Bofu2Uevrardjp: the IP's in the inventory for ansible_ssh_host and container_address are supposed to be one and the same, correct?14:10
cloudnullThe trick will be to keep it backwards compatible or. Create some integration script for upgrading the data structure change.14:10
odyssey4meok, let me show you a real config14:10
odyssey4mehttp://pastebin.com/jP1BvBda14:10
evrardjpodyssey4me, do you need the item in the list? or could it be just a dict?14:10
odyssey4me'Users' is the name of the domain14:10
odyssey4methere will be zero or more domains14:10
odyssey4meeach domain must have one dict (and only one) under it14:11
evrardjpI'd remove "- " before Users14:11
cloudnull^14:11
odyssey4meok, then how do I loop over it?14:11
odyssey4mewith_dict?14:11
cloudnullAnd add a key for name or similar.14:11
evrardjpodyssey4me, check the link with subelements14:12
evrardjpit will help you14:12
evrardjpBofu2U, it depends14:12
odyssey4meevrardjp I know that doc entry, every time I read it my innd turns to mush14:12
odyssey4me*mind14:12
cloudnullMaybe even remove ldap and simply call it options. Which should encompasses all options available in the keystone config.14:12
odyssey4mecloudnull 'ldap' is important - it's used for the section14:13
odyssey4mefor another domain it could be 'sql'14:13
cloudnullRight, but that could simply be a key14:13
odyssey4meyeah, I was trying to get away from making it all key: value pairs... but perhaps it's better not to14:14
cloudnulloptions: {ldap:..., driver:..}14:14
cloudnullIdk what's best tbh14:14
evrardjpodyssey4me, mixing items and dicts with ansible is kinda a pain sometimes, but it's doable14:15
cloudnullJust pondering14:15
evrardjpjust think what's best for you :)14:15
odyssey4meok, lemme try with_dict... it seems to be giving me what I want14:16
odyssey4mewhere I had it wrong was that I was listing the dicts14:16
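[editor's note: the with_dict shape that finally worked might look like the sketch below; the variable name keystone_domains and the domain contents are hypothetical, not the actual role variables from the pastebins]

```yaml
# Hypothetical layout: each key is a domain name ("Users"), each value is
# the single config dict beneath it -- no "- " list items, as suggested.
keystone_domains:
  Users:
    ldap:
      url: "ldap://ldap.example.com"
      suffix: "dc=example,dc=com"

# with_dict then exposes both pieces odyssey4me needed:
#   item.key   -> the domain name, e.g. "Users"
#   item.value -> the full dict under it, e.g. {"ldap": {...}}
- name: Show each domain and its config
  debug:
    msg: "domain={{ item.key }} config={{ item.value }}"
  with_dict: "{{ keystone_domains }}"
```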
evrardjpBofu2U IIRC, the container addresses can hold a storage address, a management address, etc.14:16
odyssey4methanks :) I'll give you feedback shortly14:16
Bofu2Uyeah14:16
evrardjpBofu2U, one of it should be the ssh address14:16
evrardjpwhich is, in general the mgmt one14:16
Bofu2Ucorrect14:16
Bofu2Uyeah14:16
Bofu2Uand that's all coming back as correct14:17
evrardjpok14:17
evrardjpcould you do a ansible -m ping all ?14:17
evrardjpjust to make sure you can connect to all your containers from the deploy node14:17
Bofu2UI can't, that's the problem14:17
evrardjpyou need to be in the appropriate folder14:17
evrardjpthen you have ssh issues14:18
Bofu2Uthe containers can't be connected to outside of the machine it's on14:18
Bofu2Uit's also the same network that the machines are currently on, connected to, and talking over14:18
evrardjpis the management network reachable from the deploy nodes?14:18
Bofu2Uso my guess is something that has to do with the route14:18
Bofu2Uyep14:18
Bofu2Uthat's what I'm deploying with through ansible14:19
evrardjpoh you could do fancy stuff with ansible :)14:19
Bofu2Uhehe, not me at this point ;)14:19
Bofu2Uthe IP's on 10.104.0.X are the bare metal nodes14:19
Bofu2Umanagement network, where the containers are *supposed to be* binding to as well14:19
evrardjpwait14:20
evrardjpI'm interested about the wiring you've done14:20
evrardjpon your nodes14:20
Bofu2Ubonded NICs, 4 VLANs14:20
*** targon has joined #openstack-ansible14:20
odyssey4mecloudnull evrardjp heh, that totally worked :) patch incoming14:21
Bofu2Ubond0.101-bond0.10414:21
evrardjpok14:21
evrardjpouch14:21
Bofu2UI can do whatever I want tbh14:21
Bofu2UThat's how I had it setup for Fuel14:21
evrardjpforget Fuel :p14:21
evrardjpI dropped it myself :p14:21
Bofu2UThat's what I'm trying to do ;)14:21
Bofu2UHence I'm here lol14:22
evrardjp:)14:22
Bofu2Uthis is all physical hardware 10 feet from me14:22
Bofu2Uincluding the switches, routers14:22
Bofu2Uso I can change literally anything14:22
evrardjpquestion14:22
evrardjpBofu2U, I'm concerned about the host using the same network as inside the cloud14:23
Bofu2UNot a problem, I can change it14:24
evrardjpyou're bridging the vlan interfaces, right?14:24
evrardjpor you're bridging the NICs?14:24
Bofu2Uyes14:24
Bofu2Uthe NICs14:24
Bofu2Ueth0/eth1 into bond014:24
Bofu2Uthen bond0.102 is bridged into br-mgmt14:24
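[editor's note: the wiring described above roughly corresponds to an /etc/network/interfaces fragment like the one below; addresses, VLAN IDs and the bonding mode are illustrative, adapt them per host. Keeping the mtu consistent on nic, bond, vlan and bridge avoids the 1500-vs-9000 mismatch raised later in the conversation]

```
# Sketch only: two NICs bonded, VLAN 102 tagged on the bond, bridged to br-mgmt.
auto eth0
iface eth0 inet manual
    bond-master bond0
    mtu 9000
# eth1 is configured analogously

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode active-backup
    mtu 9000

auto bond0.102
iface bond0.102 inet manual
    vlan-raw-device bond0
    mtu 9000

auto br-mgmt
iface br-mgmt inet static
    bridge_ports bond0.102
    address 10.102.0.11
    netmask 255.255.255.0
    mtu 9000
```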
evrardjpcould you drop your /etc/network/interfaces somewhere?14:25
Bofu2Uyeah 1 sec14:26
Bofu2UI'll grab it from controller114:26
odyssey4meafk for a bit14:26
Bofu2Uhttp://pastebin.com/3EHytSwz14:27
Bofu2Ujust noticed the duplication at the bottom as well :| sigh14:27
*** apuimedo has quit IRC14:29
evrardjpFYI at some point you'll really want a larger mty14:30
evrardjpmtu14:30
Bofu2Umore than 9k?14:30
evrardjpyour bond has 150014:30
evrardjpthe nics have 150014:30
Bofu2Uer yeah14:30
Bofu2Uvlans have 9k14:31
evrardjpwhy not set the links to 9k too then?14:31
Bofu2UI can, must have been reset by the provisioner14:31
Bofu2U:(14:31
*** apuimedo has joined #openstack-ansible14:31
evrardjpit's just to avoid you weird issues afterwards :)14:32
Bofu2Uof course14:32
evrardjpyour provisioner is doing weird stuff14:32
Bofu2U... couldn't I do that with ansible to run on all of the nodes? lol14:32
evrardjpI did, but it's tricky and not part of openstack-ansible :)14:32
Bofu2Utouche14:32
evrardjptricky because you can really remove the branch you're standing on :p14:32
Bofu2Uyeah14:33
Bofu2Uall good14:33
evrardjpanyway, I wouldn't mix configuration of bonding modes too14:33
evrardjpI wouldn't set the hwaddress for the bond, especially in balance-xor mode!14:33
evrardjpI'll reply to your paste :)14:34
evrardjpit's easier14:34
Bofu2Uappreciated :)14:34
openstackgerritMajor Hayden proposed openstack/openstack-ansible: [WIP] Testing parallel playbooks  https://review.openstack.org/25370614:34
evrardjpBofu2U, all 4 nics in one link aggregation? not 2?14:38
Bofu2Ucorrect14:38
Bofu2Umy compute nodes only have 2 NICs14:38
Bofu2Uwanted to unify with just bond014:38
*** javeriak has quit IRC14:38
evrardjpok14:38
Bofu2Usidenote did you want me to gist that instead14:39
Bofu2Uso you can literally reply to it?14:39
*** fawadkhaliq has quit IRC14:39
evrardjpI'll do something generic with active-backup and you'll adapt afterwards when you feel more confident14:39
Bofu2Uyeah that's fine14:39
Bofu2UI know I'll saturate 2-4Gbps so I wanted to make sure it had as much available as possible14:40
Bofu2Uheh14:40
*** dslevin_ has quit IRC14:42
*** dslevin has quit IRC14:42
evrardjpBofu2U, could you tell me which vlan is for what?14:44
*** adac has quit IRC14:45
evrardjpit's in your sourced file I guess14:45
Bofu2U101 tunnel14:45
Bofu2U102 container14:45
Bofu2U103 storage14:45
Bofu2U104 public14:45
evrardjpk14:45
*** apuimedo has quit IRC14:45
evrardjpBofu2U, you need vlan tenant isolation for your customers or just vxlan?14:47
Bofu2Ueither/or14:47
*** apuimedo has joined #openstack-ansible14:47
Bofu2Uvxlan is fine14:47
Sam-I-Amyou cant do vlan here14:48
Sam-I-Ambecause vlan tags are already used on the sub-ints14:48
evrardjpthat was my concern Sam-I-Am  :)14:49
*** adac has joined #openstack-ansible14:50
evrardjphe could, but it needs to be done carefully14:50
Bofu2UI'll do whatever way makes my and your life easier :P14:50
Sam-I-Amthere's no q-in-q support14:50
evrardjpBofu2U, management network is untagged on the host?14:51
*** KLevenstein has joined #openstack-ansible14:51
Bofu2Ucorrect14:52
*** apuimedo has quit IRC14:52
evrardjp"do you have something untagged on the host" would be more correct14:52
Bofu2U10.20.0.x14:52
evrardjpI'll call that "host network" here14:52
*** apuimedo has joined #openstack-ansible14:52
cloudnullif any cores are around can we please bang thesse through to help out our CI bretheren https://review.openstack.org/#/q/status:open+branch:master+topic:lint-jobs,n,z14:55
cloudnulland https://review.openstack.org/#/c/256016/14:56
matttcloudnull: looking at the CI-related ones14:56
mattt(if anyone wants to peep the last review)14:57
evrardjpBofu2U, ok here is what I drafted for you14:59
evrardjphttp://pastebin.com/8wwyR2g114:59
cloudnullyes the last one is the initial gate for the galera_server role https://review.openstack.org/#/c/256016/ which relates to this change in the CI systems https://review.openstack.org/#/c/25775514:59
Bofu2Uevrardjp that looks perfect15:01
evrardjpBofu2U, ok mtu was missing in some places, but this should get you basic networking that should work15:01
cloudnulltyvm btw mattt15:01
evrardjpcopy that to all your hosts, edit appropriately, pray and ifup/down15:01
Bofu2Utrying the first one now15:01
Bofu2Ufingers crossed15:01
evrardjpifdown/ifup/reboot because ifdown/ifup will fail15:01
evrardjpas usual :p15:02
cloudnullhahaha15:02
evrardjpthen connect on the node using your untagged interface (10.20.0.x)15:02
* Bofu2U fingers crossed15:05
*** egonzalez has quit IRC15:05
*** dslevin has joined #openstack-ansible15:06
*** apuimedo has quit IRC15:06
*** Mudpuppy has joined #openstack-ansible15:07
*** apuimedo has joined #openstack-ansible15:07
Bofu2Upinging but no SSH, going to step away for a bit before I go insane :P15:10
evrardjpok15:11
cloudnullBofu2U: was that the vm instance ping'ing but not ssh ?15:11
Bofu2Ubare metal15:11
Bofu2Uit's back up now, just took a bit15:12
cloudnullsorry lost some scroll back15:12
Bofu2Uno worries15:12
cloudnullkk15:12
Bofu2Uit's back up and working fully though15:12
matttodyssey4me: you there ?15:12
Bofu2Uso that's always good lol15:12
evrardjpyeah let him a few minutes to start everything15:12
evrardjpit*15:12
cloudnullmattt:  i think he's afk a bit15:12
matttcloudnull: ah ok15:12
evrardjpthe containers will definitely take a while to boot Bofu2U15:12
matttcloudnull: hey, regarding these reviews, https://review.openstack.org/#/c/257773/2/tox.ini for example15:12
evrardjpyou can check with lxc command line15:13
cloudnullmattt:  yes ?15:13
matttcloudnull: i'm not familiar w/ this tox testing stuff, does this assume you're not running these tests on your local workstation ?15:13
*** sigmavirus24_awa is now known as sigmavirus2415:14
matttcloudnull: i'm just wondering what the workflow is for a developer who wants to maybe do some linting locally15:14
cloudnullno. they work on a local workstation, however if you ran the functional part it would pollute some things15:14
evrardjptox is basically a pip aware job runner, you can run these on your workstation15:14
*** apuimedo has quit IRC15:14
matttcloudnull: yeah, so i was wondering if the ansible-functional should be removed from envlist ?15:14
matttbecause that is quite dangerous, no?15:14
*** oneswig has joined #openstack-ansible15:15
*** apuimedo has joined #openstack-ansible15:15
cloudnulli guess it could be . however i think infra wanted a single combined job for all ansible tests15:15
evrardjpyup it could15:15
evrardjpsorry for commenting :p15:15
* cloudnull goes to read the infra convo from yesterday15:16
cloudnullevrardjp:  never be sorry15:16
matttevrardjp: sorry not sorry?  :)15:16
evrardjpdammit!15:16
evrardjpit proves there is room for improvement :)15:17
matttcloudnull: based on the commit message in the review it sounds like pep8/bashate are to be merged15:17
matttnot merge every job into a single run?15:18
mattts/job/test/ ?15:18
cloudnullpossibly. it seems odyssey4me based all of the commits on https://review.openstack.org/#/c/257536/4 which came from AJaeger15:18
mattti mentioned to odyssey4me a few weeks back, i guess i wanted to just express some concern that we don't run the functional test out of the box15:19
cloudnullbased on http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2015-12-14.log.html#t2015-12-14T18:37:2915:19
matttie. i don't know if it's standard practice to just blindly run 'tox' while dev'ing on a repo15:19
*** oneswig has quit IRC15:19
cloudnullIm actually good with saying that we leave the functional testing off of the default tox test15:20
matttcloudnull: i'll check back w/ odyssey4me once he's online, but otherwise i'm assuming these changes are all fine :)15:20
cloudnullit makes sense to me that you wouldnt want ansible running things on your local station without instructing it to do so15:20
cloudnullbut idk tbh .15:21
matttcloudnull: cool, well thanks anyway :)15:21
mattt(but i do generally agree w/ your sentiment)15:21
cloudnullsorry . im useless.15:21
evrardjpwho runs tox on their host for openstack stuff, even in a venv?15:21
matttcloudnull: bah!15:21
matttwhat is it with everyone all apologetic today :)15:21
cloudnullits tuesday15:21
Bofu2Uim sorry I don't have anything to apologize for yet mattt15:21
Bofu2Ubut I'll work on it.15:21
cloudnullhahahahaa15:21
mattt:P15:22
evrardjp:)15:22
*** mattronix_ has joined #openstack-ansible15:23
bgmccollumBofu2U: i see what you did there15:24
bgmccollumi mean...im sorry to say...i see what you did there...15:24
*** baker has joined #openstack-ansible15:24
Bofu2UI'm sorry I didn't make it more obvious for you. It's another thing I'll work on, I promise.15:25
Bofu2U<315:25
cloudnulllook at all the helpers. it brings a tear to my eye...15:25
*** mattronix has quit IRC15:25
Bofu2Uyou know what brings a tear to my eye? the amount of red coming from my ansible log over the last 5 hours15:26
Bofu2U:|15:26
bgmccollumBofu2U: turn off colors...problem solved15:26
Bofu2Uthe servers hate me ;(15:26
cloudnullnew OpenStack bitterness level unlocked15:26
Bofu2UHEY! you with your solutions!15:26
Bofu2Uquiet!15:26
cloudnullhahahaha15:26
Bofu2Ugoing to start redirecting to /dev/null15:26
Bofu2Uschrodingers ansible15:27
bgmccollumnothing to see here15:27
cloudnullI seriously LOLd for a moment there.15:27
Bofu2Unow I have to apologize for pulling you away from your standard emotional state.15:27
Bofu2UI apologize.15:27
Bofu2Uok im done now.15:27
*** mattronix_ has quit IRC15:28
cloudnullBofu2U: im totally late to the party but what is making you see red ?15:28
Bofu2U...sorry15:28
*** mattronix has joined #openstack-ansible15:28
Bofu2Uuh - literally almost anything you can think of so far15:28
evrardjpcan't ping his hosts :p15:28
Bofu2Ubeen working my way through the guide/tutorial15:28
cloudnullnot that you want to say all the things again .15:28
* cloudnull goes to see if the logs are online already15:29
Bofu2Uand since I'm not using AIO there's a lot of subtle changes throughout the process that make me want to place said head on said wall rapidly.15:29
evrardjpBofu2U, oh you using AIO?15:29
evrardjpdidn't know that!15:29
Bofu2Uno, not using AIO15:29
evrardjpmy bad15:29
Bofu2Uhehe15:29
Bofu2Uusing my 11 physical machines sitting in the server closet15:30
evrardjpyeah :)15:30
Bofu2Uscreaming as I provision them over, and over, and over15:30
evrardjpwhy do you reprovision them?15:30
Bofu2Ubecause at some point I give up trying to fix and just want to start fresh15:30
evrardjpseems a right approach, it should be fast that way15:30
Bofu2Uyeah. MAAS makes it a little less hectic.15:31
Bofu2U(except for the interfaces file that you just helped me replace)15:31
evrardjpnever used ubuntu's MAAS15:32
evrardjpis it nice for partitioning disks?15:32
Bofu2Uwhen it does it, yes15:32
Bofu2Ulol15:32
evrardjp:p15:32
Bofu2UI've had a problem with sticky MBRs15:32
adacMaybe here http://docs.openstack.org/developer/openstack-ansible/install-guide/ops-galera-start.html in the 1. step in "This command results in a cluster containing a single node. The wsrep_cluster_size value shows the number of nodes in the cluster."  there should be mentioned with which command one can check the example output in the box, which would be: ansible galera_container -m shell -a "mysql \15:32
adac-h localhost -e 'show status like \"%wsrep_cluster_%\";'"15:32
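[editor's note: adac's wrapped ad-hoc command, reflowed as an equivalent playbook sketch for readability; the play itself is illustrative, only the mysql query comes from the message above]

```yaml
# Sketch: equivalent of
#   ansible galera_container -m shell -a "mysql -h localhost -e '...'"
- hosts: galera_container
  gather_facts: false
  tasks:
    - name: Show Galera cluster status (wsrep_cluster_size etc.)
      shell: mysql -h localhost -e 'SHOW STATUS LIKE "%wsrep_cluster_%";'
      register: wsrep_status

    - name: Print the result per node
      debug:
        var: wsrep_status.stdout_lines
```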
Bofu2UGoing to take a break and get back to it in a bit. Clear my mind and all of that.15:33
Bofu2UI sincerely appreciate all of the help I've received thus far, thank you all.15:33
Bofu2U... and sorry. For everything.15:33
Bofu2U^ there you go mattt actual apology.15:34
evrardjpmattt, is it now the time for a "yw"?15:35
evrardjp(english 101, someone?)15:35
matttyou guyz15:37
cloudnullhaha15:39
cloudnulladac: that'd be a good add15:39
adaccloudnull, :-)15:40
cloudnulladac: also https://review.openstack.org/#/c/256016/15:40
cloudnullwe added some tests15:40
cloudnullhttps://github.com/openstack/openstack-ansible-galera_server/blob/master/tests/test.yml#L130-L14415:41
cloudnulluseful commands15:41
evrardjpI'm not familiar to galera, what means wsrep_incoming_addresses ?15:42
cloudnullwhich if run through ansible, would give you a big picture that all nodes had the same data15:42
cloudnullevrardjp: thats a list of all nodes in the cluster.15:42
cloudnull<IP>:<PORT>15:42
evrardjpok15:42
adaccloudnull, I currently have the AIO installed, how can I update this to the newest openstack-ansible version?15:43
cloudnullwas it off of master?15:43
cloudnullor another tag?15:43
adaccloudnull, I was installing it via this curl one liner15:43
cloudnullah.15:44
cloudnullso go to /opt/openstack-ansible15:44
adacyepp there I am :)15:44
cloudnullif there are any changes you may need to stash them15:44
cloudnullthen git pull origin master15:44
cloudnullthen cd playbooks15:45
cloudnullopenstack-ansible setup-everything.yml15:45
adackk thanks a lot!15:45
adacwould this update shut down everything at once or one thing after another so there is no real interruption?15:45
cloudnullitll iterate through the stack15:46
cloudnullyou'll see api service interruptions as it does it, however if you have running vms they should all remain online .15:46
adaccloudnull, So basically it would not shut down my virtual machines or something like that (I'm not in production with that so it would be fine if it would)15:47
cloudnullnow this will be a deployment which was kicked using master IE Mitaka, so guaranteeing uptime may be a hard promise to make15:47
cloudnullit will not shut down the vms15:47
cloudnullor take away networks15:47
adaccloudnull, so cool :)15:47
*** alextricity has quit IRC15:47
adaccloudnull, when I try to start the setup-everything.yml I get: https://gist.github.com/anonymous/964c5062173189009a0115:50
cloudnullah so your deployment was a bit ago.15:51
cloudnullcd ../15:51
cloudnull./scripts/bootstrap-ansible.sh15:51
cloudnullthen cd playbooks; openstack-ansible ...15:51
evrardjpI see what happened there :D15:51
cloudnullyup15:52
cloudnulladac:  the issue is that we've moved a lot of the roles into their own repos15:52
cloudnulland more will happen again in the nearish future15:52
*** admin0 has quit IRC15:53
cloudnullthat task is safe to run all the time to get the roles that you may be missible.15:53
cloudnull*missing.15:53
* cloudnull fingers need more coffee15:53
evrardjpcloudnull, quick question about tox tests: shouldn't we increase test coverage? I'm thinking of adding tests like an idempotency test15:54
evrardjpand maybe multiple config tests15:54
cloudnullthat'd be nice.15:54
*** Bjoern_ has joined #openstack-ansible15:54
cloudnullidempotency tests may be hard. we have places where we use shell commands and things, however it's definitely not impossible.15:55
evrardjpmy haproxy role has idempotency tests for now, but it's completely not running in the same way15:55
cloudnullmulti-config tests would be excellent15:55
evrardjpI'm running shell script for checking idempotency right now15:55
evrardjpbut I need to move all that to somewhere more visible15:55
evrardjpand with tox :/15:56
*** Bjoern_ is now known as Bjoern_\T15:56
*** Bjoern_\T is now known as BjoernT15:56
cloudnulli've been doing tests for cluster systems and such by spinning up things using our container roles. we can do more of that to really smoke test multiple configs quickly.15:56
evrardjpnice!15:56
evrardjpit would just be dependencies15:56
cloudnullI've been doing this so far15:57
cloudnullhttps://github.com/openstack/openstack-ansible-rabbitmq_server/blob/master/tests/test.yml15:57
evrardjphowever this would just increase the build time15:57
cloudnullit does increase the build time, however in the case of the rabbit role we're testing a cluster and then asserting that it works like a cluster. all of that is completed in <10 min15:58
evrardjpI'm reading the file15:58
evrardjpit makes sense15:58
adaccloudnull, thanks again!15:58
cloudnullevrardjp: example https://review.openstack.org/#/c/257788/15:58
cloudnullfunctional test took <5min15:58
evrardjpI don't see what you'll do with the shell that does ps |grep rabbit but that's another story :p15:59
evrardjpcool only 5 mins15:59
cloudnullwe can do that same thing with other tests and do multi-config scenarios per role as needed16:00
evrardjpyeah that was my goal, I'll just shamelessly copy yours16:00
evrardjpmine was for testing keepalived16:00
cloudnullplease do , and if you find a better way we'll shamelessly copy yours16:00
*** javeriak has joined #openstack-ansible16:00
cloudnull:)16:00
evrardjpso I may have issues :)16:00
cloudnullthat'd be a cool test to work out. because i think we could do the same thing within neutron role later down the road too.16:01
evrardjpI'm using semaphoreci right now, so I was planning to simply use the multiple "thread" system which builds on many hosts16:02
*** javeriak has quit IRC16:02
evrardjpthat would be enough to improve coverage, but it can't test clustering16:02
*** javeriak has joined #openstack-ansible16:02
evrardjpI'll move my role to a more "openstack" approach: rst files, tox testing...16:03
*** targon has quit IRC16:03
evrardjpI'll need help on the setup of the repo etc.16:03
*** javeriak_ has joined #openstack-ansible16:05
*** dslevin has quit IRC16:06
cloudnullanytime you let me know what you need16:06
*** javeriak has quit IRC16:07
openstackgerritKevin Carter proposed openstack/openstack-ansible-rabbitmq_server: changed the rabbitmq command test  https://review.openstack.org/25797916:07
cloudnullevrardjp: ^ fixed earlier stupidity16:08
evrardjpcloudnull, shell module can fail tasks now?16:09
evrardjpI thought we still had to check the rc16:10
evrardjpby registering a variable16:10
cloudnullyes. if rc != 016:10
cloudnulli think...16:10
cloudnullhttp://cdn.pasteraw.com/d4u0ouan9eb3w8f6rufe0pghqdgt8l516:11
cloudnullyes16:11
evrardjp:)16:12
evrardjpcool16:12
openstackgerritMerged openstack/openstack-ansible: Use fastest Linux mirrors for gate jobs  https://review.openstack.org/25630116:12
evrardjpnote that you could use command instead of shell16:12
evrardjp:p16:12
openstackgerritMerged openstack/openstack-ansible: Update for PLUMgrid config - appending identity version to auth uri  https://review.openstack.org/25714816:12
evrardjp(now that you don't have a | anymore)16:12
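[editor's note: the command-vs-shell point can be sketched as below; the task names and commands are hypothetical, not the actual review content. Both modules fail the task on a non-zero return code, and command is preferable once no pipe or other shell feature is needed]

```yaml
# Needs the shell module because of the pipe:
- name: Check for a rabbit process (sketch)
  shell: ps aux | grep [r]abbitmq

# No pipe anymore, so plain command is enough; a non-zero rc fails the task
# without registering a variable and checking rc by hand:
- name: Query cluster status (sketch)
  command: rabbitmqctl cluster_status
```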
openstackgerritMerged openstack/openstack-ansible: Skip Keystone task when not using swift w keystone  https://review.openstack.org/25428616:12
openstackgerritKevin Carter proposed openstack/openstack-ansible-rabbitmq_server: changed the rabbitmq command test  https://review.openstack.org/25797916:13
cloudnulldone16:13
evrardjp:)16:13
evrardjpok I'll give a +1 for your hard efforts :p16:13
* cloudnull taking the rest of the day off16:14
cloudnull:016:14
cloudnull:)16:14
cloudnullbah.. . ima bad person...16:14
palendaeSure are16:14
cloudnullbug triage time16:14
cloudnullcloudnull, mattt, andymccr, d34dh0r53, hughsaunders, b3rnard0, palendae, Sam-I-Am, odyssey4me, serverascode, rromans, erikmwilson, mancdaz, _shaps_, BjoernT, claco, echiu, dstanek, jwagner, ayoung, prometheanfire, evrardjp, arbrandes, mhayden, scarlisle, luckyinva, ntt, javeriak16:14
palendaeOh, wait, is that a time where I'm not supposed to agree?16:14
cloudnullno, the agreement is on point.16:15
cloudnull:p16:15
d34dh0r53o/16:15
palendae(present)16:16
mattto/16:16
evrardjpo/16:16
Sam-I-Amyo16:17
cloudnulllets jump right in16:17
cloudnullfirst up https://bugs.launchpad.net/openstack-ansible/+bug/152477016:17
openstackLaunchpad bug 1524770 in openstack-ansible juno "Cinder LVs are monitored by disk util MaaS " [Medium,In progress] - Assigned to Andy McCrae (andrew-mccrae)16:17
cloudnullneed people from rax to chime in here.16:17
cloudnullseems like andymccr is already working on this16:17
cloudnullhowever whats the heat level ?16:17
cloudnulland should it target 10.1.19 ?16:18
cloudnullrelated review https://review.openstack.org/#/c/255833/16:19
andymccrcloudnull: i think the PR is already in for backport16:19
cloudnullpalendae mattt d34dh0r53 Sam-I-Am andymccr ?16:20
andymccrso basically, when that merges its done16:20
andymccrid like it targeted at whatever the next 10 release is if we can get it in, but nobody will die if we dont - so its not massively critical i imagine :)16:21
cloudnulldone.16:21
cloudnullnext https://bugs.launchpad.net/openstack-ansible/+bug/152590016:21
openstackLaunchpad bug 1525900 in openstack-ansible " Adding multipath-tools package for nova hosts" [Undecided,New]16:21
palendaeHonestly not sure on heat level myself - since 10 branches are now more or less EOL, I'd assume it gets rolled up once some critical mass has happened16:21
evrardjphadn't we decided to include docs inside each commit and avoid DocImpact?16:23
evrardjpshouldn't we target someone from the dev team to coordinate with doc team and go on?16:23
cloudnullwe did, however this was noting the override capability now. idk that we need to doc that specifically.16:23
cloudnullSam-I-Am: thoughts?16:23
Sam-I-Amthat would be nice16:24
Sam-I-Amincluding docs means no docimpact16:24
cloudnulli think DocImpact is a misnomer in this case.16:24
cloudnullthe change added a package, the override capability has always been there.16:24
evrardjpso maybe this bug should be assigned to someone from the doc team, but with a comment from the committer to explain why the docimpact was mentioned16:24
Sam-I-Amlooking at bug - in 5983453 meetings at once, and no coffee for 3 hours now16:25
cloudnullim happy to close this as "not a bug"16:25
evrardjpyeah that's why it went through the merging :p16:25
evrardjpSam-I-Am, :D16:25
cloudnullinvalid , moving on16:25
Sam-I-Amif we're documenting this override anywhere, thats the docs thing16:26
Sam-I-Amif its self-documenting, then no16:26
Sam-I-Amcloudnull: did you notice the change in how docimpact works?16:26
evrardjpSam-I-Am, there is a doc to explain the override already16:26
Sam-I-Amevrardjp: ok, then its invalid16:26
cloudnullSam-I-Am:  no, what was the change?16:26
evrardjpwhen the deployer uses the overrides, there are issues that can happen. IMO the deployer using overrides should know what he does16:26
Sam-I-Amcloudnull: it used to be that docimpact opened a bug in openstack-manuals, but that just allowed devs to throw docs over the fence16:27
cloudnullevrardjp: ++16:27
Sam-I-Amcloudnull: so now using docimpact opens a separate bug in the original repo for tracking documentation16:27
cloudnullnice!16:27
cloudnullthats actually a good change.16:27
Sam-I-Amin some cases it may be necessary to tag openstack-manuals with it, but not all of them (think devref)16:27
evrardjpindeed16:27
cloudnullnext: https://bugs.launchpad.net/openstack-ansible/+bug/152629216:28
openstackLaunchpad bug 1526292 in openstack-ansible "infra_hosts definition doesn't set galera_all, fails on haproxy_install.yml" [Undecided,New]16:28
Sam-I-Amif your patch includes all of the docs, you don't need docimpact... then you dont get another bug to handle.16:28
cloudnullRobert Adler  you around  ?16:28
*** mattronix has quit IRC16:30
cloudnullso idk if we fix this, because this is a dynamic inventory/environmental bug and only an issue when using the infra_hosts group.16:30
cloudnullwhich was deprecated in favor of shared-infra16:30
cloudnullthoughts ?16:30
evrardjpinvalid?16:30
evrardjpit's been ages that it's shared-infra, and definitely not in the branch it's mentioned16:31
openstackgerritBjoern Teipel proposed openstack/openstack-ansible: Allow for multiple store backends for Glance  https://review.openstack.org/25558916:31
cloudnullim good with that.16:31
cloudnullill add a note to the issue16:31
cloudnullthats all we have16:32
cloudnullanything else we should bring up  ?16:32
*** galstrom_zzz is now known as galstrom16:32
evrardjpmaybe we should check if there is something in the docs that would make these errors appear16:32
evrardjpit's maybe a doc bug16:32
cloudnullthats fair16:32
evrardjpbut we don't have info, so...16:32
mhaydencould someone double-check me on https://bugs.launchpad.net/openstack-ansible/+bug/1477273 ? that one looks like it might be done already16:33
openstackLaunchpad bug 1477273 in openstack-ansible " Fix Horizon SSL certificate management and distribution" [Low,In progress] - Assigned to Major Hayden (rackerhacker)16:33
cloudnullevrardjp: so that is a doc issue16:34
evrardjpyup I was grepping16:34
cloudnullhttp://docs.openstack.org/developer/openstack-ansible/install-guide/configure-hostlist.html16:35
evrardjpyup I'm on this one too..16:35
cloudnullok so we'll need to update the docs on that one16:35
evrardjpwhat's the new syntax?16:35
evrardjpI didn't remember there was a change16:35
evrardjpshared-infra?16:35
cloudnullshared-infra_hosts16:35
evrardjpdo you know when was it changed? like before liberty?16:36
evrardjp(to know where to stop backporting)16:36
cloudnullkilo16:36
evrardjpok16:37
evrardjpthanks16:37
evrardjp:)16:37
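The rename discussed above (infra_hosts to shared-infra_hosts, from Kilo onward) would look roughly like this in openstack_user_config.yml; the host names and IPs below are invented for illustration:

```yaml
# Hypothetical openstack_user_config.yml fragment: the deprecated
# infra_hosts group replaced by shared-infra_hosts (Kilo onward).
shared-infra_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13
```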
cloudnullmhayden: looks like its fixed released16:37
mhaydencloudnull: should i flip status? normally i defer to ol' odyssey4me16:37
cloudnulli'd say yes however it may need to be backported to liberty/kilo as a doc update16:38
cloudnullmhayden: yes the vars that power the doc change are in kilo16:39
cloudnullso it should be brought all the way back16:39
mhaydencloudnull: can do16:40
odyssey4memattt cloudnull mhayden back16:40
*** cemmason1 has joined #openstack-ansible16:41
odyssey4mecloudnull mhayden FYI OpenStack-CI now flips all launchpad bugs from in-progress straight to fix-released16:42
cloudnullohai16:42
BjoernTis there any gating for https://review.openstack.org/#/c/257104/ ?16:42
BjoernTdont see anything triggering16:42
cloudnullha, maybe infra was down. ill retest16:42
odyssey4memattt cloudnull in my view anyone running tox should know what they're doing - if anything make run_tests.sh default to not running the functional test, but leave ansible-functional in the list so that run_tests can work through it16:43
evrardjpcloudnull, I see there is still an infra_hosts in the env.d, is that correct?16:43
*** sdake_ is now known as sdake16:44
cloudnullthere is16:45
evrardjpshould it?16:45
cloudnullits there for posterity, but should no longer be used.16:45
cloudnulli'd say keep it for now. im sure removing it completely would break some folks.16:45
cloudnullBjoernT: its testing now16:46
evrardjpI agree16:46
evrardjpI'll document shared-infra but also os-infra, which isn't in the doc16:46
BjoernTthanks16:46
cloudnullseems zuul missed it, was busted at that time, not enough goats were sacrificed, etc ...16:47
cloudnulltyvm evrardjp16:47
cloudnullin future times, it might be good to cover the use of the bits in env.d so that folks can build or add to things as needed.16:48
cloudnullwhat we have is not absolutely required they could be customized per a deployment etc.16:48
cloudnullbut that may be more than we want to take on right now16:49
cloudnullodyssey4me:  im good with the changes as is16:49
cloudnullwe just need another core to agree.16:49
cloudnullthen in the infra gate we can consolidate those tests.16:50
evrardjpcloudnull, I agree... I'll check when I have time to document a little more about the env.d, but don't expect anything before january :p16:51
cloudnull++16:52
*** alextricity has joined #openstack-ansible16:52
palendaeevrardjp: I think a lot of things are like that right now - on hold or very slow til Jan16:53
evrardjppalendae, I'm on holidays :)16:54
palendaeevrardjp: Right, a significant portion of our team are or will be, too :)16:54
evrardjpwhich makes it difficult to work, right? ;)16:54
palendaeIt should, yes16:54
palendaeDoesn't stop some people16:54
*** mattronix has joined #openstack-ansible16:54
evrardjpI'm connected :p16:54
evrardjpsometimes16:55
evrardjpbut yeah, I understand :)16:55
openstackgerritJean-Philippe Evrard proposed openstack/openstack-ansible: Old references of infra_hosts in the documentation  https://review.openstack.org/25801217:00
openstackgerritMerged openstack/openstack-ansible-repo_server: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/25777317:00
*** adac has quit IRC17:01
evrardjpI'm off for... at least today!17:01
evrardjpenjoy your evening17:02
cloudnullhave a good one evrardjp17:05
*** phiche has quit IRC17:06
openstackgerritJesse Pretorius proposed openstack/openstack-ansible: Implement multi-domain configuration for Keystone  https://review.openstack.org/25801517:09
odyssey4mecloudnull ^ lemme know what you think17:12
odyssey4meit's not portable back to Kilo, but will be fine for Liberty17:12
cloudnull^ reviewed17:15
cloudnulli agree, kilo should be left alone for this specific case.17:16
cloudnullmy nit https://review.openstack.org/#/c/258015/1/playbooks/roles/os_keystone/templates/keystone.conf.j2 is that if domain specific config is activated we should do it all the way down and get rid of the global driver17:17
*** Prithiv has quit IRC17:17
odyssey4mecloudnull yep, I was just thinking that17:18
cloudnullthen its explicit what domain does what and where.17:18
odyssey4meyup agreed17:19
cloudnullitll also force the deployer to consider domains when activating them not just create things all on the fly and forget about them later.17:19
cloudnullbut otherwise it looks great17:19
odyssey4meI was thinking that perhaps another task should go in to also create the domains listed... otherwise they won't work.17:20
odyssey4methat said, perhaps we should rather let deployers be the experts and create the domain afterwards?17:20
*** phiche has joined #openstack-ansible17:21
cloudnullthat makes sense we do the same thing in cinder when multiple backends are specified.17:21
cloudnullbecause they wont work otherwise .17:21
cloudnullso i think thats a healthy pattern to follow17:21
cloudnullbut also concede that deployers should be the expert in how they want the domains created when using multiple domains. so either way is fine and regardless of the direction we choose a note should be added to the docs to tell the deployer what they need to know to make it all go.17:23
odyssey4mecloudnull yep, I'll do the doc thing - good catch :)17:25
*** adac has joined #openstack-ansible17:29
*** galstrom is now known as galstrom_zzz17:29
cloudnullodyssey4me: i think you're also going to have to update https://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/os_keystone/tasks/keystone_service_setup.yml17:29
cloudnullto define a default domain17:30
cloudnullwhich will have to match something in the multi-domain config17:30
cloudnullmaybe a keystone_service_domain17:30
cloudnulland then other services will have to have *_service_domain17:31
cloudnullbecause if i did multi-domain config i'd be able to move services into sql, called ServiceDomainX, but that would break the rest of the plays.17:32
odyssey4meso for the moment OpenStack has a basic assumption of using the Default domain for services17:33
odyssey4mesome of the services are still dependent on Keystone's v2 API, so other domains are not an option17:33
odyssey4meWe still have that issue in Aodh, although a patch merged in master recently to fix it.17:34
odyssey4mebut yeah, it'll require some more changes in various places17:34
odyssey4memore than I have energy for right now :)17:34
cloudnullyes. but if you define a users domain w/ ldap called "Users" and didn't define the "Default" domain keystone would be effectively broken17:35
openstackgerritMiguel Alex Cantu proposed openstack/openstack-ansible: Added notification options for keystone  https://review.openstack.org/25754717:35
cloudnullgo rest up, this can wait for another day.17:35
*** japplewhite has joined #openstack-ansible17:35
cloudnull:)17:35
odyssey4mecloudnull yeah, so that's kinda where I got stuck thinking... dammit - more complication17:35
odyssey4meone method could be to set one dict with the defaults for the Default domain... but that seems wasteful17:36
cloudnullmaybe the answer is to simply define it by default and users could add to it in their specific config as they deem fit17:37
odyssey4mehow do you mean? a static template with a config override?17:37
cloudnullthat way all deployments are all using the multi-domain backend regardless of ldap, or other domain17:37
*** sigmavirus24 is now known as sigmavirus24_awa17:38
cloudnullsimply make the example an actual var17:38
cloudnullhttps://review.openstack.org/#/c/258015/1/playbooks/roles/os_keystone/defaults/main.yml17:38
*** baker has quit IRC17:38
odyssey4mewell, that's what I was thinking - all deployments use domain_specific_drivers_enabled, including the Default domain17:38
cloudnullwith the default domain defined within it17:38
odyssey4mesure, but then someone wanting to add another var will end up unintentionally overriding it17:38
cloudnullthen we're converging on a single use case and ldap is simply an extension of whats already in-place17:39
cloudnullif the user wants to add ldap then they redefine the keystone_domains var as needed17:40
odyssey4mewhat I mean is, someone will add just a Users domain, and forget to add the Default domain bits17:40
*** markvoelker_ has joined #openstack-ansible17:40
cloudnullwe should doc that it always needs to be defined for now, but if they do that it will break. theyll log a bug, and well point at the docs.17:41
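One way to read cloudnull's suggestion: ship the multi-domain dict with the Default domain pre-populated, so redefining it without Default becomes an explicit (documented) deployer mistake. The variable name and shape below are assumed from this discussion, not taken from the actual review:

```yaml
# Hypothetical role default (variable name assumed): the Default domain
# is always present with the sql driver; a deployer adding an LDAP-backed
# domain redefines the whole dict and must keep Default in place.
keystone_domains:
  Default:
    driver: sql
  Users:        # example of an LDAP domain a deployer might add
    driver: ldap
```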
*** markvoelker has quit IRC17:41
*** karimb_ has quit IRC17:42
odyssey4meI'm thinking of a slightly different method, although that would also be fine.17:42
cloudnulleither way. im just spit-balling.17:42
odyssey4methe alternative method is to somehow check whether the dict has the Default domain in the list, and if not then implement the sql config - otherwise trust what's in the dict17:43
*** galstrom_zzz is now known as galstrom17:43
*** baker has joined #openstack-ansible17:43
cloudnullwe could also use the assert module to check for it and fail if not17:43
odyssey4mepre-requisite checks are something we could do a lot more of all over the place17:44
odyssey4mebut much, much earlier17:45
odyssey4meperhaps even rolled into the openstack-ansible command17:45
cloudnullyea, a sanity check play could go a long way to helping a lot of people17:45
cloudnullthey setup all the config, run openstack-ansible sanity-check.yml and it does a quick check on all the things we can think of to make sure the deployment is successful.17:46
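The assert-based pre-flight check cloudnull mentions could be sketched as a task like the following; the keystone_domains variable name is assumed for illustration:

```yaml
# Sketch of a pre-flight assertion (hypothetical variable name): fail
# early if domain-specific config is enabled without a Default domain.
- name: Verify the Default domain is defined
  assert:
    that:
      - "'Default' in keystone_domains"
    msg: "keystone_domains must always include the Default domain"
```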
*** markvoelker_ has quit IRC17:47
openstackgerritMerged openstack/openstack-ansible-galera_server: Updated repo for new org  https://review.openstack.org/25601617:48
*** markvoelker has joined #openstack-ansible17:49
*** baker has quit IRC17:59
*** daneyon_ has joined #openstack-ansible17:59
*** baker has joined #openstack-ansible17:59
odyssey4meI'm out for the evening. My 'flu is messing with my ability to think.18:01
odyssey4meNight all.18:02
Sam-I-Ams/think/drink18:02
Sam-I-Amfeel better18:02
*** daneyon has quit IRC18:03
cloudnulltake care18:05
*** cemmason1 has quit IRC18:12
*** phiche1 has joined #openstack-ansible18:17
*** galstrom is now known as galstrom_zzz18:18
*** phiche has quit IRC18:18
*** japplewhite has quit IRC18:25
*** sigmavirus24_awa is now known as sigmavirus2418:31
*** tricksters has quit IRC18:32
*** elo has joined #openstack-ansible18:32
*** daneyon has joined #openstack-ansible18:33
*** daneyon_ has quit IRC18:35
*** Guest73233 is now known as mgagne18:42
mhaydenif a user doesn't specify an affinity in their openstack_user_config.yml file, how do we know the quantity of containers to make of each type?18:42
*** mgagne is now known as Guest7643418:42
mhaydeni'm trawling through dynamic_inventory.py now18:43
cloudnull118:43
cloudnullper relevant host18:43
mhaydenso if a user says they have one host in "shared-infra_hosts", does that mean they'll have one galera container and one rabbitmq container?18:44
cloudnullyes18:44
mhaydenah, so the .yml.aio makes much more sense now18:44
*** adac has quit IRC18:44
mhaydenwe have to push galera_container affinity to 3 to make them stack up three on one host, eh?18:45
bgmccollumexactly18:46
mhaydenokay, that makes sense now18:46
mhaydenthanks, folks18:46
bgmccollumor set to 0 in the case of rabbit and a stand alone swift configuration18:46
mhaydenyeah, i just picked up the bug to document that :P18:46
bgmccollumorly18:46
bgmccollum:)18:46
mhaydenand it forced me to learn something new! :P18:46
mhaydenwhich is fun18:46
bgmccollumstash away in brain...immediately forget.18:47
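Putting the affinity answers together, a hypothetical openstack_user_config.yml entry might look like the following; the host name, IP, and the rabbitmq container key name are assumptions for illustration:

```yaml
# Hypothetical fragment: default affinity is 1 container per type per
# host; 3 stacks three galera containers on one host, and 0 skips the
# rabbitmq container (e.g. for a standalone swift deployment).
shared-infra_hosts:
  infra1:
    affinity:
      galera_container: 3
      rabbit_mq_container: 0
    ip: 172.29.236.11
```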
*** harlowja_ has quit IRC18:49
*** harlowja has joined #openstack-ansible18:50
cloudnullbgmccollum: got the right idea18:53
cloudnull:)18:53
*** galstrom_zzz is now known as galstrom18:54
cloudnullas a reminder if any of our cores are around we'd like to make these go https://review.openstack.org/#/q/status:open+branch:master+topic:lint-jobs,n,z which will help reduce load on infra18:56
cloudnullalso this would be useful https://review.openstack.org/#/c/257979/18:59
*** richoid has quit IRC19:00
stevelleworking on them19:02
stevellewhile other tasks run19:02
mhaydencloudnull / bgmccollum: am i on the right track? https://gist.github.com/major/9272b37f66169336a62119:02
mhaydenwell i have messed up indentation for affinity there :P19:03
cloudnullthanks stevelle !19:03
cloudnullmhayden:  yes19:03
cloudnullthat would result in 0 rabbitmq containers on those hosts19:03
mhaydencloudnull: WOOT19:03
mhaydenthanks sir19:03
cloudnullmhayden:  for prez!19:03
mhaydencloudnull: you're still my most favorite former openstack-ansible PTL named kevin19:04
cloudnullthat makes one19:04
mhaydenhaha19:04
*** richoid has joined #openstack-ansible19:05
*** lkoranda has quit IRC19:07
*** eil397 has joined #openstack-ansible19:09
openstackgerritMerged openstack/openstack-ansible-openstack_hosts: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/25775219:09
openstackgerritMajor Hayden proposed openstack/openstack-ansible: Adding docs for affinity  https://review.openstack.org/25811619:10
*** galstrom is now known as galstrom_zzz19:11
*** oneswig has joined #openstack-ansible19:13
*** richoid has quit IRC19:13
*** lkoranda has joined #openstack-ansible19:14
*** oneswig has quit IRC19:17
*** richoid has joined #openstack-ansible19:19
*** elo has quit IRC19:28
*** KLevenstein_ has joined #openstack-ansible19:32
*** openstackgerrit has quit IRC19:32
*** KLevenstein has quit IRC19:32
*** KLevenstein_ is now known as KLevenstein19:32
*** openstackgerrit has joined #openstack-ansible19:33
* mhayden hugs bgmccollum19:46
bgmccollumhugs not bugs19:47
mhaydenhah, yes19:47
*** b3rnard0 is now known as b3rnard0_away19:52
openstackgerritMerged openstack/openstack-ansible-apt_package_pinning: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/25774819:59
openstackgerritMerged openstack/openstack-ansible-security: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/25772820:00
*** javeriak_ has quit IRC20:07
*** Bofu2U2 has joined #openstack-ansible20:15
Bofu2U2alrighty, no longer want to bang head against wall.20:16
bgmccollumBofu2U2: good to hear20:16
Sam-I-AmBofu2U2: we can fix that20:16
cloudnullbgmccollum: fixed?20:16
Bofu2U2Sam-I-Am - I can too, just need to run openstack-ansible setup-hosts.yml20:16
Bofu2U2:P20:16
cloudnullor did you take the nuclear option.20:16
bgmccollumcloudnull: you mean Bofu2U2 ?20:17
bgmccollumtabcomplete fail20:17
Bofu2U2cloudnull nuclear.20:17
Bofu2U2started over but with the interface configs.20:17
cloudnullyup tabcomplete failure20:19
cloudnullBofu2U2: and all is right with the world20:19
Bofu2U2well, not really20:19
Bofu2U2but so far so good with the setup-hosts - just orange and yellow so far.20:19
openstackgerritMerged openstack/openstack-ansible-rsyslog_client: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/25753620:20
*** elo has joined #openstack-ansible20:22
Bofu2U2alrighty, it's at lxc_container_create - awaiting a sea of red.20:23
alextricityWhere are we keeping the rabbitmq_server role these days?20:24
*** galstrom_zzz is now known as galstrom20:25
alextricityfound it20:26
cloudnullalextricity: https://github.com/openstack/openstack-ansible-rabbitmq_server20:26
alextricityBy any chance, would anybody know why 'rabbitmqctl list_queues' wouldn't return anything?20:27
alextricityIsn't there supposed to be a bunch of queues for each service?20:28
cloudnullthis is a thing we should have an opinion on https://review.openstack.org/#/c/257530/ if anyone has some spare cycles.20:29
cloudnullalextricity: each service in each vhost20:29
alextricitycloudnull: Ah... that's the keyword i'm missing :)20:29
* alextricity needs to practice his rabbitmq-fu20:30
*** karimb has joined #openstack-ansible20:30
cloudnullrabbitmqctl list_queues -p /nova20:31
cloudnullwould get you what you're looking for20:31
cloudnulland list_vhosts for a complete list of all the vhosts we have20:31
alextricitycloudnull:  Thanks :)20:32
cloudnullanytime20:32
cloudnulloff for a bit, bbl20:33
Bofu2U2glhflo3 cloudnull20:34
BjoernTwho can finish https://review.openstack.org/#/c/257104/20:35
BjoernTso we can get the kilo/liberty changes reviewed and merged20:36
Bofu2U2Looks like the only failure was the neutron agents20:38
sigmavirus24cloudnull: odyssey4me btw, https://bugs.launchpad.net/nova/+bug/1526413 might bite us at the gate if our version of requests isn't pinned20:40
openstackLaunchpad bug 1526413 in OpenStack Compute (nova) liberty "test_app_using_ipv6_and_ssl fails with requests 2.9.0" [High,Confirmed]20:40
Bofu2U2Probably because I still had the vlan neutron network in the vars.20:40
*** dslevin has joined #openstack-ansible20:47
*** matt6434 is now known as mattoliverau20:48
*** b3rnard0_away is now known as b3rnard020:48
*** Prithiv has joined #openstack-ansible20:51
*** galstrom is now known as galstrom_zzz20:51
*** phalmos has joined #openstack-ansible20:53
Bofu2U2Looks like still timing out hard on the "wait for ssh to be available"20:55
Bofu2U2The controllers can ping the lxc containers that are running on themselves, but 1 can't ping the ones on 2, etc.20:57
*** dslev has joined #openstack-ansible20:57
Bofu2U2Actually ... I take that back. they can ping the instances within themselves, but deploy server can't20:58
Bofu2U2So all 3 controllers can ping all containers within the group of 3 (cont1 can ping instances on cont2, etc) but deploy can't access any of them. Hm.20:58
*** phiche has joined #openstack-ansible21:08
bgmccollumwho / what updates the upstream repo for juno?21:10
*** phiche1 has quit IRC21:10
bgmccollumthe rackspace_monitoring_cli update requires a newer version of rackspace_monitoring...21:12
*** sigmavirus24 is now known as sigmavirus24_awa21:17
*** sigmavirus24_awa is now known as sigmavirus2421:17
openstackgerritMerged openstack/openstack-ansible-lxc_container_create: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/25775821:20
*** admin0 has joined #openstack-ansible21:22
openstackgerritMerged openstack/openstack-ansible-lxc_hosts: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/25779121:24
bgmccollumis it as simple as updating the appropriate repo_vars file for rackspace_monitoring?21:25
cloudnullBofu2U2: so you can ping containers everywhere except the deploy host ?21:27
Bofu2U2yeah I think I messed up the routing21:27
Bofu2U2re-doing the interfaces on it and rebooting21:27
cloudnullah thatll do it.21:27
* cloudnull hands Bofu2U2 a beer21:27
Bofu2U2had all of the 10.10X interfaces running through 10.2021:27
Bofu2U2instead of the .1 gateway on each vlan21:28
openstackgerritMerged openstack/openstack-ansible-pip_install: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/25781821:28
cloudnullBjoernT: we're waiting on another core for https://review.openstack.org/#/c/257104/21:28
openstackgerritMerged openstack/openstack-ansible-pip_lock_down: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/25782221:29
*** KLevenstein has quit IRC21:30
openstackgerritMerged openstack/openstack-ansible-rabbitmq_server: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/25778821:30
*** KLevenstein has joined #openstack-ansible21:31
*** Guest76434 is now known as mgagne21:33
*** mgagne is now known as Guest16021:34
*** Guest160 has quit IRC21:34
*** Guest160 has joined #openstack-ansible21:34
*** Guest160 is now known as mgagne21:35
openstackgerritMerged openstack/openstack-ansible-py_from_git: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/25781721:37
openstackgerritMerged openstack/openstack-ansible-rsyslog_server: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/25776121:38
BjoernTcloudnull: yes i know21:42
BjoernTthat's why i asked to get all this stuff into the other branches21:42
cloudnullBjoernT:  its PR'd right ?21:42
BjoernThttps://review.openstack.org/#/c/257104/21:43
BjoernT?21:43
cloudnullyes21:43
BjoernTthat's the one for  https://review.openstack.org/25608221:43
openstackgerritMerged openstack/openstack-ansible-memcached_server: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/25776721:44
cloudnulljust checking that we have https://review.openstack.org/257434 and https://review.openstack.org/25742521:45
cloudnulloff to eye dr. bbl21:46
BjoernTcorrect, those are for the sub branches21:46
Bofu2U2Alright, got it fixed with the IP's and now it's pulling the "Unknown host in 'None:8776'" etc on the haproxy config. Sigh. At least the containers are up now -- progress.22:06
openstackgerritByron McCollum proposed openstack/openstack-ansible: Upgrade rackspace-monitoring package  https://review.openstack.org/25816222:06
openstackgerritByron McCollum proposed openstack/openstack-ansible: Upgrade rackspace-monitoring package  https://review.openstack.org/25816222:11
*** admin0 has quit IRC22:28
*** dslev has quit IRC22:39
*** phiche has quit IRC22:45
*** phiche has joined #openstack-ansible22:47
*** phiche has quit IRC22:47
*** dslev_ has joined #openstack-ansible22:47
*** phalmos has quit IRC22:51
*** dstanek has quit IRC22:56
*** dstanek has joined #openstack-ansible22:56
*** sdake has quit IRC23:01
*** dstanek has quit IRC23:06
*** prometheanfire has quit IRC23:06
*** prometheanfire has joined #openstack-ansible23:07
*** dstanek has joined #openstack-ansible23:07
*** Bofu2U2 has quit IRC23:13
*** sigmavirus24 is now known as sigmavirus24_awa23:15
*** Guest75 has joined #openstack-ansible23:17
*** manous has quit IRC23:22
*** baker has quit IRC23:26
*** elo has quit IRC23:26
*** errr has quit IRC23:30
*** errr has joined #openstack-ansible23:31
Sam-I-Amcloudnull: when/where is the o-a midcycle?23:31
Sam-I-Amseems there was some discussion but i cant find the final decision23:32
openstackgerritMerged openstack/openstack-ansible: Updating AIO docs for Ansible playbook  https://review.openstack.org/25780523:33
openstackgerritMerged openstack/openstack-ansible: Fix typos in doc/source/developer-docs  https://review.openstack.org/25725723:33
openstackgerritMerged openstack/openstack-ansible: Fix typos in doc/source/developer-docs  https://review.openstack.org/25725823:39
openstackgerritMerged openstack/openstack-ansible-galera_client: Merge bashate/pep8 lint jobs in common job  https://review.openstack.org/25778223:39
stevellefwiw Sam-I-Am I haven't heard a final yet either23:39
openstackgerritMerged openstack/openstack-ansible-openstack_hosts: Increasing max AIO kernel limit  https://review.openstack.org/25710423:39
*** KLevenstein has quit IRC23:46
openstackgerritMichael Carden proposed openstack/openstack-ansible: Add missing file extension  https://review.openstack.org/25819623:47
*** errr has quit IRC23:49
*** karimb has quit IRC23:56

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!