Thursday, 2016-06-30

*** thorst has joined #openstack-ansible00:05
*** thorst has quit IRC00:12
*** thorst has joined #openstack-ansible00:16
*** ManojK has joined #openstack-ansible00:16
*** thorst has quit IRC00:17
*** daneyon_ has quit IRC00:25
*** eil397 has joined #openstack-ansible00:26
*** appprod0 has quit IRC00:34
*** adrian_otto has quit IRC00:35
*** weezS has quit IRC00:43
*** klamath has quit IRC01:05
*** klamath has joined #openstack-ansible01:05
*** javeriak has quit IRC01:05
*** eil397 has quit IRC01:13
*** klamath has quit IRC01:14
*** klamath has joined #openstack-ansible01:15
*** thorst has joined #openstack-ansible01:17
*** asettle has joined #openstack-ansible01:21
*** asettle has quit IRC01:25
*** thorst has quit IRC01:26
*** thorst has joined #openstack-ansible01:33
*** thorst has quit IRC01:33
*** thorst has joined #openstack-ansible01:33
*** thorst has quit IRC01:38
*** eil397 has joined #openstack-ansible01:51
*** sdake has joined #openstack-ansible01:54
*** thorst has joined #openstack-ansible01:56
*** thorst has quit IRC01:56
*** thorst has joined #openstack-ansible01:57
*** jthorne has quit IRC01:57
*** eil397 has quit IRC01:58
*** sacharya has joined #openstack-ansible01:59
*** sdake has quit IRC02:04
*** thorst has quit IRC02:05
*** markvoelker has quit IRC02:06
*** sacharya_ has joined #openstack-ansible02:12
*** adrian_otto has joined #openstack-ansible02:12
*** adrian_otto1 has joined #openstack-ansible02:15
*** sacharya has quit IRC02:15
*** adrian_otto has quit IRC02:16
*** adrian_otto1 has quit IRC02:16
*** adrian_otto has joined #openstack-ansible02:17
*** ManojK has quit IRC02:41
*** chandanc_ has joined #openstack-ansible02:45
*** jorge_munoz has quit IRC02:45
*** klamath has quit IRC02:45
*** jorge_munoz has joined #openstack-ansible02:48
*** thorst has joined #openstack-ansible03:03
*** markvoelker has joined #openstack-ansible03:06
*** thorst has quit IRC03:10
*** thetrav has joined #openstack-ansible03:10
*** markvoelker has quit IRC03:11
thetravcloudnull: thanks for your help, the guy fixed the routing for me today and I can indeed confirm that dropping nohup has solved my issue03:12
thetravit looks like you already updated the launchpad bug?03:12
*** winggundamth has quit IRC03:15
*** ManojK has joined #openstack-ansible03:16
*** winggundamth has joined #openstack-ansible03:17
*** daneyon has joined #openstack-ansible03:40
*** zerda2 has joined #openstack-ansible03:46
*** raddaoui has quit IRC03:47
*** zhangjn has quit IRC03:51
*** zerda2 has quit IRC03:56
*** weezS has joined #openstack-ansible03:57
*** zhangjn has joined #openstack-ansible03:59
openstackgerritMerged openstack/openstack-ansible: Define galera_address in the all group_vars  https://review.openstack.org/33513104:02
openstackgerritMerged openstack/openstack-ansible: Added the ip_vs kernel module to all openstack hosts  https://review.openstack.org/33469904:02
openstackgerritAdam Reznechek proposed openstack/openstack-ansible-specs: Add multiple CPU architecture support spec  https://review.openstack.org/32963704:02
openstackgerritMerged openstack/openstack-ansible: conditionally include the scsi_dh kernel module  https://review.openstack.org/33520204:03
*** zhangjn has quit IRC04:05
openstackgerritAdam Reznechek proposed openstack/openstack-ansible-lxc_container_create: Add Ubuntu ppc64le support  https://review.openstack.org/33578304:05
*** albertcard has quit IRC04:06
*** markvoelker has joined #openstack-ansible04:07
*** thorst has joined #openstack-ansible04:08
*** woodard has joined #openstack-ansible04:08
*** zhangjn has joined #openstack-ansible04:11
*** markvoelker has quit IRC04:12
*** thorst has quit IRC04:15
*** javeriak has joined #openstack-ansible04:30
*** sdake has joined #openstack-ansible04:49
*** sdake_ has joined #openstack-ansible04:51
*** ManojK has quit IRC04:52
*** sdake_ has quit IRC04:53
*** sdake has quit IRC04:54
*** jorge_munoz has quit IRC05:06
*** thorst has joined #openstack-ansible05:13
*** zerda2 has joined #openstack-ansible05:14
*** adrian_otto has quit IRC05:20
*** thorst has quit IRC05:21
*** sdake has joined #openstack-ansible05:24
*** sacharya_ has quit IRC05:27
*** sacharya has joined #openstack-ansible05:27
*** sdake_ has joined #openstack-ansible05:36
*** sdake has quit IRC05:37
*** rcarrillocruz has quit IRC05:46
*** javeriak has quit IRC05:52
*** markvoelker has joined #openstack-ansible06:09
*** javeriak has joined #openstack-ansible06:11
*** bootsha has joined #openstack-ansible06:13
*** markvoelker has quit IRC06:13
*** javeriak_ has joined #openstack-ansible06:15
*** javeriak has quit IRC06:16
*** chhavi has joined #openstack-ansible06:19
*** qiliang27 has joined #openstack-ansible06:23
*** bootsha has quit IRC06:24
*** bsv has joined #openstack-ansible06:27
*** bootsha has joined #openstack-ansible06:27
*** sdake_ has quit IRC06:30
*** weezS has quit IRC06:30
*** chhavi has quit IRC06:31
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible: Remove the AIO metadata checksum fix from run-playbooks  https://review.openstack.org/33038906:33
winggundamthhi odyssey4me. are you back from vacation?06:34
*** pcaruana has joined #openstack-ansible06:37
*** prometheanfire has quit IRC06:38
*** sacharya has quit IRC06:51
*** bootsha has quit IRC06:51
*** prometheanfire has joined #openstack-ansible06:55
*** thetrav has quit IRC07:00
*** bootsha has joined #openstack-ansible07:04
*** asettle has joined #openstack-ansible07:04
*** javeriak_ has quit IRC07:05
*** woodard has quit IRC07:08
*** woodard has joined #openstack-ansible07:09
*** markvoelker has joined #openstack-ansible07:10
*** thorst has joined #openstack-ansible07:11
*** woodard has quit IRC07:14
*** markvoelker has quit IRC07:14
openstackgerritMerged openstack/openstack-ansible: Add conditional for overlay network settings  https://review.openstack.org/33557907:15
*** thorst has quit IRC07:20
*** javeriak has joined #openstack-ansible07:27
*** asettle has quit IRC07:32
*** javeriak has quit IRC07:43
*** rcarrillocruz has joined #openstack-ansible07:45
*** neilus has joined #openstack-ansible07:46
evrardjpgood morning everyone07:49
*** neilus has quit IRC07:56
*** neilus has joined #openstack-ansible08:00
*** admin0 has joined #openstack-ansible08:04
winggundamthevrardjp: good morning08:06
*** woodard has joined #openstack-ansible08:10
*** markvoelker has joined #openstack-ansible08:11
*** woodard has quit IRC08:15
*** markvoelker has quit IRC08:15
*** jamielennox is now known as jamielennox|away08:16
*** thorst has joined #openstack-ansible08:18
*** thorst has quit IRC08:25
*** asettle has joined #openstack-ansible08:36
*** electrofelix has joined #openstack-ansible08:43
*** bootsha has quit IRC08:52
openstackgerritJerry Cai proposed openstack/openstack-ansible-os_nova: Filter libvirt in nova.conf. Add nova_powervm module in nova.virt  https://review.openstack.org/33269708:57
*** bapalm has quit IRC08:57
*** bapalm has joined #openstack-ansible09:00
*** admin0 has quit IRC09:02
*** julian1 has joined #openstack-ansible09:02
*** qiliang28 has joined #openstack-ansible09:04
*** jwagner- has joined #openstack-ansible09:05
*** jcannava_ has joined #openstack-ansible09:07
*** qiliang27 has quit IRC09:08
*** julian1_ has quit IRC09:08
*** jcannava has quit IRC09:08
*** galstrom_zzz has quit IRC09:08
*** dolphm has quit IRC09:08
*** jwagner has quit IRC09:08
*** jcannava_ is now known as jcannava09:08
*** qiliang28 is now known as qiliang2709:08
*** dolphm has joined #openstack-ansible09:08
*** galstrom_zzz has joined #openstack-ansible09:09
*** bsv has quit IRC09:11
*** bootsha has joined #openstack-ansible09:12
*** raddaoui has joined #openstack-ansible09:14
*** javeriak has joined #openstack-ansible09:17
*** admin0 has joined #openstack-ansible09:17
*** thorst has joined #openstack-ansible09:24
*** thorst has quit IRC09:30
odyssey4mewinggundamth o/ yeah, back in the rain after being in the sun for a week... it's mercifully cool here :)09:42
winggundamthodyssey4me: sound cool09:44
*** admin0 has quit IRC09:46
*** berendt has joined #openstack-ansible09:48
winggundamthodyssey4me: do you have time to review this for me? https://review.openstack.org/#/c/330961/09:48
*** bootsha has quit IRC09:59
*** david-lyle has quit IRC10:09
*** david-lyle_ has joined #openstack-ansible10:09
*** jargonmonk has joined #openstack-ansible10:12
*** admin0 has joined #openstack-ansible10:14
*** javeriak has quit IRC10:27
*** thorst has joined #openstack-ansible10:28
*** bootsha has joined #openstack-ansible10:29
*** bootsha has quit IRC10:34
*** thorst has quit IRC10:35
*** smatzek has joined #openstack-ansible10:36
*** javeriak has joined #openstack-ansible10:40
*** NNNN has quit IRC10:41
*** aernhart has quit IRC10:44
*** _hanhart has quit IRC10:44
*** neilus1 has joined #openstack-ansible10:45
*** aernhart has joined #openstack-ansible10:46
*** aernhart has quit IRC10:46
odyssey4mewinggundamth it's in my very large queue :/10:46
odyssey4mewill look when I can10:46
*** aernhart has joined #openstack-ansible10:47
winggundamthodyssey4me: sure. no worries :)10:47
*** neilus has quit IRC10:49
*** javeriak has quit IRC10:50
*** neilus1 has quit IRC10:52
*** neilus has joined #openstack-ansible10:52
*** klamath has joined #openstack-ansible10:56
*** _hanhart has joined #openstack-ansible10:56
*** berendt has quit IRC11:04
*** chandanc_ has quit IRC11:04
*** admin0_ has joined #openstack-ansible11:09
*** admin0 has quit IRC11:11
*** admin0_ is now known as admin011:11
*** weshay has joined #openstack-ansible11:11
*** javeriak has joined #openstack-ansible11:13
*** berendt has joined #openstack-ansible11:13
*** admin0 has quit IRC11:14
*** javeriak has quit IRC11:16
*** Guest____ has joined #openstack-ansible11:27
*** klamath has quit IRC11:30
mgariepygood morning everyone11:40
*** admin0 has joined #openstack-ansible11:41
*** deverter has joined #openstack-ansible11:42
mgariepyI get some error in neutron : http://paste.openstack.org/show/524250/11:42
mgariepythis is after the upgrade to mitaka : http://paste.openstack.org/show/524252/ i have both liberty and mitaka process running.11:44
*** jargonmonk has quit IRC11:51
*** asettle has quit IRC11:51
odyssey4mecloudnull There's a variety of feedback in https://review.openstack.org/312274 which needs some addressing. mhayden You may wish to remove your vote to expose the downvotes from others.11:52
*** thorst has joined #openstack-ansible11:52
*** thorst has quit IRC11:53
*** thorst_ has joined #openstack-ansible11:53
*** Guest____ is now known as Bofu2U11:54
*** sdake has joined #openstack-ansible11:55
*** asettle has joined #openstack-ansible11:57
*** sdake_ has joined #openstack-ansible11:57
*** woodard has joined #openstack-ansible11:57
*** johnmilton has joined #openstack-ansible11:57
*** jargonmonk has joined #openstack-ansible11:58
*** admin0_ has joined #openstack-ansible11:58
*** admin0 has quit IRC11:58
*** admin0_ is now known as admin011:58
*** woodard has quit IRC11:58
*** woodard has joined #openstack-ansible11:59
*** neilus has quit IRC12:00
*** sdake has quit IRC12:01
*** psilvad has joined #openstack-ansible12:04
*** johnmilton has quit IRC12:07
*** markvoelker has joined #openstack-ansible12:13
*** markvoelker has quit IRC12:14
*** markvoelker has joined #openstack-ansible12:15
*** johnmilton has joined #openstack-ansible12:20
*** zerda2 has quit IRC12:21
*** Andrew_jedi has joined #openstack-ansible12:27
*** admin0 has quit IRC12:28
*** jargonmonk has quit IRC12:28
*** johnmilton has quit IRC12:30
*** Ger-chervyak has joined #openstack-ansible12:36
*** jthorne has joined #openstack-ansible12:37
*** Ashana has joined #openstack-ansible12:38
cloudnullmgariepy: thats an odd one.12:52
cloudnullmorning btw12:52
cloudnullmgariepy: It looks like the general filters are up-to-date.12:56
cloudnullis that error causing neutron to crash ?12:56
cloudnullis it an error you're seeing a lot of?12:56
cloudnullodyssey4me: ohai.12:56
cloudnullyou back from Holiday ?12:57
odyssey4mecloudnull Buenas tardes12:57
odyssey4meyeah, I'm back12:57
cloudnullwelcome, and sorry12:57
odyssey4mebusy working my way through the fire hose :p12:57
cloudnullthat'll teach you to go on holidy12:58
cloudnull:P12:58
odyssey4mehahaha12:58
ionihello, what's the syntax for overriding the interval for cpu in /etc/ceilometer/pipeline.yaml ?12:58
*** ManojK has joined #openstack-ansible12:59
ionihttps://paste.xinu.at/3Sfw9O/12:59
cloudnullioni: from the sources list ?13:00
cloudnullhttps://github.com/openstack/openstack-ansible-os_ceilometer/blob/master/templates/pipeline.yaml.j2#L213:00
odyssey4meioni http://docs.openstack.org/developer/openstack-ansible/mitaka/install-guide/configure-openstack.html13:00
ionii've added the entire section of sources there because if i only leave the interval, the final file is missing almost everything apart from sinks and whatever i have added in overwrites13:00
cloudnullyes.13:00
cloudnullsadly its a list13:00
ioniok, so my configuration is fine this way no?13:01
ionihttps://paste.xinu.at/3Sfw9O/13:01
cloudnullyes.13:01
ionicool13:01
*** psilvad has quit IRC13:01
ionii was thinking i do something wrong :D13:01
ionithank you13:01
cloudnullrecursing into a list to find a hash with more lists to contain more hashes, etc... would be slow and hard to validate especially for a single key change.13:02
ionii understand13:02
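
For reference, a rough sketch of the kind of override ioni describes: restating the whole 'sources' list from user_variables.yml. The override variable name below is an assumption (the os_ceilometer role renders pipeline.yaml through config_template; check the role defaults for the exact name) and the values are illustrative only.

    # /etc/openstack_deploy/user_variables.yml  (illustrative sketch, not verified against the role)
    ceilometer_pipeline_yaml_overrides:
      sources:
        # 'sources' is a list, so the entire list has to be restated;
        # supplying only the changed interval would drop the other entries
        - name: cpu_source
          interval: 300
          meters:
            - "cpu"
          sinks:
            - cpu_sink
        # ...the remaining default source entries repeated here unchanged...
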
mgariepycloudnull, might be an issue with the L > M upgrade13:03
mgariepyit doesn't make neutron crash.13:03
cloudnullis it something you see a lot of in the logs?13:04
*** neilus has joined #openstack-ansible13:05
mgariepyif the process isn't killed, yes13:05
mgariepyit tries to kill it and fail..13:05
automagicallymorning all13:06
cloudnullmornings13:07
evrardjpweird I didn't remember seeing that in my L-> M upgrade13:08
*** TheIntern has joined #openstack-ansible13:09
mgariepyi started from a (fuel)icehouse DB (ovs + gre) > (osad)kilo (linux bridge + vxlan)13:09
mgariepyI'll keep an eye to see if it does happen again after the reboot.13:09
*** sdake has joined #openstack-ansible13:10
evrardjpI may have missed that too if neutron behaved correctly13:10
Andrew_jediHello folks13:11
mgariepyyeah13:11
evrardjphey Andrew_jedi13:12
mgariepyi guess so, I probably wouldn't have seen it if I had not had issues with my network haha13:12
Andrew_jediCan i create 3 new containers and install rabbitmq there? I want to completely destroy the old containers.13:12
cloudnullhey Andrew_jedi how goes the network partitioning/hardware saga?13:12
Andrew_jedievrardjp: Hello :)13:12
cloudnullAndrew_jedi:  yes13:12
*** sdake_ has quit IRC13:13
cloudnullhowever you're going to need to rerun the OS roles due to vhost and user creates .13:13
cloudnullyou can do that with a tag though13:13
Andrew_jedicloudnull: hello, removed bonding, changed some hardware, but still no cigars13:13
mgariepythe kill error  was only for the liberty process..13:13
*** psilvad has joined #openstack-ansible13:13
*** messy has joined #openstack-ansible13:14
Andrew_jedicloudnull: Ok, and is it possible to create two rabbit containers on one node?13:14
cloudnullyup13:15
Andrew_jedicloudnull: like i want to recreate the controller1 container on container313:15
Andrew_jedicloudnull: Like this, openstack-ansible setup-hosts.yml --limit controller1 \13:15
Andrew_jedi   --limit controller1_rabbit_mq_container-2d645e7f13:15
cloudnullAndrew_jedi: on shared infra under your host where you want >1 you'd add the following13:15
cloudnullhttps://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.aio#L76-L7813:16
cloudnullaio1 == hostname, rabbit_mq_container: 213:16
Andrew_jedicloudnull: Okay, done13:17
cloudnullalextricity25: you around ?13:18
Andrew_jedicloudnull: And then simply run the rabbit play ?13:18
cloudnullalextricity25: this was the rsync error you were talking about w/ master the other day, right? http://cdn.pasteraw.com/cas8ql17ju5spds2jepcslbxt2ppjdj13:18
cloudnullAndrew_jedi: yup13:18
cloudnullwell. no. sorry13:18
cloudnullyou have to run the lxc-container-create play too13:18
cloudnullopenstack-ansible lxc-container-create.yml --limit rabbitmq_all13:19
cloudnullthen run the rabbitmq play13:19
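
Pulling the steps above together, a minimal sketch of the affinity override cloudnull points at, following the linked AIO example. The host name and IP here are illustrative.

    # /etc/openstack_deploy/openstack_user_config.yml  (illustrative sketch)
    shared-infra_hosts:
      controller3:
        affinity:
          rabbit_mq_container: 2    # spawn two rabbitmq containers on this host
        ip: 172.29.236.13

    # then, from the playbooks directory:
    #   openstack-ansible lxc-container-create.yml --limit rabbitmq_all
    #   openstack-ansible rabbitmq-install.yml
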
*** psilvad has quit IRC13:19
Andrew_jedicloudnull: ahhh, thx :)13:19
alextricity25cloudnull: yup13:20
alextricity25cloudnull: How were you able to reproduce it this time around?13:21
*** ametts has joined #openstack-ansible13:21
cloudnullI deployed swift.13:23
cloudnullthe last go i hadn't13:24
*** johnmilton has joined #openstack-ansible13:24
cloudnullbut thought i had13:24
alextricity25cloudnull: oh XD13:24
Andrew_jedicloudnull: Any idea how tricky it is to do the same thing with galera?13:24
cloudnullAndrew_jedi: https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.aio#L7713:24
cloudnulljust add that line13:24
cloudnullopenstack-ansible lxc-container-create.yml --limit galera_all13:24
*** electrofelix has quit IRC13:25
cloudnullthen run the galera-install.yml play13:25
Andrew_jediAwesome13:25
Andrew_jediMy galera cluster is down, if i run the galera-install.yml play, is there any chance that i may lose my db?13:26
*** klamath has joined #openstack-ansible13:27
*** klamath has quit IRC13:27
*** klamath has joined #openstack-ansible13:27
*** psilvad has joined #openstack-ansible13:29
cloudnullAndrew_jedi: nope.13:29
cloudnulland the play itself will attempt to bring the cluster back up13:30
Andrew_jediAwesome!13:30
*** neilus1 has joined #openstack-ansible13:31
*** jamesdenton has joined #openstack-ansible13:31
Andrew_jediAnd if i remove controller1 from infra nodes in openstack_user_config.yml then the containers on controller1 will be ignored.13:31
*** neilus1 has quit IRC13:31
*** neilus1 has joined #openstack-ansible13:32
*** neilus1 has quit IRC13:32
cloudnullno they're part of inventory .13:32
*** neilus has quit IRC13:32
cloudnullalextricity25: do you have the ansible bug handy?13:33
cloudnullfor the rsync things?13:33
*** jiteka has joined #openstack-ansible13:34
*** electrofelix has joined #openstack-ansible13:34
Andrew_jedicloudnull: Okay, any idea how can i remove an infra node completely from the openstack setup?13:34
*** sacharya has joined #openstack-ansible13:34
cloudnullyes. ./scripts/inventory-manage.py --file /etc/openstack_deploy/openstack_inventory.json --remove $HOSTNAME_OR_CONTAINERNAME13:35
*** neilus has joined #openstack-ansible13:35
Andrew_jediThe thing is that rabbit and galera still think that there is a network partition, which is making them fail miserably.13:36
Andrew_jediAhhhh, you are a life saver,13:36
Andrew_jedithanks!13:36
*** vnogin has quit IRC13:36
mrhillsmang'morning13:36
TheInternmorning13:36
*** javeriak has joined #openstack-ansible13:36
cloudnullmornings13:36
cloudnullTheIntern: in da' House!13:37
TheInternyea he his13:37
TheInternbut he still can't type13:37
TheIntern...13:37
*** ManojK has quit IRC13:37
asettleo/13:37
TheIntern\o13:38
asettleo/\o13:38
*** sacharya has quit IRC13:39
TheInternhey now, let's keep it clean. There are children present.13:39
messyasettle: that kind of looks like an angry bird also13:39
asettleI was going for a high 5 guys, jeeeez13:39
TheInternol13:40
asettleOkay, I can only see the angry bird. How is that dirty!?13:40
TheInternidk13:40
asettleOh. Lies!13:40
TheInternTriangles are just dirty shapes13:40
*** KLevenstein has joined #openstack-ansible13:41
*** jmckind has joined #openstack-ansible13:43
*** jmckind_ has joined #openstack-ansible13:46
alextricity25cloudnull: Yeah.13:47
*** admin0 has joined #openstack-ansible13:47
alextricity25cloudnull: https://github.com/ansible/ansible/issues/1540513:47
alextricity25cloudnull: Also https://github.com/ansible/ansible/issues/1382513:48
alextricity25but the latter one is marked as closed13:49
*** psilvad has quit IRC13:49
Andrew_jedicloudnull: After removing a container from inventory, http://paste.openstack.org/show/524270/13:49
Andrew_jedi./scripts/inventory-manage.py --file /etc/openstack_deploy/openstack_inventory.json --remove-item controller1_rabbit_mq_container-2d645e7f13:50
Andrew_jediopenstack-ansible rabbitmq-install.yml13:50
*** jmckind has quit IRC13:50
Andrew_jediAlso gets this,13:51
Andrew_jediGATHERING FACTS ***************************************************************13:51
Andrew_jedifatal: [controller1_rabbit_mq_container-d14f8be7] => SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh13:51
*** neilus has quit IRC13:51
odyssey4mecloudnull does https://review.openstack.org/#/c/323504/ need a backport or is it only applicable to newton?13:51
*** Mudpuppy has joined #openstack-ansible13:52
cloudnullodyssey4me: IMO yes.13:52
cloudnullbackport to stable/mitaka13:53
openstackgerritJerry Cai proposed openstack/openstack-ansible-os_nova: Filter libvirt in nova.conf. Add nova_powervm module in nova.virt  https://review.openstack.org/33269713:53
*** TxGirlGeek has joined #openstack-ansible13:53
cloudnullAndrew_jedi: did you remove all of the controller1 containers and hosts and then pull that host out of your openstack_user_cnfig.yml ?13:53
Andrew_jedicloudnull: Only removed one container.13:54
Andrew_jedi./scripts/inventory-manage.py --file /etc/openstack_deploy/openstack_inventory.json --remove-item controller1_rabbit_mq_container-2d645e7f13:54
odyssey4mecloudnull ugh, it's a rather nasty refactor :/13:54
cloudnullyea it'll need to be manually picked.13:54
cloudnullI can do that if you like13:54
cloudnullAndrew_jedi: if you just dont want rabbit on that host you can set an affinity of 013:55
odyssey4mecloudnull I'd rather avoid pulling it back unless it's really necessary13:55
odyssey4mecloudnull or perhaps pull just the necessary part back to make the upgrade process succeed... leave out the new feature bits13:55
cloudnullI dont believe its "really" necessary.13:56
cloudnullI think we have the needed things in place for an upgrade13:56
odyssey4mein that case, we leave it alone13:56
cloudnullok13:56
odyssey4meno backports to the stable branch unless they're necessary - from the looks of things we have enough people with implementations now to need to very carefully respect that13:57
cloudnullthat change provides the correct hostnames everywhere and aliases for legacy support.13:57
odyssey4meyeah, I think we already have aliases in mitaka - that will have to do13:57
cloudnull++13:58
*** ManojK has joined #openstack-ansible13:58
*** kylek3h has joined #openstack-ansible13:58
odyssey4meunless there's demand from cores across multiple stakeholders to bring that in - d34dh0r53 / evrardjp / git-harry / automagically / jmccrory ?13:58
automagicallyI don’t have a screaming need for a backport of that to mitaka14:00
automagicallyMaybe add to the agenda for later today tho?14:00
Andrew_jedicloudnull: Changed the affinity to 0 and it failed again with the same message. :( http://paste.openstack.org/show/524270/14:00
odyssey4meFYI for those I pinged https://review.openstack.org/#/c/323504/14:00
cloudnullAndrew_jedi: you running with a retry or limit?14:01
Andrew_jedicloudnull: Nope14:01
Andrew_jediopenstack-ansible rabbitmq-install.yml14:01
Andrew_jedideleted ansible facts as well14:02
cloudnullif you list out your inventory do you see a rabbitmq container on controller1 ?14:03
cloudnullor the one you've deleted?14:03
Andrew_jedicloudnull: Yes, it is there.14:06
Andrew_jediwait14:06
*** kstev has joined #openstack-ansible14:06
Andrew_jediThe container name is different14:06
Andrew_jedibut yes there is a container on controller114:07
cloudnullyou'll need to remove it14:07
*** sdake_ has joined #openstack-ansible14:07
cloudnullset the affinity to 014:08
cloudnullthen rerun14:08
Andrew_jediaffinity is set, removing container ...14:08
*** smatzek has quit IRC14:08
*** spotz_zzz is now known as spotz14:10
*** sdake has quit IRC14:11
*** ManojK has quit IRC14:13
*** javeriak has quit IRC14:18
*** sdake_ is now known as sdake14:20
Andrew_jedicloudnull: Rabbit fixed, now trying the same trick for galera14:22
odyssey4mepalendae jwagner- d34dh0r53 do you guys have a link to something which describes how to do an upgrade test including checking uptime and other markers to observe? I think it'd be great if we could collaborate on a test method that perhaps can be better gated14:23
Andrew_jedicloudnull: Since my galera cluster is broken, should i use this "openstack-ansible -e galera_ignore_cluster_state=true galera-install.yml" ?http://paste.openstack.org/show/524273/14:24
*** javeriak has joined #openstack-ansible14:29
*** TxGirlGeek has quit IRC14:29
*** mkrish has joined #openstack-ansible14:31
mkrish@mhayden,14:32
*** ManojK has joined #openstack-ansible14:33
mkrishin your IPv6 openstack configuration, what was the DHCP server you used, SLAAC or DHCPv6 ?14:33
*** smatzek has joined #openstack-ansible14:39
*** sacharya has joined #openstack-ansible14:40
*** charz has quit IRC14:43
*** sacharya has quit IRC14:45
*** rromans has quit IRC14:45
*** charz has joined #openstack-ansible14:46
*** pcaruana has quit IRC14:46
*** Ger-chervyak has quit IRC14:48
*** saneax is now known as saneax_AFK14:49
*** Ger-chervyak has joined #openstack-ansible14:49
*** david-lyle_ is now known as david-lyle14:51
*** rromans has joined #openstack-ansible14:54
*** sdake_ has joined #openstack-ansible14:56
*** TxGirlGeek has joined #openstack-ansible14:56
cloudnull-cc mrhillsman -- RE: <odyssey4me> ... do you guys have a link to something which describes how to do an upgrade test including checking uptime and other markers to observe? I think it'd be great if we could collaborate on a test method that perhaps can be better gated14:56
cloudnullAndrew_jedi: sorry was in meetings.14:57
cloudnullyes that flag will likely be needed14:57
mrhillsmancloudnull i would say no but we can definitely collaborate on figuring that out14:58
*** sdake has quit IRC14:59
*** sacharya has joined #openstack-ansible14:59
cloudnullif you could that'd be great.14:59
*** javeriak has quit IRC14:59
mrhillsmanthe only "test" we ran was to ensure resources that were created pre upgrade were still around post upgrade14:59
mrhillsmanand functioning14:59
cloudnullI know you all are working on similar things and if we can bridge some of that work we may be able to reduce work duplication .15:00
mrhillsmani am spending the day pretty much going back through the upgrade and testing each service individually and observing logs15:00
mrhillsmanfor sure15:00
*** ManojK has quit IRC15:01
cloudnullmgariepy: was talking about mitaka upgrades just this morning regarding potentially broken filters.15:01
cloudnull[07:50] <mgariepy> [11:45:39] I get some error in neutron : http://paste.openstack.org/show/524250/15:03
cloudnull[07:50] <mgariepy> [11:47:09] this is after the upgrade to mitaka : http://paste.openstack.org/show/524252/ i have both liberty and mitaka process running.15:03
*** ManojK has joined #openstack-ansible15:03
*** BjoernT has joined #openstack-ansible15:03
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible: Updates Neutron-FWaaS/Swift SHAs for 11.2.17 (kilo-eol SHA's)  https://review.openstack.org/33607515:04
cloudnullnote the 12.0.14 and 13.1.3 both running. seems like a service restart didnt happen, or we missed it15:04
*** TxGirlGeek has quit IRC15:04
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible: Finalise Neutron-FWaaS/Swift SHAs for 11.2.17 (kilo-eol)  https://review.openstack.org/33607515:05
cloudnullmrhillsman: but if you can confirm in your tests we should be able to get something fixed sooner rather than later.15:05
*** TxGirlGeek has joined #openstack-ansible15:05
*** moorryan has joined #openstack-ansible15:05
mrhillsmansure thing15:06
odyssey4memrhillsman please make notes of all the things that you do to verify the upgrade - ideally I'd like us to document it and formalise it into the repo as the upgrade test process, probably for the Mitaka->Newton upgrade spec15:06
odyssey4memrhillsman we may want to rather have it in our upgrade docs, but reference it in the spec15:07
mrhillsmanodyssey4me do you already have an etherpad15:07
odyssey4memrhillsman but for now, just putting together an etherpad would be a good start15:07
mrhillsmansec15:07
mrhillsmanhttps://etherpad.openstack.org/p/osa-liberty-mitaka-upgrade15:07
mrhillsmanadd your thoughts there please15:08
mrhillsmanif you know of anything we may want to test but may not think of - cause we do not know all the things - please add15:08
cloudnullmgariepy: you still around ?15:08
mgariepyyep15:08
mrhillsmanmgariepy cloudnull15:08
cloudnulldo you still have that node up with the 12.0.14 and 13.1.3 services running ?15:09
mgariepyI rebooted the container.15:09
*** weezS has joined #openstack-ansible15:10
cloudnulland that fixed it ?15:10
mgariepywell the 12.0.14 is not running for sure ;)15:10
*** sdake has joined #openstack-ansible15:10
cloudnullgood point15:10
*** adrian_otto has joined #openstack-ansible15:10
cloudnullhum i'm trying to figure out how/why the old process was not respawned on upgrade15:11
mgariepythe error i got was only for the 12.0.14 process .. so i'm not too sure about what was wrong there.15:11
odyssey4mepalendae d34dh0r53 sigmavirus24 I'm pretty sure that we had an etherpad with test details for kilo->liberty testing - any idea where that is?15:11
odyssey4methe only one I found was https://etherpad.openstack.org/p/openstack-ansible-kilo-liberty-upgrade-test115:11
palendaeodyssey4me, I don't believe I made another, others might have15:12
odyssey4mepalendae could you do a brain dump of things you remember were good to test in https://etherpad.openstack.org/p/osa-liberty-mitaka-upgrade ?15:12
*** sdake_ has quit IRC15:13
palendaeYou're looking for post-upgrade tests? Or tests during?15:13
odyssey4mepalendae all the things15:14
odyssey4mepalendae as I recall one of the tests was to build an instance and ping it, then measure down-time in the process?15:14
palendaeThat was manual, but yeah15:15
*** gregfaust has quit IRC15:15
mrhillsmani'm asking who from rpc helped - if anyone - so they can add to the pot15:15
mrhillsmanwilling to bet BjoernT did15:15
odyssey4mepalendae yeah, understood15:16
*** TxGirlGeek has quit IRC15:16
odyssey4methe path to doing this better is to document it in the first place15:17
palendaeYeah15:17
odyssey4meand the path to automating it is to document it too15:17
palendaeMost of our testing was focused around specific areas that had bitten us15:17
*** TxGirlGeek has joined #openstack-ansible15:17
odyssey4memgariepy please add your name to the participant list so that when etherpad.o.o drops your session we know who it was :)15:17
cloudnullmgariepy: https://bugs.launchpad.net/neutron/+bug/1570958 looks related.15:19
openstackLaunchpad bug 1570958 in neutron "Need neutron-ns-metadata-proxy child ProcessMonitor for dhcp agent" [Undecided,Invalid]15:19
cloudnullor similar to the issue you had seen.15:19
cloudnullwe might want to kill the metadata-proxy from an old series on upgrade.15:20
*** Ger-chervyak has quit IRC15:23
*** Ger-chervyak has joined #openstack-ansible15:23
cloudnullit looks like that process is spawned by the neutron-metadata-agent and will be respawned if it dies.15:23
cloudnullotherwise its left alone.15:24
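
A rough shell sketch of the manual cleanup being suggested here: find neutron-ns-metadata-proxy processes left over from the old release and kill them so the agent respawns them on the new code. The commands are generic and the PID is a placeholder; adjust for wherever the agents run in a given deployment.

    # list the metadata proxies and how long they have been running
    ps -eo pid,etime,cmd | grep [n]eutron-ns-metadata-proxy
    # kill any proxy started before the upgrade; per the discussion above the
    # agent only (re)creates the proxy when it is missing, so a stale one
    # keeps running on the old code until it is killed
    kill <pid-of-stale-proxy>
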
*** eil397 has joined #openstack-ansible15:25
*** TxGirlGeek has quit IRC15:27
*** Ger-chervyak has quit IRC15:27
*** sacharya_ has joined #openstack-ansible15:27
*** TxGirlGeek has joined #openstack-ansible15:27
*** TxGirlGeek has quit IRC15:28
*** eil397 has left #openstack-ansible15:29
*** TxGirlGeek has joined #openstack-ansible15:29
*** sacharya has quit IRC15:30
*** Ger-chervyak has joined #openstack-ansible15:30
*** mummer has joined #openstack-ansible15:43
*** TxGirlGeek has quit IRC15:46
*** TxGirlGeek has joined #openstack-ansible15:46
*** michaelgugino has joined #openstack-ansible15:47
*** Mudpuppy_ has joined #openstack-ansible15:52
*** Mudpuppy_ has quit IRC15:53
*** Mudpupp__ has joined #openstack-ansible15:54
*** Mudpupp__ has quit IRC15:54
*** Mudpuppy has quit IRC15:55
*** Mudpuppy_ has joined #openstack-ansible15:55
odyssey4memhayden ping?15:57
palendaeodyssey4me, I think he's in San Francisco at the Red Hat Summit16:00
*** Zucan has joined #openstack-ansible16:01
odyssey4mepalendae ah, good call16:01
odyssey4memeeting in #openstack-meeting-4 cloudnull, mattt, andymccr, d34dh0r53, hughsaunders, b3rnard0, palendae, Sam-I-Am, odyssey4me, serverascode, rromans, erikmwilson, mancdaz, _shaps_, BjoernT, claco, echiu, dstanek, jwagner, ayoung, prometheanfire, evrardjp, arbrandes, mhayden, scarlisle, luckyinva, ntt, javeriak, automagically, spotz, vdo, jmccrory, alextricity25, jasondotstar, KLevenstein, admin0, michaelgugino,16:01
odyssey4me ametts, v1k0d3n, severion, bgmccollum, darrenc, JRobinson__, asettle, colinmcnamara, thorst, adreznec, eil39716:01
*** TxGirlGeek has quit IRC16:03
openstackgerritNolan Brubaker proposed openstack/openstack-ansible: Use in-tree env.d files, provide override support  https://review.openstack.org/33259516:06
Andrew_jedicloudnull: Fixed it finally :D16:07
*** jwagner- is now known as jwagner16:07
*** woodard has quit IRC16:08
*** cloader8_ has joined #openstack-ansible16:08
*** TxGirlGeek has joined #openstack-ansible16:09
Andrew_jedicloudnull: Deleted the controller1 container, created a new one on controller3, installed galera on it. Then started the mysql node with the highest seqno with --wsrep-new-cluster to make it the primary node. And then started the mysql process on the other two nodes.16:10
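
As a hedged sketch, the recovery sequence described above looks roughly like this on a stock MariaDB/Galera install; paths and the init invocation may differ inside the containers of a given deployment.

    # on each galera node, find the most advanced copy of the data
    cat /var/lib/mysql/grastate.dat        # compare the seqno values across nodes
    # bootstrap the node with the highest seqno as the new primary component
    /etc/init.d/mysql start --wsrep-new-cluster
    # then start mysql normally on the remaining nodes so they resync from the primary
    /etc/init.d/mysql start
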
*** eil397 has joined #openstack-ansible16:11
*** TxGirlGeek has quit IRC16:13
*** thorst_ has quit IRC16:13
*** TxGirlGeek has joined #openstack-ansible16:13
*** thorst_ has joined #openstack-ansible16:14
*** thorst__ has joined #openstack-ansible16:15
*** sdake_ has joined #openstack-ansible16:15
*** mummer has quit IRC16:16
*** thorst_ has quit IRC16:18
*** sdake has quit IRC16:18
Andrew_jedicloudnull: Appreciate all the help. I will say it again. This is the best Openstack irc channel. :)16:18
*** Andrew_jedi has quit IRC16:19
*** woodard has joined #openstack-ansible16:20
*** thorst__ has quit IRC16:20
*** zhangjn has quit IRC16:20
*** sdake_ is now known as sdake16:23
*** TxGirlGeek has quit IRC16:23
*** TxGirlGeek has joined #openstack-ansible16:24
*** zhangjn has joined #openstack-ansible16:25
*** woodard has quit IRC16:25
*** woodard has joined #openstack-ansible16:25
*** TxGirlGeek has quit IRC16:26
*** woodard has quit IRC16:26
*** TxGirlGeek has joined #openstack-ansible16:26
*** admin0 has quit IRC16:30
*** TxGirlGeek has quit IRC16:32
*** TxGirlGeek has joined #openstack-ansible16:34
*** TxGirlGeek has quit IRC16:36
*** TxGirlGeek has joined #openstack-ansible16:37
*** javeriak has joined #openstack-ansible16:41
*** TxGirlGeek has quit IRC16:42
*** TxGirlGeek has joined #openstack-ansible16:42
*** Ger-chervyak has joined #openstack-ansible16:43
*** Ger-chervyak has quit IRC16:43
*** Ger-chervyak has joined #openstack-ansible16:43
*** Ger-chervyak has quit IRC16:43
*** Ger-chervyak has joined #openstack-ansible16:44
*** TxGirlGeek has quit IRC16:44
*** TxGirlGeek has joined #openstack-ansible16:45
*** TxGirlGeek has quit IRC16:49
*** TxGirlGeek has joined #openstack-ansible16:49
*** TheIntern has quit IRC16:51
*** TxGirlGeek has quit IRC16:54
*** TxGirlGeek has joined #openstack-ansible16:56
openstackgerritMerged openstack/openstack-ansible-os_swift: Implement Xenial Support  https://review.openstack.org/33454316:56
*** asettle has quit IRC16:59
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-os_neutron: Cleanup metadata-proxy when old versions are present  https://review.openstack.org/33612217:00
cloudnullmgariepy: ^17:00
*** ManojK has quit IRC17:01
logan-evrardjp: yeah i'm not proposing any change regarding what services use dns vs ip.. that's for a separate convo.. but I'm just curious if new containers in newton are now being generated with rfc compliant hostnames? is there a migration path for containers with old hostnames? why are we keeping them around--just lack of time to write the upgrade plays? etc17:02
*** Mudpuppy_ has quit IRC17:03
cloudnulllogan-: the containers are generated the same way as before.17:03
cloudnullthey will have a non-rfc name by default.17:03
*** ManojK has joined #openstack-ansible17:03
cloudnullwe're just making sure the file /etc/hosts and /etc/hostname have an RFC compliant name and an alias to the original inventory name17:04
logan-gotcha17:04
cloudnullthat way it works for the future but is compatible with what we had done before.17:04
*** Mudpuppy has joined #openstack-ansible17:04
*** thorst_ has joined #openstack-ansible17:05
*** Mudpuppy has quit IRC17:05
logan-that makes sense now. cross container references (ie nova services etc) will use the new hostname format then due to the /etc/hostname change, so I'll definitely have to make sure the rfc compliant hosts are resolvable across the infra17:05
*** Mudpuppy_ has joined #openstack-ansible17:05
*** thorst_ has quit IRC17:07
*** thorst_ has joined #openstack-ansible17:07
cloudnulllogan-: https://github.com/openstack/openstack-ansible-lxc_container_create/blob/master/tasks/container_create.yml#L218-L233  -- that is the container change17:08
logan-thanks17:08
cloudnullsorry . https://github.com/openstack/openstack-ansible-lxc_container_create/blob/master/tasks/container_create.yml#L194-L23317:08
cloudnullthose 4 tasks17:08
cloudnulldo your images need to have the old container name within the hosts file ?17:10
cloudnullIE we could change https://github.com/openstack/openstack-ansible-lxc_container_create/blob/master/tasks/container_create.yml#L210 to \17:11
cloudnull"127.0.1.1 {{ inventory_hostname | replace('_', '-') }}.{{ lxc_container_domain }} {{ inventory_hostname | replace('_', '-') }} {{ inventory_hostname }}"17:11
odyssey4mecloudnull evrardjp logan- I do wonder why we need everything to have the DNS names of every other thing?17:11
cloudnullin the container ?17:11
cloudnullor on the hosts ?17:11
odyssey4meI suppose it's possible that some services may need it - but perhaps we can adjust configs and stuff to negate the need for it?17:12
cloudnullwe have all the names of all the things on host machines. not in the container17:12
odyssey4meok, then why do we need it at all?17:12
pabelangerodyssey4me: have you had a chance to tryout the UCA mirrors?17:12
logan-cloudnull: because the containers are using the lxcbr0 dnsmasq for resolution?17:13
odyssey4mepabelanger not just yet - busy prepping a patch to make them used in our gate commits17:13
odyssey4mepabelanger I should have that going today/tomorrow - just got back from a week's holiday :)17:13
pabelangerodyssey4me: great! I haven't actually tested them either so feel free to ping when once that is online.  I'm expecting them to work, but until something actually has17:14
*** adrian_otto has quit IRC17:14
cloudnulllogan-: i dont think we need the hosts file for the lxcbr0 name resolution to work.17:14
odyssey4mepabelanger yep, I'll ping you with the review for tracking17:14
*** TxGirlGeek has quit IRC17:14
cloudnullodyssey4me: i think its one of the things we had that way, so it is still that way.17:14
logan-yeah just trying to understand why the hosts file is synced on the hosts but not the containers17:14
cloudnullI know its easier to move around the env using the host name17:15
odyssey4mecloudnull logan- as I recall lxcbr0 is setup with a dnsmasq service by default, which will consume the hosts file - thus allowing all containers to resolve all hosts and other containers17:15
cloudnullwhich the /ets/hosts file makes possible17:15
logan-yeah thats what Im guessing odyssey4me17:15
logan-The dns names are used for sure by some services. nova uses the agent's dns name for live migration. so the stuff has to be resolvable mostly globally for certain things17:16
odyssey4mecloudnull if it's simply a convenience then I'd say let's only build it on the control hosts, or perhaps the utility container... and perhaps also make it optional to have it done... with the default to off17:16
*** TxGirlGeek has joined #openstack-ansible17:16
odyssey4melogan- hmm, is there no way to turn the dns name usage off and instead to use the IP?17:17
logan-yes that is customizable17:17
odyssey4meit strikes me as something worth switching to remove another external reliance17:17
odyssey4meie we should do that by default17:18
logan-however you would have all of your nova services named after the agent's IP instead of name I think?17:18
odyssey4meoh dear, no that wouldn't be fun17:18
logan-nova-compute services*17:18
pabelangerodyssey4me: great, thanks17:19
* cloudnull afk lunching17:19
*** woodard has joined #openstack-ansible17:20
*** cloader8_ has quit IRC17:21
*** TxGirlGeek has quit IRC17:22
*** TxGirlGeek has joined #openstack-ansible17:24
openstackgerritSean M. Collins proposed openstack/openstack-ansible-os_neutron: Clarify the default for neutron_vxlan_group  https://review.openstack.org/33613217:33
*** aernhart has quit IRC17:36
*** aernhart has joined #openstack-ansible17:37
*** e-vad has quit IRC17:38
*** e-vad has joined #openstack-ansible17:39
*** adrian_otto has joined #openstack-ansible17:40
*** asettle has joined #openstack-ansible17:43
*** sdake has quit IRC17:45
*** admin0 has joined #openstack-ansible17:47
*** asettle has quit IRC17:47
evrardjpI'm off for today, see you tomorrow!17:48
*** michaelgugino has quit IRC17:49
spotzseeya evrardjp17:50
*** javeriak has quit IRC17:51
*** Zucan has quit IRC17:54
openstackgerritMerged openstack/openstack-ansible: Finalise Neutron-FWaaS/Swift SHAs for 11.2.17 (kilo-eol)  https://review.openstack.org/33607517:54
*** electrofelix has quit IRC17:55
*** michaelgugino has joined #openstack-ansible17:56
*** weezS has quit IRC17:56
*** javeriak has joined #openstack-ansible18:01
*** ManojK has quit IRC18:02
*** weezS has joined #openstack-ansible18:02
*** chandanc_ has joined #openstack-ansible18:02
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-os_neutron: Cleanup metadata-proxy when old versions are present  https://review.openstack.org/33612218:03
*** Ger-cher_ has joined #openstack-ansible18:04
*** Ger-chervyak has quit IRC18:04
*** TheIntern has joined #openstack-ansible18:05
*** KLevenstein has quit IRC18:16
*** TheInter_ has joined #openstack-ansible18:18
mgariepycloudnull, thanks for that.18:18
mgariepycloudnull, i'll test it on monday probably if I don't forget.18:18
cloudnullawesome18:19
cloudnulli hope it helps18:19
cloudnullin my local tests it seems to be doing the trip18:19
cloudnull**trick18:19
mgariepydid you reproduce the issue ?18:19
cloudnullyes18:19
mgariepyha cool18:19
cloudnullso when you go from 12 > 13 its a problem because filters change etc18:20
cloudnullhowever its still an issue going from 12.x > 12.y18:20
cloudnullor 13.x > 13.y18:20
mgariepyok18:20
cloudnullthe old code for the metadata-ns-proxy does not restart until killed18:20
cloudnullthe metadata agent monitors the process but does not restart it, it will only create it if it's missing/needed18:21
*** TheInter_ has quit IRC18:21
*** TheIntern has quit IRC18:22
*** TheIntern has joined #openstack-ansible18:22
cloudnullso it was a simple fix but kinda a pain to figure out.18:22
*** ManojK has joined #openstack-ansible18:26
*** weezS has quit IRC18:31
openstackgerritMerged openstack/openstack-ansible-os_neutron: Clarify the default for neutron_vxlan_group  https://review.openstack.org/33613218:34
*** wadeholler has quit IRC18:42
*** admin0 has quit IRC18:50
*** Ger-cher_ has quit IRC18:53
*** Ger-chervyak has joined #openstack-ansible18:53
*** chandanc_ has quit IRC18:56
*** aernhart has quit IRC18:58
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-plugins: Add the Ansible human_log call back plugin  https://review.openstack.org/32133118:58
cloudnullodyssey4me: still around ?18:59
*** aernhart has joined #openstack-ansible18:59
cloudnullif you can have a look at that last PR i think that should solve our human_log issues w/ 2.918:59
cloudnull**ansible 2.018:59
*** hybridpollo has joined #openstack-ansible19:00
*** wadeholler has joined #openstack-ansible19:02
odyssey4mecloudnull: evrardjp and I were just chatting about how to do a pod-type deployment, and were thinking that this might just work: https://gist.github.com/odyssey4me/09d963776f8872f2562e477c5158a3e019:06
odyssey4meyour thoughts?19:06
odyssey4mepalendae ^ I expect you may also be interested19:07
*** admin0 has joined #openstack-ansible19:08
cloudnullodyssey4me: I think we need to re-do inventory storage for AZ specific deployments. along the lines of what we were talking about at the OPS midcycle (using something like consul or etcd)19:10
cloudnullrackertom: ping'd me about that the other day19:10
cloudnullhe may have some thoughts on it too19:10
cloudnullOnce we have something that can actually manage inventory intelligently, IE not the json file, I think multi-region type deployments get a lot simpler.19:11
palendaeodyssey4me, So the only differentiation of AZs for the nodes is the network?19:11
palendaecloudnull, yeah. I was thinking of something like using LDAP which already has region attributes19:12
*** javeriak has quit IRC19:12
cloudnullpalendae:  that'd work too19:12
palendaeI need to track down someone working on https://github.com/rackerlabs/craton19:12
cloudnulljimbaker: ^19:12
palendaeMy thoughts, which I need to blueprint - work to make a base class for importing host data, then redo the current stuff as a YAMl subclass19:13
palendaeThen we use a standard /etc/ansible/inventory/inv.ini file to specify other classes for other backends19:13
palendaeThe tricky part is network specification, which I suppose could just live on19:13
palendaein YAML19:14
*** admin0 has quit IRC19:14
palendaeAlso, right now we don't have a good way to keep track of the containers themselves besides that JSON file19:14
palendaeIf we import host data from LDAP, that's all well and good, but I'm not sure about writing container data there19:14
cloudnullyea writing potentially transient daat to LDAP is fugly19:15
cloudnull*data19:15
palendaeRight19:15
palendaeThis is where I wonder if LXD would be useful, but I haven't put meaningful time into that19:15
*** Ger-chervyak has quit IRC19:16
palendaeAlso, mapping variables becomes a little harder, because presumably not everyone will want or be able to attach AZ/group membership to variables within their current host inventory19:16
cloudnullI'm a fan of the consul, etcd, redis backend idea for a distributed system to store inventory. however I too haven't put time into prototyping19:16
palendaeThough group vars might help, and that would "simply" require mapping from the LDAP/AD/whatever inventory's group memberships to OSA's Ansible groups19:17
cloudnullthats a good point19:17
palendaeAre you thinking of importing the data from e.g. LDAP once then writing to etcd?19:18
cloudnulland group vars in 2.x have been greatly improved19:18
cloudnullpalendae: yea.19:18
palendaeI'd think people would want to continue managing hosts with their current solution. Adding a host to LDAP then maybe running a refresh query should add it19:18
cloudnullmaybe a tool to periodically re-import19:18
palendaeEspecially for compute19:18
palendaeYeah, just re-run19:18
palendaeWith a flag19:18
palendaeBut I think making that pluggable is important19:19
cloudnull++19:19
palendaeBecause a lot of places have their own source of truth19:19
palendaeAnd if you really want it to be a YAML file...good luck19:19
cloudnullneillc: had some work started on that front.19:19
*** Ger-chervyak has joined #openstack-ansible19:19
palendaeYeah, I need to revisit it and draft my spec19:19
*** weezS has joined #openstack-ansible19:20
palendaeI also think the ability to override variables per host will have to go, in order to simplify this19:20
palendaeMake the levels be per AZ, per group19:20
palendaeMaybe 1-2 levels of subgroup if you have something like mixed backend cinder storage19:20
cloudnull++ was just writing that19:21
odyssey4mepalendae cloudnull yeah, in my view the pluggable inventory should simply trust the inventory source for the truth for everything... in terms of craton that would mean that we should simply read the inventory from there, we don't write anything to it... in other words the container must already be there will all its data19:21
palendaeodyssey4me, Hm. That would mean we get out of provisioning containers entirely?19:21
odyssey4me*with19:21
odyssey4mewe still provision containers - but the properties for them must come from the inventory system19:22
palendaeGotcha19:22
odyssey4meie we do not generate a single thing19:22
stevellepalendae: variables per host is a core Ansible feature. I understand why you want to simplify but that may cause friction19:22
odyssey4meso the only place to generate things would be in the case of us using a file-based configuration19:22
palendaeSo basically the configuration is networking information and a query/lookup to the inventory system19:22
cloudnullalso worth noting that group vars have a longer load time than host vars.19:22
odyssey4mein general I think if we're using an inventory source, all overrides and environment data must come from that source19:23
palendaestevelle, True; perhaps a better way to put that is to move the host vars into host var files, not in the same place19:23
cloudnull1000s of hosts all using group vars may cause runtime issues.19:23
stevellepalendae: how about moving them into the Source Of Truth :P19:23
palendaecloudnull, That seems....backwards19:23
odyssey4meie we no longer require overrides, we no longer require /etc/openstack_deploy19:23
palendaestevelle, I'd be willing if someone told me what source of truth they want :)19:23
palendaeodyssey4me, What about the environment?19:23
stevellepalendae: /me waves hands in Craton's direction19:24
palendaeMy frustration with this so far is that the source of truth is always ambiguous19:24
odyssey4mepalendae it must all come from the inventory source19:24
palendaeMmmm19:24
palendaeI think we're just moving the problem into a bigger working part19:24
cloudnullodyssey4me: has a point, /etc/openstack_deploy was created to facilitate inventory19:25
stevelleI think of it as separation of concerns19:25
cloudnullthough we store secrets there too19:25
odyssey4meour inventory implementation via files should then mirror that model - so if we execute ansible then it all executes from a flat inventory19:25
palendaeIt seems like hosts would still have to be imported into this thing - for example, rackspace will be using CORE no matter what we build19:25
odyssey4meyeah, so I think the environment and secrets should possibly be individually pluggable19:25
cloudnullodyssey4me: also worth noting large flat inventory files are slow too.19:25
odyssey4methus allowing secrets to come from one place, and the environment to come from another19:25
palendaeI don't think anyone wants large flat inv files19:25
palendaeBecause they're painful to manage as a human19:26
*** sacharya_ has quit IRC19:26
odyssey4mecloudnull sure, but the file-based configuration should only be used for testing in small environments19:26
cloudnull++19:26
*** sacharya has joined #openstack-ansible19:26
odyssey4meansible reads a flat result that comes from the whatever the dynamic inventory spits out... whether the dynamic inventory source is a system with a db back-end, or a file system is irrelevant19:27
cloudnullon flat files it re-assembles it19:28
cloudnullwhen you load inventory from a dynamic source it simply reads it and runs19:28
odyssey4meall we should be providing in our own group_vars are the bits of glue we need to stitch things together - much like we do now19:28
odyssey4meie 'sensible defaults'19:28
palendaecloudnull, Well, depending on whatever your dynamic source reader is19:29
cloudnullpalendae: true19:29
palendaeI assume by 'flat files' you mean Ansible's default ini support, not our dynamic_inventory.py script19:29
cloudnullpalendae: yes.19:29
palendaeI'm not aware of OSA deploys that use that19:29
cloudnullthe end of the script is simply print(content)19:29
palendaeRight19:29
palendaeMaybe doing some mapping, but really it's just connect, query, output19:30
palendaeFor most of the included stuff19:30
cloudnullansible dynamic inventory is simply a fancy way of saying stdin19:30
palendaeWe kind of wedged ourselves cause we needed to specify container networking info for ansible to take and use19:30
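
To make the point concrete, a minimal sketch of what "dynamic inventory is just a script printing JSON" means: any executable that emits a structure like this on --list will do. The group, host name and address below are invented for illustration.

    #!/usr/bin/env bash
    # toy dynamic inventory: Ansible runs it with --list and reads JSON from stdout
    if [ "$1" = "--list" ]; then
      cat <<'EOF'
    {
      "galera_all": {"hosts": ["infra1-galera-container-3f1a2b4c"]},
      "_meta": {
        "hostvars": {
          "infra1-galera-container-3f1a2b4c": {"ansible_host": "172.29.239.11"}
        }
      }
    }
    EOF
    else
      echo '{}'    # per-host lookups are covered by _meta above
    fi
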
*** johnmilton has quit IRC19:31
cloudnullyup and those networking bits are stored in "hostvars" within the inventory.19:31
palendaeReading http://craton.readthedocs.io/en/latest/high-level-design.html, they don't appear to have a place where the equivalent of env.d would live19:31
palendaecloudnull, Right, it's generating and populating host vars19:31
cloudnullbut it'd be nice to decouple19:31
palendaeenv.d is kind of separate19:32
palendaeBecause you need that to define how containers relate to hosts...and how many to make19:32
palendaeIn some form19:32
cloudnullmaybe the container gets an IP / network info from some actual authoritative IP source.19:33
cloudnulland our inventory things simply broker API transactions instead of managing them internally.19:33
odyssey4mecloudnull if we expect the inventory system to specify all inventory info, then that's where that belongs19:34
palendaehttps://blueprints.launchpad.net/craton/+spec/craton-inventory-integration-patterns - this is kind of what I was meaning by moving the problem; craton itself needs source of truth19:34
odyssey4mewe no longer need to care about it19:34
odyssey4meall we need to do is some validation of things we expect - like we do now... ie no two hosts have the same IP, etc19:34
palendaeI'm going to poke at the craton folks; they seem to be targeting OSA support without talking in here19:36
cloudnullodyssey4me: if we move the issue outside of OSA then we wouldn't even need to do validation. the external system should be doing that. we would simply be a consumer19:36
cloudnulljimbaker, jcannava ^^19:37
odyssey4mecloudnull we may assume some things that are perfectly valid in other environments19:37
odyssey4mefor example - maybe we want to ensure that a compute host is not in two AZ's19:37
cloudnulli'd think that would be managed outside of OSA.19:38
stevellepalendae: "moving the problem" is kind of the point. Craton is partly intended to allow pluggable sources of truth19:38
odyssey4meperhaps, but then we need to set the expectations somewhere in documentation... and we know how well that works :p19:38
palendaestevelle, Ok, fair enough. I had not been included in conversation on that19:38
stevelleI just went to one of the summit sessions for it and have read a bit to keep up19:39
stevelleoffering the context I have19:39
palendaeYeah19:39
palendaeI would hope they'd be coming to us if they're looking to make it work for OSA, but I'll go to them19:39
odyssey4mepalendae I had a brief chat with sulo today and plan a sync up next week where I want to put him in contact with you to hopefully prep some sort of workable set of patches which we can start testing with early in the Ocata cycle... or maybe sooner19:41
palendaelol19:41
palendaeOk19:41
cloudnullneillc: would be good to have in that convo too19:41
odyssey4meyeah, for now I just want to set the scene, then leave it to y'all to run with it :)19:42
palendaeHonestly I think getting it working before Ocata is a bad idea19:42
palendaeTesting, sure19:42
palendaeadding a newton deliverable now? That seems unwise19:42
odyssey4mepalendae agreed - but I think there may be some basic groundwork needed and I'd like to see us get that spec done before the Newton release so that work can begin as soon as we have a Newton branch19:43
*** TheIntern has quit IRC19:43
odyssey4meI'll leave it to you to figure out timelines though. I don't want to commit on your behalf.19:44
palendaeodyssey4me, So is the expectation that craton's the solution?19:44
odyssey4meI just think that this is going to be a fairly major change and it'll need plenty of time and testing to mature.19:44
palendaeYeah, it is19:44
stevellepalendae: maybe not The solution, but A solution19:45
stevelleat least from OSA pov19:45
cloudnull++19:45
odyssey4mepalendae I expect that we'll need to have a file-based inventory solution for testing/gating and that we'll promote craton's use as a production system. Alternative inventory systems are welcome as long as they deliver the same data and have a plugin.19:45
stevelleI think the interface used to work with Craton should be The solution from OSA PoV19:46
stevellethen we add an etcd plugin or w/e19:46
odyssey4meie as long as someone commits an appropriate library plugin then it'll be supported. If they add some sort of CI for on-commit testing then it'll be promoted more actively.19:46
odyssey4mecloudnull we have a filter that'll convert http://mirror.rackspace.com/ubuntu to http://mirror.rackspace.com/ right?19:49
*** psilvad has joined #openstack-ansible19:49
cloudnullI think19:51
*** admin0 has joined #openstack-ansible19:51
odyssey4menetorigin ?19:51
odyssey4meyeah, that looks like it19:51
cloudnullget_netorigin19:51
cloudnullyes netorigin19:52
cloudnullno get_19:52
odyssey4meyeah, I figured from https://github.com/openstack/openstack-ansible/blob/liberty/playbooks/inventory/group_vars/hosts.yml#L4419:52
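
As a hedged illustration of the filter usage being discussed (the variable name is invented, and the exact output of netorigin is inferred from the conversation rather than checked against the plugin):

    # hypothetical user_variables.yml / group_vars entry
    some_mirror_origin: "{{ 'http://mirror.rackspace.com/ubuntu' | netorigin }}"
    # expected, per the exchange above, to reduce the URL to something like http://mirror.rackspace.com/
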
*** wadeholler has quit IRC19:53
cloudnullafk a bit19:55
*** asettle has joined #openstack-ansible19:59
*** asettle has quit IRC19:59
mgariepyfor whoever is interested: scripts/upgrade-utilities/playbooks/old-hostname-compatibility.yml will break cinder a bit since services will register via their new hostnames20:00
mgariepyadded a note in the etherpad.20:03
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible: Enable OpenStack-Infra Ubuntu Cloud Archive mirror  https://review.openstack.org/33627120:04
*** smatzek has quit IRC20:04
*** weezS has quit IRC20:04
odyssey4mepabelanger ^ hopefully that does the trick20:05
*** admin0 has quit IRC20:06
odyssey4meI'm out for the night -cheerio all, chat tomorrow!20:07
automagicallylater odyssey4me20:09
*** Mudpuppy_ is now known as Mudpuppy20:16
cloudnullcheers odyssey4me20:20
openstackgerritAdam Reznechek proposed openstack/openstack-ansible-specs: Add multiple CPU architecture support spec  https://review.openstack.org/32963720:21
cloudnullmgariepy: did you get it fixed?20:21
cloudnullif so what was the fix?20:21
*** ManojK has quit IRC20:24
*** ManojK has joined #openstack-ansible20:26
*** Ashana has quit IRC20:28
*** spotz is now known as spotz_zzz20:33
*** toddnni has quit IRC20:39
*** bootsha has joined #openstack-ansible20:39
*** adrian_otto has quit IRC20:41
*** adrian_otto has joined #openstack-ansible20:42
*** toddnni has joined #openstack-ansible20:44
*** bootsha has quit IRC20:45
*** adrian_otto has quit IRC20:46
*** KLevenstein has joined #openstack-ansible20:49
*** Mudpuppy_ has joined #openstack-ansible20:50
*** weezS has joined #openstack-ansible20:51
*** Mudpuppy has quit IRC20:53
*** Mudpuppy_ has quit IRC20:54
*** jayc has quit IRC20:55
*** jayc has joined #openstack-ansible20:55
*** bootsha has joined #openstack-ansible21:01
mrdaMorning OSA21:03
eil397morning21:03
*** psilvad has quit IRC21:08
*** hybridpollo has quit IRC21:08
*** hybridpollo has joined #openstack-ansible21:09
*** ametts has quit IRC21:13
*** aernhart has quit IRC21:23
*** kylek3h has quit IRC21:24
*** aernhart has joined #openstack-ansible21:24
*** galstrom_zzz has quit IRC21:29
*** galstrom_zzz has joined #openstack-ansible21:30
*** woodard_ has joined #openstack-ansible21:31
*** weezS has quit IRC21:31
*** mkrish has quit IRC21:32
*** KLevenstein has quit IRC21:34
*** woodard_ has quit IRC21:36
*** aernhart has quit IRC21:36
*** jamesdenton has quit IRC21:37
*** Ger-chervyak has quit IRC21:38
*** Ger-chervyak has joined #openstack-ansible21:41
*** thorst_ has quit IRC21:42
*** spotz_zzz is now known as spotz21:43
*** jroll|dupe has joined #openstack-ansible21:44
*** jroll|dupe has quit IRC21:44
*** jroll|dupe has joined #openstack-ansible21:44
*** woodard has quit IRC21:45
*** BjoernT has quit IRC21:45
*** john51 has quit IRC21:45
*** jroll has quit IRC21:45
*** lkoranda has quit IRC21:45
*** jroll|dupe is now known as jroll21:45
*** cloader89 has joined #openstack-ansible21:46
*** Ger-chervyak has quit IRC21:46
*** TxGirlGeek has quit IRC21:46
*** adrian_otto has joined #openstack-ansible21:47
*** cloader89 has quit IRC21:47
*** messy has quit IRC21:53
*** sdake_ has joined #openstack-ansible21:53
*** sdake_ has quit IRC21:53
*** sdake_ has joined #openstack-ansible21:53
*** catintheroof has joined #openstack-ansible22:03
*** bootsha has quit IRC22:05
*** thorst_ has joined #openstack-ansible22:16
openstackgerritBjoern Teipel proposed openstack/openstack-ansible-os_nova: Cleanup Nova console proxy git repos before updating it  https://review.openstack.org/33630222:19
*** thorst_ has quit IRC22:20
*** ManojK has quit IRC22:20
*** thorst_ has joined #openstack-ansible22:21
*** bootsha has joined #openstack-ansible22:25
*** bootsha has quit IRC22:27
openstackgerritBjoern Teipel proposed openstack/openstack-ansible: Cleanup Nova console proxy git repos before updating it  https://review.openstack.org/33630722:28
*** KLevenstein has joined #openstack-ansible22:29
*** thorst_ has quit IRC22:29
*** sdake_ has quit IRC22:33
*** KLevenstein has quit IRC22:37
*** jamielennox|away is now known as jamielennox22:37
*** jayc has quit IRC22:39
*** admin0 has joined #openstack-ansible22:44
openstackgerritJimmy McCrory proposed openstack/openstack-ansible-os_keystone: Ansible 2.x - Address deprecation warning of bare variables  https://review.openstack.org/30942522:50
*** admin0 has quit IRC22:52
*** asettle has joined #openstack-ansible23:04
*** jmckind_ has quit IRC23:06
*** thorst_ has joined #openstack-ansible23:08
*** thorst_ has quit IRC23:08
*** thorst_ has joined #openstack-ansible23:09
*** asettle has quit IRC23:10
*** sdake has joined #openstack-ansible23:11
*** sdake has quit IRC23:11
*** sacharya has quit IRC23:11
*** weezS has joined #openstack-ansible23:16
openstackgerritShashirekha Gundur proposed openstack/openstack-ansible-os_swift: add bool to var: swift_ceilometer_enabled  https://review.openstack.org/33631623:16
*** thorst_ has quit IRC23:18
*** catintheroof has quit IRC23:18
openstackgerritShashirekha Gundur proposed openstack/openstack-ansible-os_swift: add bool to var: swift_ceilometer_enabled  https://review.openstack.org/33631623:21
*** markvoelker has quit IRC23:21
*** saneax_AFK is now known as saneax23:22
*** berendt has quit IRC23:26
*** weezS_ has joined #openstack-ansible23:29
*** weezS has quit IRC23:30
*** weezS_ is now known as weezS23:30
*** thorst_ has joined #openstack-ansible23:44
*** BjoernT has joined #openstack-ansible23:44
*** BjoernT has quit IRC23:50
*** thorst_ has quit IRC23:54
*** thorst_ has joined #openstack-ansible23:55
*** eil397 has left #openstack-ansible23:59
