Tuesday, 2015-09-15

00:03 <openstackgerrit> Kevin Carter proposed openstack/openstack-ansible: adds the config_template to nova  https://review.openstack.org/223329
00:13 <cloudnull> I'm out, have a good night all
00:28 <coolj> later cloudnull
01:22 <openstackgerrit> Ian Cordasco proposed openstack/openstack-ansible: Make our usage of with_items/with_nested consistent  https://review.openstack.org/223365
01:29 <sigmavirus24> cloudnull: odyssey4me ^
01:29 <sigmavirus24> haven't run it on a lab/aio but we noticed it today with the swift stuff so I went through and audited all ~150 uses of with_items/with_nested
01:29 <sigmavirus24> I'm tired though so I'll let the gate bash its head against that for now
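For context on the with_items audit mentioned above: in Ansible 1.9, a bare variable name given to `with_items` behaves differently from a templated one, and an undefined variable can leave a task iterating over the characters of a literal string. A minimal sketch of the inconsistency being cleaned up (variable and task names are illustrative, not taken from the actual review):

```yaml
# Inconsistent: with a bare name, an undefined some_list can surface as the
# literal string "some_list" and be iterated character by character.
- name: Example task (illustrative)
  debug:
    msg: "{{ item }}"
  with_items: some_list

# Consistent: always template the variable so undefined/empty cases behave
# predictably.
- name: Example task (illustrative)
  debug:
    msg: "{{ item }}"
  with_items: "{{ some_list }}"
```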
02:14 <openstackgerrit> Kevin Carter proposed openstack/openstack-ansible: adds the config_template to tempest  https://review.openstack.org/223342
04:44 <openstackgerrit> Kevin Carter proposed openstack/openstack-ansible: adds the config_template to swift  https://review.openstack.org/223340
07:43 <svg> mattt: ping
07:44 <mattt> svg: morning2u
07:44 <svg> Hello
07:45 <svg> We're trying out a deploy based on 11.2.2 and bumped into an issue related to the cinder-backup/ceph update
07:45 <mattt> svg: you need to update your environment file
07:46 <svg> Ah. That sounds like it might be it: error while evaluating conditional: inventory_hostname in groups[item.0.component]
07:47 <mattt> svg: https://github.com/openstack/openstack-ansible/commit/44d3f25de6daab8073137c9fc76edcdfc23c716f#diff-137efd1f3dd0a7249de05894bd21dd54R26
07:49 <mattt> svg: depending on when you deployed your env, you may have cinder-volume on metal or not ... so based on that it's not safe to just copy the new file across
07:50 <svg> I can't recall cinder-volume ever being on metal
07:50 <mattt> svg: if a diff doesn't show any further changes then you'll be ok, but just be careful in case your cinder-volume config is different
07:50 <mattt> svg: it defaults to on-metal now, so you'll probably want to merge those changes in manually
07:51 <mattt> svg: i really wanted to avoid making changes to the env for cinder-backup, but without it being in a specific group we couldn't target it properly in that task file
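The env.d change mattt describes gives cinder-backup its own inventory group so the task file can target it. A rough, illustrative sketch of what the relevant kilo-era cinder env.d entries look like (check the commit linked above for the authoritative version):

```yaml
# /etc/openstack_deploy/env.d/cinder.yml (excerpt, illustrative)
component_skel:
  cinder_backup:
    belongs_to:
      - cinder_all

container_skel:
  cinder_volumes_container:
    properties:
      # newer releases default cinder-volume to running on metal;
      # older deployments may have it in a container instead
      is_metal: true
```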
07:51 <svg> the setup was initially based on a version some weeks before 11.1.0
07:54 <evrardjp> good morning everyone
07:54 <mattt> svg: https://review.openstack.org/#/c/195181/
07:54 <mattt> merged july 7
07:54 <svg> thx
07:54 <mattt> svg: no worries, but yeah def. the env file needs updating
07:55 <svg> this is all damn confusing :-s
07:56 <mattt> svg: yeah :(
07:56 <mattt> that's why we deliberated over that review for so long, i wanted to avoid changes to the env file :(
07:56 <mattt> but the change was the cleanest long-term solution
07:57 <svg> sure
07:58 <svg> how osad's inventory works and how the environment is used is where I remain confused
07:59 <svg> I get the basic usage, but how OSA(D) works wrt a changing environment, that I don't get
08:01 <mattt> i have to be honest, i'm not an authority on osad inventory
08:05 <svg> Yeah, this project itself is already very complex.
08:05 <mattt> i think a lot of people feel that way
08:06 <mattt> specifically around the inventory
08:06 <mattt> we've had some discussions about simplifying things, especially around how we do roles and playbooks etc. ... i hope that work extends to the inventory also
08:29 <odyssey4me> svg I know that palendae has been wanting to document how the dynamic inventory all fits together with regards to precedence, etc.
08:29 <odyssey4me> We'd love to figure out how we can simplify the dynamic inventory though. Right now it's a powerful tool, but with power comes complexity.
08:32 <svg> ack; I guess documenting how it works in detail would be a first step
08:38 <odyssey4me> svg considering your own discussion around how best to handle inventory at scale, perhaps you've come up with some good ideas?
08:39 <odyssey4me> I would imagine that using the dynamic inventory is a better option than most, but simplifying how we do it would be better.
08:41 <svg> odyssey4me: that was an early opinion based on how I was used to handling inventory in general; given how inventory in osad is shaped, there is quite some complexity, and I assume needs, which I still don't grasp well enough to be able to come up with ideas, TBH
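As background to the discussion: an Ansible dynamic inventory is just an executable that prints JSON groups on `--list`. OSA's `dynamic_inventory.py` is far richer (it layers persisted state, env.d skeletons and user config on top), but the underlying protocol can be sketched minimally; the group and host names below are made up:

```python
#!/usr/bin/env python
# Minimal Ansible dynamic inventory sketch: emit group -> hosts JSON.
# Illustrative only; OSA's real script also persists generated data
# (container names, assigned IPs) between runs.
import json
import sys

def build_inventory():
    return {
        "galera_all": {"hosts": ["infra1_galera_container", "infra2_galera_container"]},
        "repo_all": {"hosts": ["infra1_repo_container"]},
        # _meta avoids Ansible calling the script once per host with --host
        "_meta": {"hostvars": {"infra1_repo_container": {"ansible_ssh_host": "10.0.3.10"}}},
    }

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory(), indent=2))
    else:
        print(json.dumps({}))
```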
08:51 <evrardjp> about the inventory at scale, it's definitely something we have expertise on, because 100% of our hosts are manageable with an ansible dynamic inventory
08:51 <evrardjp> just a different one than ansible ;)
08:51 <evrardjp> osad*
08:56 <svg> nice evrardjp - what tool do you use for that?
08:56 <evrardjp> cobbler
08:56 <evrardjp> and our own python scripts
08:56 <svg> urgh, cobbler of all things
08:56 <evrardjp> it works perfectly fine
08:57 <svg> prolly
08:57 <svg> I tend to see inventory as a single source of truth, from where everything comes
08:57 <svg> sth that should drive cobbler
08:58 <svg> not the other way around
08:58 <svg> but that's just me :)
08:58 <svg> I see too many people focusing on inventory as just a simple host list
09:00 <evrardjp> svg: it depends on what your source of truth is
09:01 <evrardjp> ours is cobbler
09:01 <evrardjp> yours could be monitoring
09:01 <evrardjp> ansible is kinda flexible
09:01 <evrardjp> and if you're using cobbler, remember it has a good xmlrpc interface
09:03 <svg> sure
09:03 <svg> my point is, as in your example, monitoring should not be the single source of truth
09:04 <svg> you need a SSoT, then use it to deploy monitoring - IMHO
09:04 <evrardjp> indeed
09:05 <evrardjp> it just happens that our cobbler serves as a SSoT ;)
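Since cobbler exposes its data over XML-RPC, driving an Ansible inventory from it is mostly a transformation problem. A hedged sketch: the `get_systems` call and record fields mirror cobbler's documented API, but the server URL, field names and group mapping here are assumptions for illustration, so the transformation is shown against static sample data:

```python
# Sketch: build an Ansible inventory dict from cobbler system records.
# In real use the records would come from cobbler's XML-RPC API, e.g.:
#   import xmlrpc.client
#   server = xmlrpc.client.ServerProxy("http://cobbler.example.com/cobbler_api")
#   systems = server.get_systems()
from collections import defaultdict

def systems_to_inventory(systems):
    """Group systems by their cobbler profile name (illustrative mapping)."""
    groups = defaultdict(lambda: {"hosts": []})
    hostvars = {}
    for system in systems:
        name = system["hostname"]
        groups[system["profile"]]["hosts"].append(name)
        hostvars[name] = {"ansible_ssh_host": system["ip_address"]}
    inventory = dict(groups)
    inventory["_meta"] = {"hostvars": hostvars}
    return inventory

# Static sample data standing in for the XML-RPC response:
sample = [
    {"hostname": "infra1", "profile": "controller", "ip_address": "10.0.0.11"},
    {"hostname": "comp1", "profile": "compute", "ip_address": "10.0.0.21"},
]
```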
09:12 <svg> evrardjp: so, when do you go into production with OS?
09:12 <evrardjp> September 21
09:13 <evrardjp> 2015 :D
09:13 <svg> ?
09:13 <evrardjp> yeah it's close
09:26 <openstackgerrit> Merged openstack/openstack-ansible: Fix the heat stack user create  https://review.openstack.org/223249
09:36 <openstackgerrit> Merged openstack/openstack-ansible: Disable scatter-gather offload on host bridges  https://review.openstack.org/223292
10:30 <tiagogomes> hi, I am seeing this http://paste.openstack.org/show/462758/ when running osad. Should I be worried?
10:35 <mattt> tiagogomes: what version of ansible are you using?
10:35 <mattt> (ansible --version)
10:39 <tiagogomes> mattt, ansible 1.9.3
10:41 <mattt> tiagogomes: ok, wanted to make sure you weren't using an old problematic ansible version :)
10:45 <mattt> tiagogomes: wonder if you don't have the bridge kernel module loaded
10:45 <tiagogomes> mm, I have bridge interfaces up and running
10:47 <mattt> tiagogomes: do you have ip_tables, ip6_tables, and ebtables loaded?
10:47 <mattt> (not saying to load them, i'm just curious)
10:48 <tiagogomes> not ebtables
10:49 <mattt> actually that should be fine, ebtables shouldn't be loaded on a controller node i don't think
10:49 <mattt> (my test node is an AIO so we have some overlap here)
11:00 <gparaskevas> what branch are you using, and what kind of installation? AIO?
11:09 <tiagogomes> kilo with a multi-node installation
11:18 <openstackgerrit> Merged openstack/openstack-ansible: Install nfs-common with nova-compute  https://review.openstack.org/221844
11:22 <openstackgerrit> Merged openstack/openstack-ansible: Install logrotate with ryslog  https://review.openstack.org/222143
12:33 <tiagogomes> Hi, I got this error http://paste.openstack.org/show/462899/ when using osad. Any ideas? I am using the kilo branch
12:35 <mgariepy> tiagogomes, ssh to bl002-test0_galera_container-4d824b40, and run: /usr/local/bin/pip install MySQL-python
12:35 <mgariepy> you will see what's wrong i guess.
12:36 <odyssey4me> tiagomes - did the setup-infrastructure complete? or did you use run-playbooks to go through the playbooks run?
12:36 <odyssey4me> tiagogomes ^
12:37 <odyssey4me> also, are you using a set of repo servers which you've set up using a different branch before - or is the environment entirely fresh?
12:39 <tiagogomes> mgariepy I got this http://paste.openstack.org/show/462909/
12:39 <tiagogomes> odyssey4me I ran setup-hosts, and then setup-infrastructure
12:39 <tiagogomes> The environment is fresh
12:41 <odyssey4me> tiagogomes if you look at your repo container in /var/www/repo can you find the python wheel for that?
12:41 <odyssey4me> verify it on both your repo containers
12:44 <tiagogomes> I found this file /var/www/repo/os-releases/11.2.2/MySQL_python-1.2.5-cp27-none-linux_x86_64.whl
12:44 <odyssey4me> is it in both repo containers?
12:45 <tiagogomes> odyssey4me yes
12:45 <odyssey4me> hmm, 'Connection refused' tells me that your load balancer is either not running or not properly set up
12:45 <gparaskevas> if i am not mistaken you should have HAProxy installed if no hardware proxy is present.
12:45 <odyssey4me> are you using an external LB or HAProxy?
12:46 <mgariepy> did you configure haproxy and forget to run the playbook?
12:46 <gparaskevas> setup-hosts.yml then haproxy-install.yml and then the rest
12:46 <odyssey4me> if you are using haproxy then you need to run the haproxy playbook before running setup-infrastructure
12:46 <odyssey4me> :) what gparaskevas said
12:47 <gparaskevas> everything relies on the VIP: repo, endpoints.
12:47 <tiagogomes> odyssey4me ah, it could be that, I have some haproxy_hosts set up yes
12:48 <gparaskevas> ok then run haproxy-install.yml
12:49 <tiagogomes> gparaskevas ok, thanks, I'll try
12:49 <gparaskevas> and then try again to run the infrastructure playbook
12:49 <gparaskevas> tiagogomes: ok! np
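The run order gparaskevas and odyssey4me describe, as a shell sketch (kilo-era openstack-ansible; /opt/openstack-ansible is the conventional checkout location, adjust to your deployment):

```shell
cd /opt/openstack-ansible/playbooks
openstack-ansible setup-hosts.yml           # base host config + containers
openstack-ansible haproxy-install.yml       # when using haproxy: must run before
                                            # setup-infrastructure, since the repo
                                            # is reached via the VIP
openstack-ansible setup-infrastructure.yml  # galera, rabbitmq, repo servers, ...
openstack-ansible setup-openstack.yml       # the OpenStack services themselves
```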
12:55 <odyssey4me> tiagogomes confirm that haproxy is running on the intended servers, and that the vip is bound on one of them
12:56 <willemgf> I'm having the following error during the openstack-ansible setup-openstack.yml for cinder_scheduler_container. Installing 11.2.2
12:56 <willemgf> failed: [dc2-rk1-ch1-bl2_cinder_scheduler_container-3fbb44ef] => (item=cinder) => {"attempts": 5, "cmd": "/usr/local/bin/pip install cinder", "failed": true, "item": "cinder"}
12:56 <willemgf> In the container: root@dc2-rk1-ch1-bl2_cinder_scheduler_container-3fbb44ef:~# /usr/local/bin/pip install cinder
12:56 <willemgf> Ignoring indexes: https://pypi.python.org/simple
12:56 <willemgf> Collecting cinder
12:56 <willemgf>   Could not find a version that satisfies the requirement cinder (from versions: )
12:56 <willemgf> No matching distribution found for cinder
12:57 <willemgf> Where should the version be defined?
12:57 <odyssey4me> willemgf can you verify whether your repo containers have the pip wheel (all repo containers) and whether your load balancer is listening on the right ip and directing traffic to the repo containers
13:04 <tiagogomes> ok, I don't have that error anymore after running the ha_proxy playbook. But I have another one now: http://paste.openstack.org/show/462947/
13:06 <willemgf> odyssey4me: the package to be installed is present: http://10.16.8.10:8181/os-releases/11.2.2/cinder-2015.1.2.dev20-py2-none-any.whl
13:07 <odyssey4me> tiagogomes seeing as your cluster is fresh and has nothing in it, you can follow the instructions in the message about the galera cluster not being healthy
13:07 <willemgf> Also the pip package: http://10.16.8.10:8181/os-releases/11.2.2/pip-7.1.2-py2.py3-none-any.whl
13:07 <willemgf> as well for wheel: http://10.16.8.10:8181/os-releases/11.2.2/wheel-0.24.0-py2.py3-none-any.whl
13:07 <odyssey4me> willemgf ok, from inside the cinder container do a verbose pip install of the package to see what it says
13:08 <willemgf> all other pip installs went well so far, so it can not be a problem with the loadbalancer.
13:08 <odyssey4me> willemgf it could be - as the other packages may already be on the boxes and therefore they were just skipped
13:09 <willemgf> root@dc2-rk1-ch1-bl2_cinder_scheduler_container-3fbb44ef:~# /usr/local/bin/pip install cinder -v
13:09 <willemgf> Ignoring indexes: https://pypi.python.org/simple
13:09 <willemgf> Collecting cinder
13:09 <willemgf>   1 location(s) to search for versions of cinder:
13:09 <willemgf>   * http://10.16.8.10:8181/os-releases/11.2.2/
13:09 <willemgf>   Skipping link http://10.16.8.10:8181/os-releases/11.2.2/ (from -f); unsupported archive format: .2
13:09 <willemgf>   Getting page http://10.16.8.10:8181/os-releases/11.2.2/
13:09 <willemgf>   Starting new HTTP connection (1): 10.16.8.10
13:09 <willemgf>   "GET /os-releases/11.2.2/ HTTP/1.1" 200 None
13:09 <willemgf>   Analyzing links from page http://10.16.8.10:8181/os-releases/11.2.2/
13:09 <willemgf>     Skipping link http://10.16.8.10:8181/os-releases/ (from http://10.16.8.10:8181/os-releases/11.2.2/); not a file
13:10 <odyssey4me> willemgf it's usually better to use paste.openstack.org or pastebin to post log results
13:10 <willemgf> yeah, sorry
13:12 <tiagogomes> odyssey4me, that didn't solve the problem. Still having issues with the galera cluster: http://paste.openstack.org/show/462970/
13:13 <odyssey4me> tiagogomes so your galera cluster is in some sort of unhealthy state - you'll need to figure out the issue and get it into a healthy state before you can continue
13:13 <willemgf> odyssey4me: here the complete output: http://paste.openstack.org/show/462974/
13:16 <odyssey4me> willemgf and you're sure that every repo server has that cinder package?
13:16 <odyssey4me> ie the LB may be doing round robin and some repo servers have not properly synchronised the repo
13:16 <willemgf> Will check the 2 other containers as well
13:17 <tiagogomes> odyssey4me looking at the log, is it expected to have `password: ""` when connecting to the server?
13:18 <odyssey4me> tiagogomes did you generate your passwords?
13:18 <odyssey4me> ie are they in /etc/openstack_deploy/user_secrets.yml ?
13:18 <tiagogomes> no, I started fresh and I forgot to do it again
13:19 <odyssey4me> tiagogomes haha, that'll cause issues with the setup process for sure :p
13:19 <tiagogomes> now I need to re-run everything?
13:19 <tiagogomes> re-do*
13:19 <odyssey4me> generate the passwords
13:19 <odyssey4me> it's probably best to re-run each of the playbooks to be sure
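The password-generation step that was skipped is, in kilo-era openstack-ansible, handled by a helper script that fills in the empty values in the secrets file. A sketch of the documented invocation of that era (verify the script path against your checkout):

```shell
cd /opt/openstack-ansible
# fills every empty "key:" entry in the secrets file with a random value;
# run this before setup-infrastructure so galera etc. get real passwords
python scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
```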
13:19 <evrardjp> hello
13:20 <evrardjp> I don't know what happened to my kilo install but it seems that some of my repos are out of sync
13:20 <evrardjp> re-running setup-infra doesn't change anything
13:21 <evrardjp> did any of you get that issue?
13:26 <odyssey4me> evrardjp that is possible if the lsync (not sure if that's the right name) stops running for some reason on the secondary repo servers
13:27 <odyssey4me> there's a service that runs on all repo servers but the primary which handles the sync of changes
13:27 <odyssey4me> I've seen it stop sometimes
13:27 <evrardjp> if I run the setup-infra again, shouldn't it make sure the service is running?
13:27 <odyssey4me> evrardjp unfortunately it does not seem to
13:27 <evrardjp> I'll check the lsync process, will tell you
13:27 <odyssey4me> it only verifies that the config is right and only restarts it if the config changes - which it never does
13:28 <evrardjp> yeah it's not running
13:28 <evrardjp> I'll check in the playbooks, to see if we need a state=started
13:29 <odyssey4me> we could probably do something better to ensure that the process restarts, or have some sort of check to verify that it runs and keeps running
13:30 <evrardjp> that's what I'll do with my monitoring system
13:30 <odyssey4me> evrardjp I think that's the best approach
13:31 <evrardjp> it doesn't seem to want to run, even manually
13:31 <evrardjp> nothing in logs
13:31 <odyssey4me> bear in mind that on one server (the primary) it should not run
13:32 <odyssey4me> it should run on all the others though
13:32 <evrardjp> ok my bad
13:32 <odyssey4me> it's been a while since I tried it, but I do seem to recall that you can run it in the foreground or with a debug/verbose flag
13:32 <evrardjp> you sure it's that way?
13:32 <odyssey4me> yup
13:33 <tiagogomes> ok, while I am waiting to redeploy: I don't possess a hardware load balancer. If I set up haproxy hosts, does that mean if one infrastructure node goes down, the other one is there to serve? Do the haproxy hosts negotiate who has the virtual IP? Or do I need something like corosync?
13:33 <evrardjp> Isn't it the opposite? like only the master sends the rsync to the others?
13:33 <evrardjp> that sends the command to rsync to the others*
13:34 <odyssey4me> hmm, evrardjp you may be right: https://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/repo_server/tasks/main.yml#L28
13:34 <odyssey4me> odd, I always thought it was the other way around
13:35 <evrardjp> only one of the repos has the lua file
13:35 <evrardjp> one of the repo containers*
13:35 <evrardjp> but I don't see the issue of running rsync across different servers
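For reference, lsyncd (the service being discussed, and the source of "the lua file") is configured in Lua: the primary repo server watches the repo directory and pushes changes to the other repo containers with rsync. An illustrative fragment; the target host and options are assumptions, not OSA's actual template:

```lua
-- Illustrative lsyncd config: one sync block per secondary repo container.
settings {
    logfile    = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd-status.log",
}

sync {
    default.rsync,
    source = "/var/www/repo",
    target = "repo2.example.com:/var/www/repo",
    rsync  = { archive = true, compress = true },
}
```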
13:40 <willemgf> OK, only one of the 3 repo containers has the 11.2.2 repo. Looks like we have the same repo-sync problem as evrardjp has.
13:40 <evrardjp> so now you have the answer :)
13:40 <odyssey4me> hughsaunders can you please backport https://review.openstack.org/221844 and https://review.openstack.org/222143
13:41 <evrardjp> if you are using haproxy or a hardware LB, you can disable the other backends temporarily
13:41 <gparaskevas> tiagogomes: do you mean if one haproxy goes down?
13:41 <evrardjp> willemgf: ^
13:42 <evrardjp> the other way would be to start lsyncd on the appropriate server and let the thing run
13:42 <odyssey4me> tiagogomes that functionality is being worked on by evrardjp and is up for review at https://review.openstack.org/218818
13:44 <tiagogomes> gparaskevas yes, I assume that typically haproxy will run on every infrastructure node
13:45 <gparaskevas> it does run where you say it will run
13:45 <tiagogomes> odyssey4me ah very cool. I need something like that. I may cherry-pick the change
13:46 <odyssey4me> tiagogomes both gparaskevas and evrardjp have been testing and improving that patch :)
13:46 <odyssey4me> so thank them :)
13:46 <tiagogomes> thanks gparaskevas and evrardjp
13:46 <tiagogomes> :)
13:46 <gparaskevas> haha the osa police has spoken
13:46 <gparaskevas> you are all welcome
13:46 <evrardjp> what? what did I do?
13:47 <gparaskevas> lol
13:47 <odyssey4me> lol, only good things evrardjp :)
13:47 <evrardjp> ;)
13:47 <evrardjp> mgariepy is also a tester
13:47 <evrardjp> he really helped improving the product ;)
13:48 <odyssey4me> ah yes, I apologise for leaving mgariepy out
13:48 <evrardjp> I'll ask cloudnull to check if it (at least the inventory part) sounds good to him
13:49 <evrardjp> it's not because he lives far away and speaks french that you forgot him, right? ;)
13:49 <mgariepy> i don't do that for the fame ;p
13:49 <mgariepy> is there an easy way to update a patch set in my tree?
13:50 <mgariepy> i cherry-picked patch set 5 and would like to update to patch set 10..
13:50 <odyssey4me> mgariepy as in a patch which you cherry-picked into a fork?
13:50 <odyssey4me> personally I rebase to remove the old patch, then cherry-pick the new one - but you could revert the old and cherry-pick the new
13:50 <mgariepy> i use the cherry-pick command in gerrit
13:51 <evrardjp> is it the last commit?
13:51 <mgariepy> no
13:51 <mgariepy> not the last
13:51 <evrardjp> if it was it'd be simple: reset HEAD^ and cherry-pick again
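For mgariepy's question about moving from one patch set to another: one approach, assuming the old cherry-pick sits in your local history, is to drop it and fetch the new patch set from gerrit. The change/patch-set numbers below are illustrative (using the review being discussed):

```shell
# remove the previously cherry-picked commit (here: the most recent one);
# if it sits further back, an interactive rebase (git rebase -i) lets you drop it
git reset --hard HEAD^

# fetch patch set 10 of change 218818 from gerrit and apply it
# (gerrit refs follow refs/changes/<last two digits>/<change>/<patchset>)
git fetch https://review.openstack.org/openstack/openstack-ansible \
    refs/changes/18/218818/10
git cherry-pick FETCH_HEAD
```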
13:51 <willemgf> evrardjp: thnx, running lsyncd solved my problem
13:51 <evrardjp> willemgf: no worries
13:51 <cloudnull> morning
13:52 <cloudnull> evrardjp: looking now
13:52 <evrardjp> good morning cloudnull
13:52 <cloudnull> how's it going today ?
13:52 <evrardjp> cloudnull: we are not in a rush, don't worry
13:52 <evrardjp> I just don't want it to be lost in oblivion
13:53 <odyssey4me> o/ cloudnull
13:53 <cloudnull> oblivion is never good :)
13:53 <evrardjp> cloudnull: it's going fine in Belgium. How is life in texas? beautiful weather? It's rainy here :p
13:55 <cloudnull> it is a bit rainy here, and the weather has been a lovely 23-28 the last couple of days.
13:55 <cloudnull> so no complaints from me :)
13:58 <openstackgerrit> Hugh Saunders proposed openstack/openstack-ansible: Install logrotate with ryslog  https://review.openstack.org/223603
13:58 <openstackgerrit> Hugh Saunders proposed openstack/openstack-ansible: Install nfs-common with nova-compute  https://review.openstack.org/223604
13:59 <mgariepy> evrardjp, so you are belgian ?
14:00 <evrardjp> mgariepy: why? what did you expect?
14:00 <mgariepy> i'm half belgian, my mother is belgian.
14:00 <evrardjp> cool!
14:00 <mgariepy> i'm not expecting anything ;p haha
14:01 <mgariepy> she came to canada when she was 5 or 6.
14:01 <evrardjp> FYI, svg and willemgf are Belgians too if I'm not mistaken
14:01 <evrardjp> but we are getting personal, it's maybe not the purpose of this chan ;)
14:02 <svg> ASL?
14:02 <mgariepy> lol
14:02 <evrardjp> ><
14:02 <odyssey4me> oh dear
14:02 <cloudnull> evrardjp: looks good to me. seems to be really clean. +1 for now, I hope to have some time today to give it a test.
14:03 <evrardjp> other topic, I'm getting this: error while evaluating conditional: inventory_hostname in groups[item.0.component]
14:03 <evrardjp> on glance install right now (I'm using the ceph backend)
14:04 <evrardjp> it comes from ceph_install
14:05 <evrardjp> found it
14:05 <mattt> evrardjp: are you also getting burned by the same issue as svg ?
14:05 <mattt> (needing to update the cinder env.d file ?)
14:06 <evrardjp> found the commit to blame
14:06 <evrardjp> it's indeed yours
14:06 <evrardjp> :p
14:06 <evrardjp> I'll update my env.d
14:06 <mattt> hehehe
14:06 <evrardjp> and see
14:06 <odyssey4me> evrardjp you gotta follow your own docs ;)
14:06 <evrardjp> what do you mean?
14:07 <odyssey4me> evrardjp https://github.com/openstack/openstack-ansible/commit/4bf2a1f37c17433ff37246acd0fd292aabd02d70 ;)
14:08 <mattt> odyssey4me: ah, different issue here
14:08 <evrardjp> odyssey4me: I'm using both backends ;)
14:08 <odyssey4me> ah ok :p
14:12 <evrardjp> mattt: "inventory_hostname in groups[item.0.component]"
14:13 <mattt> evrardjp: ?
14:13 <evrardjp> mmm wait
14:14 <evrardjp> I'll re-test and tell you what really bit me
14:15 <mattt> evrardjp: that error looks right
14:15 <mattt> as in, i'd expect it if you don't have the current cinder env
14:19 <evrardjp> even if running glance?
14:20 <evrardjp> but you're right, that's the env
14:20 <evrardjp> so, forget my previous message :p
14:39 <sigmavirus24> git-harry: if by "not actually doing what the swift play should be doing" you mean it's working, then I'll just happily abandon that patch
14:39 <mattt> evrardjp: i am a bit surprised it triggers on glance tbh
14:41 <git-harry> sigmavirus24: If I've misunderstood I'm happy to be corrected, but from what I could tell it was working as expected
14:41 <sigmavirus24> git-harry: it's not working as intended, it's skipping those tasks because the account '{' does not exist
14:41 <sigmavirus24> Although perhaps we don't want to run those tasks because they're broken, so it is working as intended?
14:42 <git-harry> sigmavirus24: to confirm, are you referring to the task "Ensure contents file matches ring after ring sync for account/container"
14:47 <sigmavirus24> Sorry, misspoke; that file was the first we noticed it in though, yes
14:47 <sigmavirus24> (Also your point about the docs is valid, except that it's very clear that we're iterating over a string, not a list, in these tasks)
14:49 <git-harry> sigmavirus24: okay, so the reason that task is skipped is because swift_managed_regions is not defined; it's nothing to do with the random values the command would receive
14:50 <git-harry> sigmavirus24: so the logs look a bit weird but it's not exactly broken; if you define the variable and set it to an empty list everything will work as expected.
14:50 <sigmavirus24> git-harry: but that's showing the parameter to the task (as it's running the task over the items it thinks it has)
14:50 <sigmavirus24> If "swift_managed_regions" is not defined why is it still iterating over the string like that?
14:51 <git-harry> sigmavirus24: because that's how ansible works
14:51 <sigmavirus24> I'm not quite convinced sir
14:52 <sigmavirus24> especially given the majority of our uses are sans "{{ }}"
14:52 <git-harry> sigmavirus24: the better way to do this would be to give swift_managed_regions a default value of []
14:52 <git-harry> sigmavirus24: and as I pointed out in the documentation, that is not what ansible suggests.
14:53 <sigmavirus24> git-harry: except documentation is always a lie
14:53 <sigmavirus24> I thought you knew that
14:54 <git-harry> sigmavirus24: also, changing it to what you suggest doesn't actually make any difference to what ansible is doing, in the sense that the task is still being skipped for the same reason; only now it shows a junk value of the variable name instead of splitting it into individual characters
14:55 <git-harry> sigmavirus24: so I stand by my point that this is only really adjusting the skipped log entries, and even then it still shows something that isn't quite as nice as it should be
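The fix git-harry suggests, sketched as an illustrative task (the variable name mirrors the discussion; the task body is not the exact swift role source):

```yaml
# Giving the variable a default keeps the loop well-defined whether or not a
# deployer sets swift_managed_regions, and avoids Ansible 1.9's
# iterate-over-the-string-characters behaviour with a bare variable name.
- name: Ensure contents file matches ring after ring sync (illustrative)
  debug:
    msg: "managing region {{ item }}"
  with_items: "{{ swift_managed_regions | default([]) }}"
```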
tiagogomesI still didn't manage to have the galera container correctly configured. These are the first errors related to MySQL that I am seeing: http://paste.openstack.org/show/463126/15:00
*** cloudtrainme has joined #openstack-ansible15:03
odyssey4metiagogomes notice the ..ignoring - those are meant to fail on a fresh cluster15:07
*** jwagner_away is now known as jwagner15:07
tiagogomesah, that's good to know. So far I am only seeing ..ignoring. The playbook is still running15:08
matttevrardjp: get it sorted ?15:08
*** galstrom_zzz is now known as galstrom15:09
mattttiagogomes: did you figure the sysctl -p issue you mentioned earlier ?15:09
evrardjpmattt: I didn't check further15:09
evrardjpit was solved with the env.d overwriting15:09
evrardjpI'm on another thing15:09
evrardjpERROR: openstack The request you have made requires authentication15:09
evrardjpduring heat install15:09
odyssey4meevrardjp https://review.openstack.org/223249 may be useful to you?15:11
matttodyssey4me: i was just looking at that15:11
matttodyssey4me: but i'd have guessed that our gate would fail if that was absolutely required15:11
odyssey4memattt it typically only fails if the playbook is re-run after failing the first time15:12
odyssey4methe above patch sorts it out properly15:12
evrardjpmy issue happens at the "Create heat domain admin user" task15:12
odyssey4me(ie so that it works on every run, or doesn't work at all)15:12
evrardjpI'll check15:12
evrardjpif this introduced the bug15:12
odyssey4meevrardjp confirm that the heat container in question has an appropriate openrc15:12
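A quick way to run the check odyssey4me suggests, sketched from a kilo-era deployment host (the container name placeholder and the path to openrc are assumptions):

```shell
# Attach to the heat API container and confirm the credentials work.
lxc-attach -n aio1_heat_apis_container-XXXX -- bash -c \
  'source /root/openrc && openstack token issue'
# An authentication failure here points at the openrc contents (or the
# haproxy/SSL setup in front of keystone) rather than at heat itself.
```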
*** gparaskevas has quit IRC15:13
evrardjpI think it's maybe my haproxy ssl that could cause the issue15:13
prometheanfireodyssey4me: about your candidacy, what are you looking to do about the stability of the existing project?15:13
odyssey4meare you using self-signed certs?15:13
prometheanfiremoving forward is nice and all, but I'm concerned about moving too fast and turning into the next neutron15:14
*** cloudtra_ has joined #openstack-ansible15:14
prometheanfiregaining tech debt etc15:14
odyssey4meprometheanfire each area has an implication of managing that - I didn't want to write more of an essay than I already did15:14
matttodyssey4me: you sent this to -dev ?15:15
odyssey4memattt no, I'm not campaigning as it appears that there is no-one else raising their hand15:15
evrardjpodyssey4me: nope I'm not ;)15:15
odyssey4mehttp://git.openstack.org/cgit/openstack/election/plain//candidates/mitaka/OpenStackAnsible/jesse_pretorius.txt15:16
matttah cool15:16
odyssey4meI believe that the actual voting happens next week15:16
tiagogomesmattt no I am ignoring it for the time being15:17
*** cloudtrainme has quit IRC15:17
*** shoutm has quit IRC15:17
evrardjpI'm completely clueless with this heat stuff15:17
odyssey4meprometheanfire also, I question calling OSA 'unstable' at this point in time - the issues experienced in testing upgrades from Juno to Kilo were expected, as that was the transition between the RPC product and a community product, which included a mass of architectural changes... So if there's a perceived issue of instability, then I'd like that to be a constructive discussion at the summit. Can you please add it as a topic in https://etherpad.openstack.org/p/openstack-ansible-mitaka-summit - perhaps in the Workroom Slots?15:22
odyssey4mere-link: https://etherpad.openstack.org/p/openstack-ansible-mitaka-summit15:22
evrardjpI'll check heat stuff tomorrow15:23
matttevrardjp: which tag you using again?15:23
evrardjplast kilo15:23
matttevrardjp: ok let me try a build off that and see if i can duplicate15:23
evrardjpif you don't mind I'll check tomorrow15:23
palendaeodyssey4me: Let me answer that with a question - how many OSA juno -> kilo upgrades have you done?15:23
*** alextricity has joined #openstack-ansible15:23
matttevrardjp: yeah i'm not going to be here much longer15:23
evrardjpwe can check tomorrow, maybe not lose time for me right now ;)15:23
palendaeJust OSA15:23
prometheanfireodyssey4me: I don't call it unstable15:24
odyssey4mepalendae the question is not relevant - juno is rax tech debt... the upgrade to kilo was expected to be an issue15:24
prometheanfireodyssey4me: just worried about adding non-working code, review quality, etc15:24
palendaeodyssey4me: I think it is; just because juno is rackspace tech debt doesn't mean others aren't using it15:24
*** itsuugo has quit IRC15:24
prometheanfirenew features are what I'm most worried about15:25
odyssey4mepalendae it is not something that will be repeated, so it's not a topic relevant to the next cycle15:25
prometheanfirebut is something that's not finished :P15:25
odyssey4methe only topic of relevance would be the design of an upgrade framework, I would think15:25
prometheanfireremaining tech debt should be a topic15:25
palendaeSo juno -> kilo upgrades being unstable remains15:25
palendaeAnd will15:25
palendaeThat's what I'm hearing15:25
*** Mudpuppy has quit IRC15:27
palendaeUntil there are consistent upgrade gate jobs, hard for it to be considered reliable, IMO15:27
palendaeFor any version to any other15:27
prometheanfire+115:27
palendaeI know it's in the works, and the gate job split will help in general, but that's where my perception of instability in OSA comes from15:28
palendaeThe gate job doesn't reflect reality, as evidenced in the many work arounds implemented to get it to run on AIOs15:28
odyssey4meI think that the definition of 'unstable' needs to be agreed on then.15:28
*** Mudpuppy has joined #openstack-ansible15:28
palendaeBah, my UPS battery's almost dead, moving to different location15:29
*** cloudtrainme has joined #openstack-ansible15:29
*** cloudtr__ has joined #openstack-ansible15:30
*** cloudtra_ has quit IRC15:31
odyssey4meprometheanfire to go back to the question you raised, do you have a proposed session in mind to cover that? do you think we can effectively have a discussion at the summit about it?15:32
*** cloudtrainme has quit IRC15:33
prometheanfirethat's a good idea, but I don't think saying 'make sure new features have tests' is enough for a session15:34
prometheanfirea session on upgrades would be nice though, that's a community concern in general and would pull them in15:34
odyssey4meprometheanfire :) yeah, I'm seeing it more like some documentation to create15:34
prometheanfire?15:35
odyssey4meso I think we should have a discussion about documentation for developers - what's missing, etc15:35
prometheanfiremaybe15:35
odyssey4methat would be to cover stuff like policies, etc15:35
prometheanfiredoc best practices  and general requirements in spec / feature submission?15:35
odyssey4meyeah, let me put something together along that line15:35
prometheanfirecool15:35
*** mgariepy has quit IRC15:37
*** itsuugo has joined #openstack-ansible15:38
*** mgariepy has joined #openstack-ansible15:39
palendaeWould that I had time to finish the developer docs I had in mind :(15:49
palendaeI wanted to do a quickstart "here's how to have an AIO up and running"15:51
palendaeBring-your-own-12GB minimum machine (because 8's too little now, realistically), checkout, run this, done15:51
odyssey4mepalendae you mean like https://github.com/openstack/openstack-ansible/blob/master/development-stack.rst ?15:51
palendaeYeah, I guess that really is what I had in mind15:52
palendaeNot sure how many people have read it, though. I'd probably move the instructions for starting up a stack into a 'Quickstart' or something15:52
palendaeOr have a 'Quickstart' link back to that15:52
palendaeOf course there's the problem that OpenStack in general fights where people unfamiliar with Linux are coming to it and expecting it to be easy15:53
-cloudnull- -- bug triage in 5 min15:54
prometheanfireneat15:55
*** Bjoern_ has joined #openstack-ansible15:55
*** alop has joined #openstack-ansible15:57
*** fawadkhaliq has joined #openstack-ansible15:58
tiagogomesIs there an easy way to disable heat?15:59
odyssey4meprometheanfire I'm trying to put together a thingy about code quality, but it seems to overlap too much with an existing topic I suggested - the Keystone Role Analysis. Can you see what I mean?15:59
odyssey4metiagogomes not right now - it's expected to be a core deployment service16:00
cloudnullbug triage time cloudnull, mattt, andymccr, d34dh0r53, hughsaunders, b3rnard0, palendae, Sam-I-Am, odyssey4me, serverascode, rromans, mancdaz, dolphm, _shaps_, BjoernT, claco, echiu, dstanek, jwagner, ayoung, prometheanfire, evrardjp, arbrandes, mhayden16:00
prometheanfireohai16:00
* mhayden woots16:00
d34dh0r53o/16:00
Sam-I-Ammoos16:00
* dstanek yawns16:00
prometheanfireodyssey4me: link?16:00
mhaydenhold on, need to put away the selfie stick16:00
cloudnullfirst up https://bugs.launchpad.net/openstack-ansible/+bug/143580816:01
openstackLaunchpad bug 1435808 in openstack-ansible trunk "glance module doesn't support multiple api versions" [Wishlist,Confirmed] - Assigned to Darren Birkett (darren-birkett)16:01
prometheanfireodyssey4me: I think a generic talk would be good, also one on upgrades16:01
prometheanfiresorry, will stop distracting now16:01
odyssey4meprometheanfire https://etherpad.openstack.org/p/openstack-ansible-mitaka-summit16:01
cloudnullmancdaz: you around?16:01
cloudnullseems like this was confirmed already, idk if this is an issue or a wishlist item?16:02
mancdazcloudnull o/16:02
cloudnullohai16:02
mancdazsup16:02
cloudnullre: https://bugs.launchpad.net/openstack-ansible/+bug/1435808 <- what say you ?16:02
openstackLaunchpad bug 1435808 in openstack-ansible trunk "glance module doesn't support multiple api versions" [Wishlist,Confirmed] - Assigned to Darren Birkett (darren-birkett)16:02
mancdazcloudnull I think I filed a patch for this16:02
mancdazlemme find16:03
rromans.16:03
cloudnullyup and its merged. .. https://review.openstack.org/#/c/167959/16:03
mancdazhttps://github.com/openstack/openstack-ansible/commit/53cc76b15e55ab36ff3c9b434ab3f272bdea609916:03
mancdazyep16:04
cloudnullcan you cherry-pick that commit back to kilo?16:04
mancdazjust bad gerrit bug updating16:04
mancdazright did it not ever get into kilo?16:04
mancdazok I will do16:04
cloudnullnot according to gerrit16:04
cloudnullmaybe it went in without the cherry-pick ?16:04
* cloudnull looking16:05
odyssey4menothing with the same change id16:05
cloudnullyup16:05
mancdazwell it's in kilo16:06
mancdazit possibly went into master before we branched?16:06
cloudnullmaybe , its an old issue.16:07
cloudnullthats been sitting there for a while.16:07
mancdazhttps://github.com/openstack/openstack-ansible/blame/kilo/playbooks/library/glance16:07
cloudnullwell then we're done here :)16:07
mancdazok I will update the issue with the things16:07
mancdaznot sure what release to target that at16:07
mancdazsince the release will probably be closed for new issues on launchpad16:08
cloudnullyup, i set it fix released. it should be good for now.16:08
cloudnullnext https://bugs.launchpad.net/openstack-ansible/+bug/148715516:08
openstackLaunchpad bug 1487155 in openstack-ansible "Juno to Kilo upgrade: nova spice console services should be removed" [Undecided,New]16:08
cloudnullBjoern_:  cc16:08
*** Bjoern_ is now known as BjoernT16:08
cloudnullthis is something that "should" be resolved within the upgrade script16:08
cloudnullwere you seeing something different ?16:09
BjoernTI have no real preference where this gets done. Either playbooks or upgrade script16:09
BjoernTI think the better place might be playbooks, since we're going away from the upgrade script anyway16:10
cloudnullthis is where its currently handled https://github.com/openstack/openstack-ansible/blob/kilo/scripts/run-upgrade.sh#L384-L40116:10
BjoernTit's not about the containers, it's about the nova service entry16:10
odyssey4mecloudnull this is to remove the keystone entries, not the containers16:10
odyssey4mesorry, nova entries16:10
cloudnullah.16:11
cloudnullmy bad16:11
cloudnullwas not reading so good :)16:11
BjoernTyeah it's early in the morning, lol16:11
cloudnullin the current structure, this would have to be added to the upgrade script or simply documented.16:12
*** itsuugo has quit IRC16:12
cloudnullSam-I-Am:  palendae: do you guys have a min to go have a look at this and make sure we're covered from an upgrade perspective ?16:12
palendaeLooking16:13
cloudnullwhile thats happening , https://bugs.launchpad.net/openstack-ansible/+bug/148945116:13
openstackLaunchpad bug 1489451 in openstack-ansible "Not able to connect memcached_container while running setup-infrastructure.yml " [Undecided,New]16:13
palendaeBjoernT: Does this apply to the ec2 entries too?16:13
palendaeFrom my perspective, this'll be handled in docs for the time being, but a patch can be worked16:14
Sam-I-Amdo we have a docs issue somewhere?16:14
*** phalmos has quit IRC16:14
palendaeSam-I-Am: Probably not yet, I can make one16:14
cloudnullkk will update that issue as a doc bug for now.16:15
Sam-I-Amif its an upgrade-specific thing, the only place for that now is in rpc docs16:15
palendaeOr that16:15
cloudnullon https://bugs.launchpad.net/openstack-ansible/+bug/1489451 - there's been a few comments on this issue. I've not personally seen this in the wild, however it seems like some have. I think with the chroot / update image solution that odyssey4me, majorhaden and andymccr have been pondering we might be able to better solve that issue16:16
openstackLaunchpad bug 1489451 in openstack-ansible "Not able to connect memcached_container while running setup-infrastructure.yml " [Undecided,New]16:16
odyssey4meyeah, it does seem like somehow people are getting the wrong image or the lxc-create playbooks aren't completing properly16:17
odyssey4mehttps://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/lxc_container_create/tasks/container_create.yml#L321-L330 is meant to ensure that python2.7 is installed and linked16:18
cloudnullin truth I'm not sure how that happens, but it looks like something we need to investigate and fix.16:18
odyssey4mebut it seems like it sometimes fails to execute properly16:18
cloudnullconfirmed the issue and marked it high16:19
odyssey4meso one way is obviously to switch to creating a base lxc cache/image which contains everything required...16:19
cloudnull++16:19
palendaeI'm +116:19
odyssey4mebut this is a change in the way we do things, so I'm not sure it's an appropriate way to resolve this particular issue16:19
palendaeWhen using images, build up a base with the thing in it16:19
palendaethings*16:20
odyssey4mebut I can propose a review and we can take it from there16:20
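The guard odyssey4me links can be approximated like this (a sketch of the idea only - the variable name and exact commands are assumptions, not the in-tree container_create tasks):

```yaml
# Run against a freshly created container: make sure python2.7 is
# present and /usr/bin/python resolves to it, so Ansible modules can
# execute inside the container on subsequent plays.
- name: Ensure python2.7 is installed in the container
  command: lxc-attach -n "{{ container_name }}" -- apt-get install -y python2.7
- name: Link /usr/bin/python to python2.7
  command: lxc-attach -n "{{ container_name }}" -- ln -sf /usr/bin/python2.7 /usr/bin/python
```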
tiagogomesmm, I just finished the OSAD deployment but I can't access horizon. I ssh'ed to the container and there is nothing listening on port 80/808016:22
cloudnullnext - https://bugs.launchpad.net/openstack-ansible/+bug/149409716:23
openstackLaunchpad bug 1494097 in openstack-ansible " Be more clear about used_ips, mostly in the example file" [Undecided,New]16:23
cloudnulldoc issue16:24
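For context on the bug: used_ips in openstack_user_config.yml reserves addresses the dynamic inventory must never assign to containers. The example-file entries the report wants clarified look roughly like this (addresses are illustrative):

```yaml
# Reserve the gateway plus the block already used by physical hosts so
# the inventory never hands those addresses to containers.
used_ips:
  - 172.29.236.1
  - "172.29.236.100,172.29.236.200"   # a "from,to" range in one string
```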
cloudnullnext https://bugs.launchpad.net/openstack-ansible/+bug/149410916:25
openstackLaunchpad bug 1494109 in openstack-ansible " Adds the ability to provide user certificates to HAProxy" [Undecided,New]16:25
cloudnulldoc issue16:25
*** itsuugo has joined #openstack-ansible16:25
cloudnullhttps://bugs.launchpad.net/openstack-ansible/+bug/149422716:26
openstackLaunchpad bug 1494227 in openstack-ansible "neutron metadata service running but not responding properly" [Undecided,New]16:26
cloudnullmancdaz: cc ^ - was that a Juno build ?16:26
Sam-I-Amlooks like rabbit crud16:27
mancdazcloudnull this was juno - there is a corresponding rpc-openstack bug16:27
mancdazI filed it on behalf of support16:27
Sam-I-Amor db16:27
*** sdake has joined #openstack-ansible16:28
cloudnullok juno issue confirmed / medium - trunk invalid -16:28
Sam-I-Amdont think this is an osad-specific thing16:29
cloudnullwe'll have to try and recreate it within a Juno lab / AIO and see if we can figure out what caused that problem, however I suspect its an oslo messaging issue.16:30
*** daneyon_ has left #openstack-ansible16:31
cloudnullnext https://bugs.launchpad.net/openstack-ansible/+bug/149429516:31
openstackLaunchpad bug 1494295 in openstack-ansible "Rabbitmq must forget node during infra node recovery" [Undecided,New]16:31
*** sdake_ has joined #openstack-ansible16:31
cloudnullit seems to me this is a wishlist item . thoughts?16:31
odyssey4meyeah, it looks like something that should perhaps be considered in the upgrade framework too16:32
odyssey4mesimilar work to that done in the galera cluster - ie don't touch it if it's broken16:32
cloudnullyea.16:33
git-harryRepair seems like a separate use case for the project to take on, are there any existing examples in the project?16:33
odyssey4menope, although I'm happy to see a repair handled in a playbook if there are known problems with solutions that can be automated16:34
*** cloudtr__ has quit IRC16:34
odyssey4mefor now though I'd advocate the same as the galera playbook - check status and bail if the cluster's broken16:34
cloudnull++ i agree with that16:35
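The "check status and bail" approach agreed on above could be sketched like this (hypothetical task names; the rabbitmqctl subcommands are real):

```yaml
# Refuse to touch a degraded cluster; the operator removes the dead
# node explicitly, e.g.: rabbitmqctl forget_cluster_node rabbit@infra2
- name: Get rabbitmq cluster status
  command: rabbitmqctl cluster_status
  register: rabbit_cluster
  changed_when: false
- name: Bail out rather than "repair" a broken cluster
  fail:
    msg: "rabbitmq cluster check failed; repair it manually before re-running"
  when: rabbit_cluster.rc != 0
```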
*** sdake has quit IRC16:35
*** cloudtrainme has joined #openstack-ansible16:35
cloudnullnext https://bugs.launchpad.net/openstack-ansible/+bug/149513916:36
openstackLaunchpad bug 1495139 in openstack-ansible "Getting Error while performing galera-install.yml in OSAD "error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)'" [Undecided,New]16:36
*** cloudtra_ has joined #openstack-ansible16:36
cloudnullis chandu around?16:37
cloudnulli asked in the issue if they've run the command noted in the failure message.16:38
cloudnullissue marked as incomplete16:38
cloudnullfor now16:38
cloudnullnext https://bugs.launchpad.net/openstack-ansible/+bug/149600116:39
openstackLaunchpad bug 1496001 in openstack-ansible "Add SSL listener to RabbitMQ" [Undecided,New] - Assigned to Major Hayden (rackerhacker)16:39
cloudnullits a wishlist item but does anyone have any thoughts about the issue ?16:39
*** cloudtrainme has quit IRC16:40
cloudnullcc sigmavirus24 ?16:40
sigmavirus24cloudnull: I'm not against it16:40
sigmavirus24I just want to be sure our threat models are understood correctly16:40
sigmavirus24mhayden: is in here too16:40
*** cloudtra_ has quit IRC16:41
cloudnullmhayden: what say you ? :)16:42
openstackgerritKevin Carter proposed openstack/openstack-ansible: Add auth version for legacy OpenStack clients  https://review.openstack.org/22369216:43
cloudnullok well confirmed, wishlist. we'll have to discuss on the ML or in the issue.16:44
cloudnullnext https://bugs.launchpad.net/openstack-ansible/+bug/149568716:45
openstackLaunchpad bug 1495687 in openstack-ansible juno "Don't create cinder_volumes containers on hosts without cinder-volumes VG" [Undecided,New]16:45
cloudnullthis is a Juno-specific item; IMO this is a wishlist issue.16:45
cloudnullcc rackertom mancdaz16:46
odyssey4methis seems a little dangerous16:46
odyssey4meif servers are designated as storage servers then they should be properly prepared16:47
*** gparaskevas has joined #openstack-ansible16:47
cloudnullthe cinder hosts can already be defined by the various groups and be instructed to use different cinder backends16:47
odyssey4methe containers can be created for non lvm too16:48
cloudnullIMO if the deployer does not want the cinder volume container on a given host, the host should not be defined within the storage host group.16:48
odyssey4mealso, I don't see how this is juno only16:48
cloudnullthe issue was opened with juno as the only series targetted.16:49
odyssey4meyeah, this seems invalid to me - it makes no sense to add cinder containers on a host that's not meant to manage storage - so if the environment is correctly defined then this is a non-issue16:49
*** cloudtrainme has joined #openstack-ansible16:49
cloudnullok , with nobody lobbying further on it, marked invalid.16:50
cloudnullthats it16:50
cloudnulldoes anyone else have anything they want to cover ?16:50
rackertomcloudnull: That's ok with me. I created the bug to track the fact that I saw the issue.16:50
cloudnullok. so we're done here. have a good one everyone.16:51
cloudnulland thank you16:51
*** itsuugo has quit IRC16:51
cloudnulltiagogomes:  you still having issues with horizon ?16:52
cloudnullis apache running within the container ?16:52
*** gparaskevas has quit IRC17:04
mhaydencloudnull: oops, i disappeared when 1496001 came up ;)17:04
*** itsuugo has joined #openstack-ansible17:06
*** woodard has quit IRC17:17
*** arbrandes has quit IRC17:18
openstackgerritMajor Hayden proposed openstack/openstack-ansible: Add SSL/TLS listener to RabbitMQ  https://review.openstack.org/22371717:26
*** abitha has joined #openstack-ansible17:28
openstackgerritMajor Hayden proposed openstack/openstack-ansible: Add SSL/TLS listener to RabbitMQ  https://review.openstack.org/22371717:29
mhaydenSam-I-Am: ^^ first time for me writing docs -- feel free to throw darts ;)17:30
*** abitha has left #openstack-ansible17:31
*** phalmos has joined #openstack-ansible17:32
*** sdake_ is now known as sdake17:33
Sam-I-Ammhayden: so this is how patches are supposed to work17:38
Sam-I-Amwrite the code, write the docs17:39
Sam-I-Amdont throw shit over the fence17:39
mhaydenSam-I-Am: wait, did i throw poo?17:40
*** sdake_ has joined #openstack-ansible17:41
Sam-I-Ammhayden: no, you didnt. most people just do DocImpact in commit messages and hope docs will a) understand the code b) find time to write docs17:41
mhaydenah okay17:42
Sam-I-Aminclusion of at least some docs makes me happy. i can look at it, +1 for readablity without digging through code17:42
Sam-I-Ammaybe you want to hit the long list of DocImpact bugs? :)17:42
mhaydenhaha, well i was going to toss in some docs on how to quickly do an AIO build on rackspace cloud17:43
mhaydenpeople keep asking me17:43
*** phalmos has quit IRC17:43
Sam-I-Ammhayden: dont we already have that?17:44
Sam-I-Amlaunch >= p1.8, then follow AIO instructions in-tree17:44
* mhayden looks again17:44
*** sdake has quit IRC17:44
Sam-I-Amthey're not in the install guide specifically17:44
Sam-I-Amthey're in the development-stack.rst or something17:44
mhaydenSam-I-Am: i was thinking more a one-command launch17:45
mhaydenusing user data17:45
Sam-I-Amahhh17:46
Sam-I-Amyeah we dont have that stuff17:46
* mhayden opens a bug17:46
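The one-command launch mhayden describes would boil down to user data along these lines (the branch, server size, and script ordering are assumptions; the scripts themselves are in-tree):

```yaml
#cloud-config
# Hypothetical user-data for an AIO build on a 12GB+ cloud server;
# runcmd executes once on first boot.
runcmd:
  - apt-get update && apt-get install -y git
  - git clone -b kilo https://github.com/openstack/openstack-ansible /opt/openstack-ansible
  - cd /opt/openstack-ansible && ./scripts/bootstrap-ansible.sh && ./scripts/bootstrap-aio.sh && ./scripts/run-playbooks.sh
```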
*** itsuugo has quit IRC17:47
mhaydenSam-I-Am: would that make sense as an appendix or somewhere in the install guide itself?17:49
*** phalmos has joined #openstack-ansible17:53
BjoernTpalendae: this issue (1489451) is similar to what we found in our kilo greenfield install, where apt-transport-https was missing from the container and therefore basic packages could not be installed17:57
openstackgerritMajor Hayden proposed openstack/openstack-ansible: Small docs fix for adding compute  https://review.openstack.org/22372717:57
Sam-I-Ammhayden: hold on, grandma died17:57
mhaydenSam-I-Am: uh17:57
*** woodard has joined #openstack-ansible18:06
Sam-I-Ammhayden: probably works best in the development-stack document18:06
mhaydenSam-I-Am: that sounds good18:07
*** woodard has quit IRC18:11
*** cloudtra_ has joined #openstack-ansible18:13
*** cloudtra_ has quit IRC18:13
*** cloudtrainme has quit IRC18:15
*** phalmos has quit IRC18:20
*** fawadkhaliq has quit IRC18:25
*** phalmos has joined #openstack-ansible18:31
openstackgerritMajor Hayden proposed openstack/openstack-ansible: Add AIO build docs  https://review.openstack.org/22374218:33
*** cloudtrainme has joined #openstack-ansible18:40
*** jmckind has quit IRC18:41
*** sdake_ is now known as sdake18:45
*** jmckind has joined #openstack-ansible18:47
*** phalmos has quit IRC18:48
*** woodard has joined #openstack-ansible18:53
*** woodard_ has joined #openstack-ansible18:55
*** woodard has quit IRC18:55
*** phalmos has joined #openstack-ansible19:04
openstackgerritRobb Romans proposed openstack/openstack-ansible: Add AIO build docs  https://review.openstack.org/22374219:07
openstackgerritMatthew Thode proposed openstack/openstack-ansible-specs: Add spec for IPv6 support for projects  https://review.openstack.org/22151619:17
prometheanfireodyssey4me: cloudnull: let me know if I did that spec correctly, was told to put into the mikata folder when none existed19:18
openstackgerritMajor Hayden proposed openstack/openstack-ansible: Fixing broken networking link  https://review.openstack.org/22378319:44
*** devlaps has joined #openstack-ansible19:51
openstackgerritvenkatamahesh proposed openstack/openstack-ansible-specs: Added the home-page with openstack.org in setup.cfg  https://review.openstack.org/22378519:58
*** jmckind has quit IRC20:04
openstackgerritJesse Pretorius proposed openstack/openstack-ansible-specs: Add spec for IPv6 support for projects  https://review.openstack.org/22151620:09
prometheanfirethought I rebased...20:13
prometheanfireI know I fetched and rebased -i20:14
* prometheanfire shrugs20:14
prometheanfireodyssey4me: would have been nice not to stomp on my commit though :P20:14
*** jwagner is now known as jwagner_lunch20:17
*** daneyon has joined #openstack-ansible20:17
*** jmckind has joined #openstack-ansible20:22
openstackgerritMajor Hayden proposed openstack/openstack-ansible: Docs for named veths + troubleshooting  https://review.openstack.org/22379220:27
*** keopp has joined #openstack-ansible20:27
openstackgerritMatthew Thode proposed openstack/openstack-ansible-specs: Add spec for IPv6 support for projects  https://review.openstack.org/22151620:32
*** daneyon_ has joined #openstack-ansible20:36
*** jwagner_lunch is now known as jwagner20:36
*** daneyon has quit IRC20:39
openstackgerritMajor Hayden proposed openstack/openstack-ansible: Adding docs for configuring horizon  https://review.openstack.org/22380420:50
*** pradk has quit IRC20:59
*** jmckind has quit IRC21:04
logan2after running setup-hosts, I end up with a bunch of containers that have 69.20.0.164 and 69.20.0.196 set as resolvers, and it is not setting a default gateway in the eth1 br-mgmt interface.21:06
logan2i was able to get a default route in the containers by using static_routes in the provider_networks, but the resolver issue I am having a lot of trouble with-- not seeing where that is coming from in the playbooks anywhere21:07
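For reference, the static_routes workaround logan2 mentions sits under a provider_networks entry in openstack_user_config.yml; a sketch with illustrative addresses:

```yaml
# Give containers on br-mgmt a default route, since the generated
# container interface config does not set a gateway on its own.
- network:
    container_bridge: br-mgmt
    container_interface: eth1
    type: raw
    ip_from_q: container
    static_routes:
      - cidr: 0.0.0.0/0
        gateway: 172.29.236.1
```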
*** woodard_ has quit IRC21:08
*** galstrom is now known as galstrom_zzz21:13
*** cloudtra_ has joined #openstack-ansible21:18
*** cloudtrainme has quit IRC21:19
*** Mudpuppy has quit IRC21:27
logan2nevermind regarding the resolvers issue.. I see where resolvconf is getting its configuration from now21:30
*** cloudtra_ has quit IRC21:33
*** cloudtrainme has joined #openstack-ansible21:38
*** jmckind has joined #openstack-ansible21:39
*** phalmos has quit IRC21:47
*** jmckind has quit IRC21:57
*** keopp has quit IRC21:59
*** spotz is now known as spotz_zzz22:32
*** KLevenstein has quit IRC22:45
*** darrenc is now known as darrenc_afk22:52
*** sdake has quit IRC22:56
*** darrenc_afk is now known as darrenc23:06
*** Mudpuppy has joined #openstack-ansible23:06
*** markvoelker has quit IRC23:10
*** Mudpuppy has quit IRC23:13
*** Mudpuppy has joined #openstack-ansible23:14
*** openstackgerrit has quit IRC23:16
*** openstackgerrit has joined #openstack-ansible23:16
cloudnulllogan2: sorry for the poor responses earlier.23:23
cloudnullI'm glad you got it figured out.23:23
cloudnullfeel free to ping us if something else comes up.23:23
logan2well i am still confused. i see where it is coming from--the rackspace repo--but i thought that wasn't used if I specified repo hosts in the openstack_user_config23:32
logan2the whole repo structure and buildout isn't really discussed in the docs. is that outside the scope of osad? I guess maybe I can just modify the rootfs image from rackspace repo, repackage, and feed the url to the lxc host setup playbook. is that the correct way to go about doing that?23:34
*** cloudtrainme has quit IRC23:36
palendaelogan2: What are you trying to do? The repo structure will be built by the repo playbooks23:42
palendaeBut you're right, it's not documented, sadly23:42
*** galstrom_zzz is now known as galstrom23:43
logan2i did notice the repo playbooks but yea I am not sure which ones to run and which order to run them in. at this point I am still just trying to get the containers bootstrapped with the correct network & resolver config etc.23:49
*** alop has quit IRC23:53
*** galstrom is now known as galstrom_zzz23:56
*** markvoelker has joined #openstack-ansible23:56
*** BjoernT has quit IRC23:57

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!