Tuesday, 2015-05-12

*** sdake has quit IRC00:01
*** JRobinson__ has quit IRC00:15
*** britthou_ has joined #openstack-ansible00:28
*** JRobinson__ has joined #openstack-ansible00:30
*** britthouser has quit IRC00:30
*** ChanServ changes topic to "next meeting - Thursday 1600UTC "https://wiki.openstack.org/wiki/Meetings/openstack-ansible""00:54
-openstackstatus- NOTICE: Gerrit has been downgraded to version 2.8 due to the issues observed today. Please report further problems in #openstack-infra.00:54
*** stevemar has joined #openstack-ansible01:25
*** dstanek has quit IRC01:26
*** dstanek has joined #openstack-ansible01:26
*** britthou_ has quit IRC01:37
*** britthouser has joined #openstack-ansible01:41
*** britthouser has quit IRC01:41
*** britthouser has joined #openstack-ansible01:41
*** galstrom_zzz is now known as galstrom01:44
*** appprod0 has joined #openstack-ansible01:47
*** fangfenghua has joined #openstack-ansible01:50
*** appprod0 has quit IRC01:52
*** sigmavirus24 is now known as sigmavirus24_awa01:54
*** metral is now known as metral_zzz02:04
*** galstrom is now known as galstrom_zzz02:12
*** JRobinson__ has quit IRC02:12
*** galstrom_zzz is now known as galstrom02:13
*** JRobinson__ has joined #openstack-ansible02:14
*** galstrom is now known as galstrom_zzz02:18
*** metral_zzz is now known as metral02:19
*** sigmavirus24_awa is now known as sigmavirus2402:21
*** dstanek has quit IRC03:16
*** JRobinson__ has quit IRC03:18
*** JRobinson__ has joined #openstack-ansible03:19
*** dstanek has joined #openstack-ansible03:19
*** galstrom_zzz is now known as galstrom03:24
*** stevemar has quit IRC03:31
*** sigmavirus24 is now known as sigmavirus24_awa04:05
*** galstrom is now known as galstrom_zzz04:20
*** sdake_ has quit IRC04:37
*** jaypipes has quit IRC04:40
*** sacharya has quit IRC04:51
*** stevemar has joined #openstack-ansible04:54
*** markvoelker has joined #openstack-ansible04:54
*** jaypipes has joined #openstack-ansible04:56
*** mahito has joined #openstack-ansible04:57
*** mahito has quit IRC04:58
*** mahito has joined #openstack-ansible04:58
*** javeriak_ has joined #openstack-ansible05:14
*** javeriak has quit IRC05:14
*** JRobinson__ has quit IRC05:49
*** prometheanfire has quit IRC05:49
*** fangfenghua has quit IRC05:53
*** prometheanfire has joined #openstack-ansible05:56
*** dstanek has quit IRC06:08
*** dstanek has joined #openstack-ansible06:09
openstackgerritSerge van Ginderachter proposed stackforge/os-ansible-deployment: [WIP] Ceph/RBD support  https://review.openstack.org/18195706:09
*** markvoelker has quit IRC06:15
*** markvoelker has joined #openstack-ansible06:18
*** sdake_ has joined #openstack-ansible06:38
*** stevemar has quit IRC06:57
*** fangfenghua has joined #openstack-ansible07:01
*** sdake_ has quit IRC07:23
*** openstackstatus has quit IRC07:52
*** openstack has joined #openstack-ansible07:55
*** daneyon has quit IRC08:01
*** javeriak_ has quit IRC08:03
openstackgerritSerge van Ginderachter proposed stackforge/os-ansible-deployment: [WIP] Ceph/RBD support  https://review.openstack.org/18195708:30
openstackgerritSerge van Ginderachter proposed stackforge/os-ansible-deployment: [WIP] Ceph/RBD support (JUNO)  https://review.openstack.org/18102208:53
*** mahito has quit IRC09:04
openstackgerritSerge van Ginderachter proposed stackforge/os-ansible-deployment: [WIP] Ceph/RBD support  https://review.openstack.org/18195709:12
*** fangfenghua has quit IRC09:13
*** daneyon has joined #openstack-ansible09:31
svganyone awake here yet?10:15
*** fangfenghua has joined #openstack-ansible10:48
*** andyhky has quit IRC11:01
*** galstrom_zzz is now known as galstrom12:01
cloudnullMorning.12:07
cloudnullSvg I'm partially awake now :)12:07
* cloudnull operating on pre-coffee brain 12:07
svggood afternoon cloudnull :)12:13
svgto wake you up a bit, some good news, Zuul finally approved my patch :)12:15
svgand I also have ported it to the master branch, and it passes tests too :)12:16
svgnow, I had a question, on Juno, I'm trying to figure out if LBaaS (using an internal LB on neutron, so not an external LB) is available12:17
svgI'm seeing stuff configured in neutron for that, though I think it still misses some parts (haproxy package to be installed?); also, it's hardcoded-disabled in horizon, so I guess that must have a reason?12:18
cloudnullThe bits are available to run LBaaS, you just need to create the appropriate inits for the lbaas agent and drop the lbaas config. As for why it's disabled, it didn't work very well and wasn't a feature that was asked for by most. That said, it should be fairly easy to get going.12:19
cloudnullYes the default config uses neutron+haproxy12:20
svgok, so that's adding some config in the neutron agent containers?12:21
svg(and installing haproxy)12:21
cloudnullYes.12:21
svgok, thx12:22
cloudnullAnd an init script for the agent.12:22
cloudnullWhich should be able to use the general template.12:22
cloudnullIt'll be easier to do in master than Juno, due to the Juno spaghetti, but should be nonetheless straightforward.12:23
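What cloudnull describes — installing haproxy in the neutron agent containers, dropping an agent config, and adding an init script — amounts to something like the following Juno-era LBaaS agent configuration. This is a hedged sketch only: the file paths, driver class names, and the service_provider line are assumed from stock Juno neutron defaults, not taken from the os-ansible-deployment tree.

```ini
# /etc/neutron/lbaas_agent.ini (sketch; Juno-era defaults assumed)
[DEFAULT]
# Match whichever interface driver the other neutron agents already use
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
# The in-tree haproxy namespace driver -- this is why the haproxy
# package has to be installed inside the agent container
device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver

# Corresponding neutron.conf additions (sketch):
#   service_plugins = <existing plugins>,lbaas
#   [service_providers]
#   service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
```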
* cloudnull off to find caffeine 12:28
cloudnullBrb12:28
*** KLevenstein has joined #openstack-ansible12:41
*** galstrom is now known as galstrom_zzz12:48
*** sdake has joined #openstack-ansible13:01
*** sdake_ has joined #openstack-ansible13:03
*** fangfenghua has quit IRC13:03
*** sdake has quit IRC13:06
palendaeMore reviews on https://review.openstack.org/#/c/178503/ appreciated13:17
*** sdake_ is now known as sdake13:18
cloudnullmancdaz: hughsaunders: git-harry: andymccr: mattt: and anyone else that has ceph experience please review => https://review.openstack.org/#/c/181022/  and  https://review.openstack.org/#/c/181957/13:21
cloudnullthe commits now all pass gating and LGTM, however I would like someone with actual OpenStack Ceph experience to review them first.13:21
matttwe will be in a good position to review that after this week13:24
*** markvoelker has quit IRC13:53
palendaeSpec that could use reviews: https://review.openstack.org/#/c/181544/13:57
*** Mudpuppy has joined #openstack-ansible14:01
*** Mudpuppy has quit IRC14:01
*** Mudpuppy_ has joined #openstack-ansible14:01
*** Mudpuppy_ has quit IRC14:03
*** Mudpuppy has joined #openstack-ansible14:03
*** appprod0 has joined #openstack-ansible14:05
*** appprod0 has quit IRC14:15
*** stevemar has joined #openstack-ansible14:17
*** sigmavirus24_awa is now known as sigmavirus2414:25
*** jwagner_away is now known as jwagner14:28
*** galstrom_zzz is now known as galstrom14:35
openstackgerritMatthew Thode proposed stackforge/os-ansible-deployment-specs: os-ansible-for-gentoo-hosts  https://review.openstack.org/18195514:40
*** Mudpuppy has quit IRC14:47
*** sdake_ has joined #openstack-ansible14:49
*** sdake has quit IRC14:52
*** jaypipes has quit IRC14:54
*** b3rnard0 has left #openstack-ansible14:59
*** b3rnard0 has joined #openstack-ansible14:59
palendaecloudnull: Any community meetings next week due to Summit?15:01
sigmavirus24palendae: all the community meetings during the Summit15:03
*** appprod0 has joined #openstack-ansible15:10
hughsaundershttp://needsmorejpeg.com/15:11
*** daneyon_ has joined #openstack-ansible15:15
*** b3rnard0 has left #openstack-ansible15:15
*** b3rnard0 has joined #openstack-ansible15:16
openstackgerritMatthew Thode proposed stackforge/os-ansible-deployment-specs: os-ansible-for-gentoo-hosts  https://review.openstack.org/18195515:16
*** appprod0 has quit IRC15:17
*** daneyon has quit IRC15:18
*** andyhky has joined #openstack-ansible15:21
sigmavirus24hughsaunders: first problem with that site: "Bitcoin donations"15:23
hughsaunderssigmavirus24: but if you needmorejpeg, thats the easiest place to get some15:23
andymccrsigmavirus24: you do look like you could usemorejpeg15:25
sigmavirus24andymccr: sounds legit15:25
cloudnullpalendae:  good call. the meeting schedule has been updated.15:39
palendaecloudnull: Cool15:39
sigmavirus24So am I imagining things or does Kilo os-a-d not care about glancestore revisions when building the package repo?15:41
sigmavirus24It's not pulling in the version from the specified git repository that I'm giving it in user_variables.yml15:42
*** ccrouch has quit IRC15:42
*** jaypipes has joined #openstack-ansible15:45
*** Bjoern__ has joined #openstack-ansible15:46
*** daneyon_ has quit IRC15:47
*** daneyon has joined #openstack-ansible15:48
palendaeYou have to put it in a very specific place15:49
palendaesigmavirus24: Try it in playbooks/vars/repo_packages/openstack_other.yml15:49
sigmavirus24palendae: so you can't override that in user_vars then?15:50
palendaeI don't think so15:50
sigmavirus24(I know where it's originally defined, and I have the right var defined in user_vars)15:50
sigmavirus24Is there a reasoning for that or is it "just because"?15:50
palendaeEvery time I've added to the repo, I've done it in those repo_packages vars files15:50
palendaeI don't know the reason15:51
sigmavirus24Well, I'm not looking to contribute these back to os-a-d, I'm trying to test out other patches from different sources15:51
palendaeAhh15:51
palendaeHrm15:51
sigmavirus24Seems a little inconsistent not to accept them from user_vars15:51
palendaeYeah, I'd have to dig into the role to know15:51
sigmavirus24Pointers on what to look for to figure out why it isn't?15:52
palendaeLooking15:54
palendaesigmavirus24: So you're putting it in repo_pip_packages?15:55
sigmavirus24Huh?15:56
palendaeWhat var are you putting it in inside of user_variables15:57
sigmavirus24palendae: glancestore_git_repo and glancestore_git_install_branch15:58
*** erikmwilson has joined #openstack-ansible15:59
sigmavirus24palendae: I guess I'm confused why those exist in openstack_other.yml but can't be overridden from user_variables.yml. I think we need to be looking at the package build playbooks16:00
sigmavirus24but we have bug triage now16:00
sigmavirus24so16:00
palendaeSo I think the builders are only looking under playbooks/vars/repo_packages - playbooks/plugins/lookups/py_pkgs.py and playbooks/roles/repo_server/files/openstack-wheel-builder.py both have comments to that effect16:00
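Concretely, the override palendae points at would look something like this. A hedged sketch: only the two variable names come from the conversation; the repo URL and branch are illustrative placeholders.

```yaml
# playbooks/vars/repo_packages/openstack_other.yml (sketch)
# URL and branch below are placeholders, not values from the log.
glancestore_git_repo: https://github.com/example/glance_store    # assumed fork
glancestore_git_install_branch: my-test-branch                   # assumed branch
```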
Sam-I-Amthere are no bugs16:01
palendaeYeah...I think it's cause the builders don't know about user_variables16:01
b3rnard0we are done here then if there are no bugs16:01
palendaeWe'll need to dig in and document how the repo server works a little more16:01
*** Mudpuppy has joined #openstack-ansible16:02
*** britthouser has quit IRC16:03
sigmavirus24palendae: playbooks/repo-build.yml should probably mention that too16:03
b3rnard0Bug triage notes: https://etherpad.openstack.org/p/openstack_ansible_bug_triage.2015-05-12-16.0016:03
sigmavirus24That said, it seems a bit inconsistent that one cannot use user_variables16:03
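The behavior palendae describes — the wheel builders walking only playbooks/vars/repo_packages/ and therefore never seeing user_variables.yml — can be illustrated with a toy scanner. This is not the real py_pkgs.py; the function name, regex, and directory layout here are assumptions for illustration only.

```python
import os
import re

# Matches variables like "glancestore_git_repo: <url>" at line start
GIT_REPO_VAR = re.compile(r"^(?P<name>\w+_git_repo):\s*(?P<url>\S+)", re.M)

def collect_git_repos(repo_packages_dir):
    """Return {var_name: url} for every *_git_repo variable found in the
    files under repo_packages_dir -- and nowhere else.  Because only this
    one directory is scanned, an override placed in user_variables.yml
    never reaches the builder."""
    found = {}
    for fname in sorted(os.listdir(repo_packages_dir)):
        path = os.path.join(repo_packages_dir, fname)
        with open(path) as fh:
            for match in GIT_REPO_VAR.finditer(fh.read()):
                found[match.group("name")] = match.group("url")
    return found
```

So an override only takes effect if it lands in one of the scanned vars files, which is consistent with what sigmavirus24 observed.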
*** claco has joined #openstack-ansible16:03
cloudnullHello16:04
*** britthouser has joined #openstack-ansible16:04
clacohola16:04
cloudnullis it time to make the triage of the bugs?!16:04
clacocalendar says may 25th is the next triage16:04
b3rnard0oh crap, then we are done here16:05
palendaesigmavirus24: yeah, I agree.16:05
cloudnullit does indeed . after this one.16:05
bgmccollum_was the removal of ansible from requirements.txt intentional?16:06
d34dh0r53~8~16:06
*** Bjoern__ is now known as BjoernT16:06
cloudnullbgmccollum_:  yes16:06
cloudnullwith modern ansible you can no longer install ansible via pip16:07
cloudnullso there is a bootstrap-ansible.sh script16:07
bgmccollum_fuuuuuuu16:07
bgmccollum_docimpact :/16:07
cloudnullfirst up https://bugs.launchpad.net/openstack-ansible/+bug/145203716:07
openstackLaunchpad bug 1452037 in openstack-ansible "Nova spice_console container are not cleaned up with Kilo upgrade" [Undecided,New]16:07
cloudnullso there is a cleanup process for that container type.16:09
cloudnullBjoernT: you around ?16:09
BjoernTok, nice16:09
BjoernTis it part of the run-upgrade script16:09
BjoernT?16:09
cloudnullhttps://github.com/stackforge/os-ansible-deployment/blob/master/scripts/run-upgrade.sh#L302-L30916:10
cloudnullso long as the nova_spice container is in inventory it should find it16:10
BjoernTok, then we can close it16:10
cloudnulland destroy it16:10
cloudnullwe can run an upgrade and make double sure thats running as expected.16:10
cloudnullb3rnard0: ^16:10
BjoernTyeah I was running the playbooks manually after run-upgrade failed16:10
cloudnullok.16:10
BjoernTit was idempotent right ? (the upgrade script)16:11
cloudnullit was intended to be, however it will quit if the rpc_deployment directory is no longer present.16:11
*** openstackgerrit_ has quit IRC16:11
BjoernTyeah saw that16:13
cloudnullso along that thread https://bugs.launchpad.net/openstack-ansible/+bug/145294516:13
openstackLaunchpad bug 1452945 in openstack-ansible "Juno to Kilo neutron log permission issue" [Undecided,New]16:13
BjoernTyeah it's not a permission issue anymore, the directory needs to be moved/removed in order to create a symlink, so that's a job for the upgrade script16:14
cloudnullthis is another issue with the initial upgrade script that can affect neutron permissions / log directories, so we should fix that too.16:14
clacoany particular time/milestone?16:15
claconext sprint post conf?16:15
cloudnullI've confirmed those issues and I'll get working on those post conf, most likely.16:15
cloudnull^ claco16:15
cloudnullend of this week, we'll tag 11.0.1 and be gone for the next week.16:16
cloudnulllast new issue that is open https://bugs.launchpad.net/openstack-ansible/+bug/145359916:16
openstackLaunchpad bug 1453599 in OpenStack Object Storage (swift) "Swift recon does not work in kilo (quarantine check)" [Undecided,New]16:16
cloudnullandymccr:  ^ is that something that you can dig into ?16:16
clacoso, 11.0.2 around may 5th or so?16:17
cloudnullthat sounds about right.16:17
clacok, I'll make a milestone if there isn't16:17
cloudnullthe swift recon issue is a high IMO? thoughts?16:18
cloudnullhowever it may be an upstream issue16:19
BjoernTagree, high16:19
BjoernTMAAS will go off16:19
BjoernThave not debugged more whether the package or the version is the issue16:19
cloudnullBjoernT: in hunting that down, you tagged upstream swift. do you think it's an upstream thing?16:20
BjoernTI saw it in the kilo release. Did not test upstream yet16:20
BjoernTso could be, not sure yet16:20
cloudnullok.16:20
cloudnullconfirmed | high16:21
cloudnullthat's it.16:21
clacodoes this affect juno?16:21
cloudnullany other issues that we want to talk about ?16:21
cloudnullBjoernT:  re: claco ?16:21
clacoalso, 11.0.2 at a minimum I assume?16:21
BjoernTnope, other than where are we with this sprint and the escalated issues (novnc etc)16:21
BjoernT?16:21
cloudnullclaco:  yes16:22
cloudnullBjoernT: novnc ? for kilo ?16:22
BjoernTjuno16:22
cloudnullfor juno thats not a thing16:22
BjoernTthat will make customers unhappy16:23
BjoernTI see no one updating soon16:23
cloudnullnovnc isn't even a thing in kilo yet either.16:23
BjoernTso hurry up, lol16:23
BjoernTyou're L4 now ;-)16:23
clacopatches welcome16:23
claco:-)16:23
clacooh shi......16:24
clacocalled out16:24
cloudnullha, this is why i can say its not a thing16:24
cloudnulland wont be16:24
cloudnullpatches welcome .16:24
BjoernTI can work on it if you want, lol. But do you really want that16:24
cloudnullthe question is do you really want novnc ? :)16:25
cloudnullhahaha16:25
cloudnullanything else that we want to touch on ?16:25
BjoernTyes sadly I want. Unless Redhat is pushing spice really hard.16:25
BjoernTThat project seems to be in a coma, at least the html5 proxy16:26
clacoBjoernT: have some support folks at the Summit track down RH folks about spice :-)16:26
cloudnull^ boom!16:27
cloudnullspice is the future and the future is now !16:27
cloudnull:D16:27
clacowhat about now?16:27
clacohow about now?16:27
claconow?16:27
palendaeSpice must flow16:27
clacoor now?16:27
BjoernTlol, http://www.vectorcast.com/sites/default/files/styles/medium/public/images/blog/red_hat_guy_30_oct_2014.jpg?itok=VoaB0_Db16:27
cloudnullok so anything else  ?16:28
BjoernTyeah, spice must have missed that statement about the future16:28
clacotheir future was then, not now16:28
cloudnullok we're done here.16:28
cloudnullthank you very much !16:28
BjoernTSpice has a prototype Web client which runs entirely within a modern browser. It is limited in function, a bit slow, and lacks support for many features of Spice (audio, video, agents just to name a few).16:28
BjoernTfrom their site16:29
b3rnard0thanks everyone16:30
BjoernTthanks16:31
palendaeOh, since summit is next week, no community meetings16:31
clacoI think the wiki schedule is up to date on that16:31
palendaeYeah, just repeating it for people who don't check that16:32
palendae(<-)16:32
BjoernTHmm, cloudnull the nova_spice_console container was inside the inventory but not cleaned up. Maybe I still have the old log of why this failed16:42
cloudnullBjoernT:  so https://github.com/stackforge/os-ansible-deployment/blob/master/scripts/run-upgrade.sh#L299 should destroy the containers16:48
cloudnulland https://github.com/stackforge/os-ansible-deployment/blob/master/scripts/run-upgrade.sh#L306-L309 should remove it from inventory16:49
BjoernTyeah for some reason it didn't, don't have a log anymore since it ran inside the tmux buffer and I didn't turn on logging fast enough16:49
cloudnullok. im looking into it now too .16:55
cloudnullbut let me know if you find something else out about it16:55
*** jwagner is now known as jwagner_lunch17:00
*** jbweber has quit IRC17:02
*** hrvojem has quit IRC17:02
*** jbweber_ has joined #openstack-ansible17:02
*** hrvojem_ has joined #openstack-ansible17:02
*** sdake has joined #openstack-ansible17:11
*** sdake_ has quit IRC17:14
*** Mudpuppy has quit IRC17:26
*** sacharya has joined #openstack-ansible17:30
*** Mudpuppy has joined #openstack-ansible18:06
*** yaya has joined #openstack-ansible18:31
*** Mudpuppy has quit IRC18:40
*** Mudpuppy has joined #openstack-ansible18:53
*** javeriak has joined #openstack-ansible18:55
*** javeriak has quit IRC19:00
bgmccollum_11.0.0 doesn't seem to be restarting nginx in the repo containers...so the pip installs from said repo fail.19:04
*** javeriak has joined #openstack-ansible19:09
palendaebgmccollum_: Are you seeing that on the galera containers first?19:10
bgmccollum_yes19:10
palendaeAha19:10
palendaeOk, that's what I was seeing and I thought I was nuts19:10
bgmccollum_hop into repo containers and restart nginx19:11
palendaeInteresting19:13
palendaeRestarting nginx is listed 3 times in the repo_post_install task file19:13
*** ccrouch has joined #openstack-ansible19:15
clacoif you look into a mirror, and say nginx three times, it restarts fyi19:15
*** openstackgerrit_ has joined #openstack-ansible19:16
palendaebgmccollum_: Hrm, that hasn't helped me. nginx said it was running on all of my repo containers, and I just manually restarted it19:21
bgmccollum_palendae: nginx is running, but not after dropping the repo virtual host config...if you netstat -antulp it's not listening on 818119:21
bgmccollum_its running...but needs to be restarted to pick up the vhost config thats dropped...which listens on 8181...sorry words mangled19:22
palendaeGotcha. Mine are listening now, galera_client's Install pip packages is still hanging19:23
bgmccollum_palendae: using 11.0.0 tag?19:23
palendaeYep19:23
palendaeGranted, I did this restart in the same run; I'm waiting on time out19:23
bgmccollum_palendae: also, did you run haproxy playbook?19:23
*** nosleep77 has joined #openstack-ansible19:24
palendaeAll my stuff's behind an F5 right now, but I'm gonna double check to see if I can hit that port through it19:24
*** openstackgerrit_ has quit IRC19:25
palendaebgmccollum_: Yeah, looks like it's our f5 configuration19:31
*** markvoelker has joined #openstack-ansible19:45
*** KLevenstein has quit IRC19:54
cloudnullpalendae: bgmccollum_: so I tested this and my repo containers are restarting / serving the wheels as expected?20:04
cloudnullnew install w/ 3 repo containers .20:04
palendaecloudnull: Yeah, my issue was completely separate20:04
cloudnullok20:04
palendaecloudnull: F5 wasn't set up correctly20:05
cloudnullok , so ill carry on with the upgrade bits20:05
*** KLevenstein has joined #openstack-ansible20:07
palendaeJust got to that point again in this run, nginx is listening on 818120:11
*** KLevenstein has quit IRC20:19
*** KLevenstein has joined #openstack-ansible20:19
*** JRobinson__ has joined #openstack-ansible20:21
*** markvoelker has quit IRC20:30
*** BjoernT has left #openstack-ansible20:33
*** logan2 has quit IRC20:33
openstackgerritMiguel Alejandro Cantu proposed stackforge/os-ansible-deployment: Implement Ceilometer[WIP]  https://review.openstack.org/17306720:34
openstackgerritMiguel Alejandro Cantu proposed stackforge/os-ansible-deployment: Implement Ceilometer[WIP]  https://review.openstack.org/17306720:36
*** openstackgerrit has quit IRC20:37
*** openstackgerrit has joined #openstack-ansible20:37
openstackgerritJulian Montez proposed stackforge/os-ansible-deployment: Add all middleware to Swift WSGI proxy servers  https://review.openstack.org/18156020:38
*** sacharya has quit IRC20:55
*** Mudpuppy has quit IRC20:58
*** yaya has quit IRC20:59
*** erikmwilson has quit IRC21:07
*** stevemar has quit IRC21:08
bgmccollum_cloudnull: I didn't check to see if the restart task actually did anything, but nginx wasn't listening on 8181 inside the repo containers. Manually restarted nginx, and the galera install continued on. I even tried re-running the repo playbook before manually restarting nginx. *shrug*21:21
*** JRobinson__ is now known as JRobinson__afk21:24
*** galstrom is now known as galstrom_zzz21:35
bgmccollum_cloudnull: a task failed between dropping the nginx vhost, and the restart handler. re-running the playbook didn't change the vhost, so the restart handler never fired. :/21:54
cloudnullin kilo ?21:54
bgmccollum_yeah...handlers are lame21:55
*** JRobinson__afk is now known as JRobinson__21:55
palendaeOoo, ooo I read about this http://wherenow.org/ansible-handlers/21:55
bgmccollum_the vhost drop, then some other random task in between failed...playbook run stops...re-run playbook, vhost doesn't change, so handler doesn't fire.21:55
cloudnulli've never seen a failure there21:56
bgmccollum_transient failure...21:56
bgmccollum_which is why, when i re-ran the playbook, nginx was never getting restarted21:56
palendaeThat's annoying, because adding a - meta: flush_handlers after every notify would be tedious21:57
palendaeAnd against design21:57
bgmccollum_i stopped using handlers, and just added restart tasks for good measure ;)21:58
cloudnullwe could add asserts and tests to see that it's operational, then flush the handlers / do the needful.21:59
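The mechanism palendae names is Ansible's meta: flush_handlers, which runs all handlers notified so far immediately instead of at the end of the play — so a transient failure in a later task can no longer strand nginx without its restart. A hedged sketch with assumed task, template, and handler names (none of these are taken from the os-ansible-deployment tree):

```yaml
- name: Drop the repo server vhost
  template:
    src: repo-vhost.j2                # assumed template name
    dest: /etc/nginx/sites-available/repo-vhost
  notify: Restart nginx               # assumed handler name

# Run every handler notified so far *now*, rather than waiting for the
# end of the play.  If the next task fails and the play stops here, the
# restart has already happened; on a re-run, an unchanged template no
# longer matters.
- meta: flush_handlers

- name: Task that might fail transiently
  command: /bin/true                  # placeholder for the flaky task
```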
*** jaypipes has quit IRC22:13
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Update all services in master to current head  https://review.openstack.org/18247622:16
*** logan2 has joined #openstack-ansible22:18
*** markvoelker has joined #openstack-ansible22:24
*** jwagner_lunch is now known as jwagner_away22:27
*** sdake_ has joined #openstack-ansible22:29
*** sdake has quit IRC22:33
*** sdake has joined #openstack-ansible22:43
*** sdake_ has quit IRC22:46
*** KLevenstein has quit IRC22:50
*** sacharya has joined #openstack-ansible23:00
*** markvoelker has quit IRC23:08
*** galstrom_zzz is now known as galstrom23:26
*** sacharya has quit IRC23:38
*** galstrom is now known as galstrom_zzz23:39
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Re-enable building of heat_repo_plugins  https://review.openstack.org/18194623:48
*** mahito has joined #openstack-ansible23:57

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!