Sunday, 2016-04-24

openstackgerritMerged openstack/openstack-ansible-os_keystone: Change pip install task state to 'latest'  https://review.openstack.org/30843400:07
*** thorst has quit IRC00:08
openstackgerritMerged openstack/openstack-ansible-os_keystone: Update paste, policy and rootwrap configurations 2016-04-22  https://review.openstack.org/30949400:08
*** thorst has joined #openstack-ansible00:09
openstackgerritMerged openstack/openstack-ansible-os_cinder: Update paste, policy and rootwrap configurations 2016-04-22  https://review.openstack.org/30949200:11
*** woodard has joined #openstack-ansible00:11
openstackgerritMerged openstack/openstack-ansible-os_neutron: Update paste, policy and rootwrap configurations 2016-04-22  https://review.openstack.org/30949600:11
openstackgerritMerged openstack/openstack-ansible-os_heat: Update paste, policy and rootwrap configurations 2016-04-22  https://review.openstack.org/30949300:11
openstackgerritMerged openstack/openstack-ansible-os_ceilometer: Update paste, policy and rootwrap configurations 2016-04-22  https://review.openstack.org/30949000:13
*** thorst has quit IRC00:17
*** woodard has quit IRC00:17
openstackgerritMerged openstack/openstack-ansible-os_nova: Use tempest for testing  https://review.openstack.org/30156600:22
openstackgerritMerged openstack/openstack-ansible-os_nova: Update paste, policy and rootwrap configurations 2016-04-22  https://review.openstack.org/30949700:22
*** admin0 has quit IRC00:40
*** sdake has quit IRC00:46
openstackgerritMerged openstack/openstack-ansible: Update Newton SHA's 2016-04-22  https://review.openstack.org/30949800:54
*** weezS has quit IRC01:01
*** markvoelker has joined #openstack-ansible01:01
*** kiranv_ has joined #openstack-ansible01:05
*** markvoelker_ has joined #openstack-ansible01:07
*** markvoelker has quit IRC01:10
*** kiranv_ has quit IRC01:10
*** thorst has joined #openstack-ansible01:14
*** thorst has quit IRC01:22
*** woodard has joined #openstack-ansible01:34
*** markvoelker has joined #openstack-ansible01:36
*** markvoelker_ has quit IRC01:36
*** markvoelker has quit IRC01:36
*** markvoelker has joined #openstack-ansible01:37
*** woodard has quit IRC01:39
*** weezS has joined #openstack-ansible01:45
*** iceyao has joined #openstack-ansible01:45
*** markvoelker_ has joined #openstack-ansible01:50
*** keedya has joined #openstack-ansible01:53
*** markvoelker has quit IRC01:53
*** thorst has joined #openstack-ansible01:55
*** thorst has quit IRC02:05
*** thorst has joined #openstack-ansible02:06
*** sdake has joined #openstack-ansible02:10
*** thorst has quit IRC02:14
*** markvoelker_ has quit IRC02:17
*** sdake has quit IRC02:25
*** sdake has joined #openstack-ansible02:26
*** weezS has quit IRC02:27
*** mgodeck has joined #openstack-ansible02:31
*** mgodeck has quit IRC02:38
*** gluytium has joined #openstack-ansible03:01
*** itlinux has joined #openstack-ansible03:07
*** itlinux has quit IRC03:15
*** thorst has joined #openstack-ansible03:17
*** weezS has joined #openstack-ansible03:19
*** weezS has quit IRC03:21
*** thorst has quit IRC03:25
*** itlinux has joined #openstack-ansible03:26
*** itlinux has quit IRC03:36
*** itlinux has joined #openstack-ansible03:37
<pellaeon> I found a bug upgrading kilo to liberty  03:38
<pellaeon> failed: [infra2_memcached_container-13d3d76a] => {"changed": true, "cmd": "echo 'flush_all' | nc $(awk '/\\-l/ {print $2}' /etc/memcached.conf) $(awk '/\\-p/ {print $2}' /etc/memcached.conf)", "delta": "0:00:00.136367", "end": "2016-04-24 11:14:09.882937", "rc": 1, "start": "2016-04-24 11:14:09.746570", "warnings": []}  03:38
<pellaeon> stderr: nc: port number invalid: 172.29.236.73  03:38
<pellaeon> In scripts/upgrade-utilities/playbooks/memcached-flush.yml, changing $(awk '/\-l/ {print $2}' /etc/memcached.conf) to $(awk '/^\-l/ {print $2}' /etc/memcached.conf) will fix it  03:40
<pellaeon> Arrggh, just found out it's my own fault.. I shouldn't have placed my playbooks in the non-standard path /opt/ansible-lxc-rpc/ ...  03:42
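[A minimal sketch of the flush step being discussed, with the anchored awk match pellaeon suggests. It is an approximation, not the verbatim task from memcached-flush.yml, and it assumes a stock Ubuntu /etc/memcached.conf where the listen address and port sit on lines beginning with -l and -p; the unanchored pattern can match more than one line, which is how an IP address ends up being handed to nc as a port.]

    # Approximation of the memcached flush task (not the verbatim playbook task).
    # Anchoring the awk patterns at start-of-line keeps commented or unrelated
    # lines in /etc/memcached.conf from adding extra fields to the nc command.
    - name: Flush memcached cache
      shell: >
        echo 'flush_all' |
        nc $(awk '/^-l/ {print $2}' /etc/memcached.conf)
        $(awk '/^-p/ {print $2}' /etc/memcached.conf)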
*** itlinux has quit IRC03:43
*** markvoelker has joined #openstack-ansible03:49
*** markvoelker_ has joined #openstack-ansible03:51
*** phalmos_ has quit IRC03:53
*** sdake has quit IRC03:54
*** markvoelker has quit IRC03:55
*** markvoelker_ has quit IRC03:58
*** markvoelker has joined #openstack-ansible03:58
*** markvoelker has quit IRC03:58
*** markvoelker has joined #openstack-ansible03:59
*** phalmos has joined #openstack-ansible04:02
*** markvoelker_ has joined #openstack-ansible04:11
*** markvoelker has quit IRC04:14
*** markvoelker_ has quit IRC04:19
*** markvoelker_ has joined #openstack-ansible04:20
*** kiranv_ has joined #openstack-ansible04:21
*** markvoelker_ has quit IRC04:34
*** markvoelker has joined #openstack-ansible04:34
*** markvoelker has quit IRC04:40
*** markvoelker has joined #openstack-ansible04:40
*** weezS has joined #openstack-ansible04:53
*** weezS has quit IRC04:53
<kiranv_> @odyssey4me @automagically are you guys online?  04:58
<cloudnull> kiranv_: they're likely all in transit. What's up?  05:02
*** markvoelker has quit IRC05:03
<kiranv_> oh ok. I was trying to set up a multi-region setup with a single keystone by changing the config in the file playbooks/roles/os_keystone/defaults/main.yml  05:03
<cloudnull> that's doable.  05:04
<kiranv_> I modified the file on one of the nodes and pointed it at the keystone running on a second node.  05:04
<kiranv_> But I have a few questions  05:04
<cloudnull> https://github.com/cloudnull/osad-regions  05:04
<cloudnull> I did that as a POC a while back  05:05
<cloudnull> it works well.  05:05
<cloudnull> the use of vars should be able to make it all go  05:05
<kiranv_> I haven't added any vars.. I just changed the keystone_service config like the following  05:07
<kiranv_> #keystone_service_publicuri: "{{ keystone_service_publicuri_proto }}://{{ external_lb_vip_address }}:{{ keystone_service_port }}"  05:07
<kiranv_> #keystone_service_internaluri: "{{ keystone_service_internaluri_proto }}://{{ internal_lb_vip_address }}:{{ keystone_service_port }}"  05:07
<kiranv_> #keystone_service_adminuri: "{{ keystone_service_adminuri_proto }}://{{ internal_lb_vip_address }}:{{ keystone_admin_port }}"  05:07
<kiranv_> #keystone_service_publicuri: "{{ keystone_service_publicuri_proto }}://{{ external_lb_vip_address }}:{{ keystone_service_port }}"  05:07
<kiranv_> keystone_service_publicuri: "http://10.128.0.2:5000"  05:07
<kiranv_> #keystone_service_internaluri: "{{ keystone_service_internaluri_proto }}://{{ internal_lb_vip_address }}:{{ keystone_service_port }}"  05:07
<kiranv_> keystone_service_internaluri: "http://10.128.0.2:5000"  05:07
<kiranv_> #keystone_service_adminuri: "{{ keystone_service_adminuri_proto }}://{{ internal_lb_vip_address }}:{{ keystone_admin_port }}"  05:07
<kiranv_> keystone_service_adminuri: "http://10.128.0.2:5000"  05:07
<cloudnull> that kinda looks like this: https://github.com/cloudnull/osad-regions/blob/master/templates/user_identity.yml.j2  05:08
<kiranv_> the questions I had are:  05:08
<kiranv_> 1. the keystone service that is running on node 1 has the public URL I listed above.. but the admin URL and internal URL are the load balancer IP.. so can I use the public URL for the admin URL and internal URL as well?  05:10
<cloudnull> you can use the public URL for admin and internal, assuming your API nodes have a route back to it.  05:11
<kiranv_> how do I check whether they have a route back? I verified that both nodes can ping each other  05:12
<kiranv_> do I have to make some other changes to make it possible?  05:12
<cloudnull> no. if you can ping it you're likely fine.  05:12
<kiranv_> ok.  05:12
<kiranv_> 2nd question is.. I had node 1 up and running with the default AIO config.. while bringing up the AIO on the second node, I made the changes I listed above and changed the region name to RegionTwo in the same file  05:14
<kiranv_> and I ran through all the steps of bringing up the AIO  05:14
<cloudnull> in my POC I essentially had three deployments: 1 for identity (keystone) & 2 for compute (nova) + object (swift)  05:14
<cloudnull> ok.  05:14
<kiranv_> everything came up without errors.. but I am not able to launch horizon on the 2nd node  05:14
<kiranv_> and when I try to fetch endpoints on the 1st node, it only lists the endpoints running on node 1  05:15
<cloudnull> so it sounds like you have 2 stand-alone deployments (AIO)  05:15
<kiranv_> I am trying to bring up 2 AIO nodes with the 2nd node's services pointing to the keystone running on the 1st node  05:15
<kiranv_> Do I have to make any changes on the 1st node as well, to tell keystone about RegionTwo (AIO node 2)?  05:17
<cloudnull> all of that works fine, however you need to disable the deployment of keystone in the 2nd deployment.  05:18
<cloudnull> you will have a single shared keystone  05:18
<kiranv_> ok. I can do that.. can I just stop the keystone containers to achieve that?  05:19
<cloudnull> if you have keystone in region1 and keystone in region2 and they're on different databases, then you will need federation to make them talk  05:19
<kiranv_> I only want to have a single keystone, with both nodes pointing to the same keystone  05:20
<cloudnull> so you need to set up some additional variables, because at present you have 2 stand-alone deployments, both set up with RegionOne  05:20
<cloudnull> in region 2 you need something like this: https://github.com/cloudnull/osad-regions/blob/master/templates/user_region_two.yml.j2  05:21
<cloudnull> kiranv_: I have to run again, however I'd recommend you have a look at this playbook, which orchestrates the deployment of regions 1 and 2 and sets up keystone for that kind of an env: https://github.com/cloudnull/osad-regions/blob/master/osad-regions-playbook.yml  05:24
<cloudnull> specifically this part  05:25
<cloudnull> https://github.com/cloudnull/osad-regions/blob/master/osad-regions-playbook.yml#L108-L181  05:25
<kiranv_> ok. Should I have this in the /opt/openstack-ansible/tests/roles/bootstrap-host/templates user_variables file?  05:25
<cloudnull> I co-located horizon and keystone on the same infrastructure and then had totally stand-alone compute environments  05:26
*** thorst has joined #openstack-ansible05:26
<cloudnull> you'd need those files to be in their own deployment directory  05:26
<cloudnull> which is why we're looking at isolating ansible and allowing additional configuration directories to co-exist in this PR: https://review.openstack.org/#/c/304840/  05:27
<cloudnull> ok, gotta run. kiranv_: if you're going to be at the summit I'd love to chat more about this  05:29
<cloudnull> otherwise I'll see you back on IRC later on.  05:30
<cloudnull> cheers  05:30
<kiranv_> ok. sorry for being ignorant.. I am just getting started with deploying OpenStack and Ansible.. So I can clone your osad-regions, make some changes to match my setup, and run the playbook to get the regions up and running  05:30
<kiranv_> sure! thanks for your help!  05:30
<kiranv_> I am not attending the summit.. I'll catch you later on IRC :) thanks again :)  05:31
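[Pulling the exchange above together: a sketch of how the second deployment's overrides could be collected into a single user variables file under /etc/openstack_deploy (the deployment directory cloudnull refers to) instead of editing playbooks/roles/os_keystone/defaults/main.yml. The file name and the region variable name are assumptions; the endpoint values are the ones kiranv_ used, and the authoritative variable set is in cloudnull's user_region_two.yml.j2 template linked above.]

    # Hypothetical /etc/openstack_deploy/user_region_two.yml for the second deployment.
    # Point every service at the keystone already running in the first deployment...
    keystone_service_publicuri: "http://10.128.0.2:5000"
    keystone_service_internaluri: "http://10.128.0.2:5000"
    keystone_service_adminuri: "http://10.128.0.2:5000"
    # ...and register this deployment's endpoints under a different region name.
    # (Variable name assumed from the os_keystone defaults; verify it for your release.)
    keystone_service_region: RegionTwo

[One way to honour cloudnull's "disable the deployment of keystone" advice is simply not to run the keystone install play for the second deployment, rather than stopping its containers after the fact.]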
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible: Enable SSL termination for all services  https://review.openstack.org/27719905:32
*** thorst has quit IRC05:34
*** fxpester has quit IRC05:54
*** fxpester has joined #openstack-ansible05:56
*** markvoelker has joined #openstack-ansible05:57
*** markvoelker has quit IRC06:02
*** weezS has joined #openstack-ansible06:21
*** iceyao has quit IRC06:30
*** thorst has joined #openstack-ansible06:32
*** thorst has quit IRC06:40
*** weezS has quit IRC07:13
*** kiranv_ has quit IRC07:33
*** thorst has joined #openstack-ansible07:36
*** fxpester has quit IRC07:41
*** thorst has quit IRC07:45
*** markvoelker has joined #openstack-ansible07:46
*** markvoelker has quit IRC07:51
*** javeriak has joined #openstack-ansible08:25
*** keedya has quit IRC08:29
*** javeriak_ has joined #openstack-ansible08:40
*** javeriak has quit IRC08:40
*** thorst has joined #openstack-ansible08:42
*** thorst has quit IRC08:50
*** dalees has quit IRC09:15
*** phiche has joined #openstack-ansible09:23
*** phiche has quit IRC09:32
*** markvoelker has joined #openstack-ansible09:34
*** markvoelker has quit IRC09:39
<pellaeon> Hi, I hit an error when upgrading from kilo to liberty:  09:41
<pellaeon> TASK: [os_neutron | Install apt packages for LBaaS] ***************************  09:41
<pellaeon> fatal: [infra3_neutron_agents_container-529e5e60] => error while evaluating conditional: inventory_hostname in groups[neutron_services['neutron-lbaas-agent']['group']]  09:41
<pellaeon> hmm, looks like for some reason add_new_neutron_env.py wasn't executed.. tracking it down  09:44
*** admin0 has joined #openstack-ansible09:47
*** thorst has joined #openstack-ansible09:48
*** javeriak_ has quit IRC09:49
<admin0> cloudnull: I am going to retry again with the latest patch, without the keystone header  09:50
*** thorst has quit IRC09:55
*** oneswig has joined #openstack-ansible10:05
*** oneswig has left #openstack-ansible10:06
*** sdake has joined #openstack-ansible10:21
*** sdake has quit IRC10:26
*** admin0 has quit IRC10:46
*** kysse has quit IRC10:52
*** thorst has joined #openstack-ansible10:52
*** kysse has joined #openstack-ansible10:52
*** thorst has quit IRC11:00
*** markvoelker has joined #openstack-ansible11:51
*** markvoelker has quit IRC11:56
*** markvoelker has joined #openstack-ansible11:56
*** thorst has joined #openstack-ansible11:59
*** thorst has quit IRC11:59
*** thorst has joined #openstack-ansible12:00
*** thorst has quit IRC12:04
*** woodard has joined #openstack-ansible12:05
*** woodard has quit IRC12:10
*** dalees has joined #openstack-ansible12:14
*** v1k0d3n has quit IRC12:44
*** oneswig has joined #openstack-ansible12:44
*** oneswig_ has joined #openstack-ansible12:46
*** oneswig has quit IRC12:46
*** markvoelker has quit IRC12:46
*** oneswig_ has quit IRC12:51
*** iceyao has joined #openstack-ansible13:00
*** retreved has joined #openstack-ansible13:15
*** phiche has joined #openstack-ansible13:18
*** phalmos has quit IRC13:25
*** phiche has quit IRC13:27
*** iceyao has quit IRC13:29
*** iceyao has joined #openstack-ansible13:32
*** retreved has quit IRC13:38
*** retreved has joined #openstack-ansible13:51
*** markvoelker has joined #openstack-ansible14:03
*** markvoelker_ has joined #openstack-ansible14:06
*** markvoelker has quit IRC14:07
*** dkehn has quit IRC14:11
*** thorst has joined #openstack-ansible14:12
*** dkehn_ is now known as dkehn14:12
*** woodard has joined #openstack-ansible14:13
*** thorst has quit IRC14:13
*** gluytium has quit IRC14:13
*** mgodeck has joined #openstack-ansible14:18
*** gluytium has joined #openstack-ansible14:19
*** oneswig has joined #openstack-ansible14:23
*** winggundamth_ has joined #openstack-ansible14:41
<odyssey4me> pellaeon: It sounds to me like you have an environment upgraded from back in the Icehouse days or something.  15:03
<odyssey4me> pellaeon: Do you have any user_*.yml files in /etc/openstack_deploy/ other than user_secrets.yml and user_variables.yml?  15:03
*** sacharya has joined #openstack-ansible15:03
<odyssey4me> pellaeon: Also, what version did you upgrade from and to?  15:04
<pellaeon> odyssey4me: never mind, I figured it out; complicated to explain, but it's my own problem  15:04
<odyssey4me> pellaeon: well, if you have a test environment available then it'd be great to have input when we get going on upgrade tests from Liberty to Mitaka - I'm hoping to get a basic body of work for that done right after the summit  15:06
*** woodard has quit IRC15:12
*** oneswig has quit IRC15:14
*** sacharya has quit IRC15:15
<pellaeon> odyssey4me: sorry, that's probably not possible; this is the only environment I've got for doing the evaluation for my organization :-/  15:20
*** sacharya has joined #openstack-ansible15:21
<odyssey4me> pellaeon: oh wow - I take it you take the approach of testing it live and failing forward? that's pretty risky  15:21
<odyssey4me> pellaeon: well, do you have a spare host available with a suitable amount of RAM to use https://gist.github.com/cloudnull/f71e3078f9a0018017c3 ? you may find yourself having a lot more freedom to try things out  15:22
<odyssey4me> otherwise an AIO in an instance is how we do a lot of testing, and is usually our starting point  15:23
<pellaeon> yep, but the IT center only has me working on OpenStack, still in the evaluation phase ¯\_(ツ)_/¯  15:24
<pellaeon> an AIO is what I can definitely do, but multiple physical hosts might be difficult  15:25
*** oneswig has joined #openstack-ansible15:26
<odyssey4me> pellaeon: that gist does a multi-node build on a single host with 128GB RAM and enough CPUs... it's probable that it'll work well enough with less RAM too - I just haven't tried that yet  15:27
<pellaeon> ah, I see, still looking at what it really does xD  15:28
<odyssey4me> it sets up cobbler, then PXE boots a set of VMs in a multi-node configuration for general-purpose testing  15:28
<winggundamth_> odyssey4me: https://bugs.launchpad.net/openstack-ansible/+bug/1573908 I found this bug on my newest kernel version. Do you have an idea for fixing the syntax?  15:29
<openstack> Launchpad bug 1573908 in openstack-ansible "role openstack_hosts fails not to add scsi_dh to /etc/modules" [Undecided,New]  15:29
<odyssey4me> winggundamth_: that's fixed up in mitaka, as I recall  15:29
*** weezS has joined #openstack-ansible15:29
<odyssey4me> winggundamth: hmm, actually - I'm confused  15:30
<winggundamth_> odyssey4me: I'm using tag 13.0.1. isn't that mitaka?  15:30
<pellaeon> odyssey4me: thanks for the tip, I'll see if I can get a machine with that much RAM ;-)  15:30
<odyssey4me> winggundamth: what kernel version are you using?  15:30
<winggundamth_> 4.4.0-21-generic  15:30
<odyssey4me> ok, so that should result in a blank value from that conditional, which is correct  15:31
<winggundamth_> yep  15:31
<odyssey4me> what's the problem with that?  15:31
<winggundamth_> it's causing an error, as I put in the bug report  15:31
<winggundamth_> failed: [aio1] => {"changed": false, "failed": true, "item": "", "name": "", "params": "", "state": "present"}  15:32
<winggundamth_> msg: modprobe: FATAL: Module not found in directory /lib/modules/4.4.0-21-generic  15:32
<odyssey4me> ah, it took me a moment to find the failure  15:32
<winggundamth_> haha sorry. I'm not sure - Launchpad bug reports don't have syntax for preformatted text, right?  15:33
*** dkehn_ has joined #openstack-ansible15:33
<odyssey4me> that's odd - we've had a similar conditional in for a while and haven't seen issues with it  15:33
<odyssey4me> we'll have to see if we can replicate it and figure out a solution  15:33
*** iceyao has quit IRC15:33
<odyssey4me> winggundamth_: what you could try is to change https://github.com/openstack/openstack-ansible-openstack_hosts/blob/master/tasks/openstack_kernel_modules.yml#L16-L22 to check whether the item is '' and to skip it  15:34
<winggundamth_> yes. I want to help too, but I couldn't find any syntax that easily fixes this  15:34
<winggundamth_> yep. I just commented it out :P  15:34
<odyssey4me> maybe "when: openstack_host_kernel_modules is defined and item != ''" ?  15:35
<winggundamth_> that will cause a condition that you have to look at in many places, right?  15:36
<odyssey4me> it's worth a try to see if it works  15:37
<winggundamth_> yeah, let me try it  15:37
<odyssey4me> otherwise we have to pull the conditional for the kernel out of defaults and put it into tasks, but that will result in a ton of extra tasks  15:37
<winggundamth_> yeah :(  15:38
<odyssey4me> I think it's worth just evaluating that the item has a value and acting on it. This is simple.  15:38
<winggundamth_> I'm trying right now  15:39
<winggundamth_> still the same :(  15:40
<winggundamth_> failed: [aio1] => {"changed": false, "failed": true, "item": "", "name": "", "params": "", "state": "present"}  15:40
<winggundamth_> msg: modprobe: FATAL: Module  not found in directory /lib/modules/4.4.0-21-generic  15:40
<odyssey4me> keep trying that concept out, or see if you can figure out another way to do it  15:40
*** phiche has joined #openstack-ansible15:40
<winggundamth_> yup  15:41
*** weezS has quit IRC15:44
*** oneswig has quit IRC15:48
*** oneswig has joined #openstack-ansible15:48
*** oneswig has joined #openstack-ansible15:48
<winggundamth_> odyssey4me: ah sorry. your proposed fix is working  15:50
*** oneswig has quit IRC15:50
*** markvoelker_ has quit IRC15:50
*** oneswig has joined #openstack-ansible15:50
<winggundamth_> I just put it in the wrong block :(  15:50
<odyssey4me> oh? nice! :) you can submit a patch to fix it then  15:50
<winggundamth_> no problem  15:50
<winggundamth_> I'm experienced now :)  15:51
*** oneswig has quit IRC15:55
openstackgerritJirayut Nimsaeng proposed openstack/openstack-ansible-openstack_hosts: Fix error by add condition to check if item is empty from the conditional  https://review.openstack.org/30979615:58
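[For anyone following along, this is roughly the shape of the guarded task being discussed. It is a sketch, not the verbatim task from the openstack_hosts role (see tasks/openstack_kernel_modules.yml linked above); the extra condition is the one odyssey4me proposed, to skip the empty entry that the role's defaults produce for scsi_dh on 4.4+ kernels.]

    # Sketch only - not the verbatim role task. Load the configured host kernel
    # modules, but skip empty list entries so modprobe is never called with an
    # empty module name.
    - name: Load kernel module(s)
      modprobe:
        name: "{{ item }}"
        state: present
      with_items: "{{ openstack_host_kernel_modules }}"
      when:
        - openstack_host_kernel_modules is defined
        - item != ''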
<winggundamth_> what do I need to do to run functional tests on this project?  15:59
*** sacharya has quit IRC16:04
*** sacharya has joined #openstack-ansible16:05
*** markvoelker has joined #openstack-ansible16:08
*** sacharya has quit IRC16:12
*** sacharya has joined #openstack-ansible16:12
*** mrda has quit IRC16:14
*** markvoelker has quit IRC16:16
*** galstrom_zzz is now known as galstrom16:16
*** markvoelker_ has joined #openstack-ansible16:16
*** markvoelker_ has quit IRC16:16
*** markvoelker_ has joined #openstack-ansible16:18
*** sdake has joined #openstack-ansible16:26
*** kiranv_ has joined #openstack-ansible16:28
*** markvoelker_ has quit IRC16:31
*** galstrom is now known as galstrom_zzz16:31
*** sacharya has quit IRC16:36
*** sacharya has joined #openstack-ansible16:39
*** markvoelker has joined #openstack-ansible16:40
<odyssey4me> winggundamth_: well, unfortunately that particular issue depends on the kernel used, which we can't control - however if you can figure out a way to functionally test something then it should go into the post_tasks of https://github.com/openstack/openstack-ansible-openstack_hosts/blob/master/tests/test.yml  16:44
*** markvoelker has quit IRC16:46
*** admin0 has joined #openstack-ansible16:48
*** aslaen has joined #openstack-ansible16:48
<winggundamth_> okay  16:49
*** markvoelker has joined #openstack-ansible16:50
*** markvoelker has quit IRC16:52
<winggundamth_> odyssey4me: which session should I attend to propose my idea for OSA? Actually, I need help with writing a blueprint  17:00
*** markvoelker has joined #openstack-ansible17:00
*** markvoelker has quit IRC17:01
*** sacharya has quit IRC17:10
*** sacharya has joined #openstack-ansible17:12
*** winggundamth_ has quit IRC17:13
*** dkehn_ has quit IRC17:14
*** elo has quit IRC17:16
*** mgodeck has quit IRC17:17
*** sacharya has quit IRC17:26
*** sdake has quit IRC17:27
*** sacharya has joined #openstack-ansible17:28
*** sacharya has quit IRC17:32
*** oneswig has joined #openstack-ansible17:40
<odyssey4me> winggundamth: stuff we don't have scheduled is best planned for the community day on Friday morning  17:45
<odyssey4me> winggundamth: add your discussion to https://etherpad.openstack.org/p/openstack-ansible-newton-contributor-day and make sure that https://www.openstack.org/summit/austin-2016/summit-schedule/events/9428?goback=1 is on your schedule  17:46
<odyssey4me> afk for the rest of the day  17:47
*** oneswig has quit IRC17:48
*** oneswig has joined #openstack-ansible17:51
*** thorst has joined #openstack-ansible17:54
*** mgodeck has joined #openstack-ansible17:56
*** thorst has quit IRC17:58
*** winggundamth_ has joined #openstack-ansible18:02
*** winggundamth_ has quit IRC18:08
*** aslaen has quit IRC18:09
*** mgodeck has quit IRC18:13
*** mgodeck has joined #openstack-ansible18:15
*** deadnull has joined #openstack-ansible18:19
*** mgodeck has quit IRC18:21
*** woodard has joined #openstack-ansible18:23
*** woodard has quit IRC18:27
*** kiranv_ has quit IRC18:36
*** kiranv_ has joined #openstack-ansible18:37
*** BjoernT has joined #openstack-ansible18:49
*** BjoernT has quit IRC18:53
*** deadnull has quit IRC18:54
* cloudnull is on WhatsApp. If you're in Austin and/or want to chat, ping 4158276749  18:55
<cloudnull> Several of us are at "Craft Pride" if anyone wants to roll through.  18:56
<logan-> I might swing by in a bit, just checking in @ hotel  19:03
<cloudnull> Awesome.  19:13
<cloudnull> They have good pizza and beer. So all is well. ;-)  19:13
*** ozialien10 has joined #openstack-ansible19:16
*** itlinux has joined #openstack-ansible19:39
*** itlinux has quit IRC19:51
*** markvoelker has joined #openstack-ansible19:51
*** sacharya has joined #openstack-ansible20:19
*** mgodeck has joined #openstack-ansible20:38
*** itlinux has joined #openstack-ansible20:41
*** itlinux has quit IRC20:52
*** markvoelker has quit IRC20:52
*** oneswig has quit IRC20:53
*** winggundamth_ has joined #openstack-ansible20:56
*** markvoelker has joined #openstack-ansible20:57
*** markvoelker has quit IRC20:57
*** markvoelker has joined #openstack-ansible20:57
*** markvoelker has quit IRC21:13
*** markvoelker has joined #openstack-ansible21:17
*** oneswig has joined #openstack-ansible21:18
*** oneswig has quit IRC21:20
*** markvoelker has quit IRC21:21
*** sacharya has quit IRC21:24
*** sacharya has joined #openstack-ansible21:26
*** markvoelker has joined #openstack-ansible21:33
*** phiche has quit IRC21:35
*** aludwar has joined #openstack-ansible21:36
*** aludwar has quit IRC21:36
*** markvoelker has quit IRC21:37
*** oneswig has joined #openstack-ansible21:38
*** winggundamth_ has quit IRC21:40
*** mgodeck has quit IRC21:41
*** sacharya has quit IRC21:43
*** kiranv_ has quit IRC21:43
*** markvoelker has joined #openstack-ansible21:45
*** sacharya has joined #openstack-ansible21:46
*** markvoelker has quit IRC21:47
*** oneswig has quit IRC21:47
*** admin0 has quit IRC21:58
*** aludwar has joined #openstack-ansible22:01
*** sacharya has quit IRC22:08
*** sacharya has joined #openstack-ansible22:10
*** oneswig has joined #openstack-ansible22:10
*** aludwar has quit IRC22:29
*** sacharya has quit IRC22:35
*** sacharya has joined #openstack-ansible22:38
*** tlbr has quit IRC22:47
*** tlbr has joined #openstack-ansible22:48
*** aludwar has joined #openstack-ansible22:59
*** aludwar has quit IRC23:03
*** aludwar has joined #openstack-ansible23:11
*** ChrisBenson has quit IRC23:12
*** oneswig has quit IRC23:13
*** ChrisBenson has joined #openstack-ansible23:15
*** sacharya has quit IRC23:15
*** sacharya has joined #openstack-ansible23:17
*** sacharya has quit IRC23:23
*** sacharya has joined #openstack-ansible23:24
*** sacharya has quit IRC23:43
*** sacharya has joined #openstack-ansible23:45

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!