Tuesday, 2015-08-18

*** britthouser has quit IRC00:01
*** alop has quit IRC00:17
*** daneyon has joined #openstack-ansible00:39
openstackgerritMerged stackforge/os-ansible-deployment: Updated master for new dev work - 15 Aug 2015  https://review.openstack.org/21291900:42
*** daneyon_ has joined #openstack-ansible00:46
*** daneyon has quit IRC00:49
*** sigmavirus24 is now known as sigmavirus24_awa00:49
*** shoutm has quit IRC00:51
*** shoutm has joined #openstack-ansible01:04
*** JRobinson__ has joined #openstack-ansible01:16
*** shoutm has quit IRC02:18
*** shoutm has joined #openstack-ansible02:20
*** galstrom_zzz is now known as galstrom02:28
*** galstrom is now known as galstrom_zzz02:28
*** shoutm has quit IRC02:44
*** logan2 has quit IRC02:44
*** shoutm has joined #openstack-ansible03:01
*** sdake has joined #openstack-ansible03:32
*** logan2 has joined #openstack-ansible03:34
*** sdake_ has quit IRC03:36
*** markvoelker has quit IRC03:42
*** sdake_ has joined #openstack-ansible03:42
*** sdake has quit IRC03:45
*** shoutm has quit IRC04:00
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Add nova_libvirt_live_migration_flag variable  https://review.openstack.org/21245204:02
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Use slurp to get the content of the ceph.conf file  https://review.openstack.org/21317004:02
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Remove hardcoded config drive enforcement  https://review.openstack.org/21249704:03
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Removes trailing whitespace for bashate  https://review.openstack.org/20766304:03
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Update the documented ceph user variables  https://review.openstack.org/20913004:04
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Allow cinder-backup to use ceph  https://review.openstack.org/20953704:04
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Add support for additional nova.conf options  https://review.openstack.org/21049204:04
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Keystone SSL cert/key distribution and configuration  https://review.openstack.org/19447404:05
*** shoutm has joined #openstack-ansible04:06
*** JRobinson__ is now known as JRobinson__afk04:17
*** sdake_ is now known as sdake04:22
*** shoutm_ has joined #openstack-ansible04:27
*** shoutm has quit IRC04:28
*** markvoelker has joined #openstack-ansible04:42
*** JRobinson__afk is now known as JRobinson__04:45
*** markvoelker has quit IRC04:47
*** shoutm_ has quit IRC05:03
*** shoutm has joined #openstack-ansible05:34
*** ashishb has joined #openstack-ansible05:58
*** javeriak has joined #openstack-ansible06:01
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Replace ADFS example DNS name with something appropriate  https://review.openstack.org/21401206:02
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Adds a pep8 target to tox.ini  https://review.openstack.org/21401306:03
*** shoutm has quit IRC06:05
*** shoutm has joined #openstack-ansible06:12
*** britthou_ has quit IRC06:22
*** britthouser has joined #openstack-ansible06:22
*** JRobinson__ has quit IRC06:23
*** shoutm_ has joined #openstack-ansible06:39
*** shoutm has quit IRC06:42
*** openstackgerrit_ has joined #openstack-ansible06:43
*** markvoelker has joined #openstack-ansible06:43
*** javeriak has quit IRC06:47
*** javeriak has joined #openstack-ansible06:48
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Update rabbitmq version deployed to v3.5.4  https://review.openstack.org/21051606:48
*** markvoelker has quit IRC06:48
*** javeriak_ has joined #openstack-ansible06:51
*** javeriak has quit IRC06:52
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Update tempest configuration  https://review.openstack.org/21010706:57
*** fawadkhaliq has joined #openstack-ansible07:05
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Switch tempest to test Cinder API v2  https://review.openstack.org/21404507:22
*** benwh4 has joined #openstack-ansible07:50
benwh4I have a sysadmin question: should the container-mgmt network perform web actions, or is it entirely managed by the hosts net? in the example, should 10.240.0.0/24 reach or have WAN connectivity ?07:56
odyssey4mebenwh4 I'm not sure that I understand your question.07:58
*** fawadkhaliq has quit IRC08:01
*** fawadkhaliq has joined #openstack-ansible08:01
matttbenwh4: that is a private network, you don't want that publicly exposed08:06
benwh4during the osad deployment following the example you have net 10.240.0.0 & 172.29.236.008:06
benwh4and should 172.29.236.0 have internet connectivity to perform some of the openstack installation ? or is it managed by the 10.240.0.0 network ?08:07
odyssey4mebenwh4 you can allow the networks out if you want to, but don't allow public access into anything except the external load balancer address08:07
benwh4ok but which network installs the openstack packages from source ? the management-net (10.240.0.0) or the container-mgmt (172.29.236.0/22) ?08:08
benwh4because ansible should reach both networks over ssh and I need to set up my firewall to allow the traffic, I just want to know which network will install the openstack services08:10
odyssey4mebenwh4 I think all containers use eth0 for the default gateway, and therefore their traffic will pass through the host and be natted - so the access to public packages will depend on the host's default gateway, which is something you setup08:10
odyssey4meyes, ansible reaches the hosts and containers on the mgmt network08:11
benwh4eth0 which is bound to bond0, correct ?08:12
odyssey4mebenwh4 the host network configuration is up to you08:13
odyssey4methe containers do not use bonded network configurations though08:13
odyssey4meonly the hosts08:13
odyssey4meand the outgoing traffic will pass through the default gateway for the hosts08:14
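A quick way to verify the path odyssey4me describes, assuming a standard LXC setup (the container name below is just a placeholder):

    lxc-ls --fancy                                            # list the containers on the host
    lxc-attach -n <container-name> -- ip route show default   # the default route inside the container, via eth0
    ip route show default                                     # the host default gateway that the NATted traffic follows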
benwh4yes but I still have ssh errors and I suppose it is due to my firewall, which hosts the network vlans and GW08:15
odyssey4mean ssh error from where to where?08:15
benwh4but if all traffic goes through 10.240.0.0 (or my conf) it should work fine, but it doesn't08:16
benwh4from the deployment node to the container I think, because from the deployment node I can ssh to all my targets08:17
odyssey4meok, is the deployment node on the same network as the containers?08:17
matttwe typically deploy from one of the nodes in the cluster itself08:17
odyssey4meie does it have an address on the management subnet?08:17
benwh4yes08:17
odyssey4meok, then no routing should be involved08:18
odyssey4meis the ssh error all the time, or sometimes?08:18
odyssey4meand is it when you use ansible only, or when you ssh too?08:18
benwh4all the time, to the target hosts08:18
benwh4only when I use the setup-hosts.yml playbook08:19
odyssey4meok - can you ssh to the host using the ssh client (not ansible)08:19
matttbenwh4: do you have keys setup?08:19
benwh4from my deployment node, yes08:19
benwh4yes08:19
benwh4I have keys set up08:20
matttbenwh4: and when you ssh in you're doing so as root right?08:21
matttall the ansible stuff runs as root08:21
benwh4yes I ssh using root08:21
odyssey4mebenwh4 can you ssh as root to the target host?08:21
benwh4yes08:21
matttbenwh4: go into /path/to/os-ansible-deployment/playbooks, and type "ansible all -m ping"08:22
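A couple of sanity checks from the deployment node matching what mattt and odyssey4me suggest (the repo path is wherever your checkout lives):

    cd /path/to/os-ansible-deployment/playbooks
    ansible all -m ping                            # uses the deployment's inventory and ssh settings
    ssh -o BatchMode=yes root@<target-host> true   # plain key-only ssh; fails fast if a password would be needed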
benwh4should my deployment node have the same network configuration as the target ?08:22
odyssey4mebenwh4 no, but it does need to be able to communicate via ssh to the target08:22
odyssey4meis there any firewalling on the target hosts?08:23
benwh4the ansible all -m ping doesn't work, it gives me an ssh error08:25
benwh4no, there is no firewalling on the target hosts08:26
odyssey4meit sounds to me like you probably have some sort of sshd configuration issue that's preventing ansible from working properly08:26
odyssey4meon the target hosts08:26
benwh4but when I do ssh targethost it works08:26
benwh4ok I will double check08:27
odyssey4meyes, but ansible uses more options08:27
odyssey4metry reverting the config to defaults, enabling key based access for root and then re-enabling any special settings you have08:27
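The sshd settings that usually matter for what odyssey4me describes, shown here as a check on the target host (typical working values are PermitRootLogin without-password or yes, and PubkeyAuthentication yes):

    grep -E '^(PermitRootLogin|PubkeyAuthentication|AuthorizedKeysFile)' /etc/ssh/sshd_config
    service ssh restart    # reload sshd after any change (Ubuntu service name)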
benwh4ok, last thing: is chmod 600 on the authorized_keys file good or too restrictive ?08:28
matttbenwh4: that's fine08:33
matttassuming it's owned by the correct user08:33
*** c0m0 has joined #openstack-ansible08:39
*** shausy has joined #openstack-ansible08:42
*** markvoelker has joined #openstack-ansible08:45
*** shoutm_ has quit IRC08:48
*** markvoelker has quit IRC08:50
*** d0ugal has joined #openstack-ansible08:53
evrardjpgood morning everyone08:59
odyssey4meo/ evrardjp08:59
evrardjpI've got a working draft for haproxy08:59
odyssey4megood news - I seem to have tracked down the pattern for the build failures in master: https://bugs.launchpad.net/openstack-ansible/+bug/148591709:00
openstackLaunchpad bug 1485917 in openstack-ansible trunk "hpcloud AIO's are failing tempest tests" [Critical,Confirmed] - Assigned to Jesse Pretorius (jesse-pretorius)09:00
evrardjpI'll create the blueprint with the features09:00
evrardjpnice!09:00
odyssey4meevrardjp great! if you can prep a spec for what you're implementing, that'd be great09:00
evrardjpI've never done that09:00
odyssey4meie register a blueprint with a short summary, then the spec with the details09:00
evrardjpok09:01
evrardjpI'll try and we'll adapt if necessary09:01
odyssey4meclone https://github.com/stackforge/os-ansible-deployment-specs09:01
odyssey4mecopy https://github.com/stackforge/os-ansible-deployment-specs/blob/master/specs/template.rst to the 'liberty' folder and name the file according to the same name as the blueprint09:01
odyssey4meyou'll see the pattern with existing files there09:01
odyssey4methen edit the file with the details09:02
odyssey4meevrardjp here are some examples in flight https://review.openstack.org/213779 and https://review.openstack.org/20771309:02
odyssey4menote that you don't have to complete everything in round one - just do what you can and we can discuss and iterate from there09:03
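A rough sketch of the blueprint/spec workflow odyssey4me describes (the blueprint name below is only an example):

    git clone https://github.com/stackforge/os-ansible-deployment-specs
    cd os-ansible-deployment-specs
    cp specs/template.rst specs/liberty/<blueprint-name>.rst   # same name as the registered blueprint
    # fill in the details, then submit it to gerrit for review:
    git checkout -b <blueprint-name>
    git add specs/liberty/<blueprint-name>.rst
    git commit
    git review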
odyssey4mehughsaunders please can you review https://review.openstack.org/213439 - 11.1.1 has a pbr issue which breaks the repo build09:05
evrardjpodyssey4me: ok09:05
hughsaundersodyssey4me: yep09:06
evrardjpodyssey4me: if it's a blueprint that doesn't target liberty, but works for kilo, should I create the spec in the liberty folder?09:06
odyssey4meevrardjp you're welcome to propose it to the kilo folder - we can hopefully get it in for kilo then, perhaps in 11.2.009:07
*** fawadkhaliq has quit IRC09:30
evrardjpodyssey4me: so for the spec, I should do a git review too?09:36
evrardjpor simply pushing it?09:36
odyssey4meevrardjp yes, you submit it using git review09:36
odyssey4meit's reviewed in gerrit09:36
openstackgerritJean-Philippe Evrard proposed stackforge/os-ansible-deployment-specs: Add spec to change haproxy default behaviour  https://review.openstack.org/21408909:37
evrardjpwoops09:37
evrardjpwhitespaces09:37
evrardjpI'm fixing that09:37
openstackgerritJean-Philippe Evrard proposed stackforge/os-ansible-deployment-specs: Add spec to change haproxy default behaviour  https://review.openstack.org/21408909:39
odyssey4megit-harry https://bugs.launchpad.net/openstack-ansible/+bug/148401109:41
openstackLaunchpad bug 1484011 in openstack-ansible trunk "python-openstacksdk build fails" [Critical,New]09:41
*** neillc is now known as neillc_away09:42
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Disable tempest swift tests when DEPLOY_SWIFT=no  https://review.openstack.org/21410209:56
openstackgerritJean-Philippe Evrard proposed stackforge/os-ansible-deployment: [WIP] HAProxy rewrite  https://review.openstack.org/21410710:03
openstackgerritJean-Philippe Evrard proposed stackforge/os-ansible-deployment: Enables the admin level on the haproxy stats socket.  https://review.openstack.org/21411010:07
evrardjpI have two hosts for working on OSAD: my main workstation and one of my deploy hosts... when I'm working on a patch, I don't know how I should transfer the state of the work done... I thought it was best to use another repo (like a personal github repo)10:11
evrardjpbut it seems to make a mess of the commits (pushing to the personal repo and then reviewing seems to produce more than one commit)... how should I do it?10:12
odyssey4meevrardjp typically I use a deployment host in a test environment to work out a working patch, then I diff it and copy the diff to my workstation where I patch it into a branch and submit it via gerrit10:12
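Roughly the flow odyssey4me describes, assuming the change lives in the repo checkout on the lab host (branch and file names are illustrative):

    # on the test deployment host:
    git diff > /tmp/my-change.patch
    # move the diff to the workstation by whatever means the lab allows, then on the workstation:
    git checkout -b my-change origin/master
    git apply /tmp/my-change.patch
    git commit -a
    git review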
evrardjpatm I'm using patch to solve this10:13
evrardjpok10:13
evrardjpI'm doing the same10:13
evrardjpthen it's not that bad10:13
evrardjp;)10:13
odyssey4meI also ensure that I do at least one patch set upload each day so that the work can be reviewed, commented on or continue10:13
evrardjpI ddin't get that10:13
evrardjpdidn't*10:13
odyssey4meyeah, it's not the best workflow - but the alternative would be to put my ssh keys on the lab host which is not all that wise10:14
evrardjpoh you make sure that your code is submitted on a day to day basis, right?10:14
odyssey4meyes10:14
odyssey4methat way it's safe, and it can be reviewed10:14
evrardjpI don't have that opportunity, deploy host is in isolated lab ;)10:14
evrardjphard to reach I'd say10:15
odyssey4mebut if I'm co-authoring a patch with someone then they can also continue from where I left off10:15
evrardjpthat makes sense10:15
hughsaundersI use my github fork to shuffle patches between my workstation and test env before pushing to gerrit for review10:15
*** sdake has quit IRC10:15
*** britthouser has quit IRC10:17
evrardjphughsaunders: so you pull and then review, but do you have to squash commits, or pay attention to specific stuff?10:19
hughsaundersevrardjp: I tend to work on a single commit, so I don't need to squash. I commit --amend for each update that I want to test, then force push to my github fork. Then when testing is done, I only have a single commit to submit to gerrit10:20
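The single-commit flow hughsaunders describes, roughly (the fork remote and branch names are examples):

    git commit --amend                    # fold each update into the one commit under test
    git push -f myfork HEAD:test-branch   # force-push to the personal fork so the test env can pull it
    # ...pull and test on the other machine, repeat as needed...
    git review                            # once testing is done, submit the single commit to gerrit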
evrardjpI maybe did something wrong... I try to also amend all the time10:23
openstackgerritMerged stackforge/os-ansible-deployment: Update kilo for new dev work - 15 Aug 2015  https://review.openstack.org/21343910:24
hughsaundersevrardjp: you will have multiple commits to submit if you are working on a patch that depends on other patches that haven't merged yet10:26
*** shoutm has joined #openstack-ansible10:27
*** britthouser has joined #openstack-ansible10:36
*** ashishb has quit IRC10:37
*** ashishb has joined #openstack-ansible10:37
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Switch tempest to test Cinder API v2  https://review.openstack.org/21404510:41
*** javeriak_ has quit IRC10:42
*** javeriak has joined #openstack-ansible10:43
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Switch Nova/Tempest to use/test Cinder API v2  https://review.openstack.org/21404510:44
*** markvoelker has joined #openstack-ansible10:46
*** javeriak has quit IRC10:48
*** markvoelker has quit IRC10:50
*** shausy has quit IRC11:09
*** javeriak has joined #openstack-ansible11:09
*** shausy has joined #openstack-ansible11:09
*** britthouser has quit IRC11:11
*** britthouser has joined #openstack-ansible11:12
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Switch Nova/Tempest to use/test Cinder API v2  https://review.openstack.org/21404511:13
*** javeriak has quit IRC11:15
*** javeriak_ has joined #openstack-ansible11:15
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Update tempest configuration  https://review.openstack.org/21010711:16
*** fawadkhaliq has joined #openstack-ansible11:37
*** rward has quit IRC11:37
*** britthouser has quit IRC11:43
*** markvoelker has joined #openstack-ansible11:46
*** fawadk has joined #openstack-ansible11:46
*** fawadkhaliq has quit IRC11:47
*** markvoelker has quit IRC11:51
*** fawadk has quit IRC11:55
mgariepygood morning everyone.12:11
mgariepyodyssey4me, do you sleep ?12:12
*** markvoelker has joined #openstack-ansible12:12
*** woodard has joined #openstack-ansible12:14
mgariepySam-I-Am, I would like to have a quick idea on how to implement  the external network with osad if you don't mind.12:17
*** woodard has quit IRC12:26
odyssey4memgariepy sometimes :p12:50
*** woodard has joined #openstack-ansible12:52
mgariepyhaha12:52
*** britthouser has joined #openstack-ansible12:59
*** britthou_ has joined #openstack-ansible13:01
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Set iptables-persistent install execution to append to log  https://review.openstack.org/21417213:02
*** tlian has joined #openstack-ansible13:03
*** KLevenstein has joined #openstack-ansible13:04
*** britthouser has quit IRC13:04
evrardjpmgariepy: I'm not sure. Maybe he sleeps between some red bulls ;)13:07
evrardjpmgariepy: you mean using neutron?13:08
mgariepyyeah,13:08
evrardjpyou can connect on the utility container first13:08
mgariepyi just want to get a rough idea of how to specify the network for the container.13:08
evrardjpsource the rc file there13:08
evrardjpho13:08
evrardjpwe aren't speaking about the same thing I think13:08
evrardjpwhat do you mean?13:09
mgariepyfirst. on the node i have a br-ex (external net) maps to eth12 on the neutron container13:09
mgariepyhttp://paste.ubuntu.com/12118008/13:10
mgariepyI guess i need that but i'm not quite sure. haha13:10
evrardjpthis means that a network named "ex", with untagged traffic will arrive to neutron from the host bridge br-ex13:11
evrardjpso your host should have a br-ex bridge (manually configured on the host)13:12
mgariepyso I add this to the neutron container, got my eth12 interface, after this i think it needs to be added to the neutron config ?13:12
mgariepyyes i have that.13:12
evrardjpok13:12
mgariepyand this maps the veth in the container.13:12
evrardjpneutron can then leverage that network in openstack13:12
mgariepywhere do i add this in neutron ?13:13
evrardjpit depends on what you want to do13:13
evrardjplet me find a good doc that explains that13:13
mgariepyonly by adding a external network in openstack ? or do i need to specify it in config files ?13:14
*** ashishb has quit IRC13:14
evrardjpyou define it with neutron CLI13:14
evrardjpthe configuration files are handled by OSAD13:14
evrardjpcheck maybe here13:15
evrardjphttp://docs.openstack.org/networking-guide/deploy.html13:15
*** woodard has quit IRC13:16
matttmgariepy: https://github.com/stackforge/os-ansible-deployment/blob/master/etc/openstack_deploy/openstack_user_config.yml.example#L254-L26213:18
matttmgariepy: https://github.com/stackforge/os-ansible-deployment/blob/master/etc/network/interfaces.d/openstack_interface.cfg.example#L102-L10913:19
*** woodard has joined #openstack-ansible13:19
*** persia_ is now known as persia13:19
matttmgariepy: so if you're using a vlan or flat network it needs to be specified in openstack_user_config.yml13:20
evrardjpmattt: with the paste above, I think mgariepy: already configured his openstack_user_config13:21
mgariepymy br-ex is already part of a vlan, so my eth12 will need to be flat13:21
*** prad_ has joined #openstack-ansible13:39
*** woodard has quit IRC13:39
*** woodard has joined #openstack-ansible13:40
*** javeriak_ has quit IRC13:41
Sam-I-Ammgariepy: br-ex is an openvswitch thing13:50
Sam-I-Amosad uses linuxbridge, which creates bridges that bind to interfaces13:51
mgariepySam-I-Am, ok so the right way to do it is ?13:53
Sam-I-Amwhat kind of networks do you use?13:54
Sam-I-Amvlan? vxlan? tenant/provider?13:54
mgariepyvxlan for the tenant13:54
mgariepyand vlan for public net.13:54
Sam-I-Amon each hosts, you have a br-vlan and br-vxlan, right?13:56
mgariepyyes13:56
Sam-I-Amyou can remove the section in the config about 'flat' networks if you dont need them13:56
*** javeriak has joined #openstack-ansible14:00
mgariepyok, and then create the provider network with: neutron net-create provider-101 --shared --provider:physical_network provider --provider:network_type vlan --provider:segmentation_id 10114:00
mgariepyand then also the subnet14:00
mgariepysimple as that ?14:01
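A sketch of the matching subnet for the example network above; the addresses and names here are illustrative, not taken from mgariepy's environment:

    neutron subnet-create provider-101 203.0.113.0/24 \
        --name provider-101-subnet \
        --gateway 203.0.113.1 \
        --allocation-pool start=203.0.113.100,end=203.0.113.200 \
        --disable-dhcp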
matttmgariepy: let me look for some documentation on what our support team does14:04
palendaeodyssey4me: Why was pbr completely removed here? https://review.openstack.org/#/c/213439/2/playbooks/roles/repo_server/defaults/main.yml14:04
*** Mudpuppy has joined #openstack-ansible14:04
Sam-I-Ammgariepy: depends how you configured your deployment14:05
odyssey4mepalendae because it was only added the previous sha bump as a workaround until https://review.openstack.org/211800 merged14:05
Sam-I-Ambut assuming thats correct, neutron is just neutron14:05
Sam-I-Amphysical_network would probably be 'vlan' unless you called it something else14:05
palendaeodyssey4me: I think we need to keep it at 0.11 or higher14:05
mgariepyit's vlan14:05
mgariepya few pointers in the doc on configuring this would be nice tho ;)14:06
odyssey4mepalendae it now uses the global requirements from OpenStack, which are pbr>=0.6,!=0.7,<1.0 for stable/kilo14:06
*** phalmos has joined #openstack-ansible14:06
Sam-I-Ammgariepy: the docs do say this14:06
Sam-I-Ammgariepy: after you deploy openstack, its neutron stuff, already documented upstream here: http://docs.openstack.org/networking-guide/scenario_legacy_lb.html14:07
evrardjpmattt: as you were working with ceph stuff, did you get issues for creating instances?14:08
*** javeriak has quit IRC14:08
matttevrardjp: i have seen a few issues after deploying where i need to bounce nova-compute, i need to investigate this now that you mention it14:09
matttevrardjp: but beyond that no, it works for me14:09
evrardjpwhat do you mean by "bounce nova-compute" ?14:10
evrardjprestart nova-compute services?14:10
matttevrardjp: yep14:10
matttevrardjp: i think that service isn't getting restarted properly after we do the ceph bits14:10
matttevrardjp: what kind of problems you running into?14:11
evrardjpit still fails on me14:11
*** sigmavirus24_awa is now known as sigmavirus2414:11
evrardjpNo valid host was found. There are not enough hosts available14:12
evrardjpwait14:12
* mattt waits14:13
evrardjpI'll first check if nova still responds14:13
evrardjpyup, nova services down14:14
*** spotz_zzz is now known as spotz14:14
evrardjpshould I do something else after a service nova-compute restart ?14:15
evrardjpthe process is listed14:15
evrardjpon the compute node14:15
matttevrardjp: nope, they take a while to check back in tho14:16
evrardjpI've restarted libvirt-bin just in case14:16
evrardjpOk, I'll tell you after a few minutes if it's still failing14:17
Apsuevrardjp: You have to check the nova-compute logs to see what/if there's a problem nova-compute is having with identifying available resources14:17
matttyeah, i'd recommend that ... or the scheduler logs incase it's not finding a compute node14:18
Apsu"No valid host" means either A) nova-computes aren't checking in, or B) they're checking in but misidentifying resources available for what you're trying to boot.14:18
evrardjpI'd check the scheduler myself14:18
ApsuWhich could be neutron services down14:18
ApsuSo might want to make sure neutron's agent is running on the hypervisor too.14:18
ApsuCheck its logs, etc14:18
evrardjpthe process runs, and I've checked in the nova-compute logs, but I'm not sure about what I should see14:19
andymccrevrardjp: since Apsu mentions neutron services - its worth checking, I had that issue with an install I did a few days ago, there was a neutron-db-manage issue so the neutron-server services weren't starting.14:19
ApsuIf they're not checking in (nova service-list shows it down)...14:19
evrardjpI see a lot of dumps about capabilities14:19
Ti-moHi, just found this in nrb's gists: https://gist.github.com/nrb/d6142c104677c09683f1, anyone else facing this issue by any chance ?14:19
evrardjpneutron service runs14:19
Apsuevrardjp: Is it repeating the capability identification over and over?14:19
evrardjpneutron-linuxbridge-agent at least14:19
evrardjpyup14:19
Apsuevrardjp: neutron agent-list, make sure compute's agent is ":-)"14:19
Apsunova service-list, make sure the compute host is "up"14:20
*** shoutm has quit IRC14:20
ApsuThe fact it's repeating the capability identification means it's almost certainly "up"14:20
ApsuThat's the log output when it checks in14:20
evrardjpok14:21
*** jmckind has joined #openstack-ansible14:21
ApsuSo check neutron agent-list14:21
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Enable/disable Swift/OpenStack deployment properly  https://review.openstack.org/21421314:22
evrardjpa lot of smileys14:22
evrardjponly smileys I should say14:22
odyssey4memattt there you go - https://review.openstack.org/21421314:23
ApsuIf the compute's neutron agent is ":-)", you can start worrying about nova config being wrong on that node, the flavor looking for things not on that node, needing to restart the nova-scheduler...14:23
*** benwh4 has quit IRC14:23
ApsuEasiest order is restart nova-scheduler, first.14:23
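Pulling the checks Apsu lists together, roughly (run the CLI bits from wherever the openrc credentials live, e.g. the utility container, and the restart on whichever node or container runs the scheduler):

    source ~/openrc
    nova service-list                 # nova-compute on the host should be 'up'
    neutron agent-list                # the compute host's linuxbridge agent should show ':-)'
    service nova-scheduler restart    # first restart to try if both of the above look fine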
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Enable/disable Swift/OpenStack deployment properly  https://review.openstack.org/21421314:23
*** wmlynch has quit IRC14:23
*** wmlynch has joined #openstack-ansible14:24
matttodyssey4me: did you deliberately remove export BOOTSTRAP_AIO ?14:25
odyssey4memattt nope, sorry - lemme put that back14:25
matttk14:26
odyssey4meurgh, I see some other nonsense in there too14:26
odyssey4melemme clesan it up14:26
*** prad_ is now known as pradk14:26
evrardjpI think it's an issue with the scheduler, but it didn't show up before I tried ceph14:28
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Enable/disable Swift/OpenStack deployment properly  https://review.openstack.org/21421314:28
evrardjpin my "host aggregates" that I see on horizon (I don't recall the command to show that), I see "Services Down" for the availability zone "Nova"14:29
*** wmlynch has quit IRC14:29
evrardjpand I've restarted nova-schedulers and nova-compute14:29
odyssey4memattt ^ updated14:29
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Enable/disable Swift/OpenStack deployment properly  https://review.openstack.org/21421314:31
*** sdake has joined #openstack-ansible14:31
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Enable/disable Swift/OpenStack deployment properly  https://review.openstack.org/21421314:31
*** woodard has quit IRC14:33
*** rward has joined #openstack-ansible14:34
*** jmckind has quit IRC14:34
*** sdake_ has joined #openstack-ansible14:37
*** jwagner_away is now known as jwagner14:37
*** sdake has quit IRC14:40
*** woodard has joined #openstack-ansible14:41
*** phalmos has quit IRC14:42
evrardjpmattt: on your working installation, shouldn't virsh pool-list mention the ceph pool ?14:43
*** wmlynch has joined #openstack-ansible14:44
matttevrardjp: mine doesn't14:45
evrardjpok14:45
evrardjpI thought it had it14:45
evrardjptoo*14:45
evrardjpto*14:45
evrardjpI'm redeploying os-nova-install.yml, just to make sure14:45
matttare your instances booting now?14:45
evrardjpnope14:45
evrardjpand my nova services are still down14:46
evrardjpand no error in the logs14:46
evrardjpjust some info14:46
evrardjpamqp seems fine14:47
odyssey4meevrardjp is your time consistent across nodes? ie ntp sync14:48
odyssey4mealso is the DB healthy14:48
odyssey4meand is nova-conductor running?14:48
evrardjpI have a ntp server, and my nodes should be using it14:49
odyssey4menova-conductor is the interface between the compute nodes and the DB14:49
matttevrardjp: sometimes it's helpful to flip debug on in the nova.conf file14:49
odyssey4me++14:49
*** phalmos has joined #openstack-ansible14:50
evrardjpok14:50
evrardjpthen I should restart all the nova services, right? is there a specific order for that?14:50
ApsuNot really14:50
evrardjpok14:50
evrardjpgood14:50
ApsuI mean, nova-server first is probably the least noisy14:51
evrardjpI'll check nova-conductor on all the nodes first14:51
ApsuThen conductor+compute14:51
evrardjpthen flip debug on server - conductor - compute14:51
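A minimal sketch of flipping debug on as mattt suggests; do this on each node or container running the service in question, and restart only the services present there (the compute host is shown as an example):

    # in /etc/nova/nova.conf, under [DEFAULT]:
    #   debug = True
    # then restart whichever nova services run on that node, e.g. on the compute host:
    service nova-compute restart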
evrardjpis it bad if I see Failed to consume message from queue on a conductor, and then 2 lines after: Connecting to AMQP Server, and then Connected to AMQP server14:54
evrardjpAfter the "Failed to consume message from queue:" I've got "skipping periodic task _periodic_update_dns becaue its interval is negative"14:54
evrardjpthe time seems good though at first sight14:55
matttevrardjp: is it a production install?14:55
evrardjpnope but it should be considered as14:56
evrardjpwhy?14:56
matttevrardjp: just curious14:56
*** wmlynch_ has joined #openstack-ansible14:56
evrardjpIt has worked in the past14:56
matttevrardjp: your rabbit cluster is up right?14:56
evrardjpyup14:57
evrardjpelse it would say connected14:57
evrardjpwouldn't*14:57
matttyeah that is true14:57
evrardjpyeah, list_queues and list_policies seem fine14:58
*** rromans_ has joined #openstack-ansible14:58
evrardjpI'm flipping debug14:58
*** tobasco_ has quit IRC14:59
*** mgariepy has quit IRC14:59
*** metral has quit IRC14:59
*** rromans has quit IRC14:59
*** Ti-mo has quit IRC14:59
*** tobasco has joined #openstack-ansible14:59
matttevrardjp: how many compute nodes do you have ?14:59
*** woodard_ has joined #openstack-ansible15:02
evrardjpin that environment, just one15:02
evrardjpbut it should be enough for starting my vm15:02
evrardjpenough cpu/ram15:02
matttevrardjp: are you using the libvirt rbd thing, or booting w/ cinder rbd ?15:02
evrardjpgood question15:02
*** woodard_ has quit IRC15:03
*** woodard_ has joined #openstack-ansible15:03
evrardjpI'm not starting from a volume15:03
*** yaya has joined #openstack-ansible15:03
*** javeriak has joined #openstack-ansible15:03
evrardjpso I guess it's libvirt rbd15:03
matttevrardjp: ok, cool ... you created the necessary user on the ceph cluster right?15:03
evrardjpI'm not the ceph expert of my company, so someone created it for me15:04
evrardjpit should have failed in the playbooks if it was wrong, right?15:04
matttevrardjp: should have, wondering if it was given the necessary permissions15:05
matttand that the right pool was created etc.15:05
matttjust thinking of all the things that can go wrong here15:05
*** woodard has quit IRC15:05
openstackgerritMerged stackforge/os-ansible-deployment-specs: Add tox generated files to .gitignore  https://review.openstack.org/20844015:06
*** javeriak has quit IRC15:07
*** javeriak has joined #openstack-ansible15:07
ApsuSo, really, "no valid host" should have more detail available in some logfile15:08
Apsunova's api service, or the nova-scheduler15:08
evrardjpI've enabled debug15:09
*** phalmos has quit IRC15:09
* Apsu nods15:09
sigmavirus24odyssey4me: where are the logs you wanted me to look at?15:09
evrardjpI'll check again on nova api and nova scheduler15:09
evrardjpto follow the flow15:09
*** mgariepy has joined #openstack-ansible15:12
*** Ti-mo has joined #openstack-ansible15:12
*** metral has joined #openstack-ansible15:12
*** phalmos has joined #openstack-ansible15:17
evrardjpdebug indeed helps15:19
Sam-I-Ammgariepy: did you find what you needed?15:21
odyssey4mesigmavirus24 if you have a gap, note that the pbr limit in https://review.openstack.org/211265 also came with a python-openstacksdk issue which I worked around, if you can figure out the source of the issue so that we can get it fixed then that'd be awesome15:25
sigmavirus24odyssey4me: so that bug report is incomplete15:25
odyssey4mesigmavirus24 https://bugs.launchpad.net/openstack-ansible/+bug/1484011 is the issue15:25
openstackLaunchpad bug 1484011 in openstack-ansible trunk "python-openstacksdk build fails" [Critical,New]15:25
sigmavirus24It doesn't tell me what version it was trying to build of openstacksdk15:25
sigmavirus24Yeah I'm reading that issue15:25
mgariepySam-I-Am, yeah i'll be ok.15:25
sigmavirus24ENOTENOUGHINFORMATION15:25
mgariepythanks for your help.15:25
odyssey4mesigmavirus24 check the first comment15:25
sigmavirus24"git tag and github show differing tags, so I suspect there's an issue somewhere - but fixing the requirements on a static version fixes this"15:26
odyssey4me(and the last)15:26
evrardjppaste.openstack.org/show/NWWiRXC9yrE2aPkOfbFw15:26
sigmavirus24"This now also affects the master branch."15:26
sigmavirus24still no information about what was being built that failed15:26
sigmavirus24odyssey4me: I don't mean version of osad15:26
odyssey4mesigmavirus24 'Command "python setup.py egg_info" failed with error code 1 in /tmp/openstack-builder/python-openstacksdk'15:26
sigmavirus24I mean version of python-openstacksdk15:26
sigmavirus24Yeah I get that15:27
odyssey4mesigmavirus24 I haven't managed to replicate it down to a specific tag - that's what needs to be figured out15:27
odyssey4meI tried a little this morning, but then squirrels15:27
*** alop has joined #openstack-ansible15:27
odyssey4mesigmavirus24 I'll retry a build without that version set and see how it goes - master seems not to be affected... suffice to say that this may not be an issue any more with other updates15:29
sigmavirus24odyssey4me: no need15:29
odyssey4meI just need an independent verification15:29
sigmavirus24I'm trying to do the same15:29
*** woodard_ has quit IRC15:30
odyssey4mefor the issue in master related to constant failures - pick a failed build, any build from master - you'll see that hpcloud is common, but I haven't found a cause yet15:30
sigmavirus24odyssey4me: we should start an etherpad to track failed builds15:32
*** woodard has joined #openstack-ansible15:32
odyssey4mesigmavirus24 sounds like a plan15:32
sigmavirus24So odyssey4me on master, I don't see any mention of openstacksdk in the playbooks15:32
sigmavirus24or anywhere in osad15:32
odyssey4mesigmavirus24 yep, it's a dep15:33
sigmavirus24it's a transitive dep?15:33
sigmavirus24i.e., a dependency of something we're using?15:33
*** javeriak_ has joined #openstack-ansible15:34
palendaeWouldn't be surprised if that didn't come from openstack global requirements15:34
odyssey4mesigmavirus24 it comes from global requirements: https://github.com/openstack/requirements/blob/master/global-requirements.txt#L16015:34
*** javeriak has quit IRC15:34
sigmavirus24Oh right15:34
palendaeCaaaaaallled it15:34
sigmavirus24I forget we just build everything in there15:34
odyssey4meand https://github.com/openstack/requirements/blob/stable/kilo/global-requirements.txt#L13615:34
sigmavirus24so does yaprt always take the lowest possible version?15:35
sigmavirus24I can never remember the answer to this15:35
Apsusafest*15:35
ApsuBecause old is safe15:35
odyssey4mesigmavirus24 and the issue is http://logs.openstack.org/65/211265/2/check/gate-os-ansible-deployment-dsvm-commit/3b38df0/console.html.gz#_2015-08-12_07_32_17_79415:35
odyssey4mesigmavirus24 it seems to be the case that it takes the lowest possible option, yes15:36
*** rward has quit IRC15:36
svgDoes OSAD do something wrt to lvm.conf or lvm in general, that could potentially overwrite an existing lvm setup?15:37
*** phalmos has quit IRC15:39
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Remove python-openstacksdk version spec  https://review.openstack.org/21424215:39
odyssey4mesigmavirus24 ^ that is a test to see if it works again without the version specification15:39
odyssey4mesvg it would appear so: playbooks/roles/openstack_hosts/tasks/openstack_lvm_config.yml and playbooks/roles/os_cinder/tasks/cinder_lvm_config.yml15:40
svgat least looking in ./playbooks/roles/openstack_hosts/templates/lvm.conf.j2 it seems it checks for current setup and defines them there15:41
svgcolleague mailed me about pre-existing setup where /openstack is mounted on an external storage backed lvm device, where this fails after the osad rollout and after a reboot15:44
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Remove hardcoded config drive enforcement  https://review.openstack.org/21249715:55
openstackgerritMerged stackforge/os-ansible-deployment: Enable debug logging for gate checks  https://review.openstack.org/21343815:56
*** wmlynch has quit IRC15:57
*** javeriak_ has quit IRC15:58
evrardjpfound my issue \o/16:01
evrardjpnothing related to nova16:01
Apsuyey16:01
evrardjprouting on the nova host was failing16:01
evrardjpit worked perfectly fine at first sight, but routing to ceph instance was not working16:02
*** javeriak has joined #openstack-ansible16:02
evrardjpdamn ipv616:02
odyssey4mehowdy all - ready for bug triage?16:03
odyssey4mecloudnull, mattt, andymccr, d34dh0r53, hughsaunders, b3rnard0, palendae, Sam-I-Am, odyssey4me, serverascode, rromans, mancdaz, dolphm, _shaps_, BjoernT, claco, echiu, dstanek, jwagner, ayoung, prometheanfire, evrardjp bug triage in this room16:04
dstaneki'm half here...trying to type faster to get stuff done this morning16:05
odyssey4meif it's ok with everyone, I'd like to get through the new bugs quickly - then spend some time showing everyone how to debug gate failures16:05
odyssey4mejust orientation regarding where the logs are, how to read them, etc16:05
sigmavirus24that's not what a bug triage is for16:05
sigmavirus24=P16:05
odyssey4mesure, but if we triage the bugs - then I think that there are a few people who may be able to help spot the cause of the master blocker right now16:06
rromans_.16:06
sigmavirus24odyssey4me: I'm just teasing16:07
palendaeReady for triaging16:07
odyssey4mefirst up: https://bugs.launchpad.net/openstack-ansible/+bug/148554716:07
openstackLaunchpad bug 1485547 in openstack-ansible trunk "Need ability to set default_volume_type in cinder" [Undecided,New]16:07
odyssey4meseems like a good enhancement request - happy for this to be on the wish list?16:08
palendaeI am16:08
odyssey4methat's a low hanging fruit patch for anyone wanting to patch it up :)16:09
* odyssey4me looks at evrardjp :)16:09
evrardjp:)16:09
evrardjpI'll check if I have time16:10
evrardjpI'll tell later16:10
odyssey4mewe have some docs bugs, I'll allocate those appropriately16:10
odyssey4mehttps://bugs.launchpad.net/openstack-ansible/+bug/148226516:10
openstackLaunchpad bug 1482265 in openstack-ansible juno "Nova-computes need nfs-common installed if NFS cinder backend" [High,Confirmed] - Assigned to Jesse Pretorius (jesse-pretorius)16:10
odyssey4meI think we discussed this last week, but I haven't had a chance to look into it for master/kilo.16:10
odyssey4meDoes anyone have a gap to confirm the bug? If you do, please feel free to allocate it to yourself16:11
odyssey4meotherwise I'll get to it asap16:11
odyssey4mehttps://bugs.launchpad.net/openstack-ansible/+bug/148425616:11
openstackLaunchpad bug 1484256 in openstack-ansible "Apache servers reporting version in response header" [Undecided,New]16:11
palendaeThat was reported by our netsec folks16:12
odyssey4methat's a good call, and another low hanging fruit fix :)16:12
odyssey4menot really a bug though? wishlist?16:13
sigmavirus24odyssey4me: eh16:14
odyssey4medue to the security factor I am tempted to rate it a medium bug though16:14
sigmavirus24low is better16:14
palendaePossibly a security concern16:14
palendaeLow seems reasonable to me16:14
sigmavirus24Put it like this, it makes it easier for someone to know which exploits to test against our apache servers16:14
odyssey4meok, low it is16:14
sigmavirus24That cuts down the attack time16:14
sigmavirus24But it doesn't mean they won't find a vulnerability if they're attacking the server16:15
odyssey4meI'll target it for 11.2.0 and hopefully someone can pick it up before then.16:15
sigmavirus24So medium works too but I don't think it's a huge concern16:15
odyssey4mehttps://bugs.launchpad.net/openstack-ansible/+bug/148461916:15
openstackLaunchpad bug 1484619 in openstack-ansible "Document host_bind_override option" [Undecided,New]16:15
sigmavirus24assign Sam-I-Am16:16
sigmavirus24=P16:16
odyssey4meSam-I-Am are you around perhaps?16:16
odyssey4medone16:17
sigmavirus24We need a bug triage bot so we can do "#assign Sam-I-Am"16:17
sigmavirus24and it just works16:17
odyssey4meok, now the current master blocker: https://bugs.launchpad.net/openstack-ansible/+bug/148591716:17
openstackLaunchpad bug 1485917 in openstack-ansible trunk "hpcloud AIO's are failing tempest tests" [Critical,New] - Assigned to Jesse Pretorius (jesse-pretorius)16:17
odyssey4mepalendae do you want to do a run-through of navigating gerrit and identifying the logs16:18
ApsuLooking at the volume failure right now16:18
ApsuLogs for the volume failure* that is16:18
odyssey4mepalendae ?16:19
palendaeBasically we're looking through https://review.openstack.org/#/q/project:stackforge/os-ansible-deployment+branch:master,n,z to find -1 verifieds; the gate-os-ansible-dsvm-commit log will contain the failures in the 'console' log16:19
palendaeAs to the ones specifically affecting master on HP cloud, I have not looked in depth to know what the errors in that log are16:19
Apsuodyssey4me: It was me who asked what was hpcloud and what wasn't, earlier. I'm quite familiar with identifying logs and navigating gerrit ;P16:19
odyssey4meApsu sure, but evrardjp mgariepy and others are new to this :)16:20
palendaeThat part I also don't know...I assume there's a host name in the logs?16:20
evrardjpI am but I can grep -i FAIL *16:20
odyssey4meok, so you can identify the provider and region right at the top of the console log16:20
odyssey4meeg: http://logs.openstack.org/67/213467/1/gate/gate-os-ansible-deployment-dsvm-commit/ab4aa89/console.html#_2015-08-18_15_02_50_31816:20
palendae(also, upstream jobs have a standard footer that goes on all their job index pages, which I would love to include someday)16:20
odyssey4mesee devstack-trusty-hpcloud-b2-<number>16:21
evrardjpsame as Apsu16:21
odyssey4methat means it's the devstack image, on ubuntu trusty, running in hpcloud region b216:21
ApsuTempest is failing on waiting for cinder volume create because the build goes into ERROR. Why is what I'm looking into now16:22
odyssey4meApsu ok, so here's where some context may help16:22
evrardjp OSError: [Errno 2] No such file or directory seems bad too16:23
odyssey4menote: https://github.com/stackforge/os-ansible-deployment/blob/master/scripts/scripts-library.sh#L2116:23
odyssey4meand https://github.com/stackforge/os-ansible-deployment/blob/master/scripts/scripts-library.sh#L65-L11116:23
evrardjpbut not that much apparently16:23
odyssey4meand also http://logs.openstack.org/67/213467/1/gate/gate-os-ansible-deployment-dsvm-commit/ab4aa89/logs/instance-info/16:23
Apsuok16:23
odyssey4methe host_info files contain information about the instance16:23
odyssey4meincluding disk layout16:24
evrardjpok16:24
ApsuCool16:25
odyssey4meif you check successful master jobs, you'll be able to compare as well16:26
odyssey4mehere's a recent master success which ran in rax: https://review.openstack.org/21291916:27
*** phalmos has joined #openstack-ansible16:27
odyssey4meyou'll see that the disk layouts are different16:27
odyssey4meand you'll notice that the script sets up the cinder-volumes vg differently16:27
*** rward has joined #openstack-ansible16:28
odyssey4mein hpcloud it's on a real disk because there's enough space to do that, but on rax we have to use a loopback disk for cinder16:28
odyssey4mein hpcloud there's not enough space on the system disk, whereas on rax there is16:28
odyssey4mesomething I noticed is that we're missing the cinder-volume log here: http://logs.openstack.org/67/213467/1/gate/gate-os-ansible-deployment-dsvm-commit/ab4aa89/logs/aio1-cinder/16:29
ApsuInteresting.16:29
ApsuAlso, appears the 1gb test volume create/deletes are successful, at least from the api/scheduler perspectives16:30
odyssey4meyep16:30
*** yaya has quit IRC16:30
odyssey4meI built a g1-8 on rax and added a 500gb disk to provide a similar setup - then bootstrapped master and confirmed that it was setup with a similar layout to hpcloud.16:31
odyssey4metempest failed for me, multiple times - but each time it failed differently16:31
odyssey4meand none of those times were related to volumes16:31
ApsuAre the account/container/container-error/object logs supposed to have things in them, under logs/aio1?16:31
ApsuBecause they're all empty16:31
odyssey4meApsu those are swift logs, and yeah - I think there's an issue where rsyslog needs to be restarted before those get populated.16:32
odyssey4memaybe another bug to pick up on16:32
odyssey4mebut not related16:32
Apsukk16:32
ApsuFigured they were swift, but ok16:32
odyssey4meevrardjp do you see anything unusual?16:33
odyssey4meI expect that we may be finding plenty of red herrings here, which is why I'm asking for more eyes.16:33
evrardjpI'm looking to that right now16:33
evrardjphttp://logs.openstack.org/67/213467/1/gate/gate-os-ansible-deployment-dsvm-commit/ab4aa89/console.html#_2015-08-18_15_02_50_79616:33
odyssey4methe issue is most likely cinder/nova related, I think - but it befuzzles me why it works in rax but not in hpcloud16:34
evrardjpI read your analysis16:34
*** sdake_ is now known as sdake16:34
odyssey4methe following patches have been submitted through my digging: https://review.openstack.org/21417216:35
odyssey4mehttps://review.openstack.org/21010716:35
odyssey4mehttps://review.openstack.org/21404516:35
evrardjpI never used tempest, so it's really hard for me to understand the impact of all changes, but I get what you've done. Although I don't get why yet16:37
odyssey4mewell, basic logic tells me that the underlying difference in storage architecture is the most likely culprit16:38
odyssey4mebut that very same architecture is being used for juno and kilo with success, so it may have to do with changes in cinder code instead16:39
ApsuWonder what's up with no cinder-volume log16:39
*** phalmos has quit IRC16:39
*** shausy has quit IRC16:39
evrardjpabout that you were right16:40
evrardjphttp://logs.openstack.org/67/213467/1/gate/gate-os-ansible-deployment-dsvm-commit/ab4aa89/console.html#_2015-08-18_16_10_48_79116:40
odyssey4meApsu I suspect the switch from on_metal false to on_metal true was not done properly - the log link probably has not been implemented16:40
Apsuodyssey4me: Ah16:40
odyssey4meevrardjp yep, below that you'll see the tempest debug output showing the stack traces for the failures16:41
evrardjpI'll get there, it's hard to do everything at the same time ;)16:41
*** c0m0 has quit IRC16:43
odyssey4melet me fyi - I've logged https://bugs.launchpad.net/openstack-ansible/+bug/148613316:44
openstackLaunchpad bug 1486133 in openstack-ansible "zero sized swift logs" [Low,New]16:44
ApsuWithout that log, this might not be easy to figure out. The API call logs from tempest show the create call succeeding, the status is "creating", followed by "error"16:44
evrardjpodyssey4me: what do you mean by the switch with on_metal?16:44
ApsuThen it deletes it, and the delete succeeds16:44
odyssey4mewill also log the bug regarding the cinder-volume log, as that'd be very helpful right now16:44
Apsuevrardjp: cinder-volume isn't contained16:44
odyssey4meevrardjp the cinder container thing that you did some documentation for16:45
evrardjpyup I remember that ;)16:45
odyssey4mewell, since that change we no longer have a cinder-volume log16:45
evrardjpbut we are still on_metal because we're using the lvm right?16:47
*** ashishb has joined #openstack-ansible16:47
evrardjpI know nothing about these devstack instances, I'll check the link you posted16:47
odyssey4meevrardjp in this case 'devstack-trusty' refers to the base image used inside openstack-ci16:48
evrardjpanyway that's not the important part of the conversation16:48
evrardjpI get that16:48
odyssey4meit's an Ubuntu Trusty image with a boatload of other software on it16:48
evrardjpI've checked now what's inside16:49
evrardjpwith your link16:49
odyssey4mewe have discovered that sometimes the image is inconsistent, so that's quite possibly a cause16:49
evrardjpso by default, there isn't any lvm in the /dev/vd*16:49
odyssey4meanother bug https://bugs.launchpad.net/openstack-ansible/+bug/148613716:49
openstackLaunchpad bug 1486137 in openstack-ansible "cinder-volume log missing" [Medium,New]16:49
evrardjpok16:49
evrardjpso there is a complete reverse engineering to do about the is_metal change16:50
evrardjp;)16:50
*** rromans_ is now known as rromans_afk16:51
evrardjpI'm taking the conversation away from its original goal, sorry16:51
*** phalmos has joined #openstack-ansible16:51
evrardjpquick question: isn't it quicker to temporarily recheck and arrive to a different host, or even ask to be specifically build on one host?16:52
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Set iptables-persistent install execution to append to log  https://review.openstack.org/21417216:53
odyssey4meevrardjp I've been rechecking for days now. This is too disruptive.16:53
odyssey4meThe node pools aren't allocated evenly.16:54
odyssey4meAnd we don't get to choose which node pool, or which host to build on.16:54
odyssey4meThis is a general CI service for all of openstack. See jobs come and go here: http://status.openstack.org/zuul/16:54
evrardjpyeah I was guessing that, since the jobs running or failing have the same name16:56
evrardjpI should check on zuul, learn more about it16:56
odyssey4meevrardjp workhole alert: http://docs.openstack.org/infra/manual/16:57
odyssey4me*wormhole16:57
evrardjpI'll stay away from time warps as much as I can, thanks for the alert16:58
evrardjpI'll have to go for today, but I can still learn/check tomorrow17:01
odyssey4meI need to get out of here - time to get home. I need a break from it.17:02
odyssey4meIf anyone discovers anything that's a pattern across failed builds that might be worth looking into then please let me know.17:02
evrardjpI guess we'll try to see what will change with your api version change, right?17:02
evrardjpat least trying17:03
evrardjpok odyssey4me, let's try this tomorrow17:03
Apsuodyssey4me: rsyslog usage and config/vars seem the same in the cinder playbook as nova and such17:03
ApsuVery odd17:03
odyssey4meevrardjp already been through and failed :/17:03
Apsuodyssey4me: Where does the bit that uploads the logs on the gate live?17:04
odyssey4meApsu that's a jenkins thingy in openstack-ci and out of our control17:04
Apsuyey17:04
odyssey4mewe do this to facilitate it: https://github.com/stackforge/os-ansible-deployment/blob/master/scripts/gate-check-commit.sh#L49-L5217:05
odyssey4meessentially whatever ends up in a subdirectory of the pwd called 'logs' ends up being uploaded by jenkins17:05
ApsuGotcha17:05
*** spotz is now known as spotz_zzz17:05
odyssey4mewe symlink it to /openstack/logs seeing as that's where we store all our logs anyway17:06
odyssey4me*ahem '/openstack/log'17:06
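A rough illustration of the idea odyssey4me describes, not the actual gate-check-commit.sh contents:

    # expose the deployment's log directory as ./logs so the CI log collector uploads it
    ln -sf /openstack/log "$(pwd)/logs"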
odyssey4meif we're still stuck tomorrow and need more info, we'll try and crank out some bugfixes for the above-registered bugs17:06
Apsuyep17:06
odyssey4mebut Apsu if there's info you're missing that could be useful, let us know and we can pop in a review to get those included - whether permanently or temporarily17:07
* Apsu nods17:08
ApsuThanks sir. Catch you later17:08
openstackgerritMerged stackforge/os-ansible-deployment: Remove unused variables in os_swift role  https://review.openstack.org/21311417:10
*** dabernie_ has quit IRC17:12
*** jwagner is now known as jwagner_away17:18
*** KLevenstein has quit IRC17:22
*** yaya has joined #openstack-ansible17:24
openstackgerritMerged stackforge/os-ansible-deployment: development-stack Doc Update  https://review.openstack.org/20601617:28
*** britthouser has joined #openstack-ansible17:30
*** TheIntern has joined #openstack-ansible17:31
*** britthou_ has quit IRC17:33
*** phalmos has quit IRC17:34
*** phalmos has joined #openstack-ansible17:35
*** yaya has quit IRC17:35
Sam-I-Amodyssey4me: moo?17:35
Sam-I-Amodyssey4me: was conferencing during bug triage17:35
*** k_stev has joined #openstack-ansible17:38
prometheanfireSam-I-Am: no irc for you17:39
Sam-I-Amshhh you17:40
*** alop has quit IRC17:41
*** prometheanfire has quit IRC17:45
*** prometheanfire has joined #openstack-ansible17:46
*** KLevenstein has joined #openstack-ansible17:50
*** TheIntern has quit IRC18:05
*** yaya has joined #openstack-ansible18:05
*** openstackgerrit_ has quit IRC18:11
*** spotz_zzz is now known as spotz18:23
openstackgerritMerged stackforge/os-ansible-deployment: Remove python-openstacksdk version spec  https://review.openstack.org/21424218:34
openstackgerritMerged stackforge/os-ansible-deployment: Remove hardcoded config drive enforcement  https://review.openstack.org/21249718:34
odyssey4meSam-I-Am it was just to discuss https://bugs.launchpad.net/openstack-ansible/+bug/1484619 but we decided to assign it to you instead :p18:38
openstackLaunchpad bug 1484619 in openstack-ansible "Document host_bind_override option" [Low,Confirmed] - Assigned to Matt Kassawara (ionosphere80)18:38
Sam-I-Amoh, yeah... its just not documented anywhere18:39
Sam-I-Amexcept if you look at the ansibles :)18:40
Sam-I-Amor... understand what it does18:40
Sam-I-Amself-documenting?18:40
odyssey4meApsu if you're up to continue debugging the master block then we can continue - just give me an hour or so to make food and such.18:40
odyssey4meIt's nagging me, which means that sleep will be elusive once again.18:41
Apsuodyssey4me: Yep, in meetings right now, will hit you up in a bit18:44
Sam-I-Amodyssey4me: whats broked this time?18:45
odyssey4meSam-I-Am https://bugs.launchpad.net/openstack-ansible/+bug/148591718:45
openstackLaunchpad bug 1485917 in openstack-ansible trunk "hpcloud AIO's are failing tempest tests" [Critical,In progress] - Assigned to Jesse Pretorius (jesse-pretorius)18:45
Sam-I-Amif hpcloud exit 0 ?18:46
sigmavirus24lol18:46
odyssey4menot sure of the root cause yet18:46
Sam-I-Amhp18:47
Apsureturn true18:48
sigmavirus24odyssey4me: http://paste.openstack.org/show/420843/18:48
sigmavirus24odyssey4me: specifically "No valid host was found. No weighed hosts available"18:49
odyssey4mesigmavirus24 hmm, I've seen that before - and also seen devstack gate references recently to it - but it wasn't consistent between failures18:50
odyssey4mebut it may be a clue18:50
sigmavirus24odyssey4me: just looking through one set of logs18:50
sigmavirus24trying to dig in18:50
sigmavirus24and look around18:50
odyssey4meI suspect in that case it may be due to cinder-volume not starting, or something to that effect18:50
sigmavirus24it's the only "error" in cinder's logs though18:50
Sam-I-Amyeah18:50
Sam-I-Amthats what i'm thinking18:50
odyssey4mewith https://bugs.launchpad.net/openstack-ansible/+bug/1486137 it makes it harder to know18:50
openstackLaunchpad bug 1486137 in openstack-ansible "cinder-volume log missing" [Medium,New]18:50
sigmavirus24hah18:50
sigmavirus24I was like, "Well that explains why I can't find cinder volume's log file"18:51
*** woodard has quit IRC18:51
sigmavirus24I'll tackle that now18:51
Sam-I-Amis cinder finding all of its infra bits?18:51
sigmavirus24And see if it shows up in the logs18:51
Sam-I-Amlike... fake devices18:51
odyssey4meI suspect we have a missing symlink which needs to be placed there if cinder-volume is 'on metal'18:51
odyssey4meSam-I-Am in most failed builds the scheduler shows successful volume creates, deletes, scrubs, etc18:52
odyssey4mealso, hpcloud happens to not use a fake device for cinder - that's the odd bit18:52
*** yaya has quit IRC18:52
Sam-I-Amwhat does it do?18:52
Sam-I-Ami havent looked at the infra jobs18:52
odyssey4methe other things I'm thinking is that perhaps lvm is saying that it supports 'thin' lv's - but we're missing a package to actually make that work properly18:53
Sam-I-Amwhy would this be different on different clouds?18:53
odyssey4meSam-I-Am in hpcloud there's a big enough ephemeral disk for us to share it between the containers and cinder - so there are two vg's. It uses real LVM on a real disk.18:53
Sam-I-Amahh18:54
Sam-I-Amvs. containers just using root?18:54
odyssey4meSam-I-Am there's a difference in underlying disk structure, but also why is this working perfectly for kilo - but not master18:54
odyssey4meSam-I-Am both hp and rax have their containers built in an LVM vg18:54
Sam-I-Ambut rax uses a fake for cinder?18:55
odyssey4meonly cinder's vg differs in the underlying supporting disk - in rax it's a fake loopback disk, in hp it's a real disk18:55
Sam-I-Amhmmm18:55
odyssey4mein both cases they are still lvm vg's - it's just the underlying disk that differs18:55
odyssey4meSam-I-Am more details in your backscroll :p18:56
*** sdake_ has joined #openstack-ansible18:56
*** yaya has joined #openstack-ansible18:58
*** k_stev has quit IRC18:58
*** sdake has quit IRC18:59
*** k_stev has joined #openstack-ansible19:00
*** cloudnull_afk is now known as cloudkiller19:02
*** fawadkhaliq has joined #openstack-ansible19:02
*** javeriak has quit IRC19:07
*** javeriak has joined #openstack-ansible19:10
*** fawadk has joined #openstack-ansible19:13
*** fawadkhaliq has quit IRC19:16
*** sdake_ is now known as sdake19:16
*** alop has joined #openstack-ansible19:25
*** jwagner_away is now known as jwagner19:25
*** fawadk has quit IRC19:41
*** rromans_afk is now known as rromans19:42
*** cloudkiller is now known as cloudnull_zzz19:48
*** woodard has joined #openstack-ansible19:51
*** britthouser has quit IRC19:54
*** britthouser has joined #openstack-ansible19:54
*** woodard has quit IRC19:57
*** sigmavirus24 is now known as sigmavirus24_awa19:57
*** sigmavirus24_awa is now known as sigmavirus2419:58
*** ashishb has quit IRC20:06
*** britthou_ has joined #openstack-ansible20:12
*** britthouser has quit IRC20:15
openstackgerritMerged stackforge/os-ansible-deployment: Correct binding logic in haproxy configuration  https://review.openstack.org/21323020:16
*** britthouser has joined #openstack-ansible20:18
*** britthou_ has quit IRC20:19
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Remove unused variables in os_swift role  https://review.openstack.org/21432620:19
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Correct binding logic in haproxy configuration  https://review.openstack.org/21432820:19
*** woodard has joined #openstack-ansible20:31
sigmavirus24odyssey4me: you around?20:32
sigmavirus24So, I found that while cinder-volume.log may not be present in HP Cloud, aio1-cinder may not be present at all either20:32
sigmavirus24Apsu: http://paste.openstack.org/show/420987/20:36
sigmavirus24odyssey4me: ^20:36
*** javeriak has quit IRC20:40
odyssey4mesigmavirus24 pong20:40
sigmavirus24nevermind20:41
sigmavirus24that's wrong20:41
odyssey4mesigmavirus24 I was just having a wtf moment there20:41
sigmavirus24yeah20:41
sigmavirus24the logging is not helpful20:42
sigmavirus24because it runs the restarts across cinder_all20:42
sigmavirus24Instead of restarting specific services on the host they're running on20:42
sigmavirus24which my Cmd-F searching found, and that confused me20:42
odyssey4mesigmavirus24 our playbooks are too broad, and our roles do too much magic with the inventory - it's time we did better20:42
sigmavirus24so20:44
sigmavirus24aio1-cinder/ is missing on some of these jenkins runs20:44
sigmavirus24which makes it seem like cinder-backup/cinder-volume never start or never actually print anything out20:44
odyssey4methat makes sense to me in light of what's going on - so I'd like to zone in on that somehow20:45
*** britthou_ has joined #openstack-ansible20:45
odyssey4mewe have a ton of superfluous stuff that gets run and logged to try and figure out the issue with hpcloud-b4 - and much of it is getting in the way20:45
odyssey4meI'm tempted to rip it all out and just exit fail on hpcloud-b420:45
sigmavirus24odyssey4me: Apsu brought up the theory that this could be related to base images20:46
sigmavirus24I'm asking in -infra if they updated the dsvm image recently20:46
d34dh0r53food for thought: I start getting SSH errors on boxen with a large number of processors; we should probably cap the FORKS number - follow the number of procs up until a cap20:46
odyssey4mesigmavirus24 yes - although in theory the base images are synchronised, there is no guarantee20:46
sigmavirus24right20:46
odyssey4med34dh0r53 long fixed on that one :p20:46
d34dh0r53there's a cap?20:46
*** yaya has quit IRC20:47
odyssey4mewell, we use the cpu number for some tasks - but on normal openstack-ansible executions we use the default and let the deployer change it20:47
odyssey4methe gate uses the cpu number as a forks override20:47
odyssey4mebut we must really stop using overrides, dammit20:47
*** britthouser has quit IRC20:47
d34dh0r53heh20:48
odyssey4med3 https://review.openstack.org/20747420:48
odyssey4med34dh0r53 ^20:48
d34dh0r53odyssey4me: right, what I'm saying is that the number of cpu's can be greater than what FORKS can successfully run at.20:50
*** britthou_ has quit IRC20:51
odyssey4med34dh0r53 interesting - perhaps it's based on the target then?20:52
odyssey4meI expect that perhaps we should do something like #cpu's or 8, whichever is lower20:52
odyssey4meor something like that20:52
odyssey4memaybe not 8, but 10 or 12 - but you get the idea20:53
d34dh0r53odyssey4me: yeah, it's strange, I think we can go higher than 8, but I'm not sure what the cap is, on this 40 core OM box I fail at 40 but seem to be running ok at 2020:53
odyssey4memaybe 16 is a more reasonable number20:54
d34dh0r53odyssey4me: yeah, I'm thinking 12-16 is probably the range20:54
odyssey4meor perhaps some sort of calc based on the number of threads available, or cores per proc20:54
odyssey4mewell, 15 has been the default where we've failed so often in the gate20:55
odyssey4me8 has worked well for the p1-8 in the gate and in AIO tests20:55
d34dh0r53right, so it probably should follow the number of cores up until the limit20:55
odyssey4me10 worked well for QE with hardware - and they needed it to be less than 15.20:55
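A hedged sketch of the capping idea being discussed; the cap value, variable names, and playbook name are illustrative assumptions, not what the gate scripts actually set:

    # derive the fork count from the CPU count, but never exceed a fixed cap
    CPU_NUM=$(grep -c '^processor' /proc/cpuinfo)
    FORKS_CAP=16
    ANSIBLE_FORKS=$(( CPU_NUM < FORKS_CAP ? CPU_NUM : FORKS_CAP ))
    ansible-playbook -f "${ANSIBLE_FORKS}" setup-everything.yml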
bgmccollumdoes ansible take into account delegate_to when popping tasks from the queue? i can imagine a situation where lots of delegates for containers could hit the ssh limit for the host...20:56
odyssey4mebgmccollum no idea personally, but once we have some numbers it'd be great to approach them to figure it out20:56
odyssey4meideally ansible should 'know better' after an initial handshake with its target - and after knowing its deployment host20:57
d34dh0r53odyssey4me: ^20:57
d34dh0r53ansible should definitely know better20:57
odyssey4meit's a young tool, so we forgive it a little :p20:58
*** yaya has joined #openstack-ansible20:58
bgmccollumd34dh0r53: maybe bump the sshd conf that limits the number of simul connections...20:58
d34dh0r53A more robust SSH retry mechanism is in 2.0 so this hopefully will be a moot point20:58
d34dh0r53bgmccollum: interesting idea, I may play around with that20:59
odyssey4mebgmccollum while that may be a factor, we did spend an awful lot of time trying that sort of thing a few revisions ago - things may have gotten better, but you can ask hughsaunders who literally spent around two months trying everything under the sun to make it work better20:59
odyssey4methe ssh retry mechanism in ansible 2.0 was the resulting work which was the only thing that actually achieved a consistent result21:01
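For reference, the sshd limits bgmccollum is alluding to would be raised on the target host with something like the following; the values are illustrative, not tested recommendations:

    # raise the per-connection session and unauthenticated-connection limits
    echo 'MaxSessions 100'        >> /etc/ssh/sshd_config
    echo 'MaxStartups 100:30:200' >> /etc/ssh/sshd_config
    service ssh reload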
odyssey4mesigmavirus24 a part of me wants to normalise the gate checks to use the same underlying storage to see what happens, but another part of me is happy that using different mechanisms is like a canary21:04
odyssey4meI can't work out whether to love it or hate it.21:04
sigmavirus24odyssey4me: so21:04
sigmavirus24"sigmavirus24: its possible we stopped doing the ephemeral dirve formating on one for some reason" from -infra21:05
* odyssey4me switches to -infra to see the discussion21:05
sigmavirus24When I asked about the differences (because there are flavor differences) between the two providers I got "sigmavirus24: devstack-gate's setup function documents via code"21:05
sigmavirus24odyssey4me: this was in the scrollback21:05
sigmavirus24I'm reading devstack's function(s) to find wtf they're talking about21:05
odyssey4meI see it.21:06
odyssey4methe scrollback I mean21:06
odyssey4meas I recall in hpcloud the ephemeral disk is mounted at startup - we then dismount, repartition and continue21:07
odyssey4meie we dismount and remove the config: https://github.com/stackforge/os-ansible-deployment/blob/master/scripts/scripts-library.sh#L75-L7921:08
odyssey4mewe only care about devices beyond the first one: https://github.com/stackforge/os-ansible-deployment/blob/master/scripts/scripts-library.sh#L7221:08
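Roughly, the dismount-and-clean step described here amounts to something like this; the device and mount point are assumptions, not lifted from the script:

    # unmount the pre-mounted ephemeral disk and drop its fstab entry
    umount /mnt
    sed -i '\|^/dev/vdb|d' /etc/fstab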
*** woodard_ has joined #openstack-ansible21:09
sigmavirus24hm21:10
odyssey4meit seems though, that we do format - but only for the lxc partition: https://github.com/stackforge/os-ansible-deployment/blob/master/scripts/scripts-library.sh#L81-L10921:10
odyssey4meso, correct me on this - it's been a while since I wrote it - but...21:11
odyssey4meif we have more than 250GB of disk space on a secondary disk21:11
odyssey4methen partition 80% for lxc, and the rest for cinder-volume21:11
d34dh0r53yep21:12
d34dh0r53that's how I read it21:12
odyssey4me(no formatting on either as they're both lvm vg's - so lv's will get created inside them and those will be formatted)21:12
*** woodard has quit IRC21:12
sigmavirus24That's 50GB for cinder21:12
odyssey4meotherwise (if we have less than 250gb - ie rax), just format the thing as ext4 and use it for lxc21:13
*** woodard_ has quit IRC21:13
ApsuCinder's still not even being found as a service to potentially start21:14
ApsuIgnoring how much space it might have if it did :P21:14
odyssey4methe otherwise bit does not use lvm at all - it's simply a normal partition mount21:14
odyssey4meApsu right, but not always - we have inconsistency here21:14
odyssey4meso bootstrap-aio runs the configure_diskspace function early: https://github.com/stackforge/os-ansible-deployment/blob/master/scripts/bootstrap-aio.sh#L16421:15
odyssey4meso if that results in the 'cinder-volumes' vg, then great - otherwise: https://github.com/stackforge/os-ansible-deployment/blob/master/scripts/bootstrap-aio.sh#L199-L21421:16
odyssey4meso either way - hpcloud or rax - a vg is being used for volumes... but the underlying disk is different21:16
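Putting the branching just described into a rough shell sketch - the 250GB threshold and 80/20 split come from the discussion, while the device name and exact parted invocation are assumptions:

    DISK="/dev/vdb"
    DISK_SIZE_GB=$(( $(blockdev --getsize64 "${DISK}") / 1024 / 1024 / 1024 ))
    if [ "${DISK_SIZE_GB}" -ge 250 ]; then
      # big secondary disk: 80% for the lxc VG, the rest for cinder-volumes
      parted --script "${DISK}" mklabel gpt
      parted --script "${DISK}" mkpart lxc 0% 80%
      parted --script "${DISK}" mkpart cinder 80% 100%
      pvcreate "${DISK}1" "${DISK}2"
      vgcreate lxc "${DISK}1"
      vgcreate cinder-volumes "${DISK}2"
    else
      # small secondary disk: plain ext4 mount for lxc, no LVM
      mkfs.ext4 "${DISK}"
      mkdir -p /var/lib/lxc
      mount "${DISK}" /var/lib/lxc
    fi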
sigmavirus24Apsu: I was misreading the logs21:17
*** phalmos has quit IRC21:18
odyssey4meI don't see how a thin/thick volume issue could arise - unless somehow the driver is detecting badly. Doing lvm thin provisioning requires a special setup which we don't do: https://gist.github.com/sidnei/704133821:19
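As a point of reference, explicit LVM thin provisioning would look roughly like the following - sizes and names are illustrative, and on Ubuntu it also needs the thin-provisioning-tools package installed (the "missing package" suspicion above):

    # create a thin pool inside the existing VG, then a thin volume inside the pool
    lvcreate --size 40G --thinpool cinder-thin-pool cinder-volumes
    lvcreate --virtualsize 100G --thin cinder-volumes/cinder-thin-pool --name thin-test-vol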
odyssey4mebut yes, we need that cinder-volume log21:20
odyssey4mesigmavirus24 did you figure out how to make the magic?21:20
sigmavirus24odyssey4me: not yet21:21
sigmavirus24odyssey4me: because sometimes cinder-backup.log doesn't appear either21:21
odyssey4mesigmavirus24 that would make sense if the failure is earlier in the process21:21
odyssey4meso I have been wondering - perhaps we should carry a sha for tempest_lib or something else to help this21:22
sigmavirus24odyssey4me: we do21:22
sigmavirus24or don't we on master?21:22
odyssey4mesigmavirus24 we did - I removed them as an experiment21:22
odyssey4mewe should not carry specific sha's unless we need to21:23
odyssey4mewe tend to carry baggage21:23
odyssey4mehowever - I looked at the lib, and it's been quite dormant21:23
sigmavirus24So the logs indicate this isn't a problem with tempest though21:23
sigmavirus24Or if it is, it's a very very very bizarre one21:23
odyssey4meI suspect the issue is flux relating to nova, cinder and/or neutron... and each failure relates to a different red herring.21:24
odyssey4meI have learned that we've been far too willing to turn tests off.21:24
sigmavirus24We could try an earlier SHA for cinder to see if that fixes it but I doubt that's the issue21:25
sigmavirus24That said, tempest talks directly to cinder's API21:25
sigmavirus24I don't think nova has any part of this21:26
odyssey4mesigmavirus24 yes, that's my lowest order suspect - it's only suspect in the interaction with neutron and cinder21:27
*** wmlynch_ has quit IRC21:29
*** k_stev has quit IRC21:32
*** mpmsimo has joined #openstack-ansible21:38
odyssey4meI'm pretty seriously considering changing the -infra timeout for our builds to 90 mins instead of 120 mins.21:40
d34dh0r53odyssey4me: +1 at 90 minutes "It's dead Jim"21:43
odyssey4med34dh0r53 urm, it looks like you hit the tree jim21:43
d34dh0r53Wow, I thought I was the only one who used old-school Links references :)21:44
odyssey4meyeah - we have a clear 70-80 min run on the longest running check21:44
odyssey4me:)21:44
d34dh0r53Links 2004 I think it was, one of the best golf games ever21:44
odyssey4methat and the original fifa games were pretty epic at the time21:45
*** k_stev has joined #openstack-ansible21:45
odyssey4menever mind that car game - what was it....21:45
d34dh0r53RalliSport Challenge?21:46
odyssey4med34dh0r53 Test Drive, I think.21:48
d34dh0r53odyssey4me: yeah, test drive, one of the best games ever21:48
*** Mudpuppy has quit IRC21:48
odyssey4mealthough Death Chase on the ZX Spectrum was the first I ever got to play - loaded off an audio tape!21:48
odyssey4meoh yeah, rock that game: https://www.youtube.com/watch?v=snpr8hFIf3U21:49
odyssey4meascii graphics to the maxx :p21:49
d34dh0r53lol, awesome21:51
odyssey4meand then there was https://www.youtube.com/watch?v=GtpKfSY0MBw21:52
odyssey4mehold me back21:52
*** yaya has quit IRC21:53
* d34dh0r53 thanks odyssey4me for the youtube black hole21:53
odyssey4meoh yes! https://www.youtube.com/watch?v=iJNfMqEK7VI21:54
*** JRobinson__ has joined #openstack-ansible21:54
odyssey4meplease do not cue windows 3.0 videos21:55
odyssey4meor windows 2.0 for that matter21:55
d34dh0r53many hours wasted https://www.youtube.com/watch?v=M9Bp3N9TdLc21:55
odyssey4med34dh0r53 I did enjoy this one in the arcade though: https://www.youtube.com/watch?v=J4tshJrkBw021:56
odyssey4meTest Drive! Yeah! It came on around 10-20 'floppy' disks as I recall21:56
d34dh0r53haha21:56
d34dh0r53Golden Axe was so much fun21:57
*** yaya has joined #openstack-ansible21:57
*** mpmsimo has quit IRC21:59
*** k_stev has quit IRC22:01
sigmavirus24so I can't seem to see anything that we committed recently to do this to ourselves22:06
palendaeTempest SHA change?22:06
sigmavirus24nah this is pretty clearly in cinder's court22:06
sigmavirus24no cinder-volume/backup logs22:06
sigmavirus24that's very very suspicious22:06
palendaeAre *their* things passing?22:06
sigmavirus24cinder's gate? I hadn't checked recently but the current state of their gate is irrelevant to ours since we're pinning to what is likely quite a few commits ago22:07
palendaeOn our master?22:07
sigmavirus24Yes22:07
palendaeOk22:07
odyssey4meI agree that it looks more code-orientated.22:09
*** shoutm has joined #openstack-ansible22:09
sigmavirus24but it's so bizarre that this only happens on hpcloud22:09
odyssey4meWe have passing gate checks with the same underlying architecture for juno and kilo.22:09
odyssey4methat's the confusing part22:09
sigmavirus24is there a way for me to make a (free|cheap) account there to test this?22:09
odyssey4mesigmavirus24 in hp cloud?22:11
sigmavirus24mhm22:11
odyssey4mewell, in a few days cloudnull_zzz will return and he has one22:12
odyssey4meomg he has lurked22:12
sigmavirus24lol22:12
palendaeodyssey4me: he was on company chat earlier22:12
palendaeWaiting in an airport22:12
palendaeAnd thankfully he tl;dr'd the scrollback22:12
odyssey4meairports... ugh22:12
sigmavirus24airports are the best places ever22:15
sigmavirus24especially when flights are delayed 3 hours22:15
sigmavirus24such that you have to sleep overnight in the next airport you get to22:16
palendaeOr like when O'Hare gives you free vouchers to a hotel that sends a shuttle22:16
sigmavirus24loooool22:17
sigmavirus24and I'm out22:17
palendae*never sends a shuttle22:17
sigmavirus24the shuttle will be there in the next 48 hours22:17
sigmavirus24you're welcome, we're sorry22:18
*** sigmavirus24 is now known as sigmavirus24_awa22:18
odyssey4mesigmavirus24 I do think that if we continue to keep up to date with SHA's in master, we probably need to be better at keeping in touch with each project - including devstack.22:19
*** pradk has quit IRC22:24
*** mpmsimo has joined #openstack-ansible22:32
*** neillc_away is now known as neillc22:33
*** mpmsimo has quit IRC22:35
*** mpmsimo has joined #openstack-ansible22:35
*** jwagner is now known as jwagner_away22:38
*** yaya has quit IRC22:38
*** darrenc is now known as darrenc_afk22:40
*** spotz is now known as spotz_zzz22:43
*** KLevenstein has quit IRC22:49
*** tlian2 has joined #openstack-ansible22:50
*** tlian has quit IRC22:51
*** darrenc_afk is now known as darrenc22:56
*** britthouser has joined #openstack-ansible23:24
*** britthou_ has joined #openstack-ansible23:25
*** britthouser has quit IRC23:28
*** mpmsimo has quit IRC23:48
*** britthou_ has quit IRC23:56