Wednesday, 2015-09-02

*** darrenc is now known as darrenc_afk00:12
*** shoutm has quit IRC00:19
*** shoutm has joined #openstack-ansible00:23
*** darrenc_afk is now known as darrenc00:29
*** tlian2 has quit IRC04:05
*** shausy has joined #openstack-ansible04:53
*** markvoelker has joined #openstack-ansible05:06
*** markvoelker has quit IRC05:11
*** markvoelker has joined #openstack-ansible05:12
*** cloudtrainme has joined #openstack-ansible05:13
*** shausy has quit IRC05:36
*** shausy has joined #openstack-ansible05:36
*** markvoelker has quit IRC06:23
*** sdake has joined #openstack-ansible06:24
*** cloudtrainme has quit IRC07:08
*** cloudtrainme has joined #openstack-ansible07:08
*** gparaskevas has joined #openstack-ansible07:17
*** javeriak has joined #openstack-ansible07:54
*** javeriak_ has joined #openstack-ansible07:56
evrardjpgood morning everyone07:59
*** javeriak has quit IRC07:59
gparaskevasgm!08:04
mattthowzit guys08:04
evrardjpis it possible to have storage hosts that have container_vars to set them in an availability zone, but limit the cinder backends to some availability zone?08:06
evrardjpfor example, I'd like to have 2 storage hosts, one per AZ (let's call them main DC <-> backup DC)08:06
evrardjpand on the first host in mainDC, I'd have 2 backends: one with a replicated ceph (so available on both AZs), the other with local netapp storage (that is only available in the AZ mainDC)08:07
evrardjpthe second host in backupDC would have 2 backends also: one with the replicated ceph (same as the first one), the other with local netapp storage (that is only available in the AZ backupdc)08:09
evrardjpis that even possible here?08:09
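A rough sketch of the openstack_user_config layout evrardjp is describing, assuming the documented storage_hosts/container_vars structure; host names, addresses and backend options here are illustrative only:

    storage_hosts:
      storage-maindc:
        ip: 172.29.236.20                  # illustrative address
        container_vars:
          cinder_storage_availability_zone: mainDC
          cinder_backends:
            ceph_replicated:               # replicated, so usable from both AZs
              volume_driver: cinder.volume.drivers.rbd.RBDDriver
              volume_backend_name: ceph_replicated
              rbd_pool: volumes
            netapp_maindc:                 # local to this AZ only
              volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
              volume_backend_name: netapp_maindc
      storage-backupdc:
        ip: 172.29.236.21
        container_vars:
          cinder_storage_availability_zone: backupDC
          cinder_backends:
            ceph_replicated:
              volume_driver: cinder.volume.drivers.rbd.RBDDriver
              volume_backend_name: ceph_replicated
              rbd_pool: volumes
            netapp_backupdc:
              volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
              volume_backend_name: netapp_backupdc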
*** javeriak_ has quit IRC08:13
evrardjp(and if possible, in the horizon dashboard have one of the options auto-selected; it's really a pain to always change volume_type and availability_zone to make it work)08:17
odyssey4memorning all08:18
odyssey4memattt I'm going to need to pull https://review.openstack.org/219364 into https://review.openstack.org/218572 to unblock the gate - can you then rebase and remove the pin in https://review.openstack.org/215584 as the netaddr bug is addressed by the newer sha in that patch08:19
*** javeriak has joined #openstack-ansible08:19
matttodyssey4me: i'm not pinning neutron to overcome any gate-related issues08:20
matttodyssey4me: i did it because it introduces the new --expand / --contract args08:20
odyssey4memattt my patch will be pinning, so I'm asking you to remove the pin once you've rebased08:20
odyssey4meThe choice we have is either to merge my patch and yours (frankenpatch, ugh) - or to add a temporary pin to my patch (which I prefer)08:21
matttok sure08:21
matttwhy do i need to rebase tho?08:22
mattti'll just remove the pin08:22
matttn/m coffee not kicked in yet08:22
odyssey4memattt if you don't mind, I'll revise the patch with what I mean :) it won't materially change your patch08:23
matttodyssey4me: hands off08:23
odyssey4meyour Neutron sha update is good08:23
matttyou get your grubby hands out of my patch08:23
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Wait for container ssh after apparmor profile update  https://review.openstack.org/21857208:24
*** javeriak has quit IRC08:24
matttodyssey4me: go ahead and do what you need to do :)08:24
odyssey4melol, ok - then look at the above patch - see the requirements.txt change - that needs to be reverted in your patch08:24
matttodyssey4me: i'm kidding :)  just let me know if you want me to do it and i'll do it, otherwise assuming you're on it08:24
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Update how neutron migrations are handled  https://review.openstack.org/21558408:26
odyssey4memattt I'll do it quickly08:26
*** javeriak has joined #openstack-ansible08:27
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Update how neutron migrations are handled  https://review.openstack.org/21558408:28
odyssey4memattt done - now we wait to see if any other surprises have crawled in to cause CI failure in the last 24 hours, otherwise we mergerate and unblock master08:30
matttodyssey4me: sounds good ... i have more changes to make to that review, but i'm going to let that one go through and create a subsequent patch for additional work08:30
matttgit-harry spotted some issues w/ it late yesterday afternoon08:31
odyssey4memattt well, you can continue to revise the review if you like - I just wanted to make sure we incorporate the requirements removal as we bump that sha08:31
matttodyssey4me: but if we are experiencing db syncing issues holding up the gate (even if periodically), i'd say get that guy through08:31
odyssey4mefair enough :) it'll definitely help improve the rate of success08:32
*** javeriak has quit IRC08:33
*** javeriak has joined #openstack-ansible08:33
*** shoutm has quit IRC08:36
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Change AppArmor profile application order  https://review.openstack.org/21701408:41
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Updated kilo to include Neutron netaddr fix - 2 Sep 2015  https://review.openstack.org/21960508:52
odyssey4mehughsaunders I need some help getting to the bottom of why http://logs.openstack.org/98/217098/3/check/gate-os-ansible-deployment-dsvm-commit/5032990/console.html#_2015-09-01_15_37_49_545 is the resulting tempest output in https://review.openstack.org/21709808:53
odyssey4mewe need to get the juno update out the door08:53
hughsaundersodyssey4me: ok08:53
odyssey4mewe were working through how to get into the venv yesterday08:54
hughsaundersyep08:54
*** markvoelker has joined #openstack-ansible08:57
odyssey4mehow was that done again?08:57
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Disable scatter-gather offload on host bridges  https://review.openstack.org/21929209:00
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Disable python buffering for gate checks  https://review.openstack.org/21859509:02
evrardjposad.readthedocs.org/en/latest doesn't work anymore?09:02
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Adds the ability to provide user certificates to HAProxy  https://review.openstack.org/21552509:02
*** markvoelker has quit IRC09:02
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Updated MariaDB to the new release version (10.0)  https://review.openstack.org/17825909:02
odyssey4meevrardjp odd, but http://os-ansible-deployment.readthedocs.org/en/latest/ does09:06
evrardjpthis isn't the one documented on the launchpad page ;) (also not the one in my browser history :p)09:06
evrardjpI used a good old make html, so it's not really for me09:07
hughsaundersodyssey4me: cd /opt/tempest_*; . bin/activate09:09
odyssey4meevrardjp fixed the wiki reference, thanks :)09:10
*** shoutm has joined #openstack-ansible09:11
odyssey4mehughsaunders ah, thanks - now I have the issue replicated09:12
hughsaundersodyssey4me: I just created a venv from the package list here: https://github.com/stackforge/os-ansible-deployment/blob/juno/rpc_deployment/roles/tempest/defaults/main.yml#L65-L93 using http://rpc-repo.rackspace.com/python_packages/juno/ and OpenSSL was installed ok :/09:13
evrardjpodyssey4me: np09:14
odyssey4mehughsaunders so I did a Juno build yesterday and once in the venv you can't import OpenSSL09:26
odyssey4mebut once you install pyOpenSSL, then you can09:26
odyssey4meso it seems that we just need to add that into tempest's requirements I guess09:27
odyssey4mehughsaunders what is odd though is that if the openstack clients need it, surely it'd be in their requirements and it would therefore install?09:32
hughsaundersodyssey4me: yeah, I'm doing an AIO as well, so will be in the same position soon09:35
odyssey4mehughsaunders ok - let me know09:35
odyssey4mewhen you're done with your AIO - I don't want to affect the upstream repo while you're mid build09:36
hughsaundersodyssey4me: hmm aio failed on heat retrieve domain id: ERROR: openstack Expecting to find domain in project - the server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400)09:43
odyssey4mehughsaunders did you run the juno build including https://review.openstack.org/217098 ?09:44
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Updated juno to include fix for CVE-2015-3241 - 26 Aug 2015  https://review.openstack.org/21709809:44
hughsaundersno, tip of juno. I'll just run the tempest play manually, to get to this issue09:44
odyssey4metip of juno is broken, this patch is resolving several issues with it09:45
odyssey4mehughsaunders I'm busy updating http://rpc-repo.rackspace.com/python_packages/juno09:48
odyssey4meI don't think the wheels need updating, but that's my test set of wheels to work with.09:49
odyssey4meTo use them requires changing rpc_release in rpc_deployment/inventory/group_vars/all.yml to 'juno'09:49
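The change being described is a single variable edit; only the relevant line is shown:

    # rpc_deployment/inventory/group_vars/all.yml
    rpc_release: juno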
odyssey4mehughsaunders mattt andymccr we have build success for the master unblocker - please review: https://review.openstack.org/21857209:50
hughsaundersodyssey4me: I see you capped netaddr, good idea10:00
odyssey4mehughsaunders yep, the cap can be removed with the sha bump in https://review.openstack.org/215584 - the cap removal's already there10:01
odyssey4methe netaddr cap works for https://review.openstack.org/217014 (kilo) too10:02
*** javeriak has quit IRC10:12
*** shoutm has quit IRC10:19
odyssey4memattt https://review.openstack.org/215584 is still working after my adjustment & rebase on the changed https://review.openstack.org/21857210:20
*** javeriak has joined #openstack-ansible10:21
odyssey4mehughsaunders it looks like the addition of pyOpenSSL to tempest_pip_packages in the tempest role defaults worked10:26
odyssey4memy test build is successful - waiting for a gate pass now: https://review.openstack.org/21709810:26
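The fix being tested amounts to appending pyOpenSSL to the tempest role's pip package list; a sketch, with the surrounding entries shown only for illustration (the real list is in the defaults file linked earlier):

    # rpc_deployment/roles/tempest/defaults/main.yml (sketch, other entries omitted)
    tempest_pip_packages:
      - python-cinderclient      # existing entries, illustrative
      - python-glanceclient
      - pyOpenSSL                # added so the venv can 'import OpenSSL'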
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Remove hardcoded config drive enforcement  https://review.openstack.org/21848010:31
*** javeriak has quit IRC10:31
hughsaundersodyssey4me: cool10:31
odyssey4mehughsaunders I'm thinking that the approach we're shifting to in juno, where the client versions installed are determined by the service and global requirements, is perhaps the best way to go overall - your thoughts?10:33
matttodyssey4me: are you leaving that change to requirements.txt in https://review.openstack.org/#/c/215584/ ?10:33
odyssey4meideally I'm thinking that we should avoid carrying sha's if we possibly can10:33
odyssey4memattt yes, that needs to be changed there10:33
odyssey4meyour sha bump removes the need for the cap on netaddr10:34
odyssey4mesee point 4 in the commit msg ;)10:34
hughsaundersodyssey4me: always a balance - carrying shas gives stability but forces us to update. Relying on upstream for versions is less work, but less stable10:34
odyssey4mehughsaunders how is the sha for the client more stable?10:35
hughsaundersodyssey4me: because an upstream change can't give us a new version of the client10:35
odyssey4meI would think that relying on the global/service requirements is more stable - as that's what is gate tested throughout the openstack-ci10:36
matttodyssey4me: ok, asking because you suggested a different change above10:36
matttwhich is why i was confused10:36
odyssey4memattt I added the cap in https://review.openstack.org/218572 which has worked to unblock master (please review)10:37
matttodyssey4me: right but that's not what you said you were doing :P10:37
matttanyway reviewing now !10:37
odyssey4meI'd like to ensure that the cap is removed - and it makes logical sense to me to remove it in https://review.openstack.org/215584 as you're doing a sha bump there.10:37
odyssey4meI'm confused about your confusion now. :)10:38
matttodyssey4me | mattt I'm going to need to pull https://review.openstack.org/219364 into https://review.openstack.org/218572 to unblock the gate - can you then rebase and remove the pin in https://review.openstack.org/215584 as the netaddr bug is addressed by the newer sha in that patch10:39
matttodyssey4me | mattt my patch will be pinning, so I'm asking you to remove the pin once you've rebased10:39
matttfrom above :)10:39
odyssey4methat's exactly what I did10:39
odyssey4meI added the pin (ie the netaddr version cap in requirements.txt) to https://review.openstack.org/218572 , then rebased https://review.openstack.org/215584 and added the removal of the netaddr cap10:40
odyssey4medid you perhaps  think I was going to bump the sha in https://review.openstack.org/215584 ?10:40
matttyes!10:40
matttbro operating on little sleep here :P10:41
odyssey4meah, that's what I wanted to avoid :)10:41
odyssey4mebumping the sha would have meant also needing to add the other fixes in https://review.openstack.org/215584 making a mega-franken-patch10:41
odyssey4mecapping netaddr is simpler :)10:41
odyssey4mejuno update build success! :) https://review.openstack.org/21709810:42
matttodyssey4me: question about https://review.openstack.org/#/c/21857210:44
matttodyssey4me: why in some playbooks are we not registering ssh_wait_check and retrying ?10:44
odyssey4memattt example?10:44
matttodyssey4me: any that have the wait for container ssh task already, ie. https://review.openstack.org/#/c/218572/5/playbooks/os-glance-install.yml10:45
odyssey4memattt ah, that's an unintentional omission - let me fix that in a follow-up patch10:45
matttdon't want to do it there, since you're already touching those playbooks ?10:46
odyssey4mebut, the reason it doesn't matter so much for those is that their container configs don't change as much10:46
odyssey4meso they tend to come up more quickly10:47
odyssey4mebut yes, that needs a follow-up patch to address10:47
odyssey4megood catch!10:47
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Additional retries for ssh wait check  https://review.openstack.org/21963810:53
odyssey4memattt this should cover the missing bits: https://review.openstack.org/21963810:54
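The pattern mattt is asking about - register the wait result and retry - looks roughly like the following Ansible task. This is a sketch of the idea rather than the exact task in either review; the physical_host delegation and the timing values are assumptions:

    - name: Wait for container ssh
      wait_for:
        port: 22
        host: "{{ ansible_ssh_host | default(inventory_hostname) }}"
        search_regex: OpenSSH
      delegate_to: "{{ physical_host }}"   # assumed inventory var naming
      register: ssh_wait_check
      until: ssh_wait_check | success
      retries: 3
      delay: 10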
odyssey4memattt hughsaunders can you please review the kilo & juno unblockers too: https://review.openstack.org/217014 (kilo) and https://review.openstack.org/217098 (juno)10:56
hughsaundersodyssey4me: reviewing 217098 at the moment, trying to figure out what happens when the clients are not specified in repo packages. Presumably that means wheels don't get built so packages have to be pulled from upstream pypi?10:58
odyssey4mehughsaunders yes, the packages do get put into the repo server (our upstream one) but they are pulled from pypi10:59
matttodyssey4me: thought we shouldn't backport until something has merged ?10:59
matttthat one review references a backport that's still in flight10:59
odyssey4memattt yeah, don't +w yet - I just want to get any discussion/concerns out the way11:00
matttodyssey4me: k11:00
odyssey4methe 11.2.0 release requires that patch and is several days overdue, so the sooner we can get the votes in the better11:01
odyssey4mehughsaunders an example of the resulting repo is http://rpc-repo.rackspace.com/python_packages/juno/11:01
matttodyssey4me: fair enough11:01
odyssey4meso we still build wheels, just not from git11:01
odyssey4mehughsaunders the critical thing this ends up resolving is this issue: https://bugs.launchpad.net/bugs/148831511:01
openstackLaunchpad bug 1488315 in openstack-ansible juno "The python-requests package is pulled in by apt via dependency" [High,In progress] - Assigned to Jesse Pretorius (jesse-pretorius)11:01
odyssey4meI think what's happening is that we're pulling in clients that are too new and have greater restrictions in some places for the deps.11:03
*** gparaskevas has quit IRC11:03
odyssey4meUsing the service/global requirements resolution of which clients to use means that we get a properly tested and known working client for the services - and likely something that the package maintainers are also working against, so there shouldn't be conflicts like that bug.11:04
*** k_stev has joined #openstack-ansible11:28
hughsaundersOK I get it now, with_py_pkgs gets a list of git repos, one of which is openstack/requirements. Yaprt clones the repos and extracts the requirements in order to build dependent wheels. So by including global requirements and all the services we'll get all the clients without having to specify versions.11:33
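Conceptually, the juno repo build is driven by a list of git sources rather than per-client pins; something like the sketch below, where the variable name and repo list are illustrative rather than the actual defaults:

    # illustrative only: yaprt clones each listed repo and builds wheels for
    # whatever its requirements declare, so client versions follow the
    # service and global requirements instead of hard-coded shas
    repo_package_git_sources:
      - https://git.openstack.org/openstack/requirements   # global requirements
      - https://git.openstack.org/openstack/nova
      - https://git.openstack.org/openstack/neutron
      - https://git.openstack.org/openstack/cinder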
matttman that is a lot of changes to unblock a stable branch11:34
odyssey4mehughsaunders yep, and the versions we get will be based on the upper and lower caps in each service project11:34
hughsaundersI tested with just bumping neutron, but I guess we'd have to bump the rest at some point.11:36
odyssey4memattt agreed, but unfortunately they're all tied together :/11:36
odyssey4memattt assuming you're talking about the juno review?11:36
matttodyssey4me: yeah 21709811:36
hughsaundersjust reading 217014, its annoying that the service playbooks have to have lxc related tasks. I guess better in the playbooks than the roles.11:40
odyssey4mehughsaunders yep - they were pulled into the playbooks to cut down the down-time11:41
odyssey4meI actually think that putting them into the playbooks is a good thing11:41
odyssey4mebut it is a departure from how we were doing things11:42
*** bapalm has quit IRC11:43
matttodyssey4me: do you have a juno instance up w/ 217098 applied ?11:46
odyssey4memattt yup11:46
odyssey4memattt I was thinking of perhaps firing up a fresh one and setting it up for maas11:47
matttodyssey4me: want me to do that?  i want to test something w/ heat11:48
matttuse of the stack_heat_domain_name used to not work, i see the bug was fixed but we should be sure before we commit that11:48
matttbecause i don't think our gate will catch many heat issues :(11:48
odyssey4memattt sure - it'd be good to have a more thorough test11:48
matttok cool doing now11:48
odyssey4meI did see the upstream issue and resolution of it11:48
*** bapalm has joined #openstack-ansible11:49
odyssey4memattt you're talking about this one? https://www.google.co.uk/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&cad=rja&uact=8&ved=0CDUQFjADahUKEwjDy9bJpNjHAhUCtxQKHfl9Dcc&url=https%3A%2F%2Fbugs.launchpad.net%2Fbugs%2F1376213&usg=AFQjCNFl-TsodddEzf9xG7f3OPox2QIlVA&sig2=LgPrYMveDcM4l-J37mjUyg&bvm=bv.101800829,d.d2411:52
mgariepygood morning everyone11:52
odyssey4melolfail11:52
odyssey4mehttps://bugs.launchpad.net/heat/+bug/137621311:52
openstackLaunchpad bug 1376213 in heat "stack_user_domain_name not working as expected" [High,Fix released] - Assigned to Ethan Lynn (ethanlynn)11:52
matttodyssey4me: haha yeah that one11:52
matttmgariepy: morning, how's quebec doing today11:52
mgariepydoing fine ;)11:53
odyssey4mefixed in heat on nov 6: https://review.openstack.org/13359111:53
matttodyssey4me: yeah i just want to be sure, since the gate only tests the heat API iirc11:53
odyssey4mesure thing :)11:54
*** k_stev has quit IRC11:58
*** jaypipes has joined #openstack-ansible12:08
*** sigmavirus24_awa is now known as sigmavirus2412:08
openstackgerritMerged stackforge/os-ansible-deployment: Wait for container ssh after apparmor profile update  https://review.openstack.org/21857212:13
evrardjpgoog morning mgariepy12:25
evrardjpgood*12:25
mgariepyhow are you doing ?12:25
evrardjpfine... a little disappointed by cinder, but everything is fine... and you?12:25
mgariepyi'm doing fine.12:26
mgariepyi'll test your keepalive today if my hardware is fast enough!12:27
mgariepykeepalived**12:27
*** mordred has quit IRC12:27
evrardjpok12:30
evrardjpdon't hesitate to ask12:30
evrardjpand don't forget to configure variables, but also stuff on your hosts!12:31
evrardjp(like configuring the NIC/bridge where the IP should bind on)12:31
evrardjpSam-I-Am: You there?12:31
evrardjpI'm reading the documentation here: http://openstack-ansible-deployment.readthedocs.org/en/latest/install-guide/configure-cinder-nfs.html12:31
evrardjpI'm trying to have NetApp NFS (netapp iscsi works fine) on my cinder12:32
evrardjpthe page starts with that, so I'm fine12:32
evrardjphowever the previous page mentions that I need to write a netapp stanza (or whatever name I put on) in my cinder_backends (with the additional configurations)12:33
evrardjpso I wrote in my netapp stanza an nfs_shares_config: like written on this previous page I just mentioned12:34
evrardjp(it's this one I'm talking about: http://openstack-ansible-deployment.readthedocs.org/en/latest/install-guide/configure-cinder.html)12:34
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: [OSAD doc] Update link to Cloud Admin Guide  https://review.openstack.org/21908512:34
*** KLevenstein has joined #openstack-ansible12:35
evrardjphowever this DOESN'T create/template the shares at all12:35
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Removed rpc-repo upstream pip deps  https://review.openstack.org/21618112:35
evrardjpbecause my nfs_shares_config isn't in cinder_nfs_client12:36
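For reference, the two pieces of configuration in play here are the backend stanza and the separate cinder_nfs_client variable that actually templates the shares file; a rough sketch with illustrative values:

    # backend stanza (e.g. under container_vars -> cinder_backends)
    cinder_backends:
      netapp_nfs:
        volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
        netapp_storage_family: ontap_cluster
        netapp_storage_protocol: nfs
        nfs_shares_config: /etc/cinder/nfs_shares   # referenced, but not templated, here
        volume_backend_name: netapp_nfs

    # the shares file itself is only templated from cinder_nfs_client,
    # which is why setting nfs_shares_config in the backend alone does nothing
    cinder_nfs_client:
      nfs_shares_config: /etc/cinder/nfs_shares
      shares:
        - { ip: "10.0.0.10", share: "/vol/cinder" }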
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Fixing haproxy-playbook fails when installing on multiple hosts  https://review.openstack.org/21557912:40
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Shift irqbalance package from lxc_hosts to openstack_hosts  https://review.openstack.org/21835412:41
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Add lvm.conf cleanup to teardown script  https://review.openstack.org/21782712:41
*** woodard has joined #openstack-ansible12:41
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Switch Nova/Tempest to use/test Cinder API v2  https://review.openstack.org/21404512:41
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Update tempest configuration  https://review.openstack.org/21010712:42
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Allow cinder-backup to use ceph  https://review.openstack.org/20953712:44
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Implement tox.ini config for bashate and pep8 tests  https://review.openstack.org/21617012:45
*** sigmavirus24 is now known as sigmavirus24_awa12:45
*** cloudtrainme has joined #openstack-ansible12:46
*** cloudtrainme has quit IRC12:47
evrardjpI've dropped nfs just for that12:56
matttodyssey4me: no issues w/ heat in that patch, a stack created for me fine12:58
odyssey4memattt awesome :) did you test maas too?12:59
matttodyssey4me: working on that now13:00
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Add variable for cirros url  https://review.openstack.org/21731013:01
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Add sorting_method to swift proxy config as needed  https://review.openstack.org/20881713:03
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: [WIP] Implementation of keepalived for haproxy  https://review.openstack.org/21881813:04
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: [WIP] Use the pre-defined Ubuntu mirror for the AIO  https://review.openstack.org/21861113:05
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Fix the heat stack user create  https://review.openstack.org/21818413:06
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Ensure rsync restarts fully during swift setup  https://review.openstack.org/21734113:07
*** scarlisle has joined #openstack-ansible13:07
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Remove Juno to Kilo logic in upgrade script  https://review.openstack.org/21529113:08
cloudnullmorning13:09
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Add profiling for Ansible tasks  https://review.openstack.org/21684913:10
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Used named veth pairs that match container  https://review.openstack.org/21945713:10
odyssey4meo/ cloudnull13:11
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Allow Horizon setup with external SSL termination  https://review.openstack.org/21464713:11
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Container create/system tuning  https://review.openstack.org/21590513:13
cloudnullodyssey4me: where are we with things? hows anything i can help out with for a bit13:13
cloudnullor reviews that need doing ?13:14
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Container create/system tuning  https://review.openstack.org/21590513:14
cloudnullmost of us in the US are out at 10 for a team outing13:14
cloudnullbut i can bang on some things for the next couple of hours13:14
*** k_stev has joined #openstack-ansible13:15
evrardjpmorning cloudnull13:16
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Implement /usr/bin/env as the shebang in all bash scripts  https://review.openstack.org/21590613:17
cloudnullmorning evrardjp13:17
odyssey4mecloudnull master is unblocked, and I've just run through everything to rebase for up to date gate checks13:18
cloudnullyay!13:18
odyssey4mekilo needs a final reviewer: https://review.openstack.org/21701413:18
odyssey4methen it'll be unblocked and I can release 11.2.0 later13:18
matttodyssey4me: messed up that aio deploy for testing maas, didn't call it aio1 :(13:19
odyssey4memattt ah, that is a neat trick13:19
odyssey4memattt I usually do this instead: https://github.com/rcbops/rpc-openstack/blob/master/scripts/deploy.sh#L43-L4413:20
mattti thought we had something to update the inventory, must be mistaken13:20
cloudnullandymccr: mattt: hughsaunders: will need to review https://review.openstack.org/217014 - I'm the author.13:20
hughsaunderswill do13:20
odyssey4mebut yeah, I think that needs to be done before deployment13:20
odyssey4mehughsaunders already has :)13:20
hughsaundersalready done :)13:21
odyssey4memattt will probably be happy to approve now that the master commit is merged13:21
*** k_stev has quit IRC13:21
cloudnullalso if we can get https://review.openstack.org/#/c/219457 accepted (assuming its something we all want) and backported to kilo itll be great13:21
cloudnullmhayden: ^ - cc13:21
matttodyssey4me: bah, i knew i saw that somewhere, couldn't remember where it was :-/13:22
odyssey4mecloudnull so I'm going to revise https://review.openstack.org/218611 to make the AIO use the locally set mirror, but also include changes in the gate check to test whether it's in rax/hpcloud and to set the mirror to the default mirrors for the cloud in question13:25
*** k_stev has joined #openstack-ansible13:25
odyssey4mewe can recheck that for a bit and see how it goes, but it seems that even using ubuntu archive is helpful for improving the hpcloud success rate13:25
hughsaunderscloudnull: you want named veth pairs before the kilo release?13:25
cloudnullyes. if possible.13:26
cloudnullright now every container restart potentially creates a new veth, leaving an old one behind.13:26
odyssey4methat said, hpcloud-b4 still always breaks - the containers can't speak beyond the host, so I'm suspecting that our iptables clearing isn't helping and will run some tests to target that issue afterwards13:26
cloudnulli have an AIO with 500 + veths on it when it should only have around 12013:27
odyssey4mecloudnull oh dear, so that's another 11.2.0 blocker?13:27
cloudnullnaming them makes that go away.13:27
odyssey4medo we have a bug registered for this?13:27
hughsaundersahhh, I thought it was just for debug convenience13:27
odyssey4methat is a bit of an issue - evrardjp and svg would probably want to input on the conversation13:28
cloudnullit has the added side effect of providing an easy way to debug for sure, but removing the extra load on the network stack is the real benefit13:28
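In LXC terms, "naming them" means giving the host-side interface of each veth pair a fixed, predictable name via the container config. A hypothetical sketch of the idea behind https://review.openstack.org/219457 (the variable name and the truncation scheme are assumptions, not the actual patch):

    # hypothetical: derive a stable host-side veth name from the container name,
    # truncated because interface names are limited to 15 characters
    lxc_container_config_extra:
      - "lxc.network.veth.pair = {{ inventory_hostname[-8:] }}_eth0"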
mhaydencloudnull: i'd like to email out some of the dangling veth findings later today -- is that something good for the openstack-dev list with [openstack-ansible] tag?13:28
cloudnullindeed13:29
*** gparaskevas has joined #openstack-ansible13:29
odyssey4mecloudnull but changing them does result in another restart of the containers, right? if so then perhaps holding back 11.2.0 until we have that done is a good plan13:29
odyssey4meotherwise maybe an 11.3.0 should come out when it does13:29
odyssey4meI'm not keen to actually rush another entirely new thing through.13:29
evrardjpodyssey4me: sorry I didn't follow, you want my input for?13:30
*** scarlisle has quit IRC13:30
cloudnullit requires a restart to take effect but if i read the code right it will add the option and when the container restarts it will begin using the named veths. so its not forcing the restart.13:30
odyssey4meevrardjp see the above conversation about leftover veth pairs13:30
svgodyssey4me: not sure on what I should comment?13:31
svgCan't recall I had issues or ideas on veth pairs?13:32
cloudnullit's not something we've seen a lot, but when it begins to happen, on a long running cluster they'll clean themselves up over time if I'm not mistaken -cc mhayden13:33
* cloudnull needs coffee yet13:33
cloudnull when it begins to happen they stack up quickly when containers are restarted.13:33
odyssey4mecloudnull honestly I think that it's best that we do two things - one is to implement named veth pairs for liberty, then backport it into kilo in a feature release; the other is to provide some way of cleaning up the leftovers as a workaround until the named veth pairs are implemented13:34
evrardjpodyssey4me: I read it. You (in general) have found something important13:34
evrardjpred*13:34
evrardjpno doc impact on the commit though?13:34
evrardjpwhat should I look at to give you input about my current drift?13:35
odyssey4meevrardjp I have no idea - the guys in the US are looking into it.13:37
evrardjpthe id of the veth: on ip link or the amount of veth on a host?13:37
odyssey4memhayden can tell you, I think13:37
*** tlian has joined #openstack-ansible13:39
evrardjpI have ids that go up to 1600, but only a small number are really assigned, so I'm not sure of the problem... will wait for mhayden's input then13:40
cloudnullodyssey4me: im good with waiting on a major release .13:41
cloudnull11.3 is fine if we cant make it go for 11.213:42
gparaskevasodyssey4me: quick question: can you define a floating range in the openstack_user_config? And where does public_network bind?13:42
odyssey4mecloudnull yep - we can perhaps ship a workaround/cleanup script into a hotfix for operational execution until the major release is out13:42
mgariepyevrardjp, concerning keepalived, if I have 3 hosts on which i want to configure vrrp and stuff i guess i would need different priority for all of them ?13:43
openstackgerritMerged stackforge/os-ansible-deployment: Disable python buffering for gate checks  https://review.openstack.org/21859513:43
odyssey4megparaskevas honestly I have no idea off-hand13:43
openstackgerritMajor Hayden proposed stackforge/os-ansible-deployment: Add profiling for Ansible tasks  https://review.openstack.org/21684913:43
gparaskevasodyssey4me: ok! thanks !13:43
mhaydenevrardjp: the issue we saw is that a container goes down and the host side of the veth is still on the bridge13:44
mhaydenbut nothing from the container is connected to it13:44
mhaydenit's like a dangling ethernet cable ;)13:44
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Update how neutron migrations are handled  https://review.openstack.org/21558413:45
odyssey4memhayden what do you think of my suggested approach mentioned to cloudnull ?13:46
mhaydenodyssey4me: getting named-veths merged and backported to kilo? or something else?13:46
odyssey4memhayden a script to clean up and the named veth setup - aimed at master13:47
mhaydensounds good to me13:47
mhaydeni need to update the named-veth patch a bit based on cloudnull's good suggestion13:47
odyssey4methen if need be we can backport the script alone to kilo, or do another feature release if it's confirmed to be good and ready for production use13:47
cloudnullApsu:  has one of those. He should PR that to liberty for review13:47
mhaydenas for the script, we're looking for something we can run that will auto-remove dangling veths?13:47
odyssey4memhayden personally I think that perhaps it shouldn't auto-run or even run in ansible13:48
odyssey4meit should just be a tool available to operators - they can choose when to run it13:48
cloudnullif we can make it part of the existing lxc-system-manage command it would be idea.13:48
cloudnull&ideal.13:48
cloudnull**13:48
* cloudnull still needs coffee13:48
odyssey4mecloudnull yeah, that would be nice if it's at all possible13:49
cloudnull++13:49
mhaydenodyssey4me / cloudnull: totally agree -- i think ol' Apsu already has quite a nice script ;)13:49
cloudnullyea idk for sure. ive not seen Apsu 's script13:49
* mhayden cozies up next to Apsu and asks him if i can toss his script into a gerrit review ;)13:49
odyssey4methe script may need some baking too - someone ran it during an install the other day and it broke everything :p13:49
mhaydenoops13:50
Sam-I-Ammhayden: the vetherator script?13:50
cloudnullSam-I-Am:  obviously .13:50
mhaydenhah yeah13:50
mhaydenwhy run the script when you could just reboot the server?13:50
cloudnullthe script does more than one thing, thus .*erator13:50
* mhayden hides13:50
odyssey4meanyway - I think the spec should be updated to make it cover both the issue and the convenience, not just the convenience13:50
Sam-I-Amits like terminator, but with veth pairs13:51
cloudnullmhayden: https://gist.github.com/cloudnull/936146113:51
cloudnullyw13:51
odyssey4methen we need the script and veth config submitted, probably in separate commits/reviews to make it easier to selectively backport13:51
mhaydencloudnull: ah, haven't seen that one13:51
Sam-I-Am*hopefully* this named veth pair change resolves the vethelberry problem, but having the script makes sense for existing deploys13:51
cloudnullfixes openstack guaranteed.13:51
mhaydenApsu's script did some interesting things in bash i haven't seen done much before ;)13:51
mhaydencloudnull: need that in a sticker13:51
Sam-I-Ammhayden: dont look at source, just run as root13:52
cloudnullmhayden: we do13:52
*** k_stev has quit IRC13:52
mhaydenSam-I-Am: that's how it works with Go, right? wget, chmod +x, ./run WOOOO13:52
*** k_stev has joined #openstack-ansible13:52
Sam-I-Ammhayden: pretty much13:53
Sam-I-Ammhayden: thinking i should test the veth namerator in juno, then apply patch to kilo, upgrade13:53
Sam-I-Amsee what blows up13:53
mhaydenthat'd be interesting13:54
mhaydencloudnull mentioned there were some juno-isms that might make named-veths a less than good idea13:54
cloudnull++ juno.*13:54
evrardjpok I'm back in the conversation13:54
mhaydenevrardjp: howdy!13:55
evrardjpI don't see much dangling veth on my host side13:55
evrardjpis there some kind of garbage collection ?13:55
Sam-I-Amcloudnull: increment juno?13:55
Sam-I-Amgehhh, i replied to a thread on the os mailing list13:55
evrardjpmhayden: I have veth link ids like 1600, but only a few hundred veths for the current deployment. but I sometimes reboot, so it cleans up naturally at that time13:56
mhaydenevrardjp: one of the theories (based on an lxc thread) is that open tcp connections might hold the veth open13:56
evrardjpok13:56
mhaydenwe were seeing 1-5 veths dangling after rebooting all containers13:56
Sam-I-Amor its something with cgroups breaking on teardown13:57
mhaydensome disappeared on their own after a period of time13:57
Sam-I-Amperhaps a timing/race issue13:57
mhaydenbut some hung out for a long time13:57
Sam-I-Ammhayden: they're social veth pairs13:57
evrardjpafter rebooting or after teardown.sh ?13:57
mhaydenevrardjp: just after stopping containers13:57
Sam-I-Amrebooting clears up the things13:57
evrardjpdo you want me to test on my containers to have a confirmation?13:57
Sam-I-Amevrardjp: the patch?13:58
evrardjpI'll use something to kill the tcp connections13:58
mhaydenthe funny thing is that naming each veth seems to eliminate the problem in my testing13:58
cloudnullodyssey4me:  in leiberty, IMO this is a priority if we can get it through soon-ish https://review.openstack.org/#/c/21558413:58
Sam-I-Ammhayden: hence mandelbug13:58
cloudnullmattt: -cc13:59
evrardjpSam-I-Am: you mean the vextherminator?13:59
cloudnullthere's an issue with neutron db migrations still and that resolves it 100% in my testing13:59
evrardjpor the renaming?13:59
evrardjpok13:59
evrardjpwe can just say that sometimes we need to reboot hosts13:59
evrardjpif it's in the docs, under a known issue section, you don't have to hurry a backport14:00
Sam-I-Amevrardjp: most of the time people don't reboot containers until an upgrade14:00
Sam-I-Amso that seems to be the first time this turns into a problem14:00
odyssey4mecloudnull agreed - although mattt was still busy doing some more work on the patch, so I've held back14:00
Sam-I-Amand only if the dangler has a mac/ip14:00
odyssey4memattt is that ready for review yet?14:00
cloudnullkk i've got to run. see you all back  online late.r14:01
Sam-I-Amcloudnull: running is bad, mmmkay14:01
*** markvoelker has joined #openstack-ansible14:01
cloudnullhave a good one guys.14:01
evrardjpthx bye!14:01
*** cloudtrainme has joined #openstack-ansible14:01
evrardjpSam-I-Am: so it shouldn't be a problem until upgrade, so it could be part of the known issues/procedure14:02
mhaydencloudnull: good call on the string splicing for named-veths14:02
matttodyssey4me: yeah, there is more work to do but i think that can go through for now14:02
odyssey4memattt awesome, thanks :)14:02
matttoh my goodness, the cinder-backup review finally passed14:02
evrardjp:)14:03
evrardjpmattt I'll test it in early october if everything goes fine14:03
evrardjpI'll check the impact of the named veth pairs too, it seems important14:03
matttevrardjp: nice thanks!14:04
Sam-I-Amevrardjp: definitely in known issues14:05
odyssey4memattt yep :)14:05
odyssey4memattt failed_when: false in https://review.openstack.org/#/c/215584/12/playbooks/roles/os_neutron/tasks/neutron_db_setup.yml,cm line 4714:06
*** spotz_zzz is now known as spotz14:06
openstackgerritMajor Hayden proposed stackforge/os-ansible-deployment: Used named veth pairs that match container  https://review.openstack.org/21945714:06
*** markvoelker has quit IRC14:06
*** cloudtrainme has quit IRC14:06
odyssey4meis that right?14:06
Sam-I-Amodyssey4me: i *think* so14:07
Sam-I-Ambut i'd have to poke moar14:08
matttodyssey4me: i believe that's right assuming i'm using failed_when correctly14:08
matttwe don't want that task to fail if grep finds nothing14:08
evrardjpif the command fails hard before?14:10
odyssey4memattt if you run the plays on a blank slate, but then also on a db that's been initialised - does it work both ways?14:10
evrardjpI'd rather register the response code, and use another task to check response code and grep content14:10
matttevrardjp: yeah that was actually what i needed to do in follow-up work14:11
*** phalmos has joined #openstack-ansible14:11
matttevrardjp: git-harry was concerned that the command itself would fail and we'd end up doing the wrong thing14:11
matttodyssey4me: it will work both ways, yep14:11
odyssey4memattt ok, happy to improve in follow up work - I just didn't want to create another blocker14:12
*** cloudtrainme has joined #openstack-ansible14:12
Sam-I-Amgoing for a walk bbiab14:13
evrardjpmattt: git-harry is right. And I don't say that just because it was my concern too :p14:14
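The two approaches being weighed look roughly like this (a sketch of the pattern with a placeholder command, not the actual task in the review):

    # as merged: never fail, even if the command itself blows up
    - name: Check for an existing migration marker (sketch)
      shell: "some-check-command | grep some-marker"
      register: marker_check
      failed_when: false

    # the suggested refinement: only treat real command failures as errors
    # ("grep found nothing" exits 1, which is expected here)
    - name: Check for an existing migration marker (sketch)
      shell: "some-check-command | grep some-marker"
      register: marker_check
      failed_when: marker_check.rc not in [0, 1]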
hughsaundersI can't find a bug for excess veth pairs, anyone know if there is one?14:14
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Removed rpc-repo upstream pip deps  https://review.openstack.org/21618114:15
*** phalmos has quit IRC14:20
mhaydenodyssey4me: bumped the named-veths patch and left the WIP off since it's further along now14:21
*** javeriak has joined #openstack-ansible14:22
*** k_stev has quit IRC14:25
mhaydenhughsaunders: imma make that bug for the danglers14:26
hughsaundersmhayden: thanks :)14:26
*** sdake has quit IRC14:28
*** cloudtrainme has quit IRC14:29
*** javeriak has quit IRC14:29
*** javeriak has joined #openstack-ansible14:30
*** alextricity has joined #openstack-ansible14:30
*** shausy has quit IRC14:30
*** sdake has joined #openstack-ansible14:31
*** cloudtrainme has joined #openstack-ansible14:32
openstackgerritMajor Hayden proposed stackforge/os-ansible-deployment: Used named veth pairs that match container  https://review.openstack.org/21945714:32
openstackgerritMajor Hayden proposed stackforge/os-ansible-deployment: Used named veth pairs that match container  https://review.openstack.org/21945714:33
*** javeriak_ has joined #openstack-ansible14:35
*** javeriak has quit IRC14:36
mhaydenhughsaunders: not sure what else to tickle on https://bugs.launchpad.net/openstack-ansible/+bug/1491440 -- just assigned it to myself14:37
openstackLaunchpad bug 1491440 in openstack-ansible "Restarting containers leads to 'dangling' veth interfaces" [Undecided,New] - Assigned to Major Hayden (rackerhacker)14:37
*** phalmos has joined #openstack-ansible14:39
hughsaundersmhayden: lgtm, for completeness it could be targeted at a milestone probably 11.2.0. Can be pushed back if not ready for some reason.14:40
mhaydengot it14:42
*** markvoelker has joined #openstack-ansible14:42
*** markvoelker has quit IRC14:42
*** markvoelker has joined #openstack-ansible14:43
*** cloudtrainme has quit IRC14:43
*** javeriak_ has quit IRC14:46
*** cloudtrainme has joined #openstack-ansible14:46
evrardjpI have a question about cinder configuration of default AZ14:48
evrardjpin the doc we define cinder_storage_availability_zone under the storage_hosts14:49
openstackgerritMerged stackforge/os-ansible-deployment: Change AppArmor profile application order  https://review.openstack.org/21701414:49
evrardjpin container_vars14:49
evrardjpbut by default cinder scheduler is NOT on the storage hosts, it's on storage-infra_hosts14:50
evrardjpthey could be the same, but not necessarily, right?14:50
*** javeriak has joined #openstack-ansible14:51
evrardjpso it means the cinder_storage_availability_zone isn't set on the storage-infra_hosts, and so cinder scheduler thinks the cinder_storage_availability_zone is nova14:51
evrardjp(or to be more precise: (storage|default)_availability_zone is nova in cinder.conf for cinder scheduler containers)14:52
evrardjpis this a bug or a feature?14:52
evrardjpwill I break everything if I change the AZ in cinder_scheduler?14:52
*** galstrom_zzz is now known as galstrom14:53
*** cloudtra_ has joined #openstack-ansible14:55
Sam-I-Ammhayden: should bug 1491440 be partial-bug in the commit message?14:56
openstackbug 1491440 in openstack-ansible "Restarting containers leads to 'dangling' veth interfaces" [Undecided,In progress] https://launchpad.net/bugs/1491440 - Assigned to Major Hayden (rackerhacker)14:56
*** cloudtrainme has quit IRC14:56
odyssey4mehmm, evrardjp I haven't worked with that in several cycles but as I recall cinder-volume should be assigned an AZ whereas cinder-scheduler schedules volumes into the AZ's14:56
odyssey4meSam-I-Am yes, as the script is the thing that will remove the dangling veths - the naming of the veths is a feature with the spec14:57
*** Mudpuppy has joined #openstack-ansible14:59
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Ensure rsync restarts fully during swift setup  https://review.openstack.org/21734114:59
*** cloudtra_ has quit IRC14:59
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Add sorting_method to swift proxy config as needed  https://review.openstack.org/20881715:00
*** KLevenstein has quit IRC15:00
*** woodard has quit IRC15:00
*** cloudtrainme has joined #openstack-ansible15:01
*** KLevenstein has joined #openstack-ansible15:03
*** woodard has joined #openstack-ansible15:03
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Enable tempest testing of ceilometer  https://review.openstack.org/21760015:03
*** KLevenstein has quit IRC15:04
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Additional retries for ssh wait check  https://review.openstack.org/21963815:04
mhaydenSam-I-Am: that might be better15:05
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Adds the ability to provide user certificates to HAProxy  https://review.openstack.org/21552515:05
evrardjpodyssey4me: my concern is the following: If I create a volume with horizon, and I define the cinder AZ plus the cinder backend type, it works fine15:07
evrardjpif I create the volume, and don't mention the AZ ("Any AZ"), then the volume fails to create15:08
evrardjpbecause horizon tries to use "nova" as AZ15:08
odyssey4meevrardjp hmm, isn't there meant to be a default availability zone setting somewhere?15:08
odyssey4medoes horizon try to do that, or is that a default either in nova or cinder?15:08
odyssey4me(or both)15:09
evrardjpI think it's how we deploy15:09
evrardjpI think (no confirmation yet), that the variable I explained before isn't set on the scheduler, and therefore horizon gives nothing, and cinder scheduler uses its default, which is nova15:10
evrardjpI'll tell you that when my cinder has finished redeploying15:10
evrardjpis redeployed*15:10
odyssey4mewait, horizon has nothing in the list?15:10
evrardjpnope15:11
evrardjpmy system is redeployed15:11
evrardjpand it's confirmed15:11
odyssey4meok, if you list the zones in cinder - do yuo see any?15:11
evrardjpyes, and in horizon too15:12
evrardjpit's perfectly fine15:12
evrardjpeverything works as it should be15:12
*** gparaskevas has quit IRC15:12
odyssey4meoh - so horizon and cinder do show zones?15:12
evrardjpyes15:12
evrardjphowever, cinder-scheduler has nova as default AZ15:12
*** phalmos has quit IRC15:12
evrardjpand as AZ15:12
odyssey4meah15:12
evrardjpwhich isn't good15:12
evrardjpso the variables that we are setting under storage_hosts15:13
evrardjpfor AZ15:13
evrardjpshould also be set on storage-infra_hosts15:13
evrardjpnot sure how to have a clean version of it though15:13
odyssey4meyeah, that sounds like a bug related to the switch from cohabitating cinder-volume and cinder-scheduler to splitting them out15:13
*** k_stev has joined #openstack-ansible15:14
evrardjpif you put them on the same host, it won't be a problem15:14
odyssey4meso you're definining them in the openstack_user_config?15:14
*** phalmos has joined #openstack-ansible15:15
evrardjpwhat I had before (and thus failing), was a definition of cinder_storage_availability_zone and cinder_default_availability_zone in container_vars under storage_hosts in openstack_user_config15:15
evrardjp(I'm skipping a few levels, but you understand ;) )15:16
evrardjpLike described in the doc15:16
odyssey4meso I would think that user_variables should be used to set the default zone site-wide, then each storage host should have a specific zone allocated15:16
evrardjpthat would work15:16
odyssey4meI don't know if it makes sense to set a storage zone on a scheduler, nor do I think it makes sense to set a default zone on a storage server.15:17
evrardjpbut that's not how it's described on the doc15:17
odyssey4meyep, this is a bug for sure15:17
odyssey4meyou aren't going mad :p15:17
evrardjpodyssey4me: still, there are 2 variables in the scheduler15:18
evrardjpcinder scheduler container I mean15:18
odyssey4meyep, I'm just not sure if it gets used15:18
evrardjpstorage_availability_zone and default_.*15:18
evrardjpok15:18
evrardjplet me try and tell you15:18
odyssey4metry only setting the default for the scheduler, and the zone on the storage server15:19
hughsaundersso where is the best place for group vars? Because this could go in group_vars/cinder_all. However we don't seem to have any group vars except for hosts and all.15:19
odyssey4mehughsaunders sure, but then would openstack_user_config be able to override it on a per host basis?15:19
odyssey4meit seems to me more that we just set a default in the role and user_config is used to override the zone var on a host by host basis... and the default zone is set in user_variables15:20
hughsaundersI presume so, as a hostvar should override a group var15:20
odyssey4meI don't think a group var is needed anyway, but if host vars override then that's good. :)15:21
evrardjpI think it could be useful to have a group_vars cinder_all if the scheduler has a problem with only having default_ and no storage_availability_zone15:28
evrardjpI need to think of all implications though15:29
*** cloudtrainme has quit IRC15:29
*** cloudtrainme has joined #openstack-ansible15:30
evrardjpok it works15:31
odyssey4meevrardjp but the role has a default - so I'm not sure what the issue is15:31
evrardjpno need to worry about storage_availability_zone15:31
odyssey4mecool, thought as much15:31
odyssey4mewe should perhaps not even bother placing it in cinder.conf for the hosts that don't need it (like we're doing for neutron)15:32
evrardjpthe role has a default of nova for both storage_availability_zone and default_availability_zone15:32
evrardjplike the schedulers?15:32
odyssey4mecan you register a bug with the doc issue and your findings15:32
evrardjpyup15:32
evrardjpI'll15:32
odyssey4meyeah - ie don't put default_zone in the storage node's cinder.conf15:33
odyssey4meand don't put storage_zone in the scheduler's cinder.conf16:33
odyssey4methat makes it less confusing - otherwise you'll see the mismatch and get confused15:33
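Summing up the suggested split, as a sketch using the variable names mentioned above (zone names and addresses illustrative):

    # user_variables.yml - site-wide default used by the scheduler/API side
    cinder_default_availability_zone: mainDC

    # openstack_user_config.yml - per-host zone for each cinder-volume host
    storage_hosts:
      storage-maindc:
        ip: 172.29.236.20
        container_vars:
          cinder_storage_availability_zone: mainDC
      storage-backupdc:
        ip: 172.29.236.21
        container_vars:
          cinder_storage_availability_zone: backupDC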
odyssey4memattt - are you still around? how's the testing going of maas for https://review.openstack.org/217098 ?15:34
*** britthouser has quit IRC15:34
openstackgerritMajor Hayden proposed stackforge/os-ansible-deployment: Used named veth pairs that match container  https://review.openstack.org/21945715:36
mhayden^^ switched to Partial-Bug per Sam-I-Am15:36
odyssey4memhayden I thought it was already.15:37
odyssey4meanyway15:37
matttodyssey4me: hey, question ... neutron_db_revision ... do we actually still need that now, or will we always point to head/heads ?15:39
odyssey4memattt I think when liberty releases then it might change - dunno15:39
matttodyssey4me: yeah it's not clear, kilo points to head, juno points to head15:42
matttodyssey4me: icehouse seems to point to icehouse tho15:42
odyssey4memattt well, perhaps the question should be - is it static through the whole cycle or not, and should a user ever change it?15:43
odyssey4meif it's always static and the user never needs to override it, then no var is needed15:43
matttodyssey4me: hmm, think that may be a question for cloudnull15:46
mattti don't believe a user would ever change that15:46
odyssey4meI'm not even sure that a dev would change it?15:47
matttyeah agreed15:47
openstackgerritJean-Philippe Evrard proposed stackforge/os-ansible-deployment: More complete explanation of availability zones  https://review.openstack.org/21975115:49
*** markvoelker has quit IRC16:03
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: [WIP] Remove unnecessary tasks in neutron_db_setup.yml  https://review.openstack.org/21975916:03
*** k_stev has quit IRC16:10
openstackgerritMatthew Kassawara proposed stackforge/os-ansible-deployment-specs: Add spec for the Liberty cycle upgrade path  https://review.openstack.org/20771316:11
*** shoutm has joined #openstack-ansible16:12
*** devlaps has joined #openstack-ansible16:16
*** cloudtra_ has joined #openstack-ansible16:25
*** cloudtrainme has quit IRC16:27
*** woodard has quit IRC16:28
*** cloudtra_ has quit IRC16:32
*** galstrom is now known as galstrom_zzz16:33
*** cloudtrainme has joined #openstack-ansible16:34
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: [WIP] Set the Ubuntu mirror used based on the environment  https://review.openstack.org/21861116:34
*** woodard has joined #openstack-ansible16:37
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: [WIP] Set the Ubuntu mirror used based on the environment  https://review.openstack.org/21861116:42
*** cloudtrainme has quit IRC16:45
odyssey4memattt did you get an AIO up to test maas for https://review.openstack.org/217098 ?16:55
mhaydenwell, dangling veths came back to get me once the box started swapping16:57
mhaydensigh16:57
*** gparaskevas has joined #openstack-ansible17:04
*** cloudtrainme has joined #openstack-ansible17:06
*** phalmos has quit IRC17:17
*** woodard has quit IRC17:18
*** jbweber has joined #openstack-ansible17:28
*** cloudtrainme has quit IRC17:29
*** javeriak has quit IRC17:32
*** markvoelker has joined #openstack-ansible17:35
*** markvoelker has quit IRC17:35
*** markvoelker has joined #openstack-ansible17:36
*** javeriak has joined #openstack-ansible17:38
*** galstrom_zzz is now known as galstrom17:45
*** javeriak_ has joined #openstack-ansible17:46
*** woodard has joined #openstack-ansible17:47
*** javeriak has quit IRC17:48
*** k_stev has joined #openstack-ansible17:54
*** cloudtrainme has joined #openstack-ansible18:00
*** markvoelker has quit IRC18:05
*** woodard has quit IRC18:07
*** sdake_ has joined #openstack-ansible18:13
*** sdake has quit IRC18:17
*** sdake has joined #openstack-ansible18:21
openstackgerritMerged stackforge/os-ansible-deployment: Implement tox.ini config for bashate and pep8 tests  https://review.openstack.org/21617018:23
*** jwagner_away is now known as jwagner18:24
*** sdake_ has quit IRC18:25
*** javeriak has joined #openstack-ansible18:30
*** phalmos has joined #openstack-ansible18:30
*** javeriak_ has quit IRC18:31
openstackgerritMerged stackforge/os-ansible-deployment: [OSAD doc] Update link to Cloud Admin Guide  https://review.openstack.org/21908518:39
*** phalmos has quit IRC18:39
*** phalmos has joined #openstack-ansible18:41
*** sdake_ has joined #openstack-ansible18:41
*** woodard has joined #openstack-ansible18:43
*** sdake has quit IRC18:45
*** ybabenko has joined #openstack-ansible18:47
*** woodard has quit IRC18:48
*** harlowja has quit IRC18:54
*** harlowja has joined #openstack-ansible18:58
*** markvoelker has joined #openstack-ansible19:00
*** phalmos has quit IRC19:01
*** ybabenko has quit IRC19:02
*** aslaen has joined #openstack-ansible19:09
*** phalmos has joined #openstack-ansible19:12
mhaydenso after digging through lxc code, and some fun debugging, it appears that LXC simply sends a halt signal to the init process within the LXC container19:14
mhaydenand it expects the system to clean up after the container19:14
*** mgariepy has left #openstack-ansible19:17
*** mgariepy has joined #openstack-ansible19:17
*** aslaen has quit IRC19:25
*** Mudpuppy has quit IRC19:28
*** Mudpuppy has joined #openstack-ansible19:31
*** metral is now known as metral_zzz19:31
*** Mudpuppy has quit IRC19:33
*** metral_zzz is now known as metral19:33
*** Mudpuppy has joined #openstack-ansible19:35
*** Mudpuppy has quit IRC19:38
*** Mudpuppy has joined #openstack-ansible19:38
*** jbweber has quit IRC19:41
*** woodard has joined #openstack-ansible19:50
*** javeriak_ has joined #openstack-ansible19:58
*** javeriak has quit IRC19:58
*** jmckind has joined #openstack-ansible20:04
*** alop has joined #openstack-ansible20:10
*** ybabenko has joined #openstack-ansible20:10
*** cloudtrainme has quit IRC20:11
*** Mudpuppy has quit IRC20:14
*** Mudpuppy has joined #openstack-ansible20:14
*** sdake_ is now known as sdake20:15
*** gparaskevas has quit IRC20:17
*** markvoelker has quit IRC20:21
*** cloudtrainme has joined #openstack-ansible20:25
*** galstrom is now known as galstrom_zzz20:26
*** Mudpuppy has quit IRC20:29
*** markvoelker has joined #openstack-ansible20:37
*** Mudpuppy has joined #openstack-ansible20:41
*** harlowja has quit IRC20:45
*** k_stev has quit IRC20:46
*** markvoelker has quit IRC20:48
*** Mudpuppy has quit IRC20:53
*** javeriak_ has quit IRC21:01
*** harlowja has joined #openstack-ansible21:21
*** woodard has quit IRC21:27
*** cloudtrainme has quit IRC21:36
*** aslaen has joined #openstack-ansible21:50
*** jmckind has quit IRC21:54
*** jaypipes has quit IRC22:10
*** shoutm has quit IRC22:14
*** sigmavirus24_awa is now known as sigmavirus2422:23
*** aslaen has quit IRC22:26
*** phalmos has quit IRC22:35
openstackgerritMerged stackforge/os-ansible-deployment: Switch Nova/Tempest to use/test Cinder API v2  https://review.openstack.org/21404522:54
openstackgerritMerged stackforge/os-ansible-deployment: Fixing haproxy-playbook fails when installing on multiple hosts  https://review.openstack.org/21557922:55
*** ybabenko has quit IRC22:55
*** spotz is now known as spotz_zzz23:06
*** arbrandes1 has joined #openstack-ansible23:10
*** arbrandes has quit IRC23:11
*** sdake has quit IRC23:15
*** sdake has joined #openstack-ansible23:27
*** alop has quit IRC23:30
*** arbrandes1 has quit IRC23:36
*** harlowja has quit IRC23:42
*** harlowja has joined #openstack-ansible23:43
*** markvoelker has joined #openstack-ansible23:46
*** shoutm has joined #openstack-ansible23:49
*** arbrandes1 has joined #openstack-ansible23:54
*** markvoelker has quit IRC23:58
