Tuesday, 2015-04-14

*** sacharya has joined #openstack-ansible00:48
*** KLevenstein has joined #openstack-ansible00:59
*** KLevenstein has quit IRC01:00
*** sacharya has quit IRC01:03
*** daneyon has joined #openstack-ansible01:27
*** daneyon has quit IRC01:31
*** stevemar has joined #openstack-ansible02:40
javeriak: hey, can we pre-define which IPs the infra containers will take?03:06
*** stevemar has quit IRC03:06
*** froots has quit IRC03:06
*** hughsaunders has quit IRC03:06
openstackgerritNolan Brubaker proposed stackforge/os-ansible-deployment-specs: Propose new developer documentation spec  https://review.openstack.org/17315503:10
*** stevemar has joined #openstack-ansible03:13
*** froots has joined #openstack-ansible03:13
*** hughsaunders has joined #openstack-ansible03:13
*** daneyon has joined #openstack-ansible03:16
*** bluebox has joined #openstack-ansible03:17
*** sacharya has joined #openstack-ansible03:19
*** britthouser has joined #openstack-ansible03:19
*** daneyon has quit IRC03:21
openstackgerritNolan Brubaker proposed stackforge/os-ansible-deployment-specs: Propose new developer documentation  https://review.openstack.org/17315503:21
*** britthou_ has joined #openstack-ansible03:21
*** javeriak has quit IRC03:23
*** britthouser has quit IRC03:24
*** sdake has joined #openstack-ansible03:56
*** sdake_ has quit IRC03:58
*** sdake_ has joined #openstack-ansible04:02
*** sdake has quit IRC04:04
*** JRobinson__ is now known as JRobinson__afk04:18
*** britthou_ has quit IRC04:22
*** britthouser has joined #openstack-ansible04:23
*** bluebox has quit IRC04:27
*** JRobinson__afk has quit IRC04:32
*** JRobinson__afk has joined #openstack-ansible04:34
*** JRobinson__afk is now known as JRobinson__04:37
*** bilal has joined #openstack-ansible04:41
*** bluebox has joined #openstack-ansible04:42
*** javeriak has joined #openstack-ansible04:43
*** bluebox has quit IRC04:43
bilal: I have a 3-controller rackspace v10 up and running. When I try to create a network I'm getting ERROR: neutronclient.shell <html><body><h1>504 Gateway Time-out</h1> The server didn't respond in time.04:44
*** sacharya has quit IRC04:53
*** daneyon has joined #openstack-ansible05:04
*** daneyon has quit IRC05:09
*** mahito has joined #openstack-ansible05:10
*** ishant has joined #openstack-ansible05:14
*** JRobinson__ has quit IRC05:57
*** javeriak has quit IRC06:01
*** javeriak has joined #openstack-ansible06:01
*** javeriak has quit IRC06:08
*** javeriak has joined #openstack-ansible06:08
*** javeriak has quit IRC06:13
*** javeriak has joined #openstack-ansible06:14
*** javeriak has quit IRC06:43
*** daneyon has joined #openstack-ansible06:53
*** daneyon has quit IRC06:58
*** stevemar has quit IRC07:09
odyssey4me: bilal, it sounds like your load balancer can't talk to the back-end service, which is most likely a service misconfiguration of some sort08:06
bilal: odyssey4me: which configuration files should I check?08:07
odyssey4me: bilal, no, I mean that you've either misconfigured an IP somewhere, or the LB IP, or something like that - is your load balancer working? Check its health and status08:08
odyssey4me: A pertinent question would probably also be whether you've configured a load balancer at all08:09
bilal: the load balancer is configured and it's working. Also, the requests to authenticate against keystone etc. are going to the right IP when I check the logs08:18
*** sdake has joined #openstack-ansible08:22
bilal: not sure about neutron... every neutron request should also go through the LB first, right?08:22
*** sdake_ has quit IRC08:26
odyssey4me: yes, all the services are configured to go through the LB address unless you've changed them to do otherwise08:30
*** sdake has quit IRC08:30
odyssey4me: the 504 message usually comes back from the LB08:30
odyssey4me: I guess you'll have to track down whether it is coming from the LB or from one of the back-ends08:30
mattt: odyssey4me, that message looks like an haproxy message08:31
odyssey4me: mattt, agreed08:32
bilal: here is the traceback: neutron --debug net-create net1 DEBUG: keystoneclient.session REQ: curl -g -i -X GET http://10.22.37.149:35357/v2.0 -H "Accept: application/json" -H "User-Agent: python-keystoneclient" DEBUG: keystoneclient.session RESP: [200] date: Tue, 14 Apr 2015 06:21:57 GMT vary: X-Auth-Token content-length: 423 content-type: application/json server: Apache/2.4.7 (Ubuntu)  RESP BODY: {"version": {"status": "stable08:41
odyssey4me: bilal, you'll need to put that into a pastebin, not into IRC08:42
bilal: [{"href": "http://10.22.37.149:35357/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}}  DEBUG: stevedore.extension found extension EntryPoint.parse('table = cliff.formatters.table:TableFormatter') DEBUG: stevedore.extension found extension EntryPoint.parse('shell = cliff.formatters.shell:ShellFormatter') DEBUG: stevedore.extension found extension EntryPoint.parse('08:42
bilal: oh ok08:42
odyssey4me: try using http://paste.openstack.org/08:42
*** daneyon has joined #openstack-ansible08:42
bilal: http://paste.openstack.org/show/203827/08:43
mattt: bilal, try logging into the neutron-server containers to see what is going on08:46
mattt: bilal, have a poke at /var/log/neutron/neutron-server.log etc.08:47
*** daneyon has quit IRC08:47
bilal: mattt, neutron-server.log says AMQP unreachable. It is trying to find it on localhost. I don't see any AMQP service/process running in the neutron-server container. Should it be running in this container or a separate one? http://paste.openstack.org/show/203828/08:55
odyssey4me: bilal, it runs as a cluster in its own containers08:56
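The "AMQP unreachable on localhost" symptom above usually means the service config never received the rabbitmq container addresses, so oslo.messaging falls back to its localhost default. A hedged sketch of what the relevant neutron.conf fragment looks like in a juno-era deployment; the addresses, credentials, and section placement here are illustrative assumptions, not this deployment's generated values:

```ini
# Hypothetical /etc/neutron/neutron.conf fragment (juno-era oslo.messaging
# options lived under [DEFAULT]). Values are invented for illustration.
[DEFAULT]
rabbit_hosts = 172.29.236.11:5672,172.29.236.12:5672,172.29.236.13:5672
rabbit_userid = neutron
rabbit_password = changeme
# If rabbit_hosts is absent or empty, the client falls back to localhost,
# which produces exactly the "AMQP server unreachable" error in the log.
```

If the generated config has these set correctly, the next thing to check is reachability of the rabbitmq containers from the neutron-server container.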
matttbilal: get it ?09:15
openstackgerritSerge van Ginderachter proposed stackforge/os-ansible-deployment: [WIP] first shot at implementing Ceph/RBD support  https://review.openstack.org/17322909:29
mattt: svg, woo! Will test this when I get some free time!09:34
* svg just got a mail from Rackspace Support Sales saying Ceph will be supported in the kilo+1 release09:35
mancdaz: svg, ish09:38
svg: I'm looking at https://review.openstack.org/#/c/173229/ but can't seem to find the 'review' button?09:40
hughsaunders: svg, are you logged in to gerrit?09:40
svg: yes09:40
svg: just want to add the -1 (WIP)09:41
svg: Is that a permission thing?09:41
odyssey4me: svg, that's a pretty good start - I can see a few things which look odd and it's unfortunate that it's not a master patch, but thanks for the submission - it provides a great base!09:42
hughsaunders: svg, do you have this button? http://i.imgur.com/EtKWFh9.png09:42
hughsaundersI think we'll also need a spec for ceph support09:46
hughsaundersso we can agree on an approach, and potentially implement it in stages09:47
hughsaunders: svg, I've WIP'd it for you.09:47
svg: hughsaunders, no, I have https://dl.dropboxusercontent.com/u/13986042/20150414114810.png09:48
odyssey4mehmm, what browser are you using - the issue may be browser related09:49
svgfirefox09:49
odyssey4meoh, that should work09:49
hughsaunders: svg, that's the new change screen, you need to click reply09:50
svg: ok, could do a -1 from there, but it's still a totally different screen09:51
svg: same in chromium09:52
svg: perhaps I don't have review permission?09:53
svg: odyssey4me, right now it's all about getting juno to work, and ready to possibly deploy it in production; but I'll definitely work on a master patch later09:54
odyssey4me: svg, have you signed the CLA? You may not be part of a general group which would give you access09:54
svg: I did, but there was an issue with that, and someone here pointed me to another way to do it.09:56
svg: I do have a status of Verified on the ICLA in my settings09:56
odyssey4me: svg, ah, that's fairly common - I think it might need some input from #openstack-infra to determine what's causing the button to be missing09:56
*** ctgriffiths_ has joined #openstack-ansible09:57
svg: ok, nothing urgent for now, I'll check on -infra later, thx09:57
openstackgerritMerged stackforge/os-ansible-deployment: Fix bug in playbooks/library/neutron  https://review.openstack.org/17148909:57
hughsaunders: svg, so do you have any radio boxes for workflow in the reply popup?09:58
svg: yes, workflow & code-review09:58
openstackgerritMerged stackforge/os-ansible-deployment: Correctly deploy neutron_metering_agent_check  https://review.openstack.org/17157709:58
hughsaunders: svg, so all you need to do to mark a review as WIP is to set workflow to -109:59
svg: hughsaunders, yes, that's what I did in the meantime10:00
mancdaz: hughsaunders, what determines if one sees the 'new' screen or not?10:00
svg: patch on master + spec: should that still be - possibly - timely for 'kilo'? When is the v11/kilo release planned?10:01
mancdaz: svg, we are going to cut an rc1 for kilo in a few days10:02
*** ctgriffiths has quit IRC10:02
mancdaz: anything requiring a new spec would likely make it into one of the minor kilo releases: 11.1, 11.2, ...10:03
hughsaunders: mancdaz, settings > preferences > change view10:04
mancdaz: hughsaunders, ah k10:04
svg: Ah, ok, I had the new shiny view enabled. Should have reverted that as I didn't find any ponies.10:07
odyssey4meerm, interesting... this'll take some getting used to10:07
* odyssey4me is trying out the new view for no particular reason10:07
hughsaundersI disabled it because it makes the patchset chain less clear10:07
*** mahito has quit IRC10:07
hughsaunders: all related patches appear in the same list10:08
hughsaunders: in the old view, required and dependent patches are in separate lists - very clear.10:08
hughsaunders: The only real pony in the new view is live zuul job status10:08
odyssey4me: the collapsed history is quite nice, although it defaults to expanded for me (even though the button says expand)10:10
odyssey4me: I don't see a live zuul status, where is that?10:10
svg: mancdaz, does this mean that, in time, there might be an updated 10.x with ceph added?10:10
mancdazsvg it's not something that the core group would be directly working on backporting into juno10:11
mancdazthat's not to say somebody couldn't do it...10:12
svg: yes, but it might be possible if I work on it then10:12
mancdazsvg there's going to be a lot of activity working on getting ceph (block) storage support in to master10:12
mancdazwhether it's worth waiting and attempting a backport, or working completely independently on juno, I'm not sure10:13
svg: ok, I see10:13
mancdazjuno and master are quite different in terms of the way ansible hangs together, so the solutions are likely to be quite different10:14
mancdazbut if we waited for the master work, and then based the juno work on that, at least the approach would be consistent10:14
openstackgerritMerged stackforge/os-ansible-deployment: Ensure OpenStack commands are run as correct user  https://review.openstack.org/17236810:14
svg: it's just that I am in the middle of things, and will need to start using ceph support before it gets merged upstream10:15
svg: so I'm trying to find out which will be the best approach to handle that10:15
hughsaundersodyssey4me: Can't find a live example https://twitter.com/sdague/status/58360345977519308810:17
openstackgerritgit-harry proposed stackforge/os-ansible-deployment: Add HP monitoring playbook  https://review.openstack.org/17122310:26
openstackgerritgit-harry proposed stackforge/os-ansible-deployment: Add network.yml monitoring playbook  https://review.openstack.org/17006210:29
*** daneyon has joined #openstack-ansible10:31
odyssey4mehughsaunders I don't see the same in my view - it seems that I'm missing the results box on the right under the commit msg10:35
*** daneyon has quit IRC10:36
hughsaundersodyssey4me: yeah, I'm not seeing it at the moment.. maybe it has to be disabled for some reason :-/10:36
*** ishant has quit IRC10:50
*** daneyon has joined #openstack-ansible12:20
*** daneyon has quit IRC12:25
cloudnullmorning12:26
*** markvoelker has joined #openstack-ansible13:00
*** markvoelker_ has joined #openstack-ansible13:06
*** markvoelker has quit IRC13:09
*** KLevenstein has joined #openstack-ansible13:30
*** britthouser has quit IRC13:32
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users  https://review.openstack.org/17331713:33
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users  https://review.openstack.org/17331713:34
*** sdake has joined #openstack-ansible13:35
openstackgerritMatthew Kassawara proposed stackforge/os-ansible-deployment: Update keystone middleware in neutron for Kilo  https://review.openstack.org/17331813:36
*** KLevenstein has quit IRC13:37
*** sigmavirus24_awa is now known as sigmavirus2413:37
*** sdake has quit IRC13:40
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users  https://review.openstack.org/17331713:46
svg: hey guys, I think I already asked this, but I'm still not sure: as per the network interfaces for the metal hosts, what exactly is meant for use by the "Container management bridge br-mgmt"?13:51
svg: and that in relation to the container's eth0 that gets defined with a 10.0.3.x address13:52
Sam-I-Am: svg, that's the interface used for managing containers13:52
svg: Sam-I-Am, as in, the sysadmin accessing it from his machine?13:52
Sam-I-Amsvg: containers use snat to access the outside world... through that 10.0.3 range13:52
Sam-I-Ambut to get into them, the eth1 in the container attaches to br-mgmt on the host which attaches to a physical interface on the host13:53
Sam-I-Amsvg: http://docs.rackspace.com/rpc/api/v10/bk-rpc-installation/content/sec_overview_host-networking.html13:53
svg: accessing the outside world, in our setup, would happen through the management network too13:54
svg: I'm not sure why I would need that extra network13:54
Sam-I-Am: what extra network?13:54
Sam-I-Am: by default, the outbound container traffic will go through whatever has the default route13:55
svg: extra network = the container's eth0 network with SNAT // we have the mgmt network for that already13:56
Sam-I-Am: that network doesn't see the light of day13:56
svg: basically, I'd like to disable that SNAT network, not deploy it, and have my default gateway point to the network attached to br-mgmt13:57
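The two container interfaces being discussed can be sketched roughly as below. This is a hedged illustration of the layout Sam-I-Am describes, not a generated config; interface names match the discussion, but the addresses and netmask are invented:

```
# Hypothetical container /etc/network/interfaces sketch:
#   eth0 -> lxcbr0 on the host (10.0.3.x, outbound via host SNAT,
#           the network that "doesn't see the light of day")
#   eth1 -> br-mgmt on the host (container management network)
auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet static
    address 172.29.236.100    # illustrative br-mgmt address
    netmask 255.255.252.0
```

Dropping eth0/SNAT as svg proposes would mean the container's default route has to move to the br-mgmt side, which is exactly the change under discussion.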
*** sdake has joined #openstack-ansible13:57
sigmavirus24stevelle: ping13:57
stevellepong sigmavirus2413:58
sigmavirus24did you fix the horizon ssl cipher suite thing you pinged me about?13:59
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users  https://review.openstack.org/17331713:59
stevellesigmavirus24: yes13:59
sigmavirus24thank you kind sir13:59
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users  https://review.openstack.org/17331714:01
*** britthouser has joined #openstack-ansible14:03
*** jaypipes has joined #openstack-ansible14:06
*** daneyon has joined #openstack-ansible14:09
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users  https://review.openstack.org/17331714:13
*** daneyon has quit IRC14:13
*** markvoelker has joined #openstack-ansible14:19
*** markvoelker_ has quit IRC14:19
*** stevemar has joined #openstack-ansible14:30
*** Mudpuppy has joined #openstack-ansible14:33
openstackgerritMerged stackforge/os-ansible-deployment: Updated the repo scripts  https://review.openstack.org/17177714:37
*** alextric_ is now known as alextricity14:41
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Update tempest to use admin user  https://review.openstack.org/17331714:42
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users  https://review.openstack.org/17335814:45
*** stevemar has quit IRC14:49
*** stevemar has joined #openstack-ansible14:49
svg: Any extra resources available on how to design the hosts/containers layout (rpc_user_config.yml)? I'm looking at a two-datacenter setup for starters.14:51
Sam-I-Amwhat do you mean design?14:53
svghow many metal hosts are needed, and especially which components to put where14:59
Sam-I-Amin general, metal hosts are just compute15:00
Sam-I-Ami think swift storage nodes are too15:00
svg: the default setup talks about 3 hosts, where all components are equally put on all, also neutron - other openstack projects tend to put neutron on a separate dedicated host15:01
Sam-I-Amthe default is 3 infra, 1 storage, 1 compute iirc15:01
svgyes15:01
cloudnullodyssey4me: the issue is "playbooks/inventory/dynamic_inventory.py:20:1: F401 'hashlib' imported but unused"15:01
alextricityThe AIO script is failing on the XEN Server Information section. Anybody seeing the same thing?15:02
odyssey4mealextricity it shouldn't, that has || true - unless you're using a very old clone - which branch is that?15:02
cloudnullwhich went in here https://github.com/stackforge/os-ansible-deployment/commit/5341949f02a1c0ae056e84eeaf4a295ebf4a86f515:03
odyssey4mesvg one more for logging15:03
alextricityodyssey4me: it's master. All I get is [ Error Info -275 0 ]15:03
alextricityThen [ Status: Failure ]15:03
svgI know about the defaults :)15:03
svgand network_hosts is a separate group also15:04
svginfra_hosts: storage_hosts: log_hosts: network_hosts: compute_hosts:15:04
odyssey4mecloudnull odd how that didn't show up in the build result15:04
svg: the latter computes are obviously separate dedicated ones15:04
svg: infra, storage, network and log can share certain hosts15:05
svg: also, log with one host wouldn't be redundant15:05
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Fix lint issue with dynamic_inventory.py  https://review.openstack.org/17336915:06
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Add network.yml monitoring playbook  https://review.openstack.org/17006215:07
svgsay e.g. I have 4 servers to use, spread amongst two datacenters, two in each dc, how would I define those target hosts, and optionally  which containers to put where15:07
odyssey4me: svg, container membership in groups is defined in the environment file15:13
svgodyssey4me: I understand that yes15:13
odyssey4mechanging that will require quite a bit of fiddling though15:13
svg: I'm talking about how to fill in infra_hosts / storage_hosts / log_hosts / network_hosts in rpc_user_config.yml15:14
svgand possibly using the optional limit_host option15:14
svg(or was it limit container)15:14
odyssey4meso essentially your user_config needs the hosts in their groups - you can set a host from each DC in whichever groups, and as long as your inter-DC link is great then it'll be no different than everything being in one DC15:14
odyssey4mehmm, not sure - multi-DC configurations are something we have planned to tackle after kilo releases15:15
mattt: svg, sorry if I'm stating the obvious, but we typically spread containers across 3 nodes, w/ logging going on a dedicated host (for IO reasons)15:23
mattt: svg, rabbitmq/galera clusters across datacentres could be interesting, and also keep in mind galera should have an odd # of nodes15:30
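For the 4-host / 2-DC case svg describes, a user_config sketch might look like the following. Everything here is a hedged illustration: host names, IPs, and the group placement are invented, and mattt's caveats (odd galera node count, cross-DC rabbitmq latency) still apply to any real layout:

```yaml
# Hypothetical rpc_user_config.yml sketch: two hosts per datacenter.
# Three infra hosts keep galera quorum odd; the fourth host takes
# logging (dedicated for IO, per mattt) plus compute duty.
infra_hosts:
  dc1-host1:
    ip: 172.29.236.11
  dc1-host2:
    ip: 172.29.236.12
  dc2-host1:
    ip: 172.29.236.21
network_hosts:
  dc1-host1:
    ip: 172.29.236.11
  dc2-host1:
    ip: 172.29.236.21
log_hosts:
  dc2-host2:
    ip: 172.29.236.22
compute_hosts:
  dc1-host2:
    ip: 172.29.236.12
  dc2-host2:
    ip: 172.29.236.22
```

With only two DCs, whichever side holds one infra host loses quorum if the inter-DC link fails, which is why multi-DC layouts were flagged as a post-kilo topic.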
*** jwagner_away is now known as jwagner15:39
*** sacharya has joined #openstack-ansible15:41
*** markvoelker has quit IRC15:44
openstackgerritMatthew Kassawara proposed stackforge/os-ansible-deployment: Use proposed/kilo branch instead of master  https://review.openstack.org/17339715:47
*** stevemar has quit IRC15:49
*** stevemar has joined #openstack-ansible15:49
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users  https://review.openstack.org/17335815:50
b3rnard0is anyone else seeing issues with OS etherpad? i can't get anything to load15:55
Sam-I-Amb3rnard0: you need to clear your cache15:55
Sam-I-Ami had that problem this morning15:56
b3rnard0cool thanks for the pointer15:56
*** daneyon has joined #openstack-ansible15:58
cloudnullhelo peoples16:00
Sam-I-AmEHLO ?16:00
* Sam-I-Am speaks in smtp16:01
cloudnull# /quit16:01
cloudnullready for another exciting day of bug sifting ?16:01
ApsuSam-I-Am: I don't want your enhanced greetings. I'll take the standard old SMTP thanks.16:01
b3rnard0hello16:01
Apsucloudnull: Bug sifting is life16:01
Sam-I-AmApsu: HELO16:01
ApsuSam-I-Am: +116:02
b3rnard0when are we going to get meetbot in here so he can handle bug triage?16:02
Sam-I-AmApsu: mail from:16:02
*** daneyon has quit IRC16:02
b3rnard0https://etherpad.openstack.org/p/openstack_ansible_bug_triage.2015-04-14-16.0016:02
cloudnulli thought you were the meat bot b3rnard016:02
Sam-I-Amrcpt to16:02
cloudnull:)16:02
Sam-I-Amcloudnull: he's hipsterbot16:02
hughsaundersb3rnard0: you should create meatbot that orders lunch16:03
cloudnulloh thats where i went wrong16:03
cloudnull^ hughsaunders  +116:03
b3rnard0no more soup for you, hughsaunders16:03
cloudnullb3rnard0 action item16:03
cloudnull: so without further ado16:04
cloudnullhttps://bugs.launchpad.net/openstack-ansible/+bug/144136316:04
openstackLaunchpad bug 1441363 in openstack-ansible "nf_conntrack schould be unloaded on swift object server" [Undecided,New]16:04
cloudnull"causing nf_conntrack to be violated" poor nf_conntrack16:04
cloudnull: this seems like a sensible request.16:05
Sam-I-Amyes, it does16:05
cloudnullandymccr: you around ?16:05
andymccryeh im reading it now16:05
ApsuI disagree with this request. It assumes that swift object servers will be all by themselves owning an entire physical host. While that may be the current state or even a mostly desired state, it also reduces flexibility and really isn't necessary at all.16:06
ApsuIf you've got TIME_WAIT issues, set the reuse sysctl.16:06
ApsuIf you've got extremely aggressive connections, raise the conntrack limit or set the recycle sysctl.16:06
andymccr: i think Apsu is correct; as an example or possibility we may deploy rsyslog containers16:07
andymccr: on the swift hosts16:07
ApsuThis isn't a new problem and it's an inflexible hack to just remove conntrack.16:07
andymccror some other generic container that people may require16:07
ApsuAnd a really old way of trying to solve connection problems in Linux :P16:07
andymccrit also makes aio testing difficult16:07
Apsu^16:07
*** Bjoern__ has joined #openstack-ansible16:07
cloudnullboom! "wont-fix'd"16:07
Apsu+116:08
andymccrcan we resolve the actual issue in a different way though?16:08
cloudnullso can we or should we set the recycle by default ?16:08
ApsuYes. Set the reuse sysctl.16:08
andymccrok cool so we can fix that16:08
ApsuDefinitely not recycle. That's a last ditch effort on super busy boxen.16:08
ApsuIt can cause TCP state errors and lost connections16:08
cloudnull's/recycle/reuse/'16:08
Apsu+216:08
andymccryeh ok cool so leave it open with that as the triage solution16:09
cloudnullApsu can you drop some knowledge in that issue.16:09
ApsuSure.16:09
andymccrthanks Apsu!16:09
Apsunp16:09
cloudnullwhat do we think the priority is on that ?16:10
cloudnullalso do we want to backport to juno ?16:10
andymccrsure16:10
andymccrit seems sensible to me16:10
andymccrits causing issues so lets backport it16:10
cloudnullok16:10
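The triage outcome above (prefer the reuse sysctl, never recycle, and raise the conntrack ceiling rather than unloading nf_conntrack) could be captured in a sysctl fragment along these lines. The file name and the numeric ceiling are illustrative assumptions, not project defaults:

```
# Hypothetical /etc/sysctl.d/99-swift-tuning.conf sketch:
# relieve TIME_WAIT pressure on busy swift object servers without
# removing conntrack from the host.
net.ipv4.tcp_tw_reuse = 1                 # safe reuse of TIME_WAIT sockets
net.netfilter.nf_conntrack_max = 262144   # illustrative table ceiling
# deliberately NOT setting net.ipv4.tcp_tw_recycle: as Apsu notes, it can
# cause TCP state errors and lost connections (e.g. clients behind NAT).
```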
svgmattt: odyssey4me fyi both dc's have a 10gb interlink16:10
cloudnull: i've set medium and confirmed the issue. Apsu, once you drop some knowledge in there can you change it to triaged?16:11
cloudnullnext issue: https://bugs.launchpad.net/openstack-ansible/+bug/144180016:11
openstackLaunchpad bug 1441800 in openstack-ansible "add secure_proxy_ssl_header for heat" [Undecided,New]16:11
*** sdake_ has joined #openstack-ansible16:11
*** sdake_ has quit IRC16:11
Apsucloudnull: can do16:11
cloudnulltyvm sir16:12
*** sdake_ has joined #openstack-ansible16:12
cloudnull: so this also seems like a sensible request. looks like a templated config option would do the trick.16:14
cloudnull: is miguelgrinberg around?16:14
miguelgrinbergcloudnull: yup16:14
cloudnullcan you make this go https://bugs.launchpad.net/openstack-ansible/+bug/1441800 ?16:14
openstackLaunchpad bug 1441800 in openstack-ansible "add secure_proxy_ssl_header for heat" [Undecided,New]16:14
*** markvoelker has joined #openstack-ansible16:14
miguelgrinbergI certainly can16:14
*** sdake has quit IRC16:15
cloudnulli confirmed the issue and tagged it medium for both juno and trunk.16:16
cloudnulltyvm miguelgrinberg16:16
miguelgrinbergsure, I'll look into it today, just back from pycon ready to start on something16:17
cloudnullmuch appreciated miguelgrinberg16:17
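For bug 1441800, the templated option cloudnull mentions would end up as a one-line heat.conf setting telling heat to trust the SSL-terminating load balancer's forwarded-protocol header. A hedged sketch; the section placement and header name follow the common oslo convention, but the exact location can differ per release:

```ini
# Hypothetical heat.conf fragment for deployments behind an
# SSL-terminating proxy/load balancer (bug 1441800).
[DEFAULT]
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO
```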
cloudnullis regex master Bjoern__  here ?16:18
cloudnullhttps://bugs.launchpad.net/openstack-ansible/+bug/144223916:18
openstackLaunchpad bug 1442239 in openstack-ansible "Commit c5d488059d9407f1b9b96552159ffc298c8dc547 is invalidating sshd_config" [Undecided,New]16:18
*** Bjoern__ is now known as BjoernT16:18
BjoernTlol16:18
cloudnull^ there have been some updates on that issue from people in the community . did you see that ?16:18
BjoernTRegex for dummies helps, lol16:18
BjoernTI did see approd0 request16:18
cloudnull: it seems that it's confirmed that the issues are ansible related, which I assume is "1.6.10"?16:19
cloudnullin master we're running "v1.9.0.1-1" curious if we update to the latest stable if we see that same issue?16:20
BjoernT: in the end we agree that the missing linefeed triggers this issue and we're just talking about how to fix it16:20
cloudnull: yup16:21
BjoernT: supposedly not. I only tested with our 1.6.10 ansible version16:21
cloudnull: I'm of the opinion that we update to the latest stable ansible.16:22
cloudnull: we only stayed on 1.6.10 because 1.7 had delegation issues.16:22
ApsuSo you're saying the lineinfile module is behaving differently on whole line matching depending on if you have a newline or not, i.e., if it's the last line in the file or not?16:22
hughsaundersdeja-vu on that bug, I feel like I've fixed it before16:23
ApsuIn python regex that behavior is generally dictated by the presence or absence of $, but it looks like the regexp in question doesn't have that.16:23
Sam-I-Amthen you didnt fix it? :)16:23
andymccrhttps://bugs.launchpad.net/openstack-ansible/+bug/1416626 its basically this.16:25
openstackLaunchpad bug 1416626 in openstack-ansible trunk "invalid sshd_config entry after ssh_config.yml task runs" [Low,Confirmed] - Assigned to Miguel Grinberg (miguelgrinberg)16:25
andymccrthere is a link to the ansible bug16:25
miguelgrinbergthe ansible bug has been fixed in a newer release than the one we use16:25
andymccr^ that16:26
*** markvoelker has quit IRC16:26
cloudnull: so if we backport our ansible-bootstrap.sh script to juno, we can be done with it16:26
miguelgrinberghave you verified the version we use in master has this fix? I haven't16:27
Sam-I-Ami think we use 1.9.0?16:28
cloudnullyes.16:28
miguelgrinbergI think this wasn't a released fix back when I tested it, it was in master, haven't looked since then if the fix got released16:28
cloudnullv1.9.0.1-116:28
cloudnullok decreed. we'll backport that and do the things16:30
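Apsu's point about `$` anchoring can be demonstrated in a few lines of Python. The sshd_config line and pattern here are invented for illustration; they are not the regexp from the actual playbook:

```python
import re

# Without a '$' anchor, a "whole line" pattern happily matches a prefix of
# a mangled last line (e.g. one that lost its trailing newline and had the
# next write appended to it). With the anchor it does not. '$' also matches
# just before a trailing newline, so anchoring is safe for both file endings.
unanchored = re.compile(r"^PermitRootLogin yes")
anchored = re.compile(r"^PermitRootLogin yes$", re.MULTILINE)

mangled = "PermitRootLogin yesAcceptEnv LANG"  # last line missing its newline

print(bool(unanchored.search(mangled)))              # True  - false positive
print(bool(anchored.search(mangled)))                # False - anchor rejects it
print(bool(anchored.search("PermitRootLogin yes\n")))  # True - newline tolerated
```

This is the behavior class behind the sshd_config bug: an unanchored lineinfile regexp plus a file whose last line lacks a newline.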
cloudnullso next: https://bugs.launchpad.net/openstack-ansible/+bug/144236616:31
openstackLaunchpad bug 1442366 in openstack-ansible "nova user is removed from libvirtd group" [Undecided,New]16:31
cloudnull: so this seems like something we need to dig into for juno16:34
cloudnull: idk that it affects trunk16:34
Sam-I-Amhmmm16:34
BjoernTfyi, I was not able to reproduce the issue16:35
BjoernTfrom what I heard,  a wrong ansible version (lower than 1.6.10) was causing this issue.16:35
Sam-I-Amon master now, i have nova in the libvirtd group16:35
Sam-I-Amdont have a juno box atm16:36
BjoernT: yeah, I read the playbooks and couldn't see any way that could happen16:36
cloudnull: ok so, do we have anything else we want to talk about here?16:40
Sam-I-Amcloudnull: did you want to bring up the neutron bits thing?16:41
Sam-I-Amor just on this bug...16:41
* Sam-I-Am needs more coffee16:41
cloudnullah yes ,this one https://bugs.launchpad.net/openstack-ansible/+bug/144392716:41
openstackLaunchpad bug 1443927 in openstack-ansible "Neutron configuration files should depend on container type" [Undecided,New]16:41
cloudnulli say fix in kilo16:41
cloudnulldont bp to juno16:41
Sam-I-Ammakes sense16:42
Sam-I-Amits not fixing stuff that is breaking16:42
cloudnull: it's just an update to best practices.16:43
Sam-I-Amda16:43
Sam-I-Amonly case for l3/meta agents on compute nodes would be if we use dvr16:44
Sam-I-Amwhich implies... dun dun dun... ovs.16:44
cloudnullwont-fix16:44
cloudnull:)16:44
Sam-I-Amexactly16:44
Sam-I-Ameveryone has a soft spot for ovs :)16:44
openstackgerritMerged stackforge/os-ansible-deployment: Fix lint issue with dynamic_inventory.py  https://review.openstack.org/17336916:44
cloudnullso confirmed and targeted to 1116:44
cloudnulland we're done here16:44
cloudnullthanks everyone .16:45
Sam-I-Amexcellent.16:45
* cloudnull goes to eat16:45
Sam-I-Ammmmfood16:45
Sam-I-Ami should do that16:45
Sam-I-Amalso might need to rebase all of these patches for the lint fix16:45
BjoernTcan I get the status of https://bugs.launchpad.net/openstack-ansible/+bug/1428833 ?16:45
openstackLaunchpad bug 1428833 in openstack-ansible trunk "Add novnc console support in favor of spice" [High,Triaged] - Assigned to Andy McCrae (andrew-mccrae)16:45
openstackgerritMiguel Alejandro Cantu proposed stackforge/os-ansible-deployment: Implement Ceilometer[WIP]  https://review.openstack.org/17306716:46
b3rnard0BjoernT: we are officially done but that has been triaged and assigned16:46
BjoernTYeah I need to get someone working on it16:46
BjoernT: I fixed all the issues, but the spice mouse issue is still there and won't go away, it looks like16:47
b3rnard0: andymccr appears to be handling it for juno. we just need to determine target milestone/priority16:47
andymccr: the previous comment regarding grub - there isn't anything we can fix with that16:47
andymccrthe other comment is new so i havent looked at it yet16:47
BjoernTthe primary issue is windows not linux16:47
BjoernT: so we either spend more time than I already did to get the mouse synchronized with the spice-html5 proxy, or we talk about getting vnc back16:48
BjoernTI did test yesterday the support in libvirt to enable vdagent in libvirt and inside the windows guest but no luck16:49
openstackgerritMatthew Kassawara proposed stackforge/os-ansible-deployment: Use proposed/kilo branch instead of master  https://review.openstack.org/17339716:49
BjoernT: apart from the fact that openstack does not support enabling vdagent in libvirt, I used an ugly workaround to get the libvirt instance to enable it16:49
openstackgerritMatthew Kassawara proposed stackforge/os-ansible-deployment: Update keystone middleware in neutron for Kilo  https://review.openstack.org/17331816:50
BjoernTb3rnard0: let's talk once you are back16:51
alextricityHas anyone seen this error? http://pastebin.com/rFp3hMqN16:57
alextricityIt looks like the AIO is having a hard time creating containers16:57
alextricitySomething about the template, but I don't have enough info to make it out16:57
Sam-I-Amalextricity: version?17:00
alextricityMaster17:00
Sam-I-Amresources available?17:02
alextricityIt's a rackspace standard-1617:05
alextricityVM17:05
alextricity15GB RAM, 6vCPUS17:06
alextricity620GB system disk17:06
alextricityMaybe someone else can spin up an instance and give it a go. I'm following these instructions: https://github.com/stackforge/os-ansible-deployment/blob/master/development-stack.rst17:07
*** sdake has joined #openstack-ansible17:07
*** jwagner is now known as jwagner_away17:07
Sam-I-Amhave you tried rerunning that playbook?17:08
alextricityWell, I'm simply running the gate script.17:09
alextricityBut if you are asking if I tried rerunning the lxc-create play, then no17:10
alextricityI have not17:10
*** sdake_ has quit IRC17:11
Sam-I-Ami'd do that first. could be a fluke.17:13
*** sdake_ has joined #openstack-ansible17:14
*** sdake has quit IRC17:18
*** sigmavirus24 is now known as sigmavirus24_awa17:30
*** sdake has joined #openstack-ansible17:31
*** javeriak has joined #openstack-ansible17:31
*** sdake_ has quit IRC17:35
*** javeriak has quit IRC17:39
*** javeriak has joined #openstack-ansible17:40
*** sdake_ has joined #openstack-ansible17:44
*** sdake has quit IRC17:45
cloudnullwho has a v10 install that they want to break? https://gist.github.com/cloudnull/b3471271e78bb82938d4 <- WIP upgrade script to kilo - should work (TM)17:47
openstackgerritTom Cameron proposed stackforge/os-ansible-deployment: Kilofication of Neutron playbooks  https://review.openstack.org/17343517:47
cloudnullBOOM rackertom in da house!17:47
*** sdake_ has quit IRC17:48
rackertomHad to stop my birds from eating actual paint for a minute there.17:48
*** sdake has joined #openstack-ansible17:48
cloudnullhahahaha17:48
cloudnullSam-I-Am, can you sync up with rackertom  on https://bugs.launchpad.net/openstack-ansible/+bug/144392717:48
openstackLaunchpad bug 1443927 in openstack-ansible trunk "Neutron configuration files should depend on target location" [Low,Confirmed]17:48
cloudnullthat way we can get those fixes into the neutron kilo work, if at all possible17:49
rackertomIs that a bug where the playbooks are putting configs on all hosts receiving a neutron component?17:49
rackertomSorry that was supposed to say "...all configs on all hosts..."17:50
*** sdake has quit IRC17:52
*** sdake has joined #openstack-ansible17:52
svgodyssey4me: in what way does osad not have multi-dc configurations? what would be missing for that?17:53
cloudnullrackertom: no, that's not a bug, it's cleanup.17:53
cloudnullwe're dropping config in places where it's not needed.17:53
cloudnulland when we branch out beyond ml2/linuxbridge we're going to have to do some of that cleanup17:54
cloudnullsvg, if you're just looking to orchestrate between multiple DCs that have internal access to both sides, i.e. a VPN mesh, this shouldn't be a problem in OSAD.17:55
cloudnulljust fill in the user_config.yml with the IP addresses of the hosts.17:55
cloudnullbut if you're looking for region x, region y for use within openstack for the various DCs, then that will be a post-Kilo release item.17:56
svgI have a 10Gb connection between both DCs :)18:01
svgthe idea would be to use availability zones18:02
svgand configure separate storage hosts in each dc with a separate ceph backend18:02
*** javeriak has quit IRC18:04
*** bilal has quit IRC18:09
*** jwagner_away is now known as jwagner18:13
*** daneyon has joined #openstack-ansible18:16
*** sigmavirus24_awa is now known as sigmavirus2418:16
*** daneyon_ has joined #openstack-ansible18:20
cloudnullsvg: that should work. AZs already work, but they're not something that OSAD sets up for you.18:23
*** daneyon has quit IRC18:23
svgnot needed, that's more os config afterwards18:23
cloudnullfor sure.18:23
*** markvoelker has joined #openstack-ansible18:25
*** sdake_ has joined #openstack-ansible18:25
*** jwagner is now known as jwagner_away18:26
*** sdake has quit IRC18:29
*** jwagner_away is now known as jwagner18:29
*** markvoelker has quit IRC18:31
*** sdake has joined #openstack-ansible18:39
*** sdake_ has quit IRC18:43
openstackgerritMiguel Alejandro Cantu proposed stackforge/os-ansible-deployment: Implement Ceilometer[WIP]  https://review.openstack.org/17306718:48
*** javeriak has joined #openstack-ansible18:57
*** erikmwil_ has joined #openstack-ansible19:01
*** erikmwilson is now known as Guest2210819:01
*** erikmwil_ is now known as erikmwilson19:01
*** jwagner is now known as jwagner_away19:01
*** erikmwilson_ has joined #openstack-ansible19:01
*** javeriak has quit IRC19:05
*** javeriak_ has joined #openstack-ansible19:07
javeriak_hey, does anyone know if the IPs assigned to containers can be pre-configured?19:08
*** markvoelker has joined #openstack-ansible19:29
*** markvoelker has quit IRC19:34
*** sdake_ has joined #openstack-ansible19:37
matttalextricity: you still there ?19:38
*** BjoernT has quit IRC19:38
matttalextricity: not sure it's related, but make sure you use the PVHVM image19:38
*** rromans has quit IRC19:40
*** sdake has quit IRC19:41
*** jwagner_away is now known as jwagner19:46
*** sdake has joined #openstack-ansible19:51
*** sdake_ has quit IRC19:54
alextricitymattt: Thanks. I tried both images but neither worked for me :(19:55
alextricityOn another note, is there work being done on the nova_console?19:56
alextricityI see that it has been removed from the deployment19:57
alextricityIf that's the case then it should also be removed from the haproxy config: https://github.com/stackforge/os-ansible-deployment/blob/master/playbooks/vars/configs/haproxy_config.yml19:57
*** jwagner is now known as jwagner_away20:04
*** sdake_ has joined #openstack-ansible20:05
*** sdake has quit IRC20:07
*** rrrobbb has joined #openstack-ansible20:08
alextricityjaveriak_: I guess you could modify your inventory file before running the plays.20:10
alextricityI'm not sure if the inventory_manage script does that20:10
alextricitybut if not you can modify them yourself20:10
*** rromans has joined #openstack-ansible20:11
javeriak_alextricity: I suppose /etc/rpc_deploy/rpc_user_config.yml is the appropriate place to put these?20:13
alextricityjaveriak. Not necessarily. I'm talking about the /etc/openstack_deploy/openstack_inventory.json20:14
alextricityor in your case, /etc/rpc_deploy/rpc_inventory.json20:15
javeriak_alextricity: nope, I don't have the openstack_deploy directory, and as far as I can tell /etc/rpc_deploy/rpc_inventory.json gets generated after the play runs and the containers get created; I don't have it in a new setup20:18
alextricityjaveriak_: Ah. Then you're wanting to change the IPs of existing containers?20:20
*** sdake has joined #openstack-ansible20:23
javeriak_alextricity: Nope, I would like to define them before they get created20:24
javeriak_That's in case they aren't already pre-defined; I don't see that being done anywhere20:25
*** sdake_ has quit IRC20:25
alextricityjaveriak_: You can generate the inventory file before running the plays that create the containers by running playbooks/inventory/dynamic_inventory.py. Then go in there and edit the IPs from the assigned ones to your desired ones.20:26
alextricityYou should see entries for "container_address" for each container in the inventory file20:27
alextricityOther than that, I don't know any other ways20:27
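The manual edit described above can be scripted. A minimal sketch, assuming the `_meta`/`hostvars` layout of standard Ansible dynamic-inventory JSON and the `container_address` key mentioned in the discussion; the file path and container name used in the example are hypothetical, so verify them against your own generated inventory first:

```python
import json

def set_container_address(inventory_path, container_name, new_ip):
    """Rewrite the container_address of one container in the generated
    inventory JSON, before the containers themselves are created."""
    with open(inventory_path) as f:
        inv = json.load(f)
    # "_meta"/"hostvars" is the standard Ansible dynamic-inventory layout;
    # raises KeyError if the container name is not in the inventory.
    inv["_meta"]["hostvars"][container_name]["container_address"] = new_ip
    with open(inventory_path, "w") as f:
        json.dump(inv, f, indent=2)
```

Usage would be something like `set_container_address("/etc/rpc_deploy/rpc_inventory.json", "infra1_some_container", "172.29.236.100")`, with the container name copied from the inventory file rather than guessed.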
*** rrrobbb has quit IRC20:28
*** jwagner_away is now known as jwagner20:29
*** markvoelker has joined #openstack-ansible20:33
javeriak_alextricity: okay cool, thanks20:36
*** markvoelker has quit IRC20:37
cloudnullalextricity the nova_spice_console has been renamed to nova_console.20:52
alextricitycloudnull: yeah I see that now :/ was using an old inventory20:52
alextricitylol20:52
cloudnullit happens. :)20:52
*** jwagner is now known as jwagner_away20:59
*** sigmavirus24 is now known as sigmavirus24_awa21:03
*** KLevenstein has joined #openstack-ansible21:05
*** KLevenstein has quit IRC21:17
*** sigmavirus24_awa is now known as sigmavirus2421:17
*** sacharya has quit IRC21:20
*** daneyon has joined #openstack-ansible21:30
*** daneyon_ has quit IRC21:32
*** daneyon has quit IRC21:44
*** Mudpuppy_ has joined #openstack-ansible21:45
*** Mudpuppy has quit IRC21:49
*** Mudpuppy_ has quit IRC21:49
*** KLevenstein has joined #openstack-ansible21:50
*** JRobinson__ has joined #openstack-ansible21:54
*** KLevenstein has quit IRC22:00
*** markvoelker has joined #openstack-ansible22:10
*** markvoelker_ has joined #openstack-ansible22:10
*** markvoelker has quit IRC22:14
*** markvoelker has joined #openstack-ansible22:36
*** markvoelker_ has quit IRC22:37
*** markvoelker_ has joined #openstack-ansible22:41
*** markvoelker has quit IRC22:44
*** britthouser has quit IRC22:44
*** javeriak has joined #openstack-ansible22:46
*** javeriak_ has quit IRC22:46
*** erikmwilson has quit IRC22:50
*** sigmavirus24 is now known as sigmavirus24_awa22:59
*** javeriak has quit IRC23:01
*** javeriak has joined #openstack-ansible23:02
*** erikmwilson_ is now known as erikmwilson23:09
javeriakhey, I'm pulling the 10.1.2 playbooks and they seem to be generating an aio container in the inventory, which then errors out in the play; this isn't a problem, but I was wondering if it's supposed to be there?23:41
*** britthouser has joined #openstack-ansible23:50
*** markvoelker_ has quit IRC23:54
cloudnulljaveriak check to see if you have something in the /etc/openstack_deploy/conf.d/23:57
cloudnulllikely you have a swift.yml file in there that is creating it23:57
cloudnullthats from the examples.23:58
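The check suggested here can be automated with a quick scan. A sketch, assuming the Kilo-era /etc/openstack_deploy/conf.d path from the discussion (Juno uses /etc/rpc_deploy instead), and using a plain substring match for "aio" as a rough heuristic rather than a real YAML parse:

```python
import glob
import os

def find_aio_sources(conf_d):
    """Return conf.d files that mention an 'aio' host, since a leftover
    example file (e.g. swift.yml) there can inject an aio container
    into the generated inventory."""
    hits = []
    for path in sorted(glob.glob(os.path.join(conf_d, "*.yml"))):
        with open(path) as f:
            if "aio" in f.read():
                hits.append(path)
    return hits
```

Calling `find_aio_sources("/etc/openstack_deploy/conf.d")` would list any candidate files to inspect or remove before regenerating the inventory.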
javeriakThere is no openstack_deploy under /etc; is that supposed to be there on Juno too?23:58
palendaeJuno it's rpc_deploy I think23:58
javeriakcloudnull: you mean the deploy node right?23:59
*** britthouser has quit IRC23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!