Monday, 2016-10-24

*** thorst_ has joined #openstack-ansible00:13
*** fops_ has joined #openstack-ansible00:14
*** fops has quit IRC00:17
*** smatzek has quit IRC00:20
*** thorst_ has quit IRC00:21
openstackgerritNish Patwa (nishpatwa_) proposed openstack/openstack-ansible-ops: Adding influx relay to make the existing monitoring stack highly available (WIP)  https://review.openstack.org/390128 00:36
*** jrosser_ has joined #openstack-ansible00:42
*** rromans_ has joined #openstack-ansible00:42
*** klamath has quit IRC00:42
*** promethe1nfire has joined #openstack-ansible00:42
*** sulo has quit IRC00:43
*** jrosser has quit IRC00:43
*** sulo_ has joined #openstack-ansible00:43
*** rromans has quit IRC00:43
*** prometheanfire has quit IRC00:43
*** promethe1nfire is now known as prometheanfire00:44
*** zerick_ has joined #openstack-ansible00:51
*** markvoelker has joined #openstack-ansible00:51
*** kysse_ has joined #openstack-ansible00:52
*** hwoarang_ has joined #openstack-ansible00:53
*** zerick has quit IRC00:53
*** hwoarang has quit IRC00:53
*** kysse has quit IRC00:53
*** markvoelker has quit IRC00:56
*** drifterza has quit IRC01:13
*** thorst_ has joined #openstack-ansible01:17
*** thorst_ has quit IRC01:26
*** openstack has joined #openstack-ansible05:34
*** thorst_ has quit IRC05:37
*** markvoelker has joined #openstack-ansible05:41
*** Jack_Iv has joined #openstack-ansible05:44
*** thorst_ has joined #openstack-ansible06:05
*** drifterza has quit IRC06:14
*** alij has quit IRC06:20
*** alij has joined #openstack-ansible06:20
*** pcaruana has joined #openstack-ansible06:21
*** alij has quit IRC06:33
*** javeriak has joined #openstack-ansible06:43
*** v1k0d3n has joined #openstack-ansible06:44
*** v1k0d3n_ has joined #openstack-ansible06:48
*** v1k0d3n has quit IRC06:49
*** alij has joined #openstack-ansible06:52
*** v1k0d3n_ has quit IRC06:52
*** fxpester has joined #openstack-ansible06:55
*** thorst_ has quit IRC06:56
*** thorst_ has joined #openstack-ansible06:57
*** fandi has quit IRC07:00
*** thorst_ has quit IRC07:02
*** Jack_Iv has quit IRC07:02
*** Jack_Iv has joined #openstack-ansible07:03
*** admin0 has quit IRC07:10
*** v1k0d3n has joined #openstack-ansible07:12
*** Guest49900 has quit IRC07:14
*** Guest49900 has joined #openstack-ansible07:14
*** Guest49900 is now known as ioni07:14
*** v1k0d3n has quit IRC07:16
*** v1k0d3n has joined #openstack-ansible07:16
*** ybabenko has joined #openstack-ansible07:18
*** winggundamth_ has joined #openstack-ansible07:22
*** drifterza has joined #openstack-ansible07:29
drifterzaHello boys07:29
*** hwoarang_ is now known as hwoarang07:36
*** thorst_ has joined #openstack-ansible07:40
*** gtrxcb has quit IRC07:43
*** markvoelker has quit IRC07:44
*** rgogunskiy has joined #openstack-ansible07:45
*** markvoelker has joined #openstack-ansible07:46
*** haad1 has joined #openstack-ansible07:47
*** markvoelker has quit IRC07:50
*** admin0 has joined #openstack-ansible07:55
*** alij_ has joined #openstack-ansible07:58
*** alij has quit IRC07:58
*** Mudpuppy has quit IRC08:00
*** Mudpuppy has joined #openstack-ansible08:01
*** Mudpuppy has quit IRC08:06
*** javeriak has quit IRC08:06
*** pester has joined #openstack-ansible08:07
*** berendt has joined #openstack-ansible08:10
*** fandi has joined #openstack-ansible08:10
*** fxpester has quit IRC08:11
*** pcaruana has quit IRC08:12
*** admin0 has quit IRC08:14
*** gouthamr has joined #openstack-ansible08:17
*** alij_ has quit IRC08:19
*** fandi has quit IRC08:19
*** alij has joined #openstack-ansible08:30
*** admin0 has joined #openstack-ansible08:40
*** gouthamr has quit IRC08:42
*** gouthamr has joined #openstack-ansible08:44
*** thorst_ has quit IRC08:50
*** thorst_ has joined #openstack-ansible08:51
*** jrosser_ is now known as jrosser08:52
*** berendt has quit IRC08:54
*** admin0 has quit IRC08:55
*** admin0 has joined #openstack-ansible09:01
*** admin0 has quit IRC09:03
*** thorst_ has quit IRC09:13
*** thorst_ has joined #openstack-ansible09:13
*** thorst_ has quit IRC09:17
*** electrofelix has joined #openstack-ansible09:25
*** ybabenko has quit IRC09:26
*** alij has quit IRC09:26
*** alij has joined #openstack-ansible09:26
*** thorst_ has joined #openstack-ansible09:29
*** thorst_ has quit IRC09:32
*** thorst_ has joined #openstack-ansible09:33
*** jamielennox|away is now known as jamielennox09:37
*** Trident has joined #openstack-ansible09:37
*** thorst_ has quit IRC09:37
*** jamielennox is now known as jamielennox|away09:39
*** Jeffrey4l has joined #openstack-ansible09:40
*** thorst_ has joined #openstack-ansible09:40
*** v1k0d3n has quit IRC09:41
*** v1k0d3n has joined #openstack-ansible09:42
*** fedruantine has quit IRC09:45
*** v1k0d3n has quit IRC09:46
*** alij_ has joined #openstack-ansible09:49
*** alij has quit IRC09:49
*** markvoelker has joined #openstack-ansible09:50
*** alij has joined #openstack-ansible09:50
*** alij_ has quit IRC09:53
*** Jeffrey4l has quit IRC10:00
*** thorst_ has quit IRC10:03
*** thorst_ has joined #openstack-ansible10:06
*** thorst_ has quit IRC10:10
*** thorst_ has joined #openstack-ansible10:12
*** berendt has joined #openstack-ansible10:12
*** thorst_ has quit IRC10:17
*** berendt has quit IRC10:18
*** ybabenko has joined #openstack-ansible10:19
*** m4rx has joined #openstack-ansible10:20
*** ybabenko has quit IRC10:24
*** gouthamr has quit IRC10:30
*** fedruantine has joined #openstack-ansible10:56
*** admin0 has joined #openstack-ansible11:00
neithhello guys11:08
neithregarding this page, http://docs.openstack.org/developer/openstack-ansible/install-guide/targethosts-networkconfig.html, should I do the network config manually, or will OSA do it?11:09
admin0manually11:10
admin0OSA does not touch networking11:10
*** ybabenko has joined #openstack-ansible11:10
*** ybabenko has quit IRC11:12
neithadmin0: ok11:12
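
For context on admin0's answer: the target-host bridges (br-mgmt, br-vxlan, br-storage, br-vlan) have to be created by the operator before the playbooks run. A minimal sketch of one such bridge in /etc/network/interfaces on an Ubuntu target host follows; the VLAN sub-interface, address, and netmask are assumptions for illustration, not values from this channel.

    # Hypothetical management bridge; OSA expects it to already exist and will not create it.
    auto br-mgmt
    iface br-mgmt inet static
        bridge_stp off
        bridge_waitport 0
        bridge_fd 0
        bridge_ports bond0.10        # assumed VLAN 10 sub-interface on bond0
        address 172.29.236.11        # container management network (example range from the install guide)
        netmask 255.255.252.0
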
*** ybabenko has joined #openstack-ansible11:14
*** admin0 has quit IRC11:24
*** johnmilton has quit IRC11:27
*** Jeffrey4l has joined #openstack-ansible11:31
*** berendt has joined #openstack-ansible11:32
*** ybabenko has quit IRC11:34
*** NewJorg has quit IRC11:35
*** NewJorg has joined #openstack-ansible11:35
*** retreved has joined #openstack-ansible11:36
*** ybabenko has joined #openstack-ansible11:40
*** sulo_ is now known as sulo11:41
*** ybabenko has quit IRC11:48
*** alij has quit IRC11:51
*** alij has joined #openstack-ansible11:51
*** ybabenko has joined #openstack-ansible11:53
*** johnmilton has joined #openstack-ansible11:55
*** alij has quit IRC11:56
*** alij has joined #openstack-ansible11:56
*** johnmilton has joined #openstack-ansible11:57
neithdoes OSA work on KVM guest instances? for evaluation purposes12:03
sigmavirusneith: there's http://git.openstack.org/cgit/openstack/openstack-ansible-ops which has a way of building a "multi-node" deployment of OSA on a single node using kvm/virsh iirc12:04
sigmavirusneith: that would probably give you a good way of evaluating OSA12:05
neithsigmavirus: sounds cool, but I also wanted to be close to a real deployment using several kvm guests12:06
neithsigmavirus: especially dealing with network configuration12:07
*** phalmos has joined #openstack-ansible12:09
*** gouthamr has joined #openstack-ansible12:13
*** phalmos has quit IRC12:16
*** marst_ has joined #openstack-ansible12:35
*** adrian_otto has joined #openstack-ansible12:38
*** z-_ has joined #openstack-ansible12:40
*** adrian_otto has quit IRC12:43
*** coolj_ has joined #openstack-ansible12:43
*** andymccr_ has joined #openstack-ansible12:43
*** dgonzalez_ has joined #openstack-ansible12:43
*** timrc_ has joined #openstack-ansible12:43
*** irtermit- has joined #openstack-ansible12:43
*** jduhamel_ has joined #openstack-ansible12:43
*** Maeca has joined #openstack-ansible12:44
*** xar has joined #openstack-ansible12:44
*** coolj has quit IRC12:45
*** openstackgerrit has quit IRC12:45
*** marst has quit IRC12:45
*** xar- has quit IRC12:45
*** jcannava has quit IRC12:45
*** antonym has quit IRC12:45
*** irtermite has quit IRC12:45
*** nishpatwa_ has quit IRC12:45
*** dgonzalez has quit IRC12:45
*** Maeca_ has quit IRC12:45
*** jascott1 has quit IRC12:45
*** z- has quit IRC12:45
*** timrc has quit IRC12:45
*** jduhamel has quit IRC12:45
*** galstrom_zzz has quit IRC12:45
*** Jolrael has quit IRC12:45
*** andymccr has quit IRC12:45
*** dgonzalez_ is now known as dgonzalez12:45
*** adrian_otto has joined #openstack-ansible12:45
*** galstrom_zzz has joined #openstack-ansible12:46
*** ricardobuffa_ufs has joined #openstack-ansible12:49
*** nishpatwa007 has joined #openstack-ansible12:51
*** shasha_t_ has joined #openstack-ansible12:51
*** askb has quit IRC12:52
*** adrian_otto1 has joined #openstack-ansible12:52
*** antonym has joined #openstack-ansible12:52
*** adrian_otto has quit IRC12:52
*** alij_ has joined #openstack-ansible12:53
*** jascott1 has joined #openstack-ansible12:53
*** openstackgerrit has joined #openstack-ansible12:54
*** alij__ has joined #openstack-ansible12:54
*** johnmilton has quit IRC12:54
*** alij has quit IRC12:55
*** ricardobuffa_ufs has quit IRC12:55
*** karimb has joined #openstack-ansible12:56
*** Jolrael has joined #openstack-ansible12:56
*** gouthamr has quit IRC12:57
*** ricardobuffa_ufs has joined #openstack-ansible12:57
*** alij_ has quit IRC12:57
*** johnmilton has joined #openstack-ansible12:58
*** mgariepy has joined #openstack-ansible12:59
*** markvoelker has quit IRC13:01
*** adrian_otto1 has quit IRC13:05
*** cathrichardson has joined #openstack-ansible13:07
*** cathrich_ has quit IRC13:09
*** alij__ has quit IRC13:10
*** alij has joined #openstack-ansible13:10
*** jheroux has joined #openstack-ansible13:11
*** alij_ has joined #openstack-ansible13:12
*** alij__ has joined #openstack-ansible13:13
*** alij has quit IRC13:14
*** alij_ has quit IRC13:17
*** klamath has joined #openstack-ansible13:24
*** klamath has quit IRC13:24
*** klamath has joined #openstack-ansible13:24
*** bapalm has joined #openstack-ansible13:26
mgariepygood morning everyone13:27
*** alij__ has quit IRC13:32
*** alij has joined #openstack-ansible13:32
*** jheroux has quit IRC13:32
*** jheroux has joined #openstack-ansible13:35
*** alij has quit IRC13:36
*** ybabenko has quit IRC13:38
*** alij has joined #openstack-ansible13:38
*** m4rx has quit IRC13:41
*** ybabenko has joined #openstack-ansible13:41
*** ybabenko has quit IRC13:43
*** ybabenko has joined #openstack-ansible13:45
*** ybabenko has quit IRC13:51
*** thorst_ has joined #openstack-ansible13:56
cloudnullmornings13:56
*** allanice001 has quit IRC13:57
cloudnullneith: have a look at https://github.com/openstack/openstack-ansible-ops/tree/master/multi-node-aio -- builds ~14 vms and creates everything using a partition scheme and nic layout which I use in production.13:58
*** Mudpuppy has joined #openstack-ansible13:58
*** Matias has quit IRC13:59
klamathcan someone give me the command syntax to remove an old swift node from ansible inventory?14:01
*** haad1 has quit IRC14:03
*** aleph1 is now known as agarner14:05
*** alij has quit IRC14:05
*** alij has joined #openstack-ansible14:06
*** alij has quit IRC14:06
*** alij has joined #openstack-ansible14:06
sigmavirusklamath: from your checkout of openstack-ansible "scripts/inventory-manage -r hostname"14:09
sigmaviruser scripts/inventory-manage.py14:09
klamathcool, thank you14:09
sigmavirusklamath: that script explains to you how to use it14:11
sigmavirusklamath: so if you do ./scripts/inventory-manage.py -h14:11
sigmavirusIt tells you what it can do and how14:11
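
A short sketch of the flow sigmavirus describes, assuming a checkout under /opt/openstack-ansible; the host name being removed is made up.

    cd /opt/openstack-ansible                            # assumed checkout location
    ./scripts/inventory-manage.py -h                     # prints what the script can do and how
    ./scripts/inventory-manage.py -r old-swift-node01    # hypothetical name of the node to remove
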
klamathcool, I'm decommissioning some swift nodes; I assume I remove them from swift.yml, then remove them from inventory, run swift-sync, and should be done, right?14:13
*** chris_hultin|AWA is now known as chris_hultin14:13
sigmavirusI'm no swift expert, I can't verify that.14:14
*** karimb has quit IRC14:16
cloudnullklamath: that should be all that's needed. -cc andymccr_14:16
openstackgerritMarc Gariépy proposed openstack/openstack-ansible-lxc_hosts: LXC version to 2.0.5 on CentOS  https://review.openstack.org/390353 14:17
*** alij has quit IRC14:22
*** berendt has quit IRC14:23
jwitkoHey All,  My setup-openstack.yml keeps failing on the swift install.  The testing/check script is failing.   I've detailed my config, the errors, and the host here http://cdn.pasteraw.com/a1cl6gn0u1i236uufjgwaic3676wtz9   It looks like something is not happy in the storage ring?  I've tried to follow the docs as closely as possible14:23
jwitkothis is the latest newton release on ubuntu 16.04, fyi14:24
cloudnullo/ jwitko looking now14:24
*** h5t4 has joined #openstack-ansible14:24
jwitkohey cloudnull!  thank you!14:24
jwitkoare you in Barcelona?14:24
*** alij has joined #openstack-ansible14:25
cloudnullno sadly.14:25
cloudnullis this a new setup?14:25
*** winggundamth_ has quit IRC14:26
kysse_whats happenin in barcelona?14:26
jwitkocloudnull, yes this is a brand new setup14:26
*** h5t4 has quit IRC14:27
*** h5t4 has joined #openstack-ansible14:29
*** alij has quit IRC14:29
*** v1k0d3n has joined #openstack-ansible14:31
mgariepycan someone review this please ? https://review.openstack.org/39035314:32
cloudnullmgariepy: looking14:34
*** marst_ has quit IRC14:34
jwitkoalso looking14:34
cloudnulljwitko: it looks like a node is missing from the righ14:34
cloudnull*ring14:34
mgariepygreat :) thanks guys14:34
cloudnullmgariepy: LGTM14:35
jwitkocloudnull, Yea but why/how?   I'm very new to swift so I don't fully understand the ring but I thought I configured OSA correctly14:35
*** v1k0d3n has quit IRC14:35
jwitkomgariepy, I don't have the ability to review this one  :(14:35
jwitkobut LGTM as well :)14:35
*** thorst_ has quit IRC14:36
*** haad1 has joined #openstack-ansible14:36
*** thorst_ has joined #openstack-ansible14:36
mgariepyjwitko, well it's only to use lxc-2.0.5 instead of the 2.0.4 on centos :)14:37
jwitkohaha yea I can see the code change.  But I think onlu cloudnull has permission to review14:37
jwitkoonly*14:37
*** Jeffrey4l has quit IRC14:37
cloudnulljwitko: you can +1 :)14:38
jwitkoReally?  I don't see the normal button14:38
jwitkooh, doh.14:38
mgariepyI'm currently having some issues with my containers not starting eth0 at boot sometimes.14:38
cloudnullor -1 -- all of which are important.14:38
jwitkoi see it, thanks14:38
jwitko+114:38
*** thorst_ has quit IRC14:42
jwitkocloudnull, so yea was there something I was supposed to do in addition for setting up this host?14:42
*** alij has joined #openstack-ansible14:43
cloudnullstill thinking.14:43
*** hj-hpe has joined #openstack-ansible14:47
*** shausy has quit IRC14:47
cloudnulljwitko: so if this is new and there's no data within it maybe just nuke the rings and rerun the playbooks ?14:48
jwitkosure, I'm down for that14:48
jwitkois there an ansible way of nuking?14:48
cloudnullI'm not sure how / where it went off the rails14:48
jwitkowell I've run it like a dozen times trying to get it to work14:48
jwitkoso I'm sure this is my fault14:48
cloudnullansible -m shell -a 'rm -rf /etc/swift' swift_all; openstack-ansible os-swift-install.yml14:49
jwitkore-running14:49
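
cloudnull's one-liner above, split out for readability; it is destructive and only sensible on a fresh deployment with no data in the rings, as discussed.

    # run from the deployment host's playbooks directory
    ansible -m shell -a 'rm -rf /etc/swift' swift_all    # wipe the generated ring files on every swift host
    openstack-ansible os-swift-install.yml                # rebuild the rings and re-deploy swift
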
*** rgogunskiy has quit IRC14:52
mgariepycloudnull, can you +w https://review.openstack.org/#/c/390353/ or I need to find someone else ? :D14:54
cloudnullmgariepy: sadly we need another core to +2 before we can workflow it14:54
*** ybabenko has joined #openstack-ansible14:55
mgariepyOh, I thought we only needed one +214:55
cloudnulland i believe most of them are at the summit14:55
cloudnullmgariepy: no we need 2 +2's then it can be +w14:55
*** woodard has joined #openstack-ansible14:55
mgariepycloudnull, how come you are not at the summit ?14:55
cloudnullgood question :)14:55
mgariepyyou missed the plane ?14:56
mgariepyhehe14:56
cloudnullI wish. Then I'd have myself to blame.14:56
*** allanice001 has joined #openstack-ansible14:57
*** ybabenko has quit IRC14:59
jwitkocloudnull, same failure http://cdn.pasteraw.com/2wfq5s2suk3v6qcv9xzyouur1shd5fm15:00
cloudnulljwitko: are the drives mounted ?15:01
cloudnullk2_swift{1,3} on all of the nodes?15:01
cloudnullwithin /srv/node15:01
*** woodard has quit IRC15:02
jwitkoyes, but while creating a paste to show you that15:03
jwitkoI just saw an error15:03
jwitkohttp://cdn.pasteraw.com/hwkxoltt1elqcdkbb0yuzbhv0wuleu815:03
*** haad1 has quit IRC15:03
jwitkoshould /srv/node be owned by swift:swift or root:root ?15:04
jwitkowow... wtf15:05
jwitkocdn.pasteraw.com/slcvzzwunyko28vuf5b026on75hz7la15:05
cloudnullhttp://cdn.pasteraw.com/p4cb81rspfpuaxd8xhgmtau8uu4lt5a15:05
cloudnullmine are all owned by swift:swift15:05
jwitkook, /srv/node i had owned by root.  I changed that but these other errors about the iscsi directories are alarming15:07
jwitkothe directories are all mounted http://cdn.pasteraw.com/cd7vpksti8ojaiogs5izjweg7n4kf2u15:08
cloudnullis that mounted on all 3 of the swift nodes?15:11
jwitkoThere is only one swift node15:11
jwitkohttp://cdn.pasteraw.com/a1cl6gn0u1i236uufjgwaic3676wtz9  -- config here15:11
cloudnullis 10.89.112.3 == swift-node1 ?15:12
jwitkoyes15:12
jwitkothats the br-storage interface15:12
cloudnullok.15:12
jwitkothe interface is up and it can ping its gateway no problem15:13
cloudnullok.15:13
jwitkoso I unmounted those LUNs15:13
jwitkoand the directory is still screwed up15:13
jwitkoI can't cd to it or anything15:13
cloudnullinteresting.15:13
cloudnullcold boot ?15:13
jwitkoi was able to delete them15:14
cloudnullmaybe iscsi is having a bad time .15:14
jwitkono this is the OS,  I unmounted the dirs no problem15:14
cloudnulldo you have something executing on that path and is there anything in a sleepwait state?15:14
*** alij has quit IRC15:15
jwitkono, but check this out.  I just touched a file in those dirs as a test15:15
*** pester has quit IRC15:16
jwitkoand I guess they needed to be "primed" so to speak15:16
jwitkobecause the file created and now I can cd and ls and everything15:16
jwitkorunning again to see if that fixes it15:16
*** allanice001 has quit IRC15:16
cloudnullha. gremlins15:17
jwitkothis hasn't happened on this SAN with ext4 formatted volumes15:17
jwitkomight be something going on with this san/xfs15:17
*** adrian_otto has joined #openstack-ansible15:21
*** weezS has joined #openstack-ansible15:23
jwitkoit didn't fix it...  but I also didn't do the rm -rf /etc/swift stuff15:24
jwitkoremoving the existing configs and running again15:24
*** h5t4 has quit IRC15:24
*** shananigans has joined #openstack-ansible15:33
jwitkocloudnull, ok so with the directories working/fixed and the permissions matched for swift to own /srv/node and below I am still receiving the same error15:35
jwitkoand this is after I removed all the existing config15:35
jwitkothe 6000,6001,6002 ports are open and listening.  I can connect to them.15:37
cloudnullhum. if you run the swift ring command manually does it tell you anything else?15:38
jwitkosorry not sure how to do that ?15:39
cloudnullsadly the people most knowledgable about the ring and OSA are at the summit.15:39
cloudnullthe command : "/etc/swift/scripts/swift_rings_check.py -f /etc/swift/scripts/account.contents"15:40
jwitkoyea, thats the one that fails15:41
cloudnullI'm looking through https://github.com/openstack/openstack-ansible-os_swift/blob/master/templates/swift_rings_check.py.j2 now15:41
cloudnulltrying to figure out what/where/why15:41
jwitkoyea, I did that last night15:42
jwitkohttps://github.com/openstack/openstack-ansible-os_swift/blob/master/templates/swift_rings_check.py.j2#L107 15:42
jwitkoso it only runs the check on the first inventory item for swift_hosts15:43
jwitkowhen: inventory_hostname == groups['swift_hosts'][0]15:43
jwitkono sorry thats not right.15:43
jwitkocloudnull, I may have found a possible cause15:48
cloudnullLooking at the config I think its the config w/ r1,2,315:49
jwitkowell I tried to recreate the command to build the ring15:49
jwitkoto see what was happening15:49
jwitkohttp://cdn.pasteraw.com/k3gkzpv0ptn785eb97182obklxpmiwt15:49
jwitkoI didn't have repl_number set in conf.d/swift.yml which defaults to '3'15:49
cloudnullI also believe http://cdn.pasteraw.com/rdxpypois1sq31v1luxsp8i6ej48890 this could be it15:50
jwitkooh yea?15:50
cloudnullyou have r1,2,3 which translates to regions 1,2,315:50
cloudnullI have a similar setup with a single storage node and all of my region settings are the same using only r115:51
jwitkoah wow, ok thank you15:51
jwitkoif you also look at my paste15:52
jwitkoit looks like the directory name is getting mashed15:52
jwitkodue to the underscore?15:52
* cloudnull looking now15:52
openstackgerritNolan Brubaker proposed openstack/openstack-ansible: Add command to remove IPs from inventory  https://review.openstack.org/390375 15:52
*** allanice001 has joined #openstack-ansible15:53
cloudnulljwitko: that may be too15:53
cloudnullhttps://github.com/openstack/openstack-ansible-os_swift/blob/master/templates/swift_rings_check.py.j2#L26 15:53
*** alij has joined #openstack-ansible15:54
cloudnulllooks like it's doing string replacement in a few places and could be dropping the "_.*". I'm not sure yet why that would be, but it makes sense based on your paste15:55
jwitkook, I've renamed all "r2" and "r3" to "r1" in the config.  I've also renamed and remounted the LUNs to disk1,2,315:56
jwitkoEdited the deploy config to reflect those new mounts15:56
jwitkodeploying again15:56
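
A sketch of what the single-region layout discussed above could look like in conf.d/swift.yml, loosely following the OSA swift example config; part_power, the host name, and the exact drive list are illustrative, not jwitko's actual values.

    global_overrides:
      swift:
        part_power: 8
        repl_number: 3           # the default jwitko mentions above
        mount_point: /srv/node
        drives:
          - name: disk1
          - name: disk2
          - name: disk3

    swift_hosts:
      swift-node1:
        ip: 10.89.112.3          # the br-storage address from the paste
        container_vars:
          swift_vars:
            region: 1            # keep everything in one region for a single-node setup
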
cloudnullcool :)15:56
cloudnullsorry i dont have good answers on these bits15:57
jwitkooh please, like you ever need to apologize15:58
*** alij has quit IRC15:58
jwitkocloudnull, it passed that error!  :)15:59
cloudnullwoot!15:59
cloudnullon to a new one i'm sure ;)15:59
cloudnull:p16:00
jwitko"ensure services are started"   - we're at a holding pattern here.  was OK on the bare metal but hanging on the containers it looks like16:00
cloudnullI remember there was a loop that caused it to cycle a bunch in some cases.16:01
cloudnullautomagically: you around?16:01
cloudnulldid you look into that, the swift system init script issue?16:01
cloudnullI thought that got resolved. but maybe not or not in your checkout?16:02
jwitkocloudnull, it looks to be proceeding16:06
jwitkostill on the same step but progress16:06
*** rromans_ is now known as rromans16:09
*** agrebennikov has joined #openstack-ansible16:10
jwitkocloudnull, it finished, no errors!16:12
jwitkonow, how in the world to test haha16:12
cloudnullnice!16:12
*** tschacara has joined #openstack-ansible16:12
*** Matias has joined #openstack-ansible16:12
cloudnullyou can upload an object and run swift stat16:12
jwitkogoing to try and create a container16:12
jwitkofor glance_images16:12
cloudnullthat's a good one too. if glance is configured to use swift you can upload a bunch of images. if they save it's working16:13
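
A minimal smoke test along the lines cloudnull suggests, using the swift CLI from the utility host; the credentials file, container name, and test file are assumptions.

    source ~/openrc                             # assumed admin credentials on the utility container
    swift stat                                  # account-level stats; should return headers without error
    swift upload test_container /etc/hostname   # upload any small file as an object
    swift stat test_container                   # object count should now show 1
    swift list test_container                   # lists the uploaded object
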
jwitkohm, so I see this error in the logs when I set up a tail before I created the container16:13
jwitkohttp://cdn.pasteraw.com/56gxkzp8i3tggym4ea1eoigcqr3qdtq16:13
jwitkothe file does exist though16:14
jwitkoso maybe it recovered16:14
cloudnullhttps://github.com/openstack/openstack-ansible-ops/blob/master/multi-node-aio/openstack-service-setup.sh#L97-L168 16:14
cloudnullthat's generally my step on uploading a mess of images16:14
jwitkocontainer created!16:15
jwitkothats further than before haha16:15
cloudnullalso in newton there are no flavors by default so this may be handy https://github.com/openstack/openstack-ansible-ops/blob/master/multi-node-aio/openstack-service-setup.sh#L7-L43 16:16
cloudnullgoing to grab a coffee . back in a few16:17
*** zerick_ is now known as zerick16:18
jwitkoty!16:18
jwitkocloudnull, so the image imported successfully according to the CLI16:20
jwitkobut the swift container shows an object count of zero16:20
jwitkohowever glance says it has the image and the status is active lol16:21
*** hughmFLEXin has joined #openstack-ansible16:22
*** TxGirlGeek has joined #openstack-ansible16:22
*** hughmFLEXin has quit IRC16:23
*** hughmFLEXin has joined #openstack-ansible16:24
*** TxGirlGeek has quit IRC16:29
chris_hultinjwitko: Can you actually build a server with the image?16:43
jwitkochris_hultin, about to try now16:43
jwitkoprobably not I'd guess16:43
*** fops has joined #openstack-ansible16:47
jwitkoError: Failed to perform requested operation on instance "test1", the instance has an error status: Please try again later [Error: Build of instance 45915aa3-91f6-45d7-9afd-4da80fab76d9 aborted: Block Device Mapping is Invalid.].16:47
jwitkolooks like I have some block storage issues16:48
cloudnullback16:49
cloudnulljwitko: was that created using horizon16:49
jwitkocloudnull, think I found a bug in the cinder setup16:49
jwitkohttp://cdn.pasteraw.com/mky3b4ge863s288in2qv0wdk9cgud7w16:49
*** fops_ has quit IRC16:50
cloudnullseeing if i can reproduce that now.16:51
jwitkomy lvm "cinder-volume" is running16:53
*** adrian_otto has quit IRC16:56
*** fops_ has joined #openstack-ansible17:00
*** fops has quit IRC17:02
cloudnulljwitko: was that during volume attachment or create?17:02
jwitkocloudnull, looks like creation17:03
*** weezS has quit IRC17:03
jwitkocloudnull, i manually installed tgt package17:03
jwitkoI'll paste new full error logs17:03
cloudnulldo you have the cinder-volumes VG ?17:03
jwitkocloudnull, yes I do.17:04
jwitkocloudnull, http://cdn.pasteraw.com/sbd0u1vanm3r5odkj1qsy4hfq3ym69r17:04
jwitkoso it looks like tgt is expecting a certain config17:04
jwitkothat my basic apt package install didn't provide17:05
*** c-mart has joined #openstack-ansible17:06
cloudnullmy test env is using 14.04 so it's not exactly like yours but I'm not seeing tgt specific issues at this point.17:08
*** drifterza has quit IRC17:08
cloudnullcinder is spawning volumes and nova is attaching without issues.17:08
cloudnullI do see cinder volume disconnects from the DB from time to time. but i generally chaulk that up as normal.17:09
jwitkocloudnull, it looks like the issue is that there is no tgt package installed for cinder on 16.04.  due to this, the tgt conf that allows targets to cinder volumes is not in place17:10
jwitkoI installed tgt manually and I'm running os-cinder-install again to verify17:10
jwitkolooks like it might be skipping TASK [os_cinder : Ensure cinder tgt include] ***********************************17:13
cloudnullthose packages should be there using this var https://github.com/openstack/openstack-ansible-os_cinder/blob/master/vars/ubuntu-16.04.yml#L41-L44 17:14
cloudnullcurious if it's skipping for some other reason or if it just didn't complete first time ?17:14
jwitko  when:17:15
jwitko    - inventory_hostname in groups['cinder_volume']17:15
jwitko    - cinder_backend_lvm_inuse | bool17:15
cloudnull those packages should be installed here https://github.com/openstack/openstack-ansible-os_cinder/blob/master/tasks/cinder_install_apt.yml#L51-L62 17:15
cloudnulllol, yes. that task17:15
jwitkoso the first conditional I can confirm is true17:16
cloudnullthe second one should auto enable https://github.com/openstack/openstack-ansible-os_cinder/blob/master/defaults/main.yml#L201 17:17
cloudnullassuming that the cinder config has lvm set as the backend type17:17
cloudnullhttp://cdn.pasteraw.com/kvkd9grr7xka1xz1zrrqxn7odz1busb17:18
jwitkooh wow17:18
jwitkoI am missing the entire "cinder_backends: " block17:18
jwitkoI have everything under it17:18
jwitkobut not itself17:18
jwitkohttp://cdn.pasteraw.com/9zzd55m67js0t3s2v9p510lbdxcog4417:19
cloudnullthat'll do it17:19
jwitkosorry17:19
cloudnullno worries.17:19
jwitkoI should just be able to run os-cinder-install again right?17:19
cloudnullyup17:19
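
The shape of the block jwitko was missing, sketched from the role defaults cloudnull links; the host name, address, and backend settings are assumptions used only to show where "cinder_backends:" sits in openstack_user_config.yml.

    storage_hosts:
      storage-node1:                        # hypothetical storage host
        ip: 172.29.236.21                   # hypothetical management address
        container_vars:
          cinder_backends:                  # the parent key that was missing
            limit_container_types: cinder_volume
            lvm:
              volume_backend_name: LVM_iSCSI
              volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
              volume_group: cinder-volumes
              iscsi_ip_address: "{{ cinder_storage_address }}"

With "lvm" present in the backend dict, cinder_backend_lvm_inuse should evaluate true and the tgt tasks should run, per the defaults link above.
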
jwitkoI see it went through this time.   ugh.17:22
jwitkonot sure how I lost that guy lol17:22
cloudnullshit happens17:23
*** alij has joined #openstack-ansible17:29
*** asettle has joined #openstack-ansible17:30
*** admin0 has joined #openstack-ansible17:32
*** maeker has joined #openstack-ansible17:34
*** thorst_ has joined #openstack-ansible17:34
*** gouthamr has joined #openstack-ansible17:41
*** allanice001 has quit IRC17:44
agrebennikovhi folks, have a question about variables topology in OSA17:45
agrebennikovI put the version of openstack to user_variables.yml17:46
*** h5t4 has joined #openstack-ansible17:46
agrebennikovand it turned out that the deployment process took the one from /opt/openstack-ansible/playbooks/inventory/group_vars/all.yml17:46
agrebennikovis this expected behaviour?17:46
*** thorst_ has quit IRC17:47
*** gouthamr has quit IRC17:47
*** thorst_ has joined #openstack-ansible17:47
*** johnmilton has quit IRC17:48
cloudnullagrebennikov: which var are you setting?17:49
agrebennikovopenstack_version17:49
agrebennikovstruggled for about 2 hours trying to figure out where the wrong one comes from17:49
cloudnullI dont know where openstack_version is set .17:51
cloudnullI know we have openstack_release17:51
agrebennikovah, sorry17:51
agrebennikovmy bad17:51
agrebennikovthat one17:51
*** allanice001 has joined #openstack-ansible17:52
agrebennikovso in stable it keeps changing17:52
agrebennikovwhile I needed static one17:52
agrebennikovbut in the playbooks (not everywhere) the one from all.yml was used17:52
agrebennikovI made some debugging and found out that "openstack_release" itself was right, but while creating urls for venvs - the wrong one was used17:53
cloudnullinteresting. that may be an ansible variable precedence problem.17:54
cloudnullsetting anything in user_variables.yml should always win17:54
mgariepyagrebennikov, which version of openstack-ansible ?17:55
agrebennikovmitaka17:55
*** johnmilton has joined #openstack-ansible17:55
*** electrofelix has quit IRC17:56
*** thorst_ has quit IRC17:56
cloudnullmitaka used 1.9.x which does have precedence issues in certain situations.17:56
cloudnull2.x is more strict17:56
cloudnullhttp://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable17:56
*** johnmilton has joined #openstack-ansible17:56
cloudnullansible should ALWAYS use the var specified on the CLI, which is what user_.*yaml does17:57
agrebennikovyeah, looking to this page right now17:57
cloudnullbut i've heard of folks running into these types of issues with 1.9.x17:57
mgariepylike this one : https://bugs.launchpad.net/openstack-ansible/+bug/160530217:57
openstackLaunchpad bug 1605302 in openstack-ansible "it's not possible to change cinder or nova ceph client via user_variables for the ceph_client role" [Low,Fix released] - Assigned to Marc Gariépy (mgariepy)17:57
agrebennikovwhich is telling me that user_vars have a precedence17:57
*** poopcat has joined #openstack-ansible17:58
mgariepy1.9.4 have vars precedence issue and 1.9.5 - 1.9.6 have other issue..17:58
cloudnullyea. there really should be a 1.9.final release where those things were fixed but there never was.17:59
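
What agrebennikov is trying to do, for reference: pin the release in user_variables.yml so it stops tracking the tip of stable/mitaka; whether the pin actually wins over group_vars/all.yml is exactly the 1.9.x precedence problem discussed above. The tag shown is hypothetical.

    # /etc/openstack_deploy/user_variables.yml
    openstack_release: 13.3.1    # hypothetical Mitaka tag; use the release you actually deployed
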
jwitkohm... I'm trying to create an instance from a volume snapshot and I'm seeing the following error from nova18:00
jwitkoAddrFormatError: failed to detect a valid IP address from 'openstack-int.lab.mgmt.saucelabs.net'18:00
jwitkoit created the first instance no problem18:00
jwitkolooks like it moves past that error though and finds an address18:02
jwitkobut then I end up with Error: Failed to perform requested operation on instance "test2", the instance has an error status: Please try again later [Error: Build of instance 0a23d982-5568-41e9-b4ec-728be592764f aborted: Block Device Mapping is Invalid.].  and nothing in the cinder logs18:02
cloudnullagrebennikov: if you can figure out a way to force it to be the highest precedence then we can backport that into mitaka18:04
cloudnullwhich I think folks will really appreciate.18:04
cloudnullbut right now its simply a known issue.18:04
agrebennikov:) gotcha18:04
agrebennikovbut is it ansible problem or OSA?18:05
jwitkoit looks like my volume is actually being created without issue but Horizon doesn't like the amount of time it takes18:05
jwitkoand the instance creation times out waiting for it18:05
mgariepyagrebennikov, https://github.com/ansible/ansible/pull/14652 18:05
mgariepyit's ansible 1.9.4 issue18:06
*** poopcat has quit IRC18:06
agrebennikovand just to clarify - st/newton is already on 2.x, isn't it?18:06
openstackgerritNolan Brubaker proposed openstack/openstack-ansible: Add command to remove IPs from inventory  https://review.openstack.org/390375 18:07
cloudnullagrebennikov: yes. newton is on 2.118:07
agrebennikovhm.. so the fix is backported in march...18:07
cloudnulland should a 2.2 release not break everything, I think we'll end up on that in future newton releases.18:07
agrebennikovso basically everybody just has to switch to 1.9.5 then18:09
agrebennikov(for mitaka)18:09
cloudnullyea. 1.9.5 fixed vars but broke lxc.18:09
cloudnullhttps://github.com/ansible/ansible/issues/1509318:10
*** admin0 has quit IRC18:11
cloudnullsorry wrong bug that was the bug where they broke host/groupvars in 1.9.518:12
cloudnull1.9.6 broke lxc which was here https://github.com/ansible/ansible-modules-extras/issues/204218:12
cloudnulllast I recall the fix was added to the stable branch18:12
cloudnullbut never released.18:13
cloudnullagrebennikov: if you dont mind giving the stable/1.9 branch a try we could lockin on the sha for mitaka18:14
cloudnullassuming it fixes those issues18:14
bazIn Horizon (Newton), are ipv6 FIPs supposed to be available?18:14
cloudnullit can be18:15
bazcurrently only getting v4 addresses as options on a network with both v4 and v618:15
cloudnullhttps://github.com/openstack/openstack-ansible-os_horizon/blob/master/defaults/main.yml#L95-L9618:16
cloudnulloh FIPs18:16
bazyeah FIPs18:16
cloudnullIDK if neutron support's FIPs with v618:16
bazworks fine for directly attached18:16
cloudnullhttp://docs.openstack.org/newton/networking-guide/config-ipv6.html18:17
cloudnullI think the best you can do is assign the port and attach a fixed IP if you need a specific address.18:18
cloudnullyou can do a v4 subnet on the same network and request a FIP from it.18:19
cloudnullbut that's not what you were asking for.18:19
*** poopcat has joined #openstack-ansible18:19
bazIt's not a huge deal, at some point it'll be in OS. If someone really wants a dual stack router they can make their own with a VM.18:22
cloudnullin the osic cloud1 we're using a dual stack net but its v6 direct return with v4 on a router.18:23
cloudnullit'd be nice if it worked both ways.18:24
bazI haven't tried yet, but can OS routers not be NATs?18:28
cloudnullno i dont think so .18:28
cloudnullat least not with the ML2 plugins18:28
bazwhich is why v6 FIPs wouldn't make sense.18:28
cloudnullnot that i've seen that is18:28
agrebennikovcloudnull, sorry, was out... what is the proposal? do you want me to try to pull stable 1.9 and retry the same I got stuck with?18:28
cloudnullagrebennikov: if you have time that'd be great!18:29
*** thorst_ has joined #openstack-ansible18:31
*** McMurlock1 has joined #openstack-ansible18:32
openstackgerritMerged openstack/openstack-ansible-os_neutron: Update paste, policy and rootwrap configurations 2016-10-21  https://review.openstack.org/389705 18:32
agrebennikovcloudnull, sure can do it18:35
cloudnullthat'd be awesome.18:37
*** McMurlock1 has quit IRC18:46
*** Jeffrey4l has joined #openstack-ansible18:48
*** weezS has joined #openstack-ansible18:49
*** alij has quit IRC18:50
c-martdeployment pulled from newton/stable last friday, doesn't know about its compute nodes, even though I specified two. Anyone seen this?18:54
*** irtermit- is now known as irtermite18:55
c-martthe nova-compute service is running on both of the nodes18:56
*** drifterza has joined #openstack-ansible18:57
*** alij has joined #openstack-ansible18:57
c-martbut they aren't in the nova database, and not shown in Horizon UI18:59
*** alij has quit IRC18:59
*** alij has joined #openstack-ansible19:00
*** alij has quit IRC19:01
*** alij has joined #openstack-ansible19:02
*** ricardobuffa_ufs has quit IRC19:03
*** alij has quit IRC19:05
*** alij has joined #openstack-ansible19:05
*** Jack_Iv has quit IRC19:07
cloudnullthey're not in the db ?19:09
cloudnullno stack traces on the running compute services?19:10
cloudnullalso do you see anything in the logs showing that it's checking in ?19:10
cloudnullc-mart: also if you get a chance can you review https://review.openstack.org/#/c/389965/19:11
cloudnullI believe that should resolve the issue with the repo-server selecting the wrong node to clone or build wheels into19:11
c-martcorrect, not in the DB. Logs show "2016-10-24 19:11:50.912 95637 WARNING nova.conductor.api [req-85a5d38d-fdc9-43df-bed7-265c61e7e3bc - - - - -] Timed out waiting for nova-conductor.  Is it running? Or did this service start before nova-conductor?  Reattempting establishment of nova-conductor connection..."19:12
c-martcloudnull, awesome, i'll take a look. I've been seeing the repo container sync issue with every rebuild.19:14
cloudnullc-mart: i wonder if its just a matter of restarting conductor then the compute nodes?19:16
c-martaha! conductor service wasn't running19:17
c-martI have had to manually start several services that should have been brought up automatically by OSA. Nova API metadata service, compute service, and now the conductor service.19:19
c-martand there are my hypervisors :)19:19
c-martnow in the database and shown in Horizon.19:20
*** schwicht has joined #openstack-ansible19:22
mgariepycloudnull, https://github.com/openstack/openstack-ansible-ops/tree/master/cluster_metrics would that work with mitaka and ansible 1.9.4 ?19:23
*** johnmilton has quit IRC19:23
*** alij has quit IRC19:24
*** allanice001 has quit IRC19:27
*** thorst_ has quit IRC19:29
jwitkocloudnull, so everything seems to be sorted well now.  can't create volumes off of volume snapshots because the creation time takes longer than the timeout for spinning up an instance and waiting for its block device lol19:32
jwitkobut thats probably because of my crap hardware and network in this lab19:32
jwitkothe one weird thing is my glance_images container still shows no objects within it19:32
*** Mudpuppy_ has joined #openstack-ansible19:32
jwitkowhich makes me almost think its hidden ?19:32
jwitkoand the one I created is just not being used19:32
cloudnulljwitko: that'll be owned by the swift tenant .19:34
cloudnullso you'll need to use the swift creds which you can extract from the glance-api.conf ot see those objects.19:34
cloudnullmgariepy: yes that'll work with mitaka just fine19:34
mgariepycloudnull, ok great:) thanks19:35
cloudnullwe've been using that in the osic since liberty19:35
*** Mudpuppy has quit IRC19:35
cloudnullhttps://cloud1.osic.org/grafana/19:35
cloudnullhttps://cloud1.osic.org/grafana/dashboard/db/openstack-all-compute-aggregates19:35
cloudnullin case you were curious.19:35
mgariepynice :D19:36
*** admin0 has joined #openstack-ansible19:36
*** Mudpuppy_ has quit IRC19:36
cloudnullright now it'll only work on 1 of the logging nodes by default.19:36
cloudnullwith this pr https://review.openstack.org/#/c/390128/19:36
cloudnullwe should be able to get it working on more than 1 for data queries and storage.19:37
admin0\o19:37
cloudnullo/ admin019:37
admin0arrived in barcelona :D19:37
cloudnullhow is it ?19:37
admin0hot :)19:38
admin0netherlans is 9*19:38
admin0now19:38
admin0got a question .. i see that on friday from 2:00 - 6:00 is OpenStackAnsible: Contributors meetup — who will be there ?19:38
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible: [TEST] Update to Ansible v2.2.0.0-0.2.rc2  https://review.openstack.org/389292 19:39
cloudnullandymccr_: odyssey4me: RE: [14:38] <admin0> got a question .. i see that on friday from 2:00 - 6:00 is OpenStackAnsible: Contributors meetup — who will be there ?19:40
cloudnulladmin0: sadly idk19:40
cloudnullevrardjp: automagically: ^ idk if either of you are at the summit19:40
admin0logan pjm and winggundamth and me are here19:41
*** javeriak has joined #openstack-ansible19:42
*** allanice001 has joined #openstack-ansible19:42
*** z-_ is now known as z19:43
*** z is now known as z-19:44
*** z- has quit IRC19:44
*** z- has joined #openstack-ansible19:44
*** admin0_ has joined #openstack-ansible19:46
*** admin0 has quit IRC19:46
*** admin0_ is now known as admin019:46
*** tschacara has quit IRC19:58
*** pjm6 has joined #openstack-ansible19:58
*** tschacara has joined #openstack-ansible19:59
cloudnullgoing afk for a bit, bbl.20:00
*** c-mart has quit IRC20:00
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-repo_build: Ensure the venv creation process is able to work with isolated deps  https://review.openstack.org/386765 20:03
castulocloudnull: I still have issues using the run-upgrade script from openstack-ansible, I have run the script probably like 8 times and it keeps failing in a different part every time.... for example last time it failed here: infra03_neutron_server_container-3f7c461a : ok=45   changed=9    unreachable=0    failed=1 but it is a different place every time :S20:03
castuloyou guys have not seen problems using it to upgrade from stable/mitaka to stable/newton?20:04
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-repo_build: Ensure the venv creation process is able to work with isolated deps  https://review.openstack.org/386765 20:05
*** adrian_otto has joined #openstack-ansible20:06
cloudnullcastulo: I've not been testing upgrade from stable/mitaka > stable/newton quite yet.20:06
*** alij has joined #openstack-ansible20:07
cloudnullin truth some of the projects are still wrestling with bugs within their own upgrade process so we're very likely going to have to wait to get that tackled.20:07
cloudnullthat said, if you have a list of possible break points it'd be great to get those noted.20:08
cloudnullcastulo: do you have an etherpad for the issues you've been seeing ?20:09
cloudnullor a list of launchpad bugs?20:09
cloudnullcastulo: also -- for example last time it failed here: infra03_neutron_server_container-3f7c461a : ok=45   changed=9    unreachable=0    failed=1 -- where is there? what was the name of the task where it failed?20:11
*** alij has quit IRC20:12
*** c-mart has joined #openstack-ansible20:14
*** askb has joined #openstack-ansible20:14
cloudnullcastulo: can you outline what the hardware and config profile looks like for these tests. distro, releaes, kernel, openstack_user_config, enabled services.20:16
cloudnull?20:16
*** Mudpuppy has joined #openstack-ansible20:17
*** Mudpuppy has quit IRC20:17
*** Mudpuppy has joined #openstack-ansible20:18
*** Mudpuppy_ has joined #openstack-ansible20:20
*** Mudpuppy has quit IRC20:22
c-martadmin0, we should talk about Ceph later20:26
c-martI have it working :)20:26
castuloI think this was the task: TASK [os_neutron : Add Ubuntu Cloud Archive Repository]20:26
admin0mine is working as well :)20:26
admin0with multiple pools20:26
c-martwhat did it take?20:26
admin0a bug report :D20:27
admin0hold20:27
castulocloudnull I'm not sure I understand what you mean by hardware and config profiles, remember I'm very new to ansible20:28
admin0c-mart: https://bugs.launchpad.net/openstack-ansible/+bug/163601820:28
openstackLaunchpad bug 1636018 in openstack-ansible "cinder ceph not working if documentation followed " [Undecided,New] - Assigned to Praveen N (praveenn)20:28
admin0drifterza is the detective here to find this :)20:28
drifterzahuh20:29
drifterzaI'm hating icehouse20:29
drifterzajeeze20:29
mattticehouse??20:30
drifterzayeah20:30
drifterzastuck in migration phase20:30
cloudnullcastulo: I want to setup a test env similar to what you have. so I'd like to know what the setup is ?20:31
cloudnulldrifterza: is that icehouse deployed by OSA  ?20:32
drifterzano it was a manual thing dude20:33
cloudnullthat sounds like a blast !20:33
drifterzabefore OSA was even a twinkle in odyssey4me's eyes20:33
cloudnullit's cowboy cloud20:33
admin0our region1 is similar :)20:34
admin0cowboy icehouse20:34
drifterzaI'm just struggling with live migration20:34
cloudnullI've taken part in a quite a few cowboy clouds. great times had by all.20:34
drifterzaso my current issue is with the conductor20:36
drifterzaException during message handling: Migration error: 'ComputeNode' object has no attribute 'ram_allocation_ratio'20:36
castulocloudnull: ooh we are using an onmetal server from Rackspace private cloud, and using the playbooks here to deploy openstack: https://github.com/osic/qa-jenkins-onmetal20:36
*** dxiri has joined #openstack-ansible20:37
drifterzabut I'm no python genius20:37
cloudnullis that an option set within the nova.conf file?20:39
drifterzawell usually its ram_allocation_ratio20:40
drifterzaim trying to migrate to an instance thats already calculated the ram as fully used.20:40
drifterzabut it lies20:40
*** dxiri has quit IRC20:41
drifterzaUnable to migrate 79464cf4-cec6-4543-bb8b-9327c0583e13 to cp13.ran1.isoc.co.za: Lack of memory(host:-94910 <= instance:2048)20:42
*** dxiri has joined #openstack-ansible20:42
drifterzahttps://bugs.launchpad.net/nova/+bug/141311920:43
openstackLaunchpad bug 1413119 in OpenStack Compute (nova) "Pre-migration memory check- Invalid error message if memory value is 0" [Low,Confirmed]20:43
drifterzahttps://bugs.launchpad.net/nova/+bug/121494320:43
*** allanice001 has quit IRC20:43
openstackLaunchpad bug 1214943 in OpenStack Compute (nova) "Live migration should use the same memory over subscription logic as instance boot" [High,Fix released] - Assigned to Sylvain Bauza (sylvain-bauza)20:43
drifterzaI cant get the latter bug-fix to work20:44
drifterzacos icehouse code is so old20:44
drifterzadammit20:44
cloudnullyea, that's going to be tough.20:45
*** admin0 has quit IRC20:45
*** admin0_ has joined #openstack-ansible20:45
*** h5t4 has quit IRC20:46
admin0_“Simplify Day 2 Operations (And Get Some Sleep!) Through Craton Fleet Management” — is this a good topic ?20:49
*** c-mart has quit IRC20:52
*** retreved has quit IRC20:53
*** dxiri has quit IRC20:54
*** dxiri has joined #openstack-ansible20:54
*** aludwar has quit IRC20:55
cloudnulladmin0_: ++20:59
*** dxiri has quit IRC20:59
cloudnullcastulo: I have a couple upgrade env's running using the head of stable/mitaka to stable/newton21:02
cloudnullI should know more in a bit21:02
*** aludwar has joined #openstack-ansible21:02
*** kvcobb has joined #openstack-ansible21:04
*** c-mart has joined #openstack-ansible21:08
*** dxiri has joined #openstack-ansible21:14
*** javeriak has quit IRC21:22
*** schwicht has quit IRC21:24
*** tschacara has quit IRC21:30
*** jheroux has quit IRC21:32
castulocool :)21:33
*** javeriak has joined #openstack-ansible21:35
bazso I'm having difficulty getting Neutron QoS (traffic shaping) to work in Newton from an OSA deploy.21:35
*** adrian_otto has quit IRC21:38
bazhttp://cdn.pasteraw.com/dpl8g0ff3y68pw9mapkw54htkbqy89h21:41
bazthe port in that paste was created after the QoS policy was attached to the network. According to the documentation, any QoS policy applied to the network is supposed to be the policy used by any port that's created (or recreated) as long as the port itself doesn't have a qos_policy_id as well (most specific wins).21:43
bazhowever the behavior is, when using linux bridge, no tc commands appear to ever be run during creating all the virtual interfaces and bridges associated with instance creation.21:44
*** schwicht has joined #openstack-ansible21:45
cloudnullbaz: I've not messed with QoS /w neutron, so I'm not really sure what should and should not be there.21:46
cloudnulldo you see anything in the logs indicating there's a fault ?21:46
cloudnullor is it just not doing anything ?21:46
bazthere's no errors being thrown, it creates the stack as if the QoS stuff wasn't there.21:47
cloudnullI assume qos was added "neutron_plugin_base" and that you're seeing qos as an extension driver?21:48
bazI've turned on debugging so it logs which iproute2 and brctl commands it's issuing, and there's nothing from tc, even tho there's rootwrapper for it, and the qos settings are where they should be on the compute hosts, and server/agent containers.21:49
bazneutron ext-list  has a line for: | qos                       | Quality of Service                            |21:50
cloudnullif you # grep extension_drivers /etc/neutron/plugins/ml2/ml2_conf.ini21:53
cloudnulldo you see qos in the exension driver list ?21:53
bazhttp://cdn.pasteraw.com/60mu9a1g0tgdpbn9cfxiiyw1yo6ddkp21:53
cloudnullok21:53
bazfrom the agents container21:53
bazthe agents and server containers *are* on the compute hosts, but that shouldn't matter since they're containers.21:54
cloudnullI for sure see the reno for it https://github.com/openstack/neutron/blob/master/releasenotes/notes/QoS-for-linuxbridge-agent-bdb13515aac4e555.yaml21:54
*** admin0_ has quit IRC21:55
bazthe bare metal compute /etc/neutron/plugins/ml2/ml2_conf.ini is identical to the paste above.21:55
cloudnullok. so that's all good.21:56
bazyeah21:56
bazit *should* be working21:56
bazat least as far as I can tell from the neutron docs on qos21:57
*** sdake_ has joined #openstack-ansible21:57
cloudnullbaz: looking at the docs21:59
cloudnullhttp://docs.openstack.org/mitaka/networking-guide/config-qos.html21:59
*** schwicht has quit IRC22:00
cloudnullwhile the config reference there is for ovs22:00
cloudnullmaybe we need to add qos to the agent section as an extension  ?22:00
cloudnullthat can be done with a config_template override globally.22:00
cloudnullbut if that turns out to be the fix we should update the templates when qos is enabled22:01
cloudnullwhich would make it automatic22:01
bazk, will have a look22:02
*** hughmFLE_ has joined #openstack-ansible22:02
cloudnulladding http://cdn.pasteraw.com/p8czlnm06v2jj9rynqvi9ge50cugxck into your user-variables.yml file and then running ``openstack-ansible os-neutron-install.yml --tags neutron-config`` should do the needful22:03
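
A guess at what cloudnull's override looks like, based on the Newton networking guide: add "qos" to the L2 agent's extensions via an os_neutron config override. The override variable name is an assumption from the role's config_template convention, not confirmed by the paste.

    # /etc/openstack_deploy/user_variables.yml (sketch)
    neutron_ml2_conf_ini_overrides:
      agent:
        extensions: qos

    # then push the new config out:
    #   openstack-ansible os-neutron-install.yml --tags neutron-config
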
*** asettle has quit IRC22:03
*** chris_hultin is now known as chris_hultin|AWA22:03
*** asettle has joined #openstack-ansible22:04
*** berendt has joined #openstack-ansible22:04
*** hughmFLEXin has quit IRC22:05
*** thorst_ has joined #openstack-ansible22:05
cloudnullcastulo: running the upgrade on the first system has encountered an issue within ceilometer22:05
cloudnullbut I'm past neutron and nova22:05
cloudnulland so far my instances have stayed up without an interuption in ping or ssh22:06
bazso if that turns out to work, the change to OSA's template should be fairly straight forward.22:06
cloudnullyes.22:06
cloudnullassuming that's the casue22:06
cloudnull*cause.22:06
cloudnullwe should be able to use the same conditional found here for qos22:07
cloudnullhttps://github.com/openstack/openstack-ansible-os_neutron/blob/master/templates/plugins/ml2/ml2_conf.ini.j2#L8 22:07
bazexactly22:07
*** asettle has quit IRC22:08
*** berendt has quit IRC22:09
bazml2_conf.ini should be identical between the agents container and the compute host, correct?22:11
*** adrian_otto has joined #openstack-ansible22:13
*** karimb has joined #openstack-ansible22:15
*** klamath has quit IRC22:15
*** phalmos has joined #openstack-ansible22:17
*** alij has joined #openstack-ansible22:18
*** alij has quit IRC22:22
*** johnmilton has joined #openstack-ansible22:29
bazwell look at that: qdisc ingress ffff: dev tap524993ab-51 parent ffff:fff1 ----------------22:30
bazso good sign so far.22:30
*** sdake_ has quit IRC22:31
*** schwicht has joined #openstack-ansible22:34
*** schwicht has quit IRC22:37
*** karimb has quit IRC22:37
*** weezS has quit IRC22:44
*** phalmos has quit IRC22:46
*** thorst_ has quit IRC22:46
*** thorst_ has joined #openstack-ansible22:47
*** javeriak has quit IRC22:49
bazcloudnull: that looks like the fix so far. I can change the policy with neutron and it applies while live.22:49
*** javeriak has joined #openstack-ansible22:50
dxirihey everyone!22:52
dxiriquick question, what is the container_interface setting I am seeing multiple times on the config file? (in openstack_user_config.yml)22:55
dxirifor example         container_interface: "eth10"22:55
*** thorst_ has quit IRC22:55
*** gtrxcb has joined #openstack-ansible22:56
*** schwicht has joined #openstack-ansible23:00
*** agrebennikov has quit IRC23:01
*** hughmFLE_ has quit IRC23:01
*** Mudpuppy_ has quit IRC23:01
*** Mudpuppy has joined #openstack-ansible23:02
*** hughmFLEXin has joined #openstack-ansible23:03
*** schwicht has quit IRC23:09
*** hughmFLEXin has quit IRC23:14
*** alij has joined #openstack-ansible23:19
*** pmannidi has quit IRC23:20
*** pmannidi has joined #openstack-ansible23:21
*** alij has quit IRC23:24
*** hughmFLEXin has joined #openstack-ansible23:24
*** javeriak has quit IRC23:25
c-martdxiri: those are the "name of unique interface in containers to use for this network. Typical values include 'eth1', 'eth2', etc."23:28
dxirimust they match my physical interfaces?23:29
*** hughmFLEXin has quit IRC23:29
c-martno. in fact, they probably shouldn't to avoid confusion23:29
dxiriok so, eth10 is good23:29
dxirisince I don't have eth10 at all23:29
c-martyeah, I just used the names from the boilerplate, and it works fine23:30
dxirithat will be the name of the nic inside the container itself23:30
c-martyes. hopefully that's namespaced away from the NICs known to the physical host. but I don't know, I'm new, and just stuck with the recommended config here :)23:30
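
For reference, roughly where container_interface sits in the example openstack_user_config.yml provider_networks block; the values follow the boilerplate c-mart mentions and are not specific to dxiri's environment.

    global_overrides:
      provider_networks:
        - network:
            container_bridge: "br-vxlan"      # bridge that exists on the host
            container_type: "veth"
            container_interface: "eth10"      # interface name created inside each container
            ip_from_q: "tunnel"
            type: "vxlan"
            range: "1:1000"
            net_name: "vxlan"
            group_binds:
              - neutron_linuxbridge_agent
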
*** javeriak has joined #openstack-ansible23:31
c-marton a different topic: has anyone had to manually start a bunch of systemctl services inside their containers after the OSA deployment finished?23:32
c-martnova-conductor, nova-scheduler, nova-api-os-compute, and nova-api-metadata all had to be started manually here23:33
c-martI had to re-run the nova playbook a couple of times, perhaps it got in a weird state that's unique to me :) just curious.23:34
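
One way to kick those services across the containers from the deployment host, using ad-hoc Ansible against the OSA inventory groups; the group and unit names are assumed to match a standard deployment.

    cd /opt/openstack-ansible/playbooks
    ansible nova_conductor      -m service -a "name=nova-conductor state=started"
    ansible nova_scheduler      -m service -a "name=nova-scheduler state=started"
    ansible nova_api_os_compute -m service -a "name=nova-api-os-compute state=started"
    ansible nova_api_metadata   -m service -a "name=nova-api-metadata state=started"
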
*** maeker has quit IRC23:41
*** johnmilton has quit IRC23:43
*** shasha_t_ has quit IRC23:43
*** irtermite has quit IRC23:43
*** coolj_ has quit IRC23:43
*** z- has quit IRC23:43
*** Trident has quit IRC23:43
*** shananigans has quit IRC23:43
*** jrosser has quit IRC23:43
*** bsv has quit IRC23:43
*** neillc has quit IRC23:43
*** rackertom has quit IRC23:43
*** maximov_ has quit IRC23:43
*** kong has quit IRC23:43
*** dolphm has quit IRC23:43
*** jroll has quit IRC23:43
*** homerp_ has quit IRC23:43
*** evrardjp has quit IRC23:43
*** castulo has quit IRC23:43
*** jwitko has quit IRC23:43
*** mgagne has quit IRC23:43
*** arif-ali has quit IRC23:43
*** hughsaunders has quit IRC23:43
*** hughsaunders has joined #openstack-ansible23:44
*** johnmilton has joined #openstack-ansible23:44
*** shananigans has joined #openstack-ansible23:44
*** shasha_t_ has joined #openstack-ansible23:44
*** irtermite has joined #openstack-ansible23:44
*** coolj_ has joined #openstack-ansible23:44
*** z- has joined #openstack-ansible23:44
*** Trident has joined #openstack-ansible23:44
*** jrosser has joined #openstack-ansible23:44
*** bsv has joined #openstack-ansible23:44
*** neillc has joined #openstack-ansible23:44
*** maximov_ has joined #openstack-ansible23:44
*** rackertom has joined #openstack-ansible23:44
*** kong has joined #openstack-ansible23:44
*** dolphm has joined #openstack-ansible23:44
*** jroll has joined #openstack-ansible23:44
*** homerp_ has joined #openstack-ansible23:44
*** evrardjp has joined #openstack-ansible23:44
*** castulo has joined #openstack-ansible23:44
*** jwitko has joined #openstack-ansible23:44
*** mgagne has joined #openstack-ansible23:44
*** arif-ali has joined #openstack-ansible23:44
*** rackertom has quit IRC23:46
*** maximov_ has quit IRC23:46
*** kong has quit IRC23:46
*** Drago has joined #openstack-ansible23:50
*** LiYuenan has quit IRC23:52
*** kong has joined #openstack-ansible23:52
*** thorst has joined #openstack-ansible23:52
*** Mudpuppy has quit IRC23:54
*** maximov_ has joined #openstack-ansible23:56
*** rackertom has joined #openstack-ansible23:58
*** gouthamr has joined #openstack-ansible23:58
