Tuesday, 2016-04-12

openstackgerritSteve Lewis (stevelle) proposed openstack/openstack-ansible-os_gnocchi: Enabling bashate and pep8 lint checks  https://review.openstack.org/30432800:04
openstackgerritSteve Lewis (stevelle) proposed openstack/openstack-ansible-os_gnocchi: Enable ansible lint and syntax tests  https://review.openstack.org/30434300:04
*** fawadkhaliq has quit IRC00:07
*** fawadkhaliq has joined #openstack-ansible00:08
*** ChrisBenson has joined #openstack-ansible00:27
*** ChrisBenson has quit IRC00:28
*** ChrisBenson has joined #openstack-ansible00:28
*** fawadkhaliq has quit IRC00:28
*** fawadkhaliq has joined #openstack-ansible00:29
*** markvoelker has joined #openstack-ansible00:35
*** ChrisBenson1 has joined #openstack-ansible00:35
*** ChrisBenson has quit IRC00:35
openstackgerritMerged openstack/openstack-ansible: Fix idempotency bug in AIO bootstrap  https://review.openstack.org/30422700:37
*** keedya has quit IRC00:37
*** fawadkhaliq has quit IRC00:40
*** markvoelker has quit IRC00:41
*** automagically has joined #openstack-ansible00:42
*** daneyon__ has joined #openstack-ansible00:42
*** daneyon has quit IRC00:45
*** ChrisBenson1 has quit IRC00:49
*** ChrisBenson has joined #openstack-ansible00:49
*** busterswt has quit IRC00:50
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible: Enable SSL termination for all services  https://review.openstack.org/27719900:51
*** elgertam has joined #openstack-ansible00:52
*** woodard has joined #openstack-ansible00:53
openstackgerritAndy McCrae proposed openstack/openstack-ansible-os_swift: Resolve issues with swift tests  https://review.openstack.org/30435000:54
*** woodard has quit IRC00:55
openstackgerritMerged openstack/openstack-ansible-repo_build: Remove global-requirements build from the build process  https://review.openstack.org/30058900:56
*** woodard has joined #openstack-ansible00:56
openstackgerritAndy McCrae proposed openstack/openstack-ansible-os_swift: Resolve issues with swift tests  https://review.openstack.org/30435000:56
*** daneyon has joined #openstack-ansible01:00
*** sdake_ has joined #openstack-ansible01:02
*** sdake has quit IRC01:02
*** fishcried has quit IRC01:03
*** daneyon__ has quit IRC01:04
openstackgerritAlexandra Settle proposed openstack/openstack-ansible: Minor fix to correct passive to active voice  https://review.openstack.org/30435101:07
*** busterswt has joined #openstack-ansible01:07
*** sdake_ is now known as sdake01:08
*** woodard has quit IRC01:14
*** woodard has joined #openstack-ansible01:15
*** thorst has quit IRC01:16
*** thorst has joined #openstack-ansible01:17
*** elgertam has quit IRC01:18
*** elgertam has joined #openstack-ansible01:24
*** thorst has quit IRC01:25
*** phalmos has joined #openstack-ansible01:28
*** weezS has quit IRC01:29
*** phalmos has quit IRC01:39
*** weezS has joined #openstack-ansible01:40
*** automagically has quit IRC01:46
*** asettle has quit IRC01:57
*** asettle has joined #openstack-ansible02:05
*** bapalm has quit IRC02:06
*** bapalm has joined #openstack-ansible02:13
*** Bjoern has joined #openstack-ansible02:18
*** Bjoern has quit IRC02:19
*** fishcried has joined #openstack-ansible02:21
*** fishcried has quit IRC02:21
*** thorst has joined #openstack-ansible02:23
*** gfa is now known as gfa_02:24
*** gfa_ is now known as gfa02:24
*** thorst has quit IRC02:30
mhaydencloudnull: i like seeing 'ssl' and 'all' in the same commit02:33
mhaydeni'll gander in the morning02:33
*** markvoelker has joined #openstack-ansible02:37
*** markvoelker has quit IRC02:42
*** asettle has quit IRC03:01
*** fishcried has joined #openstack-ansible03:06
*** furlongm has quit IRC03:07
*** fishcried has quit IRC03:07
*** sdake_ has joined #openstack-ansible03:12
*** elgertam has quit IRC03:13
*** sdake has quit IRC03:15
*** sdake has joined #openstack-ansible03:18
*** sdake_ has quit IRC03:20
*** thorst has joined #openstack-ansible03:28
*** mongo2 has quit IRC03:29
*** jayc has joined #openstack-ansible03:30
*** mongo2 has joined #openstack-ansible03:31
*** asettle has joined #openstack-ansible03:33
*** thorst has quit IRC03:34
*** mongo2 has quit IRC03:36
*** asettle has quit IRC03:38
*** mongo2 has joined #openstack-ansible03:38
*** javeriak has joined #openstack-ansible03:45
*** furlongm has joined #openstack-ansible03:45
*** asettle has joined #openstack-ansible03:58
*** woodard has quit IRC04:14
*** woodard has joined #openstack-ansible04:15
*** woodard has quit IRC04:19
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-plugins: Update the config_template plugin  https://review.openstack.org/30438504:23
prometheanfirecloudnull: nn04:24
*** busterswt has quit IRC04:31
*** thorst has joined #openstack-ansible04:33
*** markvoelker has joined #openstack-ansible04:38
*** thorst has quit IRC04:40
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-plugins: Update the config_template plugin  https://review.openstack.org/30438504:41
*** markvoelker has quit IRC04:42
openstackgerritMerged openstack/openstack-ansible-ironic: Add tests for the ironic CLI  https://review.openstack.org/30310404:48
*** furlongm has left #openstack-ansible04:51
*** saneax_AFK is now known as saneax04:59
*** LiftedKilt has quit IRC05:08
*** admin0 has joined #openstack-ansible05:10
*** LiftedKilt has joined #openstack-ansible05:13
*** javeriak has quit IRC05:16
*** admin0 has quit IRC05:21
*** asettle has quit IRC05:27
*** asettle has joined #openstack-ansible05:30
*** thorst has joined #openstack-ansible05:38
*** markvoelker has joined #openstack-ansible05:38
*** fishcried has joined #openstack-ansible05:40
*** markvoelker has quit IRC05:43
*** thorst has quit IRC05:45
*** fawadkhaliq has joined #openstack-ansible05:49
*** javeriak has joined #openstack-ansible05:51
*** woodard has joined #openstack-ansible06:07
*** woodard has quit IRC06:13
*** fawadkhaliq has quit IRC06:26
*** fawadkhaliq has joined #openstack-ansible06:29
*** thorst has joined #openstack-ansible06:43
*** weezS has quit IRC06:45
*** czunker has joined #openstack-ansible06:50
*** thorst has quit IRC06:51
*** asettle has quit IRC07:00
*** mikelk has joined #openstack-ansible07:13
*** fishcried has quit IRC07:15
*** clsacramento has quit IRC07:19
*** clsacramento has joined #openstack-ansible07:22
*** sdake_ has joined #openstack-ansible07:24
*** sdake has quit IRC07:26
*** sdake has joined #openstack-ansible07:27
*** admin0 has joined #openstack-ansible07:29
*** asettle has joined #openstack-ansible07:30
*** sdake_ has quit IRC07:30
*** thorst has joined #openstack-ansible07:33
evrardjpgood morning everyone07:34
*** asettle has quit IRC07:34
*** markvoelker has joined #openstack-ansible07:39
*** thorst has quit IRC07:40
*** jamielennox is now known as jamielennox|away07:42
*** markvoelker has quit IRC07:44
*** Oku_OS-away is now known as Oku_OS07:47
*** ChrisBenson1 has joined #openstack-ansible07:51
*** ChrisBenson has quit IRC07:51
*** neilus has joined #openstack-ansible07:55
*** fawadkhaliq has quit IRC08:01
mancdazjmccrory no, odyssey4me had asked me to take a look at the issues with secondary and tertiary nodes joining the new cluster in the gate. Your patch addresses a definite problem, but I'm not sure it addresses this specific problem08:03
*** javeriak_ has joined #openstack-ansible08:10
*** javeriak has quit IRC08:10
openstackgerritMatt Thompson proposed openstack/openstack-ansible-security: [WIP] Unattended upgrades  https://review.openstack.org/30409608:12
*** ChrisBenson1 has quit IRC08:24
*** thorst has joined #openstack-ansible08:39
*** agireud has quit IRC08:39
openstackgerritMerged openstack/openstack-ansible-os_glance: Update min_ansible_version to 1.9  https://review.openstack.org/30404008:40
*** neilus has quit IRC08:41
*** neilus has joined #openstack-ansible08:41
*** agireud has joined #openstack-ansible08:42
*** agireud has quit IRC08:44
*** thorst has quit IRC08:45
openstackgerritMerged openstack/openstack-ansible-os_keystone: Update min_ansible_version to 1.9  https://review.openstack.org/30404408:46
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible: If a global pip.conf file exists, let the AIO use it for containers  https://review.openstack.org/30445208:46
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible: If a global pip.conf file exists, let the AIO use it for containers  https://review.openstack.org/30445308:46
openstackgerritMerged openstack/openstack-ansible-os_neutron: Update min_ansible_version to 1.9  https://review.openstack.org/30404508:47
openstackgerritMerged openstack/openstack-ansible-os_cinder: Update min_ansible_version to 1.9  https://review.openstack.org/30403908:47
openstackgerritMerged openstack/openstack-ansible: Minor fix to correct passive to active voice  https://review.openstack.org/30435108:48
openstackgerritMerged openstack/openstack-ansible-os_heat: Update min_ansible_version to 1.9  https://review.openstack.org/30404208:48
openstackgerritMerged openstack/openstack-ansible-os_horizon: Update min_ansible_version to 1.9  https://review.openstack.org/30404308:52
*** agireud has joined #openstack-ansible08:52
openstackgerritJean-Philippe Evrard proposed openstack/openstack-ansible: Fix idempotency bug in AIO bootstrap  https://review.openstack.org/30445608:52
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-plugins: Update the config_template plugin  https://review.openstack.org/30438508:55
*** nhadzter has joined #openstack-ansible09:10
*** thorst has joined #openstack-ansible09:12
*** openstackgerrit has quit IRC09:17
*** openstackgerrit has joined #openstack-ansible09:18
*** javeriak_ has quit IRC09:18
*** asettle has joined #openstack-ansible09:18
*** asettle has quit IRC09:23
*** javeriak has joined #openstack-ansible09:27
*** thorst has quit IRC09:29
*** yatin has joined #openstack-ansible09:30
*** Oku_OS is now known as Oku_OS-away09:31
*** markvoelker has joined #openstack-ansible09:40
*** yatin has quit IRC09:44
*** markvoelker has quit IRC09:45
*** Oku_OS-away is now known as Oku_OS09:49
*** openstackstatus has quit IRC09:57
*** openstack has joined #openstack-ansible10:01
*** yatin has joined #openstack-ansible10:07
*** yatin has quit IRC10:07
*** yatin has joined #openstack-ansible10:08
odyssey4memattt could you take a peek at https://review.openstack.org/303770 please?10:09
*** sdake has quit IRC10:11
*** yatin has quit IRC10:19
*** yatin has joined #openstack-ansible10:22
*** javeriak has quit IRC10:36
*** javeriak has joined #openstack-ansible10:39
*** pjm6 has joined #openstack-ansible10:58
pjm6morning all10:58
*** asettle has joined #openstack-ansible10:59
*** asettle has quit IRC10:59
*** yatin has quit IRC11:04
*** yatin has joined #openstack-ansible11:07
*** pjm6 has quit IRC11:10
*** johnmilton has joined #openstack-ansible11:13
*** pjm6 has joined #openstack-ansible11:14
*** jaypipes has joined #openstack-ansible11:28
*** clickboom has joined #openstack-ansible11:38
*** markvoelker has joined #openstack-ansible11:41
matttodyssey4me: sure, just going to grab a bite then i'll review11:42
*** yatin has quit IRC11:44
*** retreved has joined #openstack-ansible11:44
*** markvoelker has quit IRC11:45
*** jayc has quit IRC11:46
*** Oku_OS is now known as Oku_OS-away11:51
*** asettle has joined #openstack-ansible11:53
*** neilus has quit IRC11:57
*** Oku_OS-away is now known as Oku_OS12:04
*** asettle has quit IRC12:04
*** neilus has joined #openstack-ansible12:04
*** openstack has quit IRC12:04
*** openstack has joined #openstack-ansible12:08
*** pjm6 has quit IRC12:12
*** markvoelker has joined #openstack-ansible12:12
*** tlbr has quit IRC12:14
*** retreved has joined #openstack-ansible12:14
Bofu2Umorning12:15
*** tlbr has joined #openstack-ansible12:15
*** pjm6 has joined #openstack-ansible12:17
*** klamath has joined #openstack-ansible12:19
*** klamath has quit IRC12:19
*** klamath has joined #openstack-ansible12:20
*** jamielennox|away is now known as jamielennox12:26
*** v1k0d3n has joined #openstack-ansible12:26
*** Oku_OS is now known as Oku_OS-away12:26
*** pjm6 has quit IRC12:27
*** b3rnard0_away is now known as b3rnard012:28
*** severion has joined #openstack-ansible12:29
*** v1k0d3n has quit IRC12:29
*** thorst has joined #openstack-ansible12:32
*** chhavi has joined #openstack-ansible12:32
mhaydenbuenos dias12:32
*** saneax is now known as saneax_AFK12:35
*** thorst has quit IRC12:36
*** thorst has joined #openstack-ansible12:37
*** gregfaust has joined #openstack-ansible12:37
*** thorst has quit IRC12:41
*** tlbr has quit IRC12:43
*** keedya has joined #openstack-ansible12:43
*** tlbr has joined #openstack-ansible12:43
mhaydenwho broke gerrit? :P12:45
*** neilus1 has joined #openstack-ansible12:47
gregfaustmhayden: possibly a team effort: https://etherpad.openstack.org/p/gerrit_server_replacement12:49
*** neilus has quit IRC12:50
matttmhayden: can you peep https://review.openstack.org/#/c/304096/ when you get a minute?12:51
*** elgertam1 has joined #openstack-ansible12:51
matttmhayden: i'm not that familiar w/ ubuntu auto update best practices, so feedback welcome :)12:51
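For context, the Ubuntu side of unattended upgrades usually comes down to two small apt.conf.d snippets; a minimal sketch of the sort of thing under review (the origins list and reboot setting here are illustrative defaults, not necessarily what the patch proposes):

    // /etc/apt/apt.conf.d/20auto-upgrades
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

    // /etc/apt/apt.conf.d/50unattended-upgrades
    Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
    };
    Unattended-Upgrade::Automatic-Reboot "false";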
mhaydenmattt: last time i looked at it, i wanted to cry12:52
* mhayden will gander12:52
mhaydenthanks for taking that on!12:52
*** neilus1 has quit IRC12:53
*** tlbr has quit IRC12:54
*** Oku_OS-away is now known as Oku_OS12:56
*** elgertam1 has quit IRC12:58
*** javeriak has quit IRC13:00
*** briancubed has joined #openstack-ansible13:07
*** tlbr has joined #openstack-ansible13:08
*** neilus has joined #openstack-ansible13:09
admin0feature request: https://cloudplatform.googleblog.com/2016/04/OpenStack-users-backup-your-Cinder-volumes-to-Google-Cloud-Storage.html :D13:11
*** automagically has joined #openstack-ansible13:13
automagicallymorning13:16
*** pjm6 has joined #openstack-ansible13:16
evrardjpmorning automagically13:17
mgariepygood morning all13:17
*** Bjoern_ has joined #openstack-ansible13:19
*** Bjoern_ is now known as Bjoern_zZzZzZzZ13:19
*** galstrom_zzz is now known as galstrom13:22
*** tlbr has quit IRC13:23
*** tlbr has joined #openstack-ansible13:24
*** yatin has joined #openstack-ansible13:25
briancubedAre admin0, cloudnull, and odyssey4me around?13:28
openstackgerritMatt Thompson proposed openstack/openstack-ansible-security: [WIP] Unattended upgrades  https://review.openstack.org/30409613:28
* admin0 is hiding under the bed13:28
admin0briancubed: you found me .. whats up13:29
briancubedjust wanted to report that I found the root cause for the ssh timeouts I was seeing last week. It's, well, silly.13:29
briancubedI had used the wrong network mask on the internal network for containers. 255.255.255.0 instead of the correct 255.255.252.013:30
briancubedOnce I found and fixed it, lxc_container_create playbook ran to completion without error.13:30
admin0oh .. not in the config but in your interfaces in ubuntu ?13:30
briancubedyes13:30
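For anyone who hits the same symptom later: the mismatch was in the host's Ubuntu interfaces file for the container management bridge, not in openstack_user_config.yml. A sketch of a corrected stanza, using an illustrative bridge name and addressing rather than briancubed's actual values:

    # /etc/network/interfaces.d/br-mgmt.cfg (illustrative)
    auto br-mgmt
    iface br-mgmt inet static
        bridge_ports eth1
        address 172.29.236.11
        # was 255.255.255.0 (a /24); the container network is a /22
        netmask 255.255.252.0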
admin0:D13:31
admin0how many days spent ?13:31
admin0on that ?13:31
briancubedhappy that it works, frustrated that it was such a silly mistake.13:31
admin0how big was the smile on your face when you found out :D ?13:32
briancubedIt feels good to have it behind me, for sure. Days? 5 calendar days. But I was time-slicing between this and 3 other activities. So maybe 2 days of effort.13:32
admin0:)13:33
briancubedI wanted to let you know that it's fixed. Thanks for listening13:33
admin0ok .. thanks for the feedback .. we will know what to check next when someone else reports the same issue13:33
*** busterswt has joined #openstack-ansible13:34
*** Bjoern_zZzZzZzZ is now known as Bjoern_13:37
*** Oku_OS is now known as Oku_OS-away13:38
lbragstadmhayden I hear you're looking to rework a bunch of the ssl cert stuff in osa?13:38
*** Oku_OS-away is now known as Oku_OS13:39
admin0lbragstad: we want to have all endpoints in SSL - i think :D13:39
mhaydenlbragstad: i did? :)13:39
mhaydenol' cloudnull is working that angle13:39
mhaydeni need to gander at his patch13:39
*** czunker has quit IRC13:39
lbragstadmhayden dolphm must have volu-told you13:39
*** jthorne has joined #openstack-ansible13:40
*** asettle has joined #openstack-ansible13:42
mhaydenhaha13:42
lbragstadmhayden ah - this must be the patch you're talking about https://review.openstack.org/#/c/277199/13:43
admin0lbragstad: https://review.openstack.org/#/c/277199/ — this is the one you are referring to ?13:43
mhaydeni've wanted it for a while, but i've been hacking more on the internals (like rabbitmq)13:43
admin0that patch is in one of my Must haves :D13:43
*** ametts has joined #openstack-ansible13:43
lbragstadI'm only curious because I'm trying to deploy keystone with self-signed certs13:43
lbragstadbut it doesn't look like cloudnull's patch will affect what I'm doing13:44
*** rohanp_ has joined #openstack-ansible13:45
*** mgoddard_ has joined #openstack-ansible13:45
*** asettle has quit IRC13:46
*** mgoddard has quit IRC13:49
*** sanjay__u has joined #openstack-ansible13:51
*** yatin_ has joined #openstack-ansible13:54
*** neilus has quit IRC13:55
cloudnullmorning13:56
*** yatin has quit IRC13:57
cloudnullbriancubed: you pinged ping <-> pong13:57
*** stian__ has quit IRC13:57
admin0hello cloudnull13:58
admin0yes … we are all asking for SSL :D13:58
admin0;)13:59
cloudnullha13:59
*** Mudpuppy has joined #openstack-ansible13:59
admin0remember he had a SSH connection issue13:59
*** Mudpuppy has quit IRC14:00
admin0he was seeking us out to tell us that his netmask in interfaces was /24 while in the config it was /2214:00
admin0solved and he is happy14:00
*** Mudpuppy has joined #openstack-ansible14:00
*** yatin_ has quit IRC14:00
cloudnullah .14:01
briancubedyes, happy is the word, admin014:01
cloudnullthat will do it :)14:01
* cloudnull is reading scroll back14:01
briancubedthank you all for the assistance14:02
adreznecHey odyssey4me, thanks for the reviews on https://review.openstack.org/#/c/302941 so far. In the comments you mention pivoting all nova.conf settings on nova_virt_type key going forward. Does having the driver-specific config included only on a nova_virt_type conditional (like in the Ironic patch referenced) fulfill that requirement?14:02
cloudnullbriancubed: hows it going btw (besides the now fixed cidr problems) ?14:06
briancubedGood, cloudnull. thanks for asking. I'm moving on to the rest of the playbooks for deployment this morning (EDT). I am very close to complete.14:07
cloudnullsweet!14:07
admin0cloudnull: what release do I need to be in for this ? https://review.openstack.org/#/c/277199/  — mitaka ?14:08
admin0so i need to checkout mitaka, pull in this change ..   and do a deploy and cross fingers :D ?14:08
cloudnulladmin0: master which would be newton14:09
cloudnullwe may be able to backport to mitaka14:09
cloudnullbut at present its master(newton)14:09
admin0if i have to wait another 6 months for a SSL, its bye bye ansible :D14:09
admin0i can help backporting to mitaka :D14:09
*** thorst_ has joined #openstack-ansible14:10
cloudnullso everything to do ssl is in mitaka14:10
cloudnullthis patch just turns it on14:10
cloudnullso if you clone openstack-ansible you can cherry pick this patch and run and all will be well14:11
cloudnullgit fetch https://git.openstack.org/openstack/openstack-ansible refs/changes/99/277199/21 && git cherry-pick FETCH_HEAD14:11
admin0 my test bed is currently liberty .. i need to be in at least mitaka for that right ?14:11
cloudnulli think so14:12
cloudnulli mean you can implement the horizon, keystone and nova scheme pass-through using the config_template in liberty14:12
cloudnullthen pull in the patch and same thing14:12
cloudnullbut mitaka is the first to "officially" support it14:12
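Putting cloudnull's command together with a mitaka checkout, the rough sequence would be something along these lines (the path is the usual deploy location, the ref is the patchset cloudnull quoted above, and conflicts are possible since the patch targets master):

    cd /opt/openstack-ansible
    git checkout stable/mitaka
    git fetch https://git.openstack.org/openstack/openstack-ansible refs/changes/99/277199/21
    git cherry-pick FETCH_HEAD
    # then re-run the affected playbooks, e.g. haproxy-install.yml and the service playbooks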
admin0i will start all new deployments in mitaka now14:13
*** spotz_zzz is now known as spotz14:13
mancdazodyssey4me have we lost all git commit history from anything moved into separate repo/roles?14:13
cloudnullcool14:13
admin0if that is our current “stable” release14:13
cloudnullmancdaz: some yes, some no.14:13
mancdaz:sadface:14:13
cloudnullit is the stable current release14:13
cloudnullmancdaz: looking for anything in particular?14:14
matttcloudnull: i thought most was retained14:14
admin0maybe i will do an upgrade first and see if it upgrades, by just checking out the mitaka branch :D14:14
cloudnullmattt:  in the os_* roles we were able to keep it all14:14
cloudnullin the rest no14:14
matttcloudnull: ok14:14
cloudnullthat was my fault14:15
mancdazcloudnull yeah, wondering why we create mysql data dirs with ansible when mysql will do it itself. I'm guessing it's a timing thing since we prevent mysql starting up until we've dropped config into place, or something14:15
cloudnulli hadnt figured out how to keep the history until we were at that point14:15
cloudnullmancdaz:  i believe its exactly that14:15
mancdazcloudnull trying to debug second/third node join fails14:15
matttcloudnull: yep :(  but i'm glad we were able to keep a bunch of history14:16
*** woodard has joined #openstack-ansible14:16
*** woodard has quit IRC14:16
cloudnullitll do the data dir create on startup LIC , so we create it.14:16
cloudnullmancdaz:  is it throwing an error there?14:16
*** woodard has joined #openstack-ansible14:17
cloudnullis it an sst problem maybe  ?14:17
mancdazcloudnull yeah something weird is happening on the initial SST that causes a failure. If you log in later and remove the .sst dir, it will work. But the .sst dir is not there in the first place so that can't be the initial problem...14:17
*** sigmavirus24_awa is now known as sigmavirus2414:18
cloudnullso weve seen that before14:18
cloudnullwe added https://github.com/openstack/openstack-ansible-galera_server/blob/master/handlers/main.yml14:18
cloudnullwhich should clean that up14:19
cloudnullhowever master is a little different than liberty14:19
mancdazcloudnull yeah, it cleans it up after the first 3x fail, then the fallback restart also fails so it's not fixing it14:19
mancdazbut a manual removal of .sst later does fix it14:19
mancdazso still trying to work it out14:19
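For reference, the manual workaround mancdaz describes amounts to roughly this on the joiner node that failed (paths assume the default MariaDB datadir inside the galera container):

    # on the secondary/tertiary galera container that failed to join
    service mysql stop
    rm -rf /var/lib/mysql/.sst     # stale partial state transfer directory
    service mysql start            # triggers a fresh SST from the donor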
cloudnullgit-harry: did some tune up here https://github.com/openstack/openstack-ansible-galera_server/commit/72a1dfb4d71eceae43ad6a4fb0bd986205c92484 -- so maybe you need to pull that in and try ?14:19
mancdazcloudnull this is in master14:20
odyssey4mebriancubed good to see that you found it - it's crazy how often we stumble on the basics :)14:20
*** Bjoern_ is now known as BjoernT14:20
cloudnullah .14:20
cloudnullmancdaz: nevermind me :)14:20
mancdazcloudnull see Jimmy comment on https://review.openstack.org/#/c/303770/214:21
*** pjm6 has quit IRC14:21
briancubedodyssey4me yeah. Wanted to do the dance of joy with my head down in shame. ;-)14:21
cloudnullok .14:22
cloudnullbriancubed: shit happens :)14:22
briancubed:-)14:23
* cloudnull if i had a nickel every time i broke a cloud...14:23
*** michaelgugino has joined #openstack-ansible14:23
*** pjm6 has joined #openstack-ansible14:23
odyssey4meadreznec perhaps, I guess it'll come down to the details in review - I don't think the spec should spell too much out, but both cloudnull and I were trying to ensure that the spec references related work and existing patterns, which the spec work should try to use as far as possible or evolve into something better14:23
odyssey4memancdaz commit history has been lost for the first set of roles - but all the openstack roles have kept the history thanks to a trick jmccrory gave to cloudnull :)14:24
*** andrei_ has quit IRC14:24
cloudnullmancdaz: hum... thats an odd one. happy to help / debug where i can.14:25
adreznecodyssey4me: Yeah ok, totally agree. Wanted to make sure I wasn't completely off-base before I pushed up my next spec iteration14:25
spotzMorning gang14:26
cloudnulladreznec: ++ Im excited to see OpenPower as a compute option.14:26
odyssey4me++ :)14:27
odyssey4meok, caught up on scrollback - time for some coffee14:27
cloudnullmorning spotz14:27
cloudnullthanks for the review on the ssl patch yesterday :)14:27
*** phalmos has joined #openstack-ansible14:29
*** sigmavirus24 is now known as sigmavirus24_awa14:29
*** sigmavirus24_awa is now known as sigmavirus2414:30
*** sdake has joined #openstack-ansible14:32
*** flwang has quit IRC14:32
*** pjm6 has quit IRC14:33
evrardjpadmin0 if you really want to have all this SSL sorted out before the next release, you could run your own configuration for haproxy, it would solve all your problems14:35
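As a rough illustration of what 'your own configuration' could look like when terminating SSL for a single service in haproxy (the VIP, cert path and backend address are placeholders, not values from anyone's deployment):

    frontend horizon-https
        bind 203.0.113.10:443 ssl crt /etc/haproxy/certs/horizon.pem
        mode http
        default_backend horizon-back

    backend horizon-back
        mode http
        balance source
        server infra1_horizon 172.29.236.11:80 check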
*** flwang has joined #openstack-ansible14:36
admin0evrardjp: i can manually do that, but i would prefer if its managed by ansible :)14:36
admin0so if cherry picking the review from cloudnull works, that would be great as well14:36
evrardjpyou mean openstack-ansible?14:37
admin0which i plan to do this evening14:37
admin0yep14:37
cloudnull++ that should work just fine, all of the framework for enabling ssl termination is already there in mitaka14:38
admin0“mitaka .. here i come” :D14:39
mancdazjmccrory ping14:39
*** pjm6 has joined #openstack-ansible14:39
*** weezS has joined #openstack-ansible14:41
evrardjpcloudnull what I also liked about my haproxy role is the ssl_termination option, to let the deployer easily choose what he wants. but I guess if you do SSL all the way, life is going to be simpler14:42
evrardjpgood commit14:43
*** mgoddard_ has quit IRC14:43
*** elgertam has joined #openstack-ansible14:44
*** mgoddard has joined #openstack-ansible14:44
*** sdake_ has joined #openstack-ansible14:51
adrezneccloudnull: Same here, it should be pretty cool to get running14:54
*** sdake has quit IRC14:55
*** ametts has quit IRC14:59
prometheanfirewhich review is that one?14:59
prometheanfirethe ssl terminiation one14:59
admin0say yes :D15:01
*** thorst_ has quit IRC15:02
*** galstrom is now known as galstrom_zzz15:02
*** metral is now known as metral_zzz15:07
*** Brew has joined #openstack-ansible15:08
*** phalmos has quit IRC15:09
*** sdake has joined #openstack-ansible15:12
*** metral_zzz is now known as metral15:15
*** sdake_ has quit IRC15:16
openstackgerritRohan Parulekar proposed openstack/openstack-ansible-os_nova: Nuage nova configuration ansible changes  https://review.openstack.org/29653815:16
*** openstackgerrit has quit IRC15:18
*** openstackgerrit has joined #openstack-ansible15:18
*** galstrom_zzz is now known as galstrom15:19
*** jayc has joined #openstack-ansible15:19
*** TxGVNN has joined #openstack-ansible15:24
*** phalmos has joined #openstack-ansible15:28
*** asettle has joined #openstack-ansible15:30
*** logan- has quit IRC15:34
*** asettle has quit IRC15:35
*** yarkot_ has joined #openstack-ansible15:38
*** phalmos has quit IRC15:39
*** logan- has joined #openstack-ansible15:40
*** sdake_ has joined #openstack-ansible15:41
*** eil397 has joined #openstack-ansible15:42
*** galstrom is now known as galstrom_zzz15:43
*** sdake has quit IRC15:43
*** phalmos has joined #openstack-ansible15:43
*** mikelk has quit IRC15:43
*** fawadkhaliq has joined #openstack-ansible15:55
briancubedMy deployment joy was short-lived. I am hitting an error deploying galera on infra1. (infra2 and infra3 passed...) You can find a snippet from the log here: http://pastebin.com/bqcHVtQS15:55
*** sdake has joined #openstack-ansible15:55
*** sdake_ has quit IRC15:55
*** gregfaust has quit IRC15:55
odyssey4mebriancubed it looks like you're using an old tag - which tag/branch is it?15:55
briancubedi'll need to check with rohan on that. This is his set of playbooks from his fork/PR.15:56
briancubedthis is the nuage integration work...15:56
briancubedWhen you say old tag, are you referring to the os-releases/12.0.8 that appears in the log?15:58
*** galstrom_zzz is now known as galstrom15:58
briancubedFWIW, the error in the log is the second one I hit. The first was the same error on python-memcached. I was able to attach to the container and do a pip install. The next time I ran the playbook it hit this next error.16:00
odyssey4mehmm, 'No route to host' definitely seems a little simpler - networking from the container to the LB or from the LB to the repo container isn't working16:01
odyssey4meI expect that it's more likely to be the first16:01
odyssey4meas the second would give you a 404 error16:01
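A quick way to tell which hop is broken is to curl the repo through the load balancer from inside one of the failing containers (8181 is the repo server port in a default OSA install; the container name and VIP below are placeholders):

    # from the infra host that holds the failing container
    lxc-attach -n infra1_galera_container-XXXXXXXX -- \
        curl -I http://<internal_lb_vip_address>:8181/
    # "No route to host" here points at container -> LB networking;
    # an HTTP status code instead means the LB answered and the problem is further along.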
odyssey4mebug triage here cloudnull, mattt, andymccr, d34dh0r53, hughsaunders, b3rnard0, palendae, Sam-I-Am, odyssey4me, serverascode, rromans, erikmwilson, mancdaz, _shaps_, BjoernT, claco, echiu, dstanek, jwagner, ayoung, prometheanfire, evrardjp, arbrandes, mhayden, scarlisle, luckyinva, ntt, javeriak, automagically, spotz, vdo, jmccrory, alextricity25, jasondotstar, KLevenstein, admin0, michaelgugino, ametts, v1k0d3n, severion, bgmccollum16:02
briancubedSorry. What's LB?16:02
izaakko/16:02
spotz\o/16:02
odyssey4mebriancubed LB = load balancer16:02
*** jmccrory_ has joined #openstack-ansible16:03
odyssey4mebriancubed make sure that host network configs are right, and all containers too16:03
odyssey4mehttps://bugs.launchpad.net/openstack-ansible/+bugs?search=Search&field.status=New16:03
briancubedokay. something seems inconsistent in the config because the galera containers on infra2 and infra3 didn't experience this error.16:04
*** admin0 has quit IRC16:04
odyssey4mefirst up: https://bugs.launchpad.net/openstack-ansible/+bug/156802916:04
openstackLaunchpad bug 1568029 in openstack-ansible "Security: Disable role during major version upgrades" [Wishlist,New]16:04
odyssey4mebriancubed we'll get back to it after bug triage, but I suspect that this action is only run against that container and skipped on the others?16:05
odyssey4memhayden this seems like a documentation item, but perhaps also an edit to the upgrade automation to disable the hardening when executing the playbooks16:05
*** michaelgugino_ has joined #openstack-ansible16:06
evrardjpmakes sense but the security role isn't applied by default right?16:06
michaelgugino_here16:06
odyssey4mecan you self assign and look into this?16:06
automagicallyo/16:06
odyssey4meevrardjp yeah, but in the upgrade scripts it makes better sense to be certain16:06
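The toggle in question is the deployer variable that gates the security role; disabling it for the duration of an upgrade would look something like this in user_variables.yml (variable name as used by OSA around this time, so worth double-checking against the release being upgraded):

    # /etc/openstack_deploy/user_variables.yml
    # skip openstack-ansible-security while the major upgrade runs,
    # re-enable once the environment has settled
    apply_security_hardening: false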
*** michaelgugino has quit IRC16:07
evrardjpassign to mhayden for the upgrade scripts part?16:07
odyssey4meI see that https://bugs.launchpad.net/openstack-ansible/+bug/1568070 has been picked up by Ala16:07
openstackLaunchpad bug 1568070 in openstack-ansible "Security: Identify which changes require a reboot" [Wishlist,New] - Assigned to Ala Raddaoui (raddaoui-ala)16:07
raddaoui0/16:07
mhaydensorry, running behind16:07
mhaydenodyssey4me: would probably be a doc16:07
odyssey4meah, I'll mark that as confirmed raddaoui16:08
odyssey4memhayden yeah, makes sense to me16:08
odyssey4menext up https://bugs.launchpad.net/openstack-ansible/+bug/156662916:08
openstackLaunchpad bug 1566629 in openstack-ansible "Missing insecure flag for [neutron] section of nova.conf" [Undecided,New] - Assigned to Ala Raddaoui (raddaoui-ala)16:08
odyssey4methis same issue may also be in the heat conf templates and other places where keystone auth is used from a client standpoint16:09
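For clarity, the missing knob is the client-side TLS verification flag in the rendered service config, i.e. something like this ending up in nova.conf (values are illustrative):

    [neutron]
    url = https://203.0.113.10:9696
    # skip certificate verification for self-signed internal endpoints
    insecure = True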
odyssey4methis seems pretty high importance to me? thoughts anyone?16:09
*** fawadkhaliq has quit IRC16:09
automagicallyagreed16:10
raddaouinoted16:10
odyssey4methanks raddaoui - marked as confirmed after inspection, and high importance16:11
odyssey4menext up https://bugs.launchpad.net/openstack-ansible/+bug/156698516:11
openstackLaunchpad bug 1566985 in openstack-ansible "Policies do not support multi domain setups" [Undecided,New]16:11
odyssey4methis is a wishlist item, but not something we should support exactly16:12
odyssey4meour stance is to only use the upstream defaults16:12
odyssey4meif there's anything we should be doing, is to allow deployers to upload their own custom policies16:12
automagicallyThat sounds reasonable16:12
*** fawadkhaliq has joined #openstack-ansible16:13
odyssey4meright now we allow a config_override, but if the policy changes upstream the config override may result in a broken policy file16:14
odyssey4meI'm inclined to say that as the bug is written now, this is invalid. We would accept a feature to allow a deployer to upload a custom policy file, but we'll not be changing the default policy file.16:15
odyssey4methoughts?16:15
michaelgugino_the bug is not really specific about what they're asking for16:18
stevellethink I agree with that16:18
stevellealso, would be nice if there was a way to validate a policy file format16:19
evrardjpodyssey4me I agree16:20
evrardjpstevelle isn't it standard json ?16:20
stevelleevrardjp: not all json is a valid policy16:20
evrardjpk16:21
palendaeevrardjp: Might need to validate the included keys16:21
odyssey4memarking as won't fix - but I've added a note16:21
odyssey4meyeah, validating a policy is something that should perhaps be included in the cross project discussion around the config classification16:22
stevelle+116:22
evrardjpstevelle a tool for validating json policies should be done by keystone guys, right?16:22
evrardjpor that, yes16:22
odyssey4mewe've asked in that discussion for devops tooling to validate config files, and perhaps policy files should be included in that initiative - or as a follow-on initiative16:22
odyssey4meevrardjp I expect that something should be done in oslo to provide the tool, but validation would have to be done by the projects as each policy file would need to reflect the API for the project16:23
odyssey4meanyway, we digress16:23
odyssey4menext up: https://bugs.launchpad.net/openstack-ansible/+bug/156917116:23
openstackLaunchpad bug 1569171 in openstack-ansible "Logging not enabled for memcached" [Undecided,New]16:23
*** weezS has quit IRC16:24
*** javeriak has joined #openstack-ansible16:24
odyssey4meit seems valid as a wishlist item - we'd need to ensure that log rotation and rsyslog redirection is also implemented16:24
odyssey4methoughts?16:24
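If it gets picked up, the rotation side would be a small logrotate stanza along these lines (the log path is an assumption for the example):

    # /etc/logrotate.d/memcached
    /var/log/memcached/memcached.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
        copytruncate
    }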
automagicallySeems like a good wishlist item16:24
evrardjpit seems right16:25
evrardjpI'll take it16:25
odyssey4meok cool, thanks evrardjp16:26
*** pjm6 has quit IRC16:26
odyssey4menext up https://bugs.launchpad.net/openstack-ansible/+bug/156944616:26
openstackLaunchpad bug 1569446 in openstack-ansible "Secondary nodes fail to join galera cluster" [Undecided,New]16:26
odyssey4meYEah, I've seen this - it's very evident in the galera_server role.16:26
odyssey4meI know that mancdaz and jmccrory have been poking at it.16:26
odyssey4meI'm inclined to call this a critical bug. Thoughts?16:27
odyssey4meIf not critical - then High.16:27
evrardjpcritical16:27
automagicallyIf its intermittent, perhaps High16:27
evrardjp"Secondary nodes fail to join galera cluster" sounds really bad16:27
automagicallyBut, I don’t have strong feelings either way16:28
*** TxGVNN has quit IRC16:28
jmccrorybetween that and the 30+ min runtime of the role, it's extremely hard to get commits into galera_server16:28
mancdazodyssey4me yes I'm continuing to try and decipher exactly what's going on16:28
mancdazodyssey4me so far all my theories have been disproved16:29
odyssey4me++ jmccrory16:29
*** flwang has quit IRC16:29
mancdazbut I will continue16:29
odyssey4meI'm waiting for the Ceph mirrors to be added to OpenStack-CI's mirrors as it sets a precedent, after which I can follow with a MariaDB mirror which will likely make a big difference to the run time.16:30
odyssey4meBut alas the ceph mirror patch is languishing. Maybe I should step that up and try and get something going there. Let me see what I can do.16:30
odyssey4memancdaz / jmccrory who will self assign that one?16:30
*** pjm6 has joined #openstack-ansible16:30
mancdazodyssey4me I'll take it for now if you like16:31
michaelgugino_perhaps there needs to be a check in the play that validates the secondary host has joined the cluster16:31
mancdazmight need to hand it off if my day job takes over16:31
jmccroryi'll keep helping wherever i can on it as well16:31
odyssey4memancdaz jmccrory perhaps the best is to each take over the bug when you're working on it, and to add your daily notes at the end of each day?16:33
jmccrorymichaelgugino_ the problem seems to be the join itself. state transfer is failing for some reason16:33
*** flwang has joined #openstack-ansible16:33
mancdazodyssey4me sure I can add notes later16:33
jmccroryodyssey4me works for me16:34
odyssey4meawesome, thanks16:34
odyssey4methat's it for the new bug list (the others are all waiting for confirmation and have been previously discussed)16:35
openstackgerritSteve Lewis (stevelle) proposed openstack/openstack-ansible-os_gnocchi: Enable ansible lint and syntax tests  https://review.openstack.org/30434316:35
michaelgugino_I know I've seen mysql fail to replicate if a server is started as id 0, and then restarted under another id.16:36
openstackgerritMerged openstack/openstack-ansible-os_aodh: Updated role using the Multi-Distro framework  https://review.openstack.org/29562016:36
michaelgugino_so, the correct server id needs to be in my.cnf before the server is started initially, or things aren't going to go well for replication.16:36
mancdazmichaelgugino_ this is different from normal mysql replication so the server id doesn't matter16:36
michaelgugino_I understand that it's different, but each server appears to have an id in the logs16:37
mancdazgalera/wsrep does things differently - on initial start of a secondary node, it connects to the primary server and does a full SST (State snapshot transfer) by streaming the entire contents of the mysql data dir from the primary node to the secondary node16:38
mancdazthis SST is what's failing, but it's not to do with server IDs16:38
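michaelgugino_'s suggested validation would boil down to polling the wsrep status on the joiner once it has started, e.g. (a shell-level sketch, not an existing task in the role):

    # on the secondary node, after mysql has been (re)started
    mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"   # expect: Synced
    mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"          # expect: the full member count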
*** jwagner is now known as jwagner_lunch16:39
*** eil397 has quit IRC16:40
*** Oku_OS is now known as Oku_OS-away16:41
*** eil397 has joined #openstack-ansible16:41
*** ChrisBenson has joined #openstack-ansible16:44
*** woodard has quit IRC16:45
michaelgugino_looks like from the paste logs that a transfer was started and failed at some point16:46
*** fawadkhaliq has quit IRC16:50
briancubedodyssey4me still there?16:52
*** fawadkhaliq has joined #openstack-ansible16:53
*** pjm6 has quit IRC16:54
*** elgertam has quit IRC16:59
michaelgugino_everyone leave?17:01
*** b3rnard0 is now known as b3rnard0_away17:05
*** elgertam has joined #openstack-ansible17:06
*** ggillies has quit IRC17:08
*** michaelgugino_ has quit IRC17:10
*** ggillies has joined #openstack-ansible17:10
*** michaelgugino has joined #openstack-ansible17:10
*** mgoddard has quit IRC17:11
*** admin0 has joined #openstack-ansible17:11
*** mgoddard has joined #openstack-ansible17:12
odyssey4mebriancubed back - popped away from the desk for a bit17:12
*** admin0 has quit IRC17:14
*** admin0 has joined #openstack-ansible17:14
briancubedhey, odyssey4me17:24
* stevelle is almost expecting a knock knock joke17:24
briancubedi want to poke a bit at your assertion about LB. I don't have an LB. The docs say I can use haproxy, instead.17:24
briancubedso i'm thinking i have the config wrong17:25
briancubedwhen I did the openstack_user_config.yml, I didn't know what IP addresses to assign to the vips17:26
briancubedsetting up a pastebin...17:27
briancubedsnippet of user config: http://pastebin.com/scsMXa3C17:28
odyssey4mebriancubed yep, so haproxy is the lb17:28
briancubedRight, so what should internal_lb_vip_address be set to?17:29
odyssey4mebriancubed is this an AIO, or a multi-node environment?17:29
briancubedodyssey4me multi node17:29
briancubed3 infra, 1 log, 1 compute17:30
odyssey4meok, and is that address reachable by the containers and hosts?17:30
briancubed(all vms running on KVM)17:30
odyssey4medo you have the hosts setup in openstack_user_config? one of the groups set there should be haproxy_hosts, for example: https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.aio#L130-L13217:31
briancubedthe address in the file at the moment (I just changed it 10 minutes ago) is the container-network address of infra117:31
briancubedso, yes, it can be reached17:31
odyssey4meok, in that case you should ensure that haproxy_hosts has one member and that is infra117:32
briancubedbut I don't think it's correct. With this config, the error has changed to "too many 503 errors"17:32
odyssey4meif that's done, then run the haproxy-install.yml playbook and haproxy will actually be setup on infra117:32
briancubedah, so maybe that's the problem. When I ran haproxy-install.yml the first time I had the vip addr set to an open address, one not used by any server or node17:33
*** woodard has joined #openstack-ansible17:34
odyssey4meso external_lb_vip_address is only used for the keystone public endpoints when the endpoints are setup17:34
odyssey4meinternal_lb_vip_address is used for almost everything as the internal address to connect to for services, and that is meant to point at the internal vip for an environment's load balancer17:35
odyssey4methe load balancer may be a hardware LB, or haproxy17:35
briancubedexcellent. good to know. I was wondering about that17:35
odyssey4meto actually setup haproxy, you have to tell the playbooks where to put it, which is why something like this is needed: https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.aio#L130-L13217:35
briancubedso if I set internal_lb_vip_address to the container network address of infra1, re-run haproxy-install, then my galera install will work????17:36
odyssey4meby 'container', you mean 'vm' right?17:36
briancubedright, VM. Sorry.17:37
odyssey4meok, so if you have the group 'haproxy_hosts' in 'openstack_user_config.yml', with 'infra1' as the key, and the key:value pair of 'ip: <infra1's ip address>'17:38
odyssey4meand then you have 'internal_lb_vip_address: <infra1's ip address>' in global_overrides17:38
odyssey4methen you run the haproxy-install playbook (to actually setup haproxy)17:38
odyssey4methen the setup-infrastructure playbook will progress beyond the repo build17:39
odyssey4methe internal and external lb address can be the same; as long as you're not hoping to do SSL it will just work17:40
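Pulled together, the relevant openstack_user_config.yml fragment for briancubed's case would be along these lines (only the keys discussed are shown; the address is a placeholder for infra1's container-network IP):

    global_overrides:
      internal_lb_vip_address: 172.29.236.11   # infra1's container network address
      external_lb_vip_address: 172.29.236.11   # can match the internal VIP when SSL isn't in play

    haproxy_hosts:
      infra1:
        ip: 172.29.236.11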
*** sdake_ has joined #openstack-ansible17:40
briancubedI can do that. Thank you for the explanation. That goes a long way to aid my understanding.17:40
odyssey4mesure, no problem :)17:41
*** chhavi has quit IRC17:41
*** sdake has quit IRC17:43
*** ametts has joined #openstack-ansible17:45
*** tricksters has joined #openstack-ansible17:48
*** tricksters is now known as elopez17:49
*** eric_lopez has quit IRC17:52
*** sdake has joined #openstack-ansible17:54
*** sigmavirus24 is now known as sigmavirus24_awa17:55
*** sdake_ has quit IRC17:57
*** yarkot_ has quit IRC17:57
*** javeriak_ has joined #openstack-ansible17:58
*** javeriak has quit IRC18:01
*** mkrish004c has joined #openstack-ansible18:01
mkrish004chi guys, i am trying out ceilometer installation via openstack ansible, i am not getting any notification from glance or any service. Do i need to install any agent in other service containers as well ?18:04
mkrish004ci just installed mongo DB on the node and ceilometer API and ceilometer collector in the respective containers18:05
*** jwagner_lunch is now known as jwagner18:05
mkrish004cand run aodh playbook as well18:06
*** jayc has quit IRC18:06
palendaemkrish004c: I don't know a ton about ceilometer, but I do know there's per-service variables to enable or disable it. Not sure what their defaults are18:07
palendaeA downstream project disables them in its user_variables.yml file: https://github.com/rcbops/rpc-openstack/blob/master/rpcd/etc/openstack_deploy/user_variables.yml#L80-L8718:08
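The per-service toggles palendae is pointing at look like this in user_variables.yml (names follow the pattern in the linked file; the exact set should be checked against the OSA version in use):

    # /etc/openstack_deploy/user_variables.yml
    cinder_ceilometer_enabled: True
    glance_ceilometer_enabled: True
    heat_ceilometer_enabled: True
    keystone_ceilometer_enabled: True
    neutron_ceilometer_enabled: True
    nova_ceilometer_enabled: True
    swift_ceilometer_enabled: True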
mkrish004c@palendae, i have enabled notification in all the containers, but i am running this ceilometer playbook separately18:09
mkrish004cwill that work18:09
palendaeI believe the ceilometer playbook running by itself will work, so long as the right variables are set18:10
mkrish004cwhat is the purpose of aodh services, do i need to run that as well ?18:10
palendaeYeah, aodh is a new metrics gathering service I think18:10
palendaeI know ceilometer depends on it as of liberty18:10
*** weezS has joined #openstack-ansible18:12
*** javeriak_ has quit IRC18:12
*** javeriak has joined #openstack-ansible18:12
*** sigmavirus24_awa is now known as sigmavirus2418:12
*** elgertam has quit IRC18:16
*** clickboom has quit IRC18:17
*** falanx has quit IRC18:17
*** javeriak_ has joined #openstack-ansible18:18
*** jayc has joined #openstack-ansible18:19
*** javeriak has quit IRC18:19
stevelleaodh is the alarm engine for ceilometer18:20
*** clickboom has joined #openstack-ansible18:20
stevelleit was extracted from the ceilometer code base18:21
mkrish004cif i need to remove the ceilometer containers and re create this service alone without disturbing other services, is that possible ?18:21
*** kukacz has quit IRC18:24
stevellemkrish004c: that should not disturb other services.18:25
admin0git clone -b mitaka https://github.com/openstack/openstack-ansible.git /opt/openstack-ansible — we do not have mitaka yet :D ?18:26
admin0how do I checkout/test it out ?18:26
*** javeriak_ has quit IRC18:26
admin0hmm.. so 13.0.018:27
admin0sorry :)18:27
openstackgerritJimmy McCrory proposed openstack/openstack-ansible-os_barbican: Enable functional convergence testing  https://review.openstack.org/30142218:27
*** falanx has joined #openstack-ansible18:27
palendaeadmin0: Looks like it's named 'stable/mitaka'18:27
palendaehttp://git.openstack.org/cgit/openstack/openstack-ansible/log/?h=stable/mitaka18:28
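So the working checkout is simply (branch name per the cgit link above, path as in the earlier clone command):

    git clone -b stable/mitaka https://github.com/openstack/openstack-ansible.git /opt/openstack-ansible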
admin0so we will also have unstable/newton ?18:28
admin0or why the stable prefix ?18:28
palendaePretty sure that's the convention across OpenStack. openstack-ansible used to have them a couple cycles back, I don't remember why they were removed18:29
palendaeI think because we weren't being as strict as the rest of OpenStack about backports18:29
admin0going to give mitaka  a spin18:29
*** javeriak has joined #openstack-ansible18:30
*** lihg has joined #openstack-ansible18:30
*** clickboom has quit IRC18:32
*** sigmavirus24 is now known as sigmavirus24_awa18:35
*** sigmavirus24_awa is now known as sigmavirus2418:35
LiftedKiltwith the openstack-ansible neutron configuration, are there any known host scaling limits?18:36
mkrish004cthanks stevelle, i will give a try, how many container it should create if we run ceilometer alone?18:36
LiftedKiltI've got between 1.5-2k physical servers - do I need Calico/Opencontrail/etc or will the stability tweaks in OSA be sufficient?18:37
*** fawadkhaliq has quit IRC18:38
stevellemkrish004c: not sure how to answer that.18:38
*** fawadkhaliq has joined #openstack-ansible18:38
*** skamithi has joined #openstack-ansible18:38
*** admin0 has quit IRC18:40
*** skamithi has quit IRC18:42
*** skamithi has joined #openstack-ansible18:42
*** skamithi has quit IRC18:43
*** fawadkhaliq has quit IRC18:43
*** fawadkhaliq has joined #openstack-ansible18:43
*** skamithi has joined #openstack-ansible18:45
openstackgerritMerged openstack/openstack-ansible-galera_server: Fix handlers  https://review.openstack.org/30377018:45
*** yarkot_ has joined #openstack-ansible18:46
*** jthorne has quit IRC18:47
*** jthorne has joined #openstack-ansible18:47
openstackgerritMajor Hayden proposed openstack/openstack-ansible-security: Security: flake8 fixes in conf.py  https://review.openstack.org/30481518:50
*** javeriak_ has joined #openstack-ansible18:52
*** fawadkhaliq has quit IRC18:52
*** fawadkhaliq has joined #openstack-ansible18:54
*** keedya has quit IRC18:55
*** javeriak has quit IRC18:56
*** admin0 has joined #openstack-ansible18:58
*** fawadkhaliq has quit IRC18:58
*** ametts has quit IRC18:58
*** fawadkhaliq has joined #openstack-ansible18:59
*** fawadkhaliq has quit IRC19:00
*** fawadkhaliq has joined #openstack-ansible19:01
*** javeriak_ has quit IRC19:07
*** javeriak has joined #openstack-ansible19:07
*** admin0 has quit IRC19:08
*** admin0 has joined #openstack-ansible19:08
*** mkrish004c has quit IRC19:11
*** admin0 has quit IRC19:13
*** admin0 has joined #openstack-ansible19:14
evrardjpLiftedKilt 2k is quite different from standard deployments19:16
palendaeThe OSIC cluster is 1k nodes, pretty sure it uses OSA19:16
evrardjp;)19:17
evrardjphardware load balancers for the OSIC cluster? I don't remember19:17
*** admin0 has quit IRC19:17
palendaeevrardjp: I think so19:18
evrardjpit makes sense19:18
evrardjpLiftedKilt for calico/contrail/standard it really depends on what features you need19:18
*** jmccrory_ has quit IRC19:19
evrardjpimo19:19
LiftedKiltevrardjp: how so?19:19
*** fawadkhaliq has quit IRC19:20
*** daledude has joined #openstack-ansible19:20
*** fawadkhaliq has joined #openstack-ansible19:20
LiftedKiltlxd for hypervisor, ceph for block/object, software LBs19:20
daledudeis it yet possible to upgrade from 12.x to the new 13.x?19:20
evrardjpLiftedKilt software lb would probably require fine tuning19:22
evrardjpa lot of fine tuning19:22
LiftedKiltcurrently we are running everything on haproxy/heartbeat on dedicated servers19:22
evrardjpok19:23
evrardjpeverything means? openstack?19:23
*** admin0 has joined #openstack-ansible19:23
LiftedKiltno - everything meaning 1500 servers running openvz containers, all provisioned and managed solely with a frankenstein of bash scripts19:24
LiftedKiltwhile the number of compute hosts is high, each host is only running 4-6 containers so the aggregate load is low19:25
LiftedKiltit's just terribly utilized19:25
evrardjpopenstack neutron will be quite verbose compared to standard flat openvz, that's my point of view19:32
stevelledaledude: we don't have upgrades for that yet, I expect it to come together before Milestone 1 of Newton.19:32
evrardjpLiftedKilt I didn't try on that scale, but I'd give it a go19:33
evrardjpjust to test if you can :D19:33
LiftedKiltyeah right now there's no traffic from openvz since they aren't connected to each other - I'm ok with deploying something like calico, I was just wondering if it was necessary19:33
evrardjpthe osic cluster works with a large number of nodes, so why not19:33
*** javeriak has quit IRC19:34
LiftedKiltevrardjp: these nodes are also all dual gig nic only19:34
LiftedKiltI assume osic has 10gb nics?19:34
evrardjpi was afraid of large vxlan meshings of tunnels19:36
evrardjpbut it should scale anyway19:36
evrardjpand you're not forced to use vxlan19:36
palendaeLiftedKilt: I think so, but not totally sure. busterswt or jthorne would know better19:36
LiftedKiltit's probably worthwhile to just deploy calico then? I'm thinking I'll need to squeeze every ounce of efficiency I can get out of this network, especially at this scale19:37
admin0LiftedKilt: do not use a single cluster .. break into regions .. say 400 nodes each .. should do fine19:38
admin0you said 4-6 containers per host so thats around 9000 max now ..19:38
admin0should do fine .. but not in a single region19:38
LiftedKiltadmin0: can multiple regions work together seamlessly?19:38
evrardjpLiftedKilt admin0 has good advice there19:38
admin0seamlessly — depends on how you make it .. :)19:39
LiftedKiltmeaning like live migration between regions19:39
admin0nope19:39
admin0but migration is possible19:39
LiftedKilthmm19:39
evrardjpavailability zones, and maybe cells are what you would be looking for19:39
admin0use federated swift , glance backend to swift19:39
logan-when you say dual gig im more scared of running ceph than the network setup19:39
evrardjpI don't know cells 'though19:39
LiftedKiltabout 750 of the servers are for jenkins19:39
admin0availability zones are just logical separations .. they do not account for network issues19:39
evrardjpand region is better when you have large amount19:40
admin0need to use regions at scale19:40
evrardjpprecisely19:40
LiftedKiltlogan- the plan was to use a single 1tb ssd in each node19:40
LiftedKiltlogan-: is that a terrible idea?19:40
admin0LiftedKilt:  i run aound 5000 vms now on a single region, 2 avaibaility zones .. .. next cluster we are building is planned for 20k vms19:40
admin0i am dumping avaibality zones, moving into regions19:41
logan-you are stretching the throughput pretty thin doing that without considering any non-storage traffic going over those links19:41
admin0LiftedKilt: at your scale, you can use a single ceph cluster and do live migration19:41
LiftedKiltadmin0: so multiple openstack regions on top of a single ceph cluster?19:42
evrardjpbut logan has a point too, ceph is usually requiring more than one gig19:42
admin0LiftedKilt: we have 3 separate datacenters .. but you can have all different racks per region to keep latency local .. and then if you are going to use ceph, you can pretty much do inter-region migrations19:43
admin0however :D19:43
admin0there is a however :D19:43
admin0at scale, we even see 96 gbps saturated due to ceph .. at scale, disks break .. and ceph will try to fix it as early as possible .. saturating every bit of the network19:43
LiftedKiltright now we are using moosefs on platter drives - I imagine that ceph can't be that much more network intensive than moosefs, right?19:44
evrardjponce again, good advice from admin0, LiftedKilt19:45
admin0ceph is awesome when things do not break .. when disks break .. there would be a saturation point when ceph is rebalancing19:45
LiftedKiltevrardjp: I'm going to print all of this and read it a few times to try and absorb it all haha19:45
evrardjpcome on admin0 just stop being faster at typing than me19:45
admin0:D19:45
evrardjp:p19:45
logan-how write heavy is your environment19:46
admin0i am also watching a movie and making a coffee for my wife :D19:46
LiftedKiltadmin0: we'll only be keeping 2 copies of a lot of the data, so that should help a little19:46
evrardjpI'm with a new keymap admin019:46
evrardjp:p19:46
LiftedKilt750 of the nodes are jenkins slaves doing CI builds19:46
logan-i guess it will be similar to moose probably, i haven't used it but iirc its similar replicated storage so similar network requirements id expect19:46
LiftedKilton which we only need one backup copy of their data19:47
evrardjpit's fire and forget right?19:47
evrardjpwhy should you need live migration and all these things?19:47
evrardjpjust build cinder lvm and tada!19:47
LiftedKiltevrardjp: the jenkins is, yeah19:47
admin0no :D19:47
evrardjp;)19:47
admin0no cinder lvm :D19:47
admin0very bad advice19:47
michaelguginoour ceph guys have tuned auto-healing all the way to the lowest setting to avoid network sat19:48
evrardjpit was a joke19:48
admin0:D19:48
LiftedKiltbut I think they output their results to a central place for some reason or another - I really have no idea what it's doing19:48
LiftedKiltthen the rest of the nodes are going to be containers holding liferay + mariadb19:48
LiftedKiltnodes will hold containers of liferay + mariadb, that is19:49
admin0LiftedKilt: use regions, 400 nodes per region .. use a  federated keystone and swift  .. use swift as backend for glance,     use ceph for cinder .. use local storage .. for people who want live migration, maybe have a different region with nova backend to ceph  ..   live life happy :D19:49
admin0you have enough machines to have local storage  and live-migration regions19:49
admin0just sell it as premium for those who still insist on having pets in the cloud and not cattle19:50
*** sigmavirus24 is now known as sigmavirus24_awa19:50
admin0most people will want faster iops .. don’t care as long as it runs .. with multiple regions, make it their burden to do HA and ensure they are covered in event of a breakdown ..19:50
LiftedKiltadmin0: this is all great advice19:52
LiftedKiltthank you guys so much19:53
admin090% of customers will happily run 2 mysql databases on local SSDs and do replication themselves between regions, 10% will need live migration and all the bells and whistles and even automatic fallback ..etc etc .. so do not do a global ceph at your scale and be sad 90% of the time ..  with regions you can isolate issues, offerings etc19:53
LiftedKiltadmin0: oh this is an entirely private cloud19:54
LiftedKiltit's for our internal developers19:54
evrardjpyou really like your developers then19:54
evrardjp:D19:54
admin0well in terms of openstack @ scale .. issues are issues … public and private is if you bill it or not :)19:54
LiftedKiltadmin0: fair enough19:55
admin0network issues/cpe will not know its running in private cloud :D19:55
LiftedKiltevrardjp: I don't, but management does19:55
*** yarkot_ has quit IRC19:55
LiftedKiltevrardjp: haha19:55
evrardjp:D19:55
admin0devs to management — “but we are not getting enough iops ..  a disk dies and the whole system crawls “ — you to management “ yes, its not on local ssds.. its on awesome ceph"19:55
evrardjpadmin0 one could say that you will bill internally, but I'm becoming picky ;)19:55
admin0before this public cloud job, i worked on a fairly large private cloud (gaming company, 120 million gamers served per month) ..19:56
admin0and issues are common  :_)19:57
evrardjpriot?19:57
evrardjpblizzard?19:57
admin0nah .:D19:57
admin0look up my linkedin :D19:57
admin0https://www.linkedin.com/in/sashidahal19:57
evrardjpoh yeah19:57
admin0LiftedKilt: what is your use case ? dbs ? vms ? that would matter for how you need to design19:58
admin0boo.. my spellings :D19:59
falanxadmin0: dbs, logs, flat data, no vms20:00
*** asettle has joined #openstack-ansible20:00
falanx<--- works with LiftedKilt20:00
LiftedKiltadmin0: what he said ^20:00
admin0falanx: evrardjp , evrardjp  falanx20:00
admin0introducing both of you to each other :D20:00
admin0\o/20:01
admin0:D20:01
evrardjpredundancy20:01
falanxo/20:01
evrardjpyou should have that in two lines :p20:01
jthorneLiftedKilt: you were asking about OSIC hardware?20:02
LiftedKiltjthorne: we were discussing scalability of osa with out of the box networking, and I said that OSIC probably has 10gb or greater nics on the nodes20:03
jthornethis is true20:03
LiftedKiltjthorne: we're looking to build a 1.5k node openstack cluster on commodity hardware with dual gig nics20:04
*** jayc has quit IRC20:04
evrardjpfalanx nice to meet you20:04
jthorneLiftedKilt: so Cloud 1 is only 352 nodes. the rest of the environment is broken up into pure bare metal environments due to community requests20:04
jthorneLiftedKilt: this is the design of Cloud 1: http://public.thornelabs.net/osic-cloud-1-rpc-physical-connectivity-and-specs-diagram.pdf20:04
*** keedya has joined #openstack-ansible20:04
*** asettle has quit IRC20:05
*** sigmavirus24_awa is now known as sigmavirus2420:05
LiftedKiltjthorne: Yeah we are in a whole different world - I've got "enterprise" netgear TOR switches20:06
admin0:D20:06
admin0netgear .. i have one for my home lab .20:06
admin0they work well actually20:06
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible: Isolate Ansible from the deployment host  https://review.openstack.org/30484020:06
LiftedKiltthey have 10gb uplinks, which I was planning on connecting to a pair of Cumulus Linux 10gb spines20:06
LiftedKiltI have 66 netgear switches, split up into 22 stacks of three20:07
admin0LiftedKilt: so when you want to design your new openstack .. be here and we can offer advice and help20:08
evrardjpI guess now it's always broadcom silicon anyway :D20:09
spotzgerrit hates me!20:10
*** jayc has joined #openstack-ansible20:10
evrardjpyeah, LiftedKilt, don't hesitate to come here20:10
LiftedKiltadmin0: I started looking at openstack beginning of this year - tried fuel and juju, and now I'm here - I feel like ansible is probably the only tool that's going to give me the flexibility to make something this janky actually work20:10
evrardjpspotz ?20:10
spotzI get 503 on login since the server change20:10
LiftedKiltevrardjp admin0: I really appreciate all your guys' input20:10
admin0!gerrit--20:10
openstackadmin0: Error: "gerrit--" is not a valid command.20:10
admin0no karma points here :(20:11
spotzNo reviews until tonight:(20:11
admin0\o/ — party time spotz  ?20:11
spotzhah admin0 - real work time:)20:11
*** fawadkhaliq has quit IRC20:11
*** fawadkhaliq has joined #openstack-ansible20:12
*** Nepoc has quit IRC20:12
admin0LiftedKilt: i have tried all .. but sticking to ansible for the flexibility20:12
admin0spotz: you are not in the UK ?20:12
* admin0 thinks everyone at rackspace is in the UK 20:12
admin0at least people in this channel20:13
spotzNope I'm US with cloudnull, mhayden, prometheanfire, etc20:13
admin0ok20:13
*** Nepoc has joined #openstack-ansible20:13
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible: Isolate Ansible from the deployment host  https://review.openstack.org/30484020:14
*** b3rnard0_away is now known as b3rnard020:14
admin0shouldn’t gerrit also be redundantly deployed on the cloud :D20:14
admin0we must implement what we build20:14
admin0for redundancy :)20:14
cloudnullanyone around that can give this a review https://review.openstack.org/#/c/304385/  ?20:15
cloudnullalso if anyone wants to test something new, I'd appreciate feedback on https://review.openstack.org/#/c/304840/20:16
cloudnull^ -cc automagically jmccrory admin0 -- we've talked about being able to do multiple deploys from a single host and that can help to get it going.20:17
cloudnullthat is w/out having to munge dirs/files about20:17
automagicallycloudnull: will take a look20:18
cloudnullno rush20:18
admin0cloudnull: i have 25 servers ready to be fiddled with ..  anything you want me to test, i can test :)20:18
cloudnullsweet!20:18
cloudnull:)20:18
cloudnullfiddle away :)20:18
admin0i was planning to first do the mitaka install ( without the SSL ) and then run again with the SSL patch and see if it breaks stuff20:19
admin0before moving into other stuff20:19
cloudnullcool. that'd be super useful20:19
admin0i repurpose my env once per day, so i can test things at scale20:20
cloudnullmattt: https://review.openstack.org/#/c/296839/ -cc odyssey4me20:21
cloudnull1.9.5 is busted20:22
*** weezS has quit IRC20:22
cloudnullwhen 1.9.6 comes out we should give that a go however I think 1.9.5 should be avoided.20:22
eil397cloudnull: 304840 . you were talking about this thing " scripts/scripts-library.sh: line 202: tracepath: command not found" ?20:23
cloudnulleil397: whats that ?20:23
eil397cloudnull: https://review.openstack.org/#/c/304840/ failed20:24
eil397gate-openstack-ansible-dsvm-commit, with one of the errors being about tracepath not found20:25
openstackgerritMajor Hayden proposed openstack/openstack-ansible-security: Fix flake8 violation in conf.py  https://review.openstack.org/30481520:25
jmccrorycloudnull: cool, ansible change looks good after quick glance. only thing with deploying multiple environments would be making sure that deployment host is able to reach each container network20:25
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible: Isolate Ansible from the deployment host  https://review.openstack.org/30484020:27
cloudnulleil397: ^ that should fix that oversight20:27
cloudnullafk a min20:27
eil397cloudnull: cool : - ) I've not seen all the changes but it is a great thing to have20:29
spotzcloudnull I'm trying to review:(20:30
spotzI blame mhayden20:30
* mhayden wats20:31
*** pjm6 has joined #openstack-ansible20:31
spotzok even an /etc/hosts entry pointing review.openstack.org at 104.130.246.91 isn't helping me20:32
*** yarkot_ has joined #openstack-ansible20:35
admin0https://review.openstack.org works well from here (Netherlands) .. signed out, signed in20:35
admin0same ip - 104.130.246.9120:36
palendaecloudnull: Is there a spec for that?20:36
spotzIf all else fails change VPNs:)20:36
mhaydeni use Cat-6 cable, that helps20:38
spotzcloudnull got into the editor, you mind if I fix grammar while here?:)20:39
admin0spotz: you are very keen on grammatical correctness :)20:40
spotzIt's why they keep me around20:40
admin0:D20:40
admin0well no complaints there .. my  patches are of a better quality20:41
admin0due to you20:41
* admin0 sends spotz a pizza :D 20:41
spotzThanks;)20:41
cloudnullspotz: ++20:46
cloudnullplease do20:46
cloudnullpalendae:  nope20:47
spotzthanks cloudnull. I would do it more often but I always get in the editor by accident:)20:48
*** weezS has joined #openstack-ansible20:48
openstackgerritAmy Marrich (spotz) proposed openstack/openstack-ansible: Isolate Ansible from the deployment host  https://review.openstack.org/30484020:54
admin0cloudnull:  actually i will start the mitaka install using your patch .. i see no use in doing mitaka and not having SSL in21:00
*** asettle has joined #openstack-ansible21:00
*** phalmos has quit IRC21:02
*** Brew1 has joined #openstack-ansible21:02
*** Brew1 has quit IRC21:03
*** fawadkhaliq has quit IRC21:03
*** Brew1 has joined #openstack-ansible21:03
*** fawadkhaliq has joined #openstack-ansible21:03
*** Brew has quit IRC21:04
matttcloudnull: cool, i figured something there wasn't right :)21:04
*** johnmilton has quit IRC21:05
*** sdake has quit IRC21:05
*** ametts has joined #openstack-ansible21:06
admin0cloudnull: i did a git clone -b stable/mitaka  .. followed by: git fetch https://git.openstack.org/openstack/openstack-ansible/refs/changes/99/277199/21  && git cherry-pick FETCH_HEAD21:08
admin0now can you give me an example gist on how to pass the SSL certs that i have21:08
*** sdake has joined #openstack-ansible21:08
admin0so that i can start this and validate21:09
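For reference, the sequence admin0 describes would look roughly like this as a runnable set of commands; the clone destination path is an assumption, and note that the gerrit fetch takes the repository URL and the change ref as two separate arguments:

    # clone the stable/mitaka branch of openstack-ansible (destination path is an assumption)
    git clone -b stable/mitaka https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
    cd /opt/openstack-ansible
    # fetch patchset 21 of the SSL termination review (change 277199) and apply it on top
    git fetch https://git.openstack.org/openstack/openstack-ansible refs/changes/99/277199/21
    git cherry-pick FETCH_HEAD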
*** rohanp_ has quit IRC21:10
*** skamithi has quit IRC21:10
*** Mudpuppy has quit IRC21:12
*** gregfaust has joined #openstack-ansible21:14
palendae^ such an example should probably go in docs :)21:14
admin0cloudnull: for liberty, to have horizon and keystone in ssl, i had these in the variables21:15
admin0https://gist.github.com/a1git/3c2b3b3faa3bd631d5c6d936f77cafa221:15
admin0so i have the patch now .. what should go in the variables ?21:16
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible: Isolate Ansible from the deployment host  https://review.openstack.org/30484021:17
cloudnull^ palendae updated based on your initial review . thanks for that btw  :)21:18
*** daledude has quit IRC21:18
cloudnulladmin0: you have a cert you want to push out ?21:18
*** fawadkhaliq has quit IRC21:18
*** fawadkhaliq has joined #openstack-ansible21:18
*** michaelgugino has quit IRC21:20
*** galstrom is now known as galstrom_zzz21:21
*** asettle has quit IRC21:22
admin0i have a real cert21:25
*** skamithi has joined #openstack-ansible21:28
*** elgertam has joined #openstack-ansible21:29
admin0cloudnull: could not find anywhere how to use this patch :D21:31
admin0https://review.openstack.org/#/c/277199/21/releasenotes/notes/haproxy_ssl_terminiation-cdf0092a5bfa34b5.yaml — was hoping it would be there21:31
admin0especially on how to get/set the user_variables file21:31
mrdaMorning OSA21:32
admin0morning mrda21:33
palendae'lo mrda21:34
cloudnullah admin0 -- http://docs.openstack.org/developer/openstack-ansible/install-guide/configure-haproxy.html?highlight=haproxy#securing-haproxy-communication-with-ssl-certificates21:36
cloudnullnothing there has changed21:36
cloudnullour haproxy role has supported ssl for some time21:36
cloudnullits the openstack services that did not21:36
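Concretely, the user-provided certificate variables described in that doc look roughly like this in /etc/openstack_deploy/user_variables.yml; the paths are placeholders and the exact variable names should be checked against the haproxy docs for the release in use:

    haproxy_user_ssl_cert: /etc/openstack_deploy/ssl/example.com.crt
    haproxy_user_ssl_key: /etc/openstack_deploy/ssl/example.com.key
    haproxy_user_ssl_ca_cert: /etc/openstack_deploy/ssl/ExampleCA.crt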
admin0so cloudnull , i should just keep using this : https://gist.github.com/a1git/3c2b3b3faa3bd631d5c6d936f77cafa2  and it should work ?21:36
admin0and i can set the internal and admin also to true i guess21:37
mrdacloudnull: Just replied to https://review.openstack.org/#/c/301712 and changed my vote.21:37
cloudnulladmin0: you should only need https://gist.github.com/a1git/3c2b3b3faa3bd631d5c6d936f77cafa2#file-gistfile1-txt-L15-L1721:37
admin0just those 3 lines and nothing else :D ?21:37
admin0\o/21:37
admin0will give it a try21:37
cloudnulli do believe so21:37
cloudnullmrda: cool21:37
* cloudnull looking now21:37
mrdacloudnull: you're welcome :)21:38
cloudnullmrda: do you really not have the username option in the keystone_auth section? and its working?21:43
cloudnullkeystone auth should throw an exception because it cant validate tokens21:43
cloudnullhttps://bugs.launchpad.net/ironic/+bug/141834121:43
openstackLaunchpad bug 1418341 in Ironic "keystone_authtoken configuration error in ironic.conf from devstack" [Medium,In progress] - Assigned to Pavlo Shchelokovskyy (pshchelo)21:43
mrdacloudnull: uh-huh21:43
cloudnullthat would seem suspect to me.21:44
cloudnullbut if thats the case ill pull it21:44
mrdaor, if you think it's required, and it's not hurting anything, leave it21:45
cloudnullI had seen this in my logs, which is why i put it: "WARNING ironic.conductor.manager [-] Error in deploy of node 08a45b46-f123-4f19-a10d-faff54c8342b: Could not authorize in Keystone: A username and password or token is required." but maybe i have something else going on21:45
cloudnullill pull it for now and see how it goes.21:46
mrdacloudnull: See Yuki Nishiwaki's comment on that bug21:46
cloudnulli see it however i have my logs telling me otherwise.21:46
cloudnullill redeploy without it21:47
cloudnulland see what happens21:47
mrdacloudnull: See Dmitry's comment.21:47
mrda"So, correct ones according to http://docs.openstack.org/developer/keystonemiddleware/middlewarearchitecture.html and https://github.com/openstack/keystonemiddleware/blob/b562b04ee5db309268716e0e1b8270f30bdf1a76/keystonemiddleware/auth_token.py#L645-L661 are ones with admin_ prefix"21:47
*** briancubed has quit IRC21:47
mrdacloudnull: ^^^21:47
cloudnullright but thats old21:48
cloudnullhttps://github.com/openstack/keystonemiddleware/tree/master/keystonemiddleware21:48
cloudnulldoesnt exist in master.21:49
cloudnullso maybe i was ahead21:49
cloudnull?21:49
*** automagically has quit IRC21:49
* mrda thinks keystone_auth is a bit of a mess21:49
cloudnullsee https://github.com/openstack/keystonemiddleware/blob/6e58f8620ae60eb4f26984258d15a9823345c310/keystonemiddleware/tests/unit/auth_token/test_auth_token_middleware.py21:50
cloudnullrather https://github.com/openstack/keystonemiddleware/blob/6e58f8620ae60eb4f26984258d15a9823345c310/keystonemiddleware/tests/unit/auth_token/test_auth_token_middleware.py#L558-L56321:50
mrdacloudnull: since you are more likely to be within physical reach of a keystone core, feel free to poke him with a wet fish, but make sure you do it ironically.21:50
cloudnulland https://github.com/openstack/keystonemiddleware/blob/6e58f8620ae60eb4f26984258d15a9823345c310/keystonemiddleware/tests/unit/auth_token/test_auth_token_middleware.py#L2287-L229321:50
cloudnullim not, working from home these days21:51
*** asettle has joined #openstack-ansible21:51
mrdacloudnull: who would be crazy enough to work from home?21:51
*** aludwar has quit IRC21:52
cloudnullhahaa21:53
*** aludwar has joined #openstack-ansible21:53
mrdaok, there's enough confusion here to leave the admin_ and non-admin version of these vars in place.21:53
*** sdake_ has joined #openstack-ansible21:53
mrdaSo cloudnull, I'd suggest leaving the review as is.21:53
cloudnull"auth_plugin: This is the plugin used for authentication, such as password and token. For example, if the auth_plugin configuration option is set to password then set username, password, project_name, project_domain_name, user_domain_name and auth_url accordingly."21:53
cloudnulldolphm lbragstad dstanek ^21:53
cloudnullis that still true?21:54
*** Brew1 is now known as Brew21:54
*** sdake has quit IRC21:54
cloudnullyou guys mind having a look at https://review.openstack.org/#/c/301712/21/templates/ironic.conf.j2 specifically the keystone_auth section21:54
*** asettle has quit IRC21:55
*** asettle has joined #openstack-ansible21:55
*** Brew1 has joined #openstack-ansible21:57
*** Brew has quit IRC21:57
*** Brew1 is now known as Brew21:57
lbragstadcloudnull I think that looks right21:57
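For readers following the thread, a [keystone_authtoken] section using the password plugin would look roughly like the sketch below. It is built only from the option names quoted from the keystonemiddleware docs earlier, not from the actual contents of the review, and all values are placeholders:

    [keystone_authtoken]
    auth_plugin = password
    auth_url = https://203.0.113.10:35357
    username = ironic
    password = SuperSecretPassword
    project_name = service
    project_domain_name = default
    user_domain_name = default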
cloudnullthere's been some back and forth on whether we need the admin_.* vars or not.21:58
cloudnullbeing that keystone is partially your fault i figured id ask21:58
cloudnull:p21:58
*** elgertam has quit IRC21:59
*** woodard has quit IRC22:00
mrdathanks lbragstad for the clarification!22:01
*** KLevenstein has joined #openstack-ansible22:01
*** woodard has joined #openstack-ansible22:01
admin0cloudnull:  this https://gist.github.com/a1git/86a05ba025680c0aa69d2d7ed6ce54bd is what I have now in my /etc/openstack_deploy/user_variables.yml  .. i did a openstack-ansible haproxy-install.yml ..  i checked inside the haproxy setup .. I do not see SSL anywhere :D22:03
admin0so i think i am failing to enable one or more important variables22:03
admin0help please22:03
*** sdake has joined #openstack-ansible22:04
openstackgerritAmy Marrich (spotz) proposed openstack/openstack-ansible: Isolate Ansible from the deployment host  https://review.openstack.org/30484022:04
*** sdake_ has quit IRC22:06
cloudnulladmin0: yes, my fault, you also need `haproxy_ssl: true`22:07
*** busterswt has quit IRC22:08
*** woodard has quit IRC22:10
admin0cloudnull: did that, ran the playbooks again .. except horizon, no other files even mention crt22:12
cloudnullall of the ssl is terminated at the hap. is that working ?22:13
admin0how to check ?22:14
admin0in haproxy configs, i see only horizon doing ssl22:14
admin0none others are22:14
openstackgerritAdam Reznechek proposed openstack/openstack-ansible-specs: PowerVM Virt Driver Support  https://review.openstack.org/30294122:14
admin0updated config: https://gist.github.com/a1git/86a05ba025680c0aa69d2d7ed6ce54bd22:15
*** fawadkhaliq has quit IRC22:16
cloudnullwhen you rerun the haproxy play it should enable ssl termination at haproxy22:16
*** fawadkhaliq has joined #openstack-ansible22:16
cloudnullcan you curl https://$VIP:5000 ?22:16
cloudnullso something similar ?22:16
cloudnull*or22:16
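A quick way to run that check from the deployment host, roughly as follows; the VIP address is a placeholder, and -k only skips certificate verification for self-signed certs:

    # expect an HTTP response from keystone if SSL termination is active on the VIP
    curl -k https://203.0.113.10:5000/
    # comparing against plain http shows which scheme haproxy is actually serving
    curl http://203.0.113.10:5000/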
admin0without SSL = response .. with SSL = no response22:17
admin0cloudnull: https://gist.github.com/anonymous/95a7a353a5b1a337c8739694c98fc83422:19
admin0the whole haproxy ansible run22:19
cloudnull40422:21
admin0cloudnull: https://gist.github.com/a1git/dc339cf45ae40ffd43b40543ff08677a — updated one with the relevant config and run22:21
*** sigmavirus24 is now known as sigmavirus24_awa22:23
cloudnulladmin0: can you restart hap -- ansible haproxy_hosts -m shell -a 'service haproxy restart'22:23
cloudnullmaybe its not being triggered ?22:23
cloudnullidk22:23
cloudnullsadly i have to run in a couple of mins22:24
*** sigmavirus24_awa is now known as sigmavirus2422:24
logan-that play output shows haproxy_ssl false on all endpoints22:24
admin0cloudnull: https://gist.github.com/a1git/6cdf0129dfa8b8553bc67f47e380d71d  — that is what I get22:24
admin0on the restart command22:24
*** Brew has quit IRC22:25
*** sdake has quit IRC22:25
admin0should True be in caps or inside ‘ ‘ or “ “ ?22:25
admin0in haproxy_ssl: true22:26
admin0logan-: do you know what i need to do to fix it ?22:27
*** spotz is now known as spotz_zzz22:27
logan-i am looking at the role atm, but i think you might have to override the entire haproxy services dict like that gist I sent you a week or two back22:27
logan-looks like haproxy_ssl is pulled into the service config only for horizon, keystone, and nova console endpoints22:30
logan-but even then, for some reason your play is passing haproxy_ssl: false for those endpoints22:31
admin0logan- i did the following:  git clone -b stable/mitaka  .. followed by: git fetch https://git.openstack.org/openstack/openstack-ansible/refs/changes/99/277199/21  && git cherry-pick FETCH_HEAD  .. and thus run the playbook with that config22:33
admin0also i get this strange restart message: https://gist.github.com/a1git/6cdf0129dfa8b8553bc67f47e380d71d22:34
*** sigmavirus24 is now known as sigmavirus24_awa22:34
logan-yeah, it doesn't look like haproxy_ssl is being picked up from the config though. try openstack-ansible -e 'haproxy_ssl=true' haproxy-install.yml22:35
admin0ok22:35
*** elgertam has joined #openstack-ansible22:36
*** ametts has quit IRC22:37
*** elgertam has quit IRC22:41
stevelleI assume this is something others have seen as well on master: http://paste.openstack.org/show/6wF4OOnb3dRfUupsduGk/ would love to get a confirmation22:41
admin0logan-:  https://gist.github.com/a1git/cb178caea150e49e7e31ba848cd7f39e22:41
admin0still the same output ..22:41
logan-horizon shows haproxy_ssl true hmm22:42
logan-oh, because that is the new default with that patch anyway.22:43
admin0logan-:  https://gist.github.com/a1git/cb178caea150e49e7e31ba848cd7f39e  — i updated that with the first 2 lines of what cloudnull said i need to do22:44
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-ironic: Update ironic.conf for swift and keystone compat  https://review.openstack.org/30171222:44
logan-yeah22:44
cloudnullsorry , have a hard stop today .22:44
cloudnullmrda: i updated that review to match your config, we'll see how it goes.22:44
cloudnullother wise, cheers everyone22:45
admin0i need to get off as well ..  12:45 AM .. need to sleep .. hopefully cloudnull  or logan- or someone else can read this https://gist.github.com/a1git/cb178caea150e49e7e31ba848cd7f39e  from their chat log and try to see where it goes wrong22:45
openstackgerritSteve Lewis (stevelle) proposed openstack/openstack-ansible-os_gnocchi: WIP Initial convergence testing  https://review.openstack.org/30488722:45
admin0thanks all.. see ya tomorrow22:46
*** admin0 has quit IRC22:47
logan-set keystone_service_publicuri_proto to https22:47
logan-https://github.com/openstack/openstack-ansible/blob/stable/mitaka/playbooks/vars/configs/haproxy_config.yml#L9222:47
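Per logan-'s pointer, the missing override would sit in /etc/openstack_deploy/user_variables.yml alongside the haproxy_ssl and certificate settings discussed earlier. A minimal sketch, with the variable name taken from the linked haproxy_config.yml and everything else assumed unchanged:

    # build the keystone public endpoint as https so haproxy terminates SSL for it
    keystone_service_publicuri_proto: https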
*** galstrom_zzz is now known as galstrom22:51
*** b3rnard0 is now known as b3rnard0_away22:54
*** sanjay__u has quit IRC22:55
*** elgertam has joined #openstack-ansible22:57
*** BjoernT has quit IRC23:03
*** retreved has quit IRC23:03
*** gregfaust has quit IRC23:03
*** jamielennox is now known as jamielennox|away23:09
*** galstrom is now known as galstrom_zzz23:09
*** elgertam has quit IRC23:11
*** jamielennox|away is now known as jamielennox23:13
*** fawadkhaliq has quit IRC23:13
*** fawadkhaliq has joined #openstack-ansible23:14
*** elgertam has joined #openstack-ansible23:15
*** sdake has joined #openstack-ansible23:18
*** klamath has quit IRC23:19
*** KLevenstein has quit IRC23:19
*** weezS has quit IRC23:29
*** pjm6 has quit IRC23:46
*** asettle has quit IRC23:47
*** jayc has quit IRC23:48
*** sdake has quit IRC23:50
*** busterswt has joined #openstack-ansible23:51
*** eil397 has quit IRC23:52
*** yarkot_ has quit IRC23:55
*** jauyeung has joined #openstack-ansible23:58
*** skamithi has left #openstack-ansible23:58
*** elopez has quit IRC23:59
*** jauyeung has quit IRC23:59
