Tuesday, 2015-07-28

*** markvoelker has joined #openstack-ansible00:09
*** yapeng has quit IRC00:23
*** jmckind has joined #openstack-ansible00:27
*** openstackgerrit has quit IRC00:46
*** openstackgerrit has joined #openstack-ansible00:47
*** andymccr has quit IRC00:54
*** annashen has joined #openstack-ansible00:54
*** andymccr has joined #openstack-ansible00:56
*** annashen has quit IRC00:59
*** jmckind has quit IRC01:05
*** jmckind has joined #openstack-ansible01:07
*** jmckind has quit IRC01:09
*** annashen has joined #openstack-ansible01:55
*** jlvillal has quit IRC01:58
*** cloudnull is now known as cloudnull_afk01:58
*** jlvillal has joined #openstack-ansible01:59
*** annashen has quit IRC02:00
*** jwagner_away has quit IRC02:07
*** b3rnard0 has quit IRC02:07
*** dolphm has quit IRC02:07
*** d34dh0r53 has quit IRC02:07
*** sigmavirus24 has quit IRC02:07
*** bogeyon18 has quit IRC02:07
*** eglute has quit IRC02:07
*** cloudnull_afk has quit IRC02:07
*** eglute has joined #openstack-ansible02:09
*** dolphm has joined #openstack-ansible02:09
*** dolphm has quit IRC02:09
*** eglute has quit IRC02:09
*** dolphm has joined #openstack-ansible02:10
*** eglute has joined #openstack-ansible02:10
*** cloudnull_afk has joined #openstack-ansible02:13
openstackgerritMiguel Grinberg proposed stackforge/os-ansible-deployment: Keystone Federation Service Provider Configuration  https://review.openstack.org/19439502:15
*** d34dh0r53 has joined #openstack-ansible02:34
jwitkoHi, can anyone help me please?  During the os_glance install of the setup-openstack.yml playbook I am erroring during the install pip packages.  All of the pip packages install except glance.  When I go to the glance container to execute “pip install glance” and see why its failing, it starts to download and install the packages but eventually errors on “No matching distribution02:42
jwitkofound for urllib3<1.11,>=1.8.3 (from oslo.vmware<0.12.0,>=0.11.1->glance)”02:42
jwitkoI can even install it manually,   pip list | grep urllib  :  urllib3 (1.11),  but I still receive the same error when attempting to install glance from pip02:44
palendaejwitko: Might want to look in your ~/.pip/links.d (I think that's it) directory and see where it's trying to pull from02:44
palendaeIt might be that it's not being pulled into the repo server02:45
jwitkopalendae, inside my links.d/openstack_release.link,  there is only one line02:45
jwitkohttp://<vip_addr>:8181/os-releases/11.0.4/02:45
jwitkowhich I can visit manually, and see the list of packages02:46
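palendae's suggestion can be checked directly against the repo server. A minimal sketch (commands are echoed rather than executed so they can be reviewed first; `<vip_addr>` is the same placeholder used in the log):

```shell
# Echo instead of executing; drop the run wrapper to do it for real.
run() { echo "+ $*"; }

# list everything the repo server publishes for this release
run curl -s "http://<vip_addr>:8181/os-releases/11.0.4/"

# then grep the listing for the package pip claims it cannot find
run 'curl -s http://<vip_addr>:8181/os-releases/11.0.4/ | grep -i urllib3'
```

If the package pip is rejecting never shows up in that listing, the problem is on the repo-build side rather than in the client container.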
*** daneyon_ has quit IRC02:47
palendaeEvidently urllib3 was capped up until recently https://review.openstack.org/#/c/205155/2/requirements.txt02:50
palendaeThough I'm not familiar with why02:50
*** annashen_ has joined #openstack-ansible02:56
*** annashen_ has quit IRC03:01
jwitkopalendae, any idea how i can work around this?03:02
jwitkoi’m using 11.0.4 of the os-ansible deployment repo.  it does not have urllib3 in the requirements file03:08
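The error message here is easy to misread: `urllib3<1.11,>=1.8.3` is a half-open range, so a manually installed 1.11 can never satisfy it, which is why jwitko's `pip list` showing `urllib3 (1.11)` doesn't help. A rough re-implementation of the comparison (GNU `sort -V` assumed; not how pip actually evaluates specifiers):

```shell
# pip's spec "urllib3<1.11,>=1.8.3" means the half-open range [1.8.3, 1.11):
# 1.11 itself is excluded, so installing exactly 1.11 still fails the match.
in_range() {
    v=$1; lo=1.8.3; hi=1.11
    # lo must sort first against v (i.e. v >= lo) ...
    [ "$(printf '%s\n' "$lo" "$v" | sort -V | head -n1)" = "$lo" ] &&
    # ... and v must be strictly below hi (v != hi and v sorts first)
    [ "$v" != "$hi" ] &&
    [ "$(printf '%s\n' "$v" "$hi" | sort -V | head -n1)" = "$v" ]
}

in_range 1.10.4 && echo "1.10.4: accepted"
in_range 1.11   || echo "1.11: rejected (upper bound is exclusive)"
```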
*** annashen has joined #openstack-ansible03:57
*** annashen has quit IRC04:01
*** markvoelker has quit IRC04:04
*** daneyon has joined #openstack-ansible04:09
*** daneyon has quit IRC04:22
*** annashen has joined #openstack-ansible04:58
*** annashen has quit IRC05:02
*** markvoelker has joined #openstack-ansible05:05
*** markvoelker has quit IRC05:10
*** yapeng has joined #openstack-ansible05:53
*** annashen has joined #openstack-ansible05:59
*** javeriak has joined #openstack-ansible06:02
*** annashen has quit IRC06:03
*** javeriak has quit IRC06:07
*** javeriak has joined #openstack-ansible06:11
*** yapeng has quit IRC06:13
*** javeriak has quit IRC06:16
*** javeriak has joined #openstack-ansible06:17
*** javeriak_ has joined #openstack-ansible06:23
*** javeriak_ has quit IRC06:25
*** javeriak has quit IRC06:26
*** javeriak has joined #openstack-ansible06:26
*** javeriak has quit IRC06:26
-openstackstatus- NOTICE: zuul is stuck and about to undergo an emergency restart, please be patient as job results may take a long time06:44
*** ChanServ changes topic to "zuul is stuck and about to undergo an emergency restart, please be patient as job results may take a long time"06:44
*** annashen has joined #openstack-ansible07:00
*** annashen has quit IRC07:05
*** markvoelker has joined #openstack-ansible07:06
*** sdake has joined #openstack-ansible07:07
openstackgerritSerge van Ginderachter proposed stackforge/os-ansible-deployment: Ceph/RBD support  https://review.openstack.org/18195707:08
svgmattt, d34dh0r53, git-harry ^^07:09
svgThis is a WIP update, contains my updates to latest comments on rev 27, but still has some #TODO's for which I asked feedback to leseb07:09
*** markvoelker has quit IRC07:11
matttsvg: cool, i'll have a look at this this morning!  thank you for updating07:24
svgnp, just a matter of squashing the WIP and pushing the review :)07:25
* mattt throws svg a bacon sandwich07:32
svgthx, but a bit early on the day for beacon :)07:33
matttWUT07:35
matttGET OUT07:35
matttGEEEET OUT07:35
* mattt hopes bacon doesn't mean something else in belgium07:36
svgI actually don't get most of those bacon refs :)07:45
*** javeriak has joined #openstack-ansible07:46
svgno clue what all the fuss about that is, to me it's something I might occasionally fry on a bbq07:48
*** sdake has quit IRC07:57
matttsvg: i largely agree w/ that sentiment07:59
*** ChanServ changes topic to "Weekly Meetings: https://wiki.openstack.org/wiki/Meetings/openstack-ansible"08:00
-openstackstatus- NOTICE: zuul has been restarted and queues restored. It may take some time to work through the backlog.08:00
*** annashen has joined #openstack-ansible08:01
*** annashen has quit IRC08:05
*** evrardjp has joined #openstack-ansible08:16
*** annashen has joined #openstack-ansible09:01
*** annashen has quit IRC09:06
*** markvoelker has joined #openstack-ansible09:07
*** markvoelker has quit IRC09:11
*** javeriak has quit IRC09:16
*** javeriak has joined #openstack-ansible09:20
*** javeriak has quit IRC09:33
evrardjpgood morning everyone09:47
odyssey4meo/ evrardjp09:49
odyssey4meevrardjp was it you who I discussed SSL brokenness with some time ago?09:51
*** annashen has joined #openstack-ansible10:02
*** annashen has quit IRC10:07
*** javeriak has joined #openstack-ansible10:07
evrardjpI discussed about SSL on openstack, how it could be done, but didn't complain on some specific things10:46
evrardjpI had just a different view10:46
*** annashen has joined #openstack-ansible11:03
*** annashen has quit IRC11:07
*** markvoelker has joined #openstack-ansible11:08
odyssey4meevrardjp ah, I thought you might just like to know that we fixed a few SSL things along the way11:12
odyssey4meHorizon SSL cert/key management: https://review.openstack.org/20297711:12
*** markvoelker has quit IRC11:13
odyssey4meSSL support for haproxy: https://review.openstack.org/19895711:13
odyssey4meKeystone SSL cert/key management (still in review): https://review.openstack.org/19447411:13
*** javeriak has quit IRC11:43
*** javeriak_ has joined #openstack-ansible11:43
*** annashen has joined #openstack-ansible12:04
*** markvoelker has joined #openstack-ansible12:09
*** annashen has quit IRC12:09
*** javeriak has joined #openstack-ansible12:09
*** javeriak_ has quit IRC12:09
*** markvoelker has quit IRC12:14
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Keystone Federation Service Provider Configuration  https://review.openstack.org/19439512:18
jwitkoHi, can anyone help me please?  During the os_glance install of the setup-openstack.yml playbook I am erroring during the install pip packages.  All of the pip packages install except glance.  When I go to the glance container to execute “pip install glance” and see why its failing, it starts to download and install the packages but eventually errors on “No matching distribution12:24
jwitkofound for urllib3<1.11,>=1.8.3 (from oslo.vmware<0.12.0,>=0.11.1->glance)”12:24
jwitkoI can even install urllib3 v1.11 manually,   pip list | grep urllib  :  urllib3 (1.11),  but I still receive the same error when attempting to install glance from pip.12:25
odyssey4mejwitko so that issue happened and we plugged the hole with https://review.openstack.org/#/c/204365/ , then removed it once it was fixed upstream12:27
odyssey4mejwitko I'm confused - you're doing your testing with an AIO aren't you?12:27
jwitkoodyssey4me, no12:27
jwitkojust trying to install openstack12:27
odyssey4meso you have multiple servers and are deploying on them?12:28
jwitkoyes, 3 controllers, 1 logging, 5 compute12:28
jwitkoi was actually able to install without issue last week, but i had to tear it down because we wanted to use different hardware12:28
*** markvoelker has joined #openstack-ansible12:29
jwitkoodyssey4me, the confusing part to me about what you’re showing is I don’t know how to implement this work-around ?12:29
jwitkodo I just add this to the requirements.txt file in the top level of the repo?12:29
odyssey4mewell, yes - you simply add that line and then rebuild your repo12:30
jwitkoah, rebuild the repo, ok thats what i was missing12:30
odyssey4meyou'll have to remove the installed pip packages from the glance containers as well12:30
jwitkoodyssey4me, correct me if I’m wrong here but to rebuild the repo I only need to execute  “repo-server.yml” and “repo-build.yml”  ?12:31
odyssey4mewe'll be tagging the next release in the next few days which will contain the updated kilo which contains this fix12:31
odyssey4mejwitko just repo-build12:31
odyssey4merepo-server sets the server up, repo-build builds the actual repo12:31
jwitkooh ok cool.  and when you say remove the pip packages,  you’re just referring ot the ones I installed manually right ?12:31
jwitkoor should I be destroying and rebuilding the containers12:32
odyssey4menot entirely sure which packages - at least urllib/glance/glanceclient12:32
odyssey4meotherwise yeah, you can lxc-stop & lxc-destroy the glance containers, then execute the host setup play to rebuild them - then go back to the glance install play12:33
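Pulling odyssey4me's steps together, the full workaround sequence looks roughly like this. This is a sketch only: the exact pin line comes from review 204365 and is left as a placeholder, and the `setup-hosts.yml`/`os-glance-install.yml` play names and container name are assumptions for an 11.x checkout (only `repo-build.yml`/`repo-server.yml` are named in the log). Commands are echoed, not executed:

```shell
run() { echo "+ $*"; }   # echo instead of executing; remove when ready

# 1. add the urllib3 pin from review 204365 to the top-level
#    requirements.txt of the os-ansible-deployment checkout
run "echo '<pin line from review 204365>' >> requirements.txt"

# 2. rebuild the repo; repo-server.yml only sets the server up,
#    so repo-build.yml alone is enough here
run openstack-ansible repo-build.yml

# 3. remove the half-installed pip packages, or destroy the glance
#    containers entirely (container name is a placeholder)
run lxc-stop -n "<glance_container>"
run lxc-destroy -n "<glance_container>"

# 4. re-run host setup to recreate the containers, then the glance play
run openstack-ansible setup-hosts.yml
run openstack-ansible os-glance-install.yml
```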
*** jwagner has joined #openstack-ansible12:34
*** jmckind has joined #openstack-ansible12:36
jwitkocd play12:44
jwitkowrong window :)12:44
*** tlian has joined #openstack-ansible13:01
*** annashen has joined #openstack-ansible13:05
*** KLevenstein has joined #openstack-ansible13:08
*** annashen has quit IRC13:09
*** jmckind has quit IRC13:11
*** TheIntern has joined #openstack-ansible13:13
*** prad has joined #openstack-ansible13:15
*** javeriak has quit IRC13:15
*** Mudpuppy has joined #openstack-ansible13:44
*** Mudpuppy has quit IRC13:44
*** Mudpuppy has joined #openstack-ansible13:45
*** yapeng has joined #openstack-ansible13:52
*** bogeyon18 has joined #openstack-ansible13:52
*** b3rnard0 has joined #openstack-ansible13:54
*** fawadkhaliq has joined #openstack-ansible13:55
*** richoid has quit IRC13:56
*** sigmavirus24_awa has joined #openstack-ansible13:57
*** spotz_zzz is now known as spotz13:57
*** sigmavirus24_awa is now known as sigmavirus2413:57
jwitkoodyssey4me, that worked thank you13:59
odyssey4mejwitko excellent :)14:00
*** annashen has joined #openstack-ansible14:06
*** annashen has quit IRC14:11
*** richoid has joined #openstack-ansible14:14
*** jmckind has joined #openstack-ansible14:14
odyssey4mesigmavirus24 I've confirmed that Glance only fails against a Keystone v3 endpoint when using Swift as a store. When using a file store it's fine.14:21
sigmavirus24oh14:21
sigmavirus24I bet I know why14:21
odyssey4meIt seems that this is a known issue: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067381.html14:21
sigmavirus24odyssey4me: Let me test out https://review.openstack.org/#/c/193422/ to see if it fixes it14:22
odyssey4meHowever, in digging a little more into it it seems to me that glance never gets to interact with swift (unless my swift proxy is misbehaving) because it fails in the token validation in the middleware14:22
sigmavirus24odyssey4me: yeah, I'm guessing it's related to14:23
sigmavirus24*related to the review I just pasted14:23
odyssey4mesigmavirus24 yep, that looks like a sensible shortcut fix14:25
odyssey4mejamie actually only went to bed an hour or so ago14:25
sigmavirus24odyssey4me: yeah, I've been meaning to find a good way to review that and this seems a perfect case14:26
odyssey4meI'm trying to see if there's a workaround - something along the lines of having the service still use v2, but allowing the clients to use v3.14:27
odyssey4mewith the caveat that the project & user has to be in the default domain14:27
*** javeriak has joined #openstack-ansible14:28
*** fawadk has joined #openstack-ansible14:30
*** fawadkhaliq has quit IRC14:30
*** yapeng has quit IRC14:30
sigmavirus24odyssey4me: I'm not sure if that will be the best course of action14:30
*** fawadkhaliq has joined #openstack-ansible14:33
*** fawadk has quit IRC14:33
*** jmckind has quit IRC14:45
*** jmckind has joined #openstack-ansible14:47
*** daneyon has joined #openstack-ansible14:55
*** sdake has joined #openstack-ansible14:55
openstackgerritDarren Birkett proposed stackforge/os-ansible-deployment: set correct swift dispersion tenant  https://review.openstack.org/20657214:57
*** daneyon has quit IRC15:00
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Enable Horizon to consume a Keystone v3 API endpoint  https://review.openstack.org/20657515:02
*** javeriak has quit IRC15:02
*** ntpttr2 has quit IRC15:04
*** ntpttr2 has joined #openstack-ansible15:05
*** annashen has joined #openstack-ansible15:06
*** daneyon has joined #openstack-ansible15:11
*** annashen has quit IRC15:11
*** TheIntern has quit IRC15:22
*** TheIntern has joined #openstack-ansible15:25
*** alop has joined #openstack-ansible15:27
* sigmavirus24 waves to bgmccollum 15:31
bgmccollumsigmavirus24 sups15:31
bgmccollumdoccing the issue...15:31
bgmccollumsigmavirus24: https://github.com/rcbops/rpc-openstack/issues/29415:33
*** sdake has quit IRC15:35
sigmavirus24bgmccollum: cool, I might DM you for the creds to the test box you apparently have up15:35
bgmccollumsigmavirus24: sure thing15:36
*** jwagner is now known as jwagner_away15:48
*** javeriak has joined #openstack-ansible15:50
*** logan2 has quit IRC15:55
*** yaya has joined #openstack-ansible15:59
*** b3rnard0 is now known as b3rnard0_away15:59
*** erikwilson_ has joined #openstack-ansible16:00
odyssey4mehi everyone: cloudnull, mattt, andymccr, d34dh0r53, hughsaunders, b3rnard0, palendae, Sam-I-Am, odyssey4me, serverascode, rromans, mancdaz, dolphm, _shaps_, BjoernT, claco, echiu, dstanek, jwagner, ayoung16:01
odyssey4mebug triage meeting for those who wish to attend16:01
Apsuo/16:02
stevelleo/16:02
serverascodeo/16:02
odyssey4meSo it seems that we have quite a list to work through, so let's roll.16:03
odyssey4meFirst one: https://bugs.launchpad.net/openstack-ansible/+bug/147817816:03
openstackLaunchpad bug 1478178 in openstack-ansible "MongoDB can't connect to server on AIO installation" [Undecided,New]16:03
odyssey4methis is a duplicate I think16:03
palendaeodyssey4me: I think you filed a fix for those dev-stack.rst instructions16:04
odyssey4mepalendae yeah, I did: https://review.openstack.org/20601616:05
odyssey4mebut that's not related unless I add a bit about disabling ceilometer16:07
palendaeDoes run-playbooks install ceilometer now?16:07
openstackgerritMerged stackforge/os-ansible-deployment: set correct swift dispersion tenant  https://review.openstack.org/20657216:07
odyssey4mewhere is the review for patching master & kilo to make mongo wait until it's up again16:07
*** annashen has joined #openstack-ansible16:07
palendaeI thought that was just in the gating16:07
odyssey4mepalendae yeah, I'm in favour of making it default to not deploying, except in the gate as it requires an existing mongodb setup for multi-node environments16:08
odyssey4mefor the AIO it'll set itself up16:08
palendaeOh, it's in bootstrap16:08
palendaeYeah, IMO it should be on gate jobs only16:09
palendaeChange for making mongo wait in master https://review.openstack.org/#/c/200252/16:09
odyssey4meok, so this was resolved unless the reporter has found something different16:10
*** annashen has quit IRC16:12
odyssey4meok, updated the bug and commented16:12
odyssey4menext16:13
odyssey4methanks palendae for finding that review16:13
odyssey4menext: https://bugs.launchpad.net/openstack-ansible/+bug/147811016:13
openstackLaunchpad bug 1478110 in openstack-ansible trunk " Change to set the container network MTU" [Undecided,New]16:13
odyssey4meso this looks to be auto-created based on the DocImpact tag16:13
odyssey4meI wasn't expecting this. :/ Any thoughts on what to do with these bugs?16:14
palendaeSam-I-Am, KLevenstein ^16:15
KLevensteinSam-I-Am is in an airplane right now. looking.16:15
odyssey4meKLevenstein effectively the DocImpact tags are creating another bug specifically for documentation.16:16
KLevensteinright16:16
odyssey4meboth bugs in the same project, which isn't great - we seem to have quite a few of these16:16
stevellethat's one way to make sure we don't just throw work over the wall I guess16:17
KLevensteinthis can be closed.16:17
KLevensteinthe only fix I can see it needing is additional annotation describing the container_mtu option, and that was included in https://review.openstack.org/#/c/204796/ anyway16:18
palendaeWe should probably dig into the infra project and see if we can auto-add people to reviews/bugs instead of making a new one each time16:18
KLevensteinit would be nice if instead of creating new bugs, docimpact would just assign the rpcdocs launchpad group16:19
palendaeYeah16:19
palendaeI'm not sure about assign, since it might 'steal' it away from the person working on it, but yeah, fewer dupe bugs16:20
odyssey4meok, we're going to have to figure out how to make this work better - but now's not really the time16:20
*** TheIntern has quit IRC16:20
palendaeRight16:20
odyssey4mefor now, shall I just assign the docimpact bugs to rpcdocs and move along?16:20
KLevensteinyes16:21
odyssey4meok, great - let me find a non doc bug :)16:21
jwitkoHey Guys, glance issue happening after what looked to be successful osad deploy.  Adding an images just “queues” and never moves any further.  The glance logs don’t really show any errors on it.16:21
odyssey4menext: https://bugs.launchpad.net/openstack-ansible/+bug/147774716:21
openstackLaunchpad bug 1477747 in openstack-ansible trunk "nova_console_endpoint not used correctly in Kilo" [Undecided,New]16:21
*** sdake has joined #openstack-ansible16:21
odyssey4mejwitko we're in the middle of a bug triage meeting - would you mind waiting 30 mins or so?16:22
jwitkooh sorry my apologies.16:22
palendaeodyssey4me: I'll update that one - looks like we had a variable name change.16:22
odyssey4mejwitko no problem - we'll be with you shortly :)16:23
odyssey4mepalendae ok, moving on then16:23
*** logan2 has joined #openstack-ansible16:24
odyssey4mehttps://bugs.launchpad.net/openstack-ansible/+bug/147543616:24
openstackLaunchpad bug 1475436 in openstack-ansible trunk "VLAN range issue in ml2_conf.ini" [Undecided,New] - Assigned to Evan Callicoat (diopter)16:24
odyssey4mehmm, this is not going to be easy to try and fix for juno - but it seems from Apsu's message that this is not an issue in kilo/master16:25
odyssey4meAny thoughts on how we should handle this?16:26
stevellePriority for OSAD doesn't need to dictate that it not get worked on. Rate as low?16:27
stevellebut Juno is in support upstream so maybe higher?16:28
*** yaya has quit IRC16:29
odyssey4mestevelle yeah, I'm inclined to think that this is an edge case and we already have Kilo around with a better method16:29
odyssey4meswitching juno to use the kilo method is a bit of a forklift, so I'm inclined to try to leave it as-is unless someone figures out a patch that doesn't result in a non-compatible syntax for upgrades16:30
odyssey4meok, next: https://bugs.launchpad.net/openstack-ansible/+bug/147453116:33
openstackLaunchpad bug 1474531 in openstack-ansible "Inventory group rabbitmq_all missing from Juno to Kilo upgrade" [Undecided,New]16:33
odyssey4methis is similar to https://bugs.launchpad.net/openstack-ansible/+bug/1474992 unless I'm reading it wrong16:35
openstackLaunchpad bug 1474992 in openstack-ansible trunk "RabbitMQ cluster upgrade failing" [High,Fix committed] - Assigned to git-harry (git-harry)16:35
odyssey4mepalendae have you seen this?16:35
palendaeI have not...I'm not 100% sure it's related, because it sounds like the group is just gone from inventory16:36
palendaeWhich wasn't happening with the restart fix, just needed to make sure they restarted in order16:38
palendaeI don't see BjoernT around on internal IRC16:38
odyssey4mepalendae looks there to me: https://github.com/stackforge/os-ansible-deployment/blob/master/etc/openstack_deploy/env.d/rabbitmq.yml#L1916:38
*** annashen has joined #openstack-ansible16:38
palendaeRight, but that's not as a result of an upgrade16:38
odyssey4meah, so we need more info for this?16:39
palendaeStill, I haven't heard more about it and Bjoern's not around to confirm/deny16:39
palendaeYeah, i think so16:39
odyssey4meright, next: https://bugs.launchpad.net/openstack-ansible/+bug/144076216:40
openstackLaunchpad bug 1440762 in OpenStack Compute (nova) kilo "Rebuild an instance with attached volume fails" [High,In progress] - Assigned to Matt Riedemann (mriedem)16:40
odyssey4mehmm, this looks like it was added to openstack-ansible for reference - it seems more for a case of keeping track of an upstream bug?16:41
odyssey4meunless there is some sort of workaround we can implement for this?16:41
palendaeYeah, was being tracked here: https://bugs.launchpad.net/openstack-ansible/+bug/1400881. Basically we were going to make sure that the next SHA bump included the nova fix16:42
openstackLaunchpad bug 1400881 in openstack-ansible trunk "Cannot rebuild a VM created from a Cinder volume backed by NetApp" [Medium,In progress] - Assigned to David (david-alfano)16:42
odyssey4meok, it looks to me like we should remove that attachment and just follow the progress in the other bug we already have assigned16:43
palendaeI'm cool with that16:44
odyssey4methere's not much we can do for the upstream bug other than test it and lobby for it to be backported, which I doubt will happen to juno but may happen to kilo16:44
odyssey4meok, that's it for new bugs that have yet to be classified16:46
odyssey4methank you all - I'll work through the doc bugs quickly16:46
*** b3rnard0_away is now known as b3rnard016:46
ApsuThanks odyssey4me16:46
odyssey4medoes anyone have anything specific they want to raise?16:46
javeriakhey guys, need a little help, ive noticed that every time the infra's get rebooted the galera backend is always down and i have to go start mysql manuallly, problem is that now it wont start, and the galera error logs say 'failed to open gcomm backend connection: 98: error while trying to listen 'tcp://0.0.0.0:4567?socket.non_blocking=1', asio error 'Address already in use': 98 (Address already in use)'16:46
Apsujaveriak: Address in use when binding to 0.0.0.0 means you're already listening on that port on a more specific address16:47
javeriakwhoops, did i just step into a meeting, sorry folks :)16:47
Apsuss -lntp sport = :4567 to identify what process is listening on a more specific address16:48
odyssey4mejaveriak also, you should only reboot one infra at a time, otherwise you have to get your cluster into a healthy state again :)16:48
stevellejaveriak: we just wrapped the meeting, timing is fine16:48
odyssey4mejaveriak nope, we're done - ask away16:48
javeriakodyssey4me: yes i usually wait for everything to come back up again, galera usually doesnt16:49
javeriakApsu : I'm getting users:(("mysqld",1272,11))16:50
Apsujaveriak: So you starting mysql manually, as you mentioned, is somehow binding to a specific address. Seems like a difference in config being used, or however you're starting manually being different than the automated mechanism's configuration16:51
javeriakIs a background service taking care of the automated restarting? im starting with 'service mysql start --wsrep-new-cluster' for the first infra and simple service start for the rest16:52
*** weezS has joined #openstack-ansible16:52
odyssey4mejaveriak yes, the services are set to start on reboot so they should already be running but the cluster may be in a broken state16:54
odyssey4meso you may have to shut them all down, then bring them up one at a time16:54
odyssey4meas I recall there's some detail about mariadb that requires them to be done in a particular order - you'll have to look that up16:54
odyssey4meunless someone here knows how to do that in detail16:55
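For reference, the "particular order" odyssey4me is alluding to is the usual Galera full-restart procedure: bootstrap a new cluster from the most advanced node, then let the others rejoin. A sketch (node names and paths are assumptions; commands are echoed rather than executed, since the correct bootstrap node depends on which one has the highest seqno):

```shell
run() { echo "+ $*"; }   # echo instead of executing

# after a full shutdown, inspect each node's Galera state and pick the
# one with the highest seqno recorded in grastate.dat
for node in infra1 infra2 infra3; do
    run ssh "$node" cat /var/lib/mysql/grastate.dat
done

# bootstrap a new cluster from the most advanced node only
run ssh infra1 service mysql start --wsrep-new-cluster

# the remaining nodes then start normally and sync from the first
run ssh infra2 service mysql start
run ssh infra3 service mysql start
```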
javeriakodyssey4me: should i wait for services to fully come up on one before starting the next?16:55
*** Mudpuppy has quit IRC16:55
odyssey4mejaveriak you'll have to check mariadb docs... I don't know the details personally16:55
ApsuThat's really not related to which address is being listened on.16:55
Apsujaveriak: Should probably grep the configs for the IP the currently running mysqld is listening on16:56
ApsuThere's a difference here somewhere16:56
*** annashen has quit IRC16:57
javeriakApsu: you're right, let me check, however it was fine some hours and restarts ago, i was usually able to get mysql started manually after every reboot, then something happened, and now it wont work on any power recycle16:57
ApsuMy guess is a config change happened :)16:58
*** annashen has joined #openstack-ansible16:59
*** yaya has joined #openstack-ansible17:06
*** jwagner_away is now known as jwagner17:11
javeriakso apparently all it needs is to kill the mysqld processes on the node that say 'Address already in use'..17:14
ApsuSo it must have been running from prior to a config update17:21
*** Mudpuppy has joined #openstack-ansible17:23
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Enable Horizon to consume a Keystone v3 API endpoint  https://review.openstack.org/20657517:24
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Fix Keystone URI/URL defaults  https://review.openstack.org/20519217:25
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Fix Keystone URI/URL defaults  https://review.openstack.org/20519217:27
jwitkoHey Guys, glance issue happening after what looked to be successful osad deploy.  Adding an images just “queues” and never moves any further.  The glance logs don’t really show any errors on it even in debug mode.  In fact in glance-registry I see “Successfully created image17:33
jwitko17:33
odyssey4mejwitko did you add it using --file, --copy-from or --location ?17:34
jwitkoodyssey4me, location.  i’m using it via horizon17:35
odyssey4mejwitko I think Horizon actually does it with --copy-from17:36
odyssey4mecan your glance container reach the url that the image is on?17:36
jwitkoits an option, i choose location and then specify a url17:36
odyssey4meand how big is the image?17:37
jwitkoyes, I was able to wget it without issue.  However I have a proxy set on the container that I’m not sure horizon/glance is picking up on?  but i would’ve expected an error from that17:37
jwitkoodyssey4me, its very small.  its the cirros 64bit image17:37
odyssey4mefyi glance logs are not very verbose unless you set glance-api.conf with debug=True17:37
jwitkoi have glance-api.conf debug=true, still no errors/issues17:37
odyssey4mejwitko ok, then I'd suggest eliminating horizon first to determine where the issue is17:38
odyssey4meget into the utility container, then: source /root/openrc and use the cli to add the image with the --copy-from option17:39
odyssey4meif that works, try the --location option too17:39
*** annashen has quit IRC17:42
jwitkoodyssey4me, seems to just be hanging17:45
odyssey4mejwitko with the cli add --debug before the command17:46
odyssey4meto see where it hangs17:46
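A concrete version of the test odyssey4me describes, run from the utility container. The image name and cirros URL are illustrative, and commands are echoed rather than executed:

```shell
run() { echo "+ $*"; }   # echo instead of executing

# load credentials, then retry the upload with client-side debugging
# turned on to see exactly where it stalls
run source /root/openrc
run glance --debug image-create --name cirros-test \
    --disk-format qcow2 --container-format bare \
    --copy-from "http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"

# if --copy-from succeeds, repeat with --location to compare behaviour
run glance --debug image-create --name cirros-test \
    --disk-format qcow2 --container-format bare \
    --location "http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
```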
*** yaya has quit IRC17:46
jwitkoodyssey4me, seems like it never gets past the initial curl17:46
Apsujwitko: The proxy is probably the issue. I assume you're setting your proxy in an environment variable?17:47
*** markvoelker has quit IRC17:47
jwitkoyea, i just removed it though and now I’m getting a 401 unauthorized from keystone17:48
ApsuMy guess is that curl glance calls is in an environment that doesn't inherit the proxy variable17:49
ApsuSince it's being called via popen, probably isn't passing that through17:49
ApsuPerhaps there's a proxy config option in the conf17:49
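Apsu's popen point is the usual culprit: a proxy exported only in an interactive shell is not inherited by the curl that glance spawns. One common arrangement (an assumption, not something stated in the log) is to set the proxy system-wide while exempting internal addresses, e.g. in `/etc/environment` on the affected hosts/containers:

```
# /etc/environment (all values illustrative)
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
# make sure loopback and the internal VIP bypass the proxy entirely
no_proxy=localhost,127.0.0.1,<vip_addr>
```

This matches what jwitko observes next: once the proxy stops intercepting traffic to the internal VIP, the hang turns into an ordinary (debuggable) HTTP error.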
*** yaya has joined #openstack-ansible17:52
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Keystone Federation Service Provider Configuration  https://review.openstack.org/19439517:54
*** jmckind has quit IRC17:55
*** fawadkhaliq has quit IRC17:55
*** jmckind has joined #openstack-ansible17:59
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Enable Horizon to consume a Keystone v3 API endpoint  https://review.openstack.org/20657518:00
*** fawadkhaliq has joined #openstack-ansible18:04
*** fawadkhaliq has quit IRC18:05
odyssey4memiguelgrinberg fyi marekd is asking for testers for https://review.openstack.org/202176 (fernet token fix for federation) - perhaps you can give it a whirl?18:13
marekdodyssey4me: ++!18:13
*** annashen has joined #openstack-ansible18:27
jwitkohey is there a tag or something i could use to push all proxy settings to containers/hosts18:35
*** fawadkhaliq has joined #openstack-ansible18:39
*** markvoelker has joined #openstack-ansible18:42
*** markvoelker_ has joined #openstack-ansible18:46
*** markvoelker has quit IRC18:48
*** TheIntern has joined #openstack-ansible18:57
*** yaya has quit IRC19:00
jwitkoApsu, odyssey4me, so I am able to reach the VIP and I removed the proxy from the internal LB IP.  Now when I curl I get “401 Unauthorized”19:10
*** fawadk has joined #openstack-ansible19:11
*** fawadkhaliq has quit IRC19:12
*** erikwilson_ has left #openstack-ansible19:16
*** KLevenstein has quit IRC19:23
jwitkoah that was my token sorry19:24
jwitkoso if i go to the utility container and I execute the glance command line19:24
jwitkoit eventually (after a long time) errors with an output of  “An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-48b00ec6-3e35-4716-8970-6e417854b0f4)”19:24
*** yaya has joined #openstack-ansible19:25
jwitkonothing output to the glance logs19:25
*** KLevenstein has joined #openstack-ansible19:28
*** markvoelker_ has quit IRC19:29
*** annashen has quit IRC19:30
jwitkooh actually i see in scrubber.log  NotAuthenticated: Authentication required19:40
jwitkoapparently glance is not authenticating properly to keystone19:40
sigmavirus24jwitko: what version of osad are you using?19:41
jwitko110.4.19:41
jwitko11.0.4*19:41
jwitko(kilo)19:42
jwitkook... very odd now.  apparently all services are failing with Auth... just tried to do a keystone tenant-list and received the same19:46
jwitkoeven from the keystone container itself after sourcing openrc19:46
sigmavirus24"received the same"19:46
sigmavirus24jwitko: did you accidentally the HEAD of kilo?19:46
jwitkosigmavirus24, no, 100% positive I’m on 11.0.4, as I already made that mistake19:47
jwitkoDISTRIB_RELEASE="11.0.4"19:47
sigmavirus24jwitko: I assume you just checked out the right version and re-ran the playbooks?19:47
jwitkoAuthorization Failed: An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-816d51ee-3f48-4cac-8178-a693e5fce70a)19:48
sigmavirus24Hm19:48
jwitkosigmavirus24, that is correct.  reran everything from scratch,  destroyed the containers, etc.19:48
sigmavirus24You should look at the Keystone logs to see if you can find what's causing the 500 error19:48
sigmavirus24jwitko: that's something I haven't seen on 11.0.4 (500 errors like that)19:48
sigmavirus24Okay, I was concerned that you hadn't destroyed the containers19:48
sigmavirus24jwitko: can you confirm that there's nothing in your openrc that is pointing to Keystone v3?19:49
jwitkoah ha!19:49
jwitkosigmavirus24,  Can't connect to MySQL server19:50
sigmavirus24jwitko: yeah, it's usually good to check the logs for stuff like that19:50
sigmavirus24:D19:50
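The log check sigmavirus24 suggests can be done in one pass: grep the aggregated logs for the request ID the client reported. A minimal sketch, assuming the `/openstack/log` tree mentioned later in the channel; the defaults below (path and request ID) are taken from this conversation and should be adjusted per deployment:

```shell
# Sketch: chase an HTTP 500 by grepping aggregated container logs for the
# request ID the client printed. LOG_DIR/REQ_ID are assumptions from this
# conversation, not guaranteed paths.
LOG_DIR=${LOG_DIR:-/openstack/log}
REQ_ID=${REQ_ID:-req-816d51ee-3f48-4cac-8178-a693e5fce70a}
grep -rn "$REQ_ID" "$LOG_DIR" 2>/dev/null || true
```

Matching lines point at the service (here, keystone failing to reach MySQL) without tailing each container by hand.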
sigmavirus24I assume you can determine why galera is failing, yes?19:50
jwitkoactually first time dealing with galera, but happy to give it a shot19:51
jwitkosigmavirus24, to attempt to start is there anything to do besides /etc/init.d/mysql start ?19:52
javeriakhey, does OSAD use keepalived for any services?19:52
palendaejaveriak: I don't think so19:53
sigmavirus24jwitko: I'd look at the galera containers logs to see what's going on19:54
sigmavirus24if anything's going on19:54
sigmavirus24and then try and restart using service mysql restart19:54
jwitkowtf19:55
jwitko“failed to open backend connection”19:55
javeriakjwitko: you should check /openstack/log/infra1_galera_container-7b5eda29/galera_server_error.log20:03
javeriakthat should tell you why it can't start mysql20:04
jwitkojaveriak, thanks I found out how to restart the cluster after a bad shutdown20:04
jwitkogot it all up and running20:04
jwitkoappreciate your help and yours too sigmavirus2420:04
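jwitko doesn't say which commands he used; the standard Galera recovery after an unclean shutdown (a sketch of the usual procedure, not verbatim from this channel) is to compare `seqno` values from `grastate.dat` across the galera containers, bootstrap the most advanced node, and start the rest normally:

```shell
# Sketch of standard Galera crash recovery (assumption: Kilo-era Galera).
# Read the locally committed seqno; the node with the highest value across
# the cluster is the one to bootstrap from.
GRASTATE=${GRASTATE:-/var/lib/mysql/grastate.dat}
seqno=$(awk '/^seqno:/ {print $2}' "$GRASTATE" 2>/dev/null || true)
echo "local seqno: ${seqno:-unknown}"
# On the node with the highest seqno:
#   mysqld_safe --wsrep-new-cluster &
# On the remaining nodes:
#   service mysql start
```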
jwitkostill trying to figure out glance though20:04
sigmavirus24jwitko: it's still angry?20:04
openstackgerritTom Cameron proposed stackforge/os-ansible-deployment: [WIP] Add new role for router container  https://review.openstack.org/20368320:06
jwitkosigmavirus24, yes when i restart the glance services  scrubber.log still reports “Can not get scrub jobs from queue: Authentication required”20:06
jwitkosigmavirus24, but from that same container that is failing glance i can successfully authenticate to keystone and do things like tenant-list20:10
jwitkoi am watching keystone logs and not seeing any errors come across20:11
jwitkoin fact i see tons of 200 OKs20:11
sigmavirus24jwitko: yeah, make sure glance is configured with the correct credentials for mysql20:11
sigmavirus24oh20:12
jwitkosigmavirus24, yes I can connect to mysql server from glance container no issue20:12
sigmavirus24actually20:12
jwitkousing credentials in glance-api.conf20:12
sigmavirus24this looks like it could be that glance-api can't authenticate to glance-registry20:12
sigmavirus24interesting20:12
jwitkosigmavirus24,  any ideas?20:14
sigmavirus24None yet20:17
sigmavirus24Did you configure glance-registry explicitly?20:17
jwitkonot sure what you mean by that20:18
jwitkoi didn’t do anything specific to that config file, no20:22
jwitkosigmavirus24,  so i have made a pastebin with the specifics of whats going on20:27
jwitkohttp://paste.openstack.org/show/uK26ccvZjET9WBpAHhKx/20:27
jwitkothe glance image-create has moved on to a 400 error instead of the auth failure.    however the scrubber.logs still experience an auth failure however it doesn’t seem to be a keystone issue20:28
sigmavirus24jwitko: i'll take a look in a second20:29
sigmavirus24jwitko: what happens if you add --disk-format?20:43
sigmavirus24e.g., --disk-format bare20:43
jwitkodisk-format i have as qcow2,  do you mean container-format ?20:43
jwitkoi just tried with --container-format bare,  same error.  400 invalid URL20:44
jwitkosigmavirus24, you can see that here:  http://paste.openstack.org/show/mFbUVukpUGUDGc4noMKK/20:45
sigmavirus24jwitko: what happens if you leave off --copy-from?20:46
jwitkooh wow20:46
jwitkoit creates20:47
jwitkoI can access the url from the glance container with no issue though20:49
sigmavirus24jwitko: right20:49
jwitkoand i wouldn’t think an invalid URL would create a 400 bad request?20:49
sigmavirus24And you're using swift as a backend right?20:49
jwitkono20:49
sigmavirus24filesystem?20:49
jwitkoyes,  default_store = file20:50
jwitkoosad put my flavor as keystone+cachemanagement20:50
jwitko[paste_deploy]20:50
jwitkoflavor = keystone+cachemanagement20:50
jwitkoscrubber.log seems to be outputting a slightly different error now on glance restart20:52
jwitkohttp://paste.openstack.org/show/ES0PHA3J93CN42vlsgnC/20:52
jwitkosigmavirus24, i was able to grab an ISO file from a local repository inside my network21:00
jwitkousing copy-from21:00
jwitkoso i think this is an issue with glance recognizing proxy settings21:00
sigmavirus24jwitko: could be that OR it could be redirects21:00
sigmavirus24can you download the cirros image locally and upload with --copy-from?21:01
jwitkoI don’t believe it redirects21:01
sigmavirus24Okay21:01
* sigmavirus24 couldn't remember honestly21:01
jwitkoman that really sucks21:01
sigmavirus24jwitko: also to be clear, you don't actually configure glance-registry in a special way, right?21:02
jwitkono, i do nothing to that file21:02
sigmavirus24odyssey4me: unrelated to this, that glance_store patch does fix your keystone v3 uri/url patch if we add the right config options to the glance_store config for swift21:02
jwitkosigmavirus24, lol I spoke too soon.  So doing an image-create on a local URL does not error but also does not create an image21:08
jwitkoit reports back all the details of the image,  but then an “image-list” shows nothing21:08
sigmavirus24jwitko: so look in the logs for upload_utils something like glance.v1.images.upload_utils21:09
sigmavirus24And let me know if there are log statements from that module21:09
sigmavirus24specifically in glance-api.log21:09
jwitkoseems like permissions issues21:09
jwitkoglance-api.log:2015-07-28 16:59:53.484 863 ERROR glance.api.v1.upload_utils [req-b6a12990-22b4-47b4-961c-902d6dbfe784 f7eb0faede4e41c6aeb81321be2b4dec 7401109f377a476da18abecd92447747 - - -] Insufficient permissions on image storage media: Permission to write image storage media denied.21:09
sigmavirus24jwitko: so the directories glance is trying to write to are not owned by the correct user21:10
sigmavirus24probably something like /var/glance/images21:10
sigmavirus24That may be an os-a-d bug in which we create the directory but with the wrong user21:10
jwitkoyea, /var/lib/glance/images,  what expected owner/perms ?21:11
sigmavirus24owner should probably be glance21:11
sigmavirus24jwitko: probably glance:glance21:11
sigmavirus24probably 755 as the permissions21:12
jwitkosigmavirus24, excellent that fixed it21:14
sigmavirus24jwitko: I have an AIO up right now running master and it had the right permissions21:14
sigmavirus24what were the permissions on your container?21:14
jwitkodir was owned by “root:daemon”  and subdir was some odd UIDs21:15
jwitkoand GIDs21:15
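The fix jwitko applied is presumably along these lines: reset the image store to `glance:glance` with 755 permissions, as sigmavirus24 suggested. A sketch (run inside the glance container; the variable names are illustrative, the path and ownership come from the discussion):

```shell
# Sketch: restore expected ownership/permissions on the glance image store.
# Defaults match the values discussed above; override for other layouts.
STORE=${STORE:-/var/lib/glance/images}
OWNER=${OWNER:-glance:glance}
if [ -d "$STORE" ]; then
    chown -R "$OWNER" "$STORE"
    chmod 755 "$STORE"
fi
```

On an NFS-backed store (as jwitko's turned out to be), odd UIDs/GIDs like these usually come from the export's ID mapping rather than the deployment tooling.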
sigmavirus24jwitko: odd21:15
* sigmavirus24 wonders if there have been any fixes on master not backported to kilo but21:16
sigmavirus24even so you wouldn't get those if you're sticking to 11.0.421:16
jwitkoyea, I’m not sure21:23
jwitkoit's an NFS mount so maybe it came from there21:23
jwitkobut I’m still seeing those errors in the scrubber.log21:23
jwitkoit complains about config settings not being set that are actually set21:23
jwitkoas well as MissingCredentialError21:24
openstackgerritTom Cameron proposed stackforge/os-ansible-deployment: [WIP] Add new role for router container  https://review.openstack.org/20368321:55
openstackgerritTom Cameron proposed stackforge/os-ansible-deployment: [WIP] Add new role for router container  https://review.openstack.org/20368322:01
sigmavirus24jwitko: sorry, got distracted22:05
sigmavirus24jwitko: did you sort out why glance's registry client thinks you're missing credentials?22:05
jwitkono  :(22:06
jwitkoi wish there was an easier way to tail all the logs on all the containers22:06
openstackgerritIan Cordasco proposed stackforge/os-ansible-deployment: Update glance_store configuration for Keystone v3  https://review.openstack.org/20674322:18
openstackgerritTom Cameron proposed stackforge/os-ansible-deployment: [WIP] Add new role for router container  https://review.openstack.org/20368322:19
sigmavirus24odyssey4me: if you want to test out your URI/URL change as a dependency of 206743 it should work. That said, that patch is a WIP and I am thoroughly opposed to merging it before the glance_store change merges22:20
palendaejwitko: Do you want to tail the logs live?22:23
jwitkopalendae, yes that would be nice22:24
jwitkobut i have 3 containers load balanced for each service22:24
palendaeCould do something like `ansible "hosts:all_containers" -m "shell" -a "tail <file>"`22:26
palendaeThat won't necessarily be live, but it'll run a shell command across all the containers22:26
palendaeIf you want a particular service, usually it would be "hosts:<service>_all"22:27
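The ad-hoc pattern palendae describes can be wrapped in a tiny helper. A sketch (the `tail_logs` function and the example log path are illustrative, not from the channel; it echoes the command as a dry run rather than executing it):

```shell
# Sketch: build the ad-hoc Ansible command for tailing a log file across a
# host group. Host patterns follow palendae's examples
# (all_containers, <service>_all, is_metal).
tail_logs() {
    # $1 = host pattern, $2 = file to tail
    echo ansible "\"hosts:$1\"" -m shell -a "\"tail -n 50 $2\""
}
tail_logs glance_all /var/log/glance/glance-api.log
```

As palendae notes, this won't be live: ad-hoc `shell` runs return output in one batch, so `tail -f` doesn't stream back.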
jwitkothanks22:46
palendaeIf you want to run a command across just the physical host OS (not the containers they hold), `ansible "hosts:is_metal" -m "shell" -a "command"`22:49
jwitkoyup, looking for continuous though23:03
palendaeSure. You could probably try the PYTHONUNBUFFERED environment var for ansible, but you'll probably get things interleaved23:06
jwitkopalendae, still around?23:40

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!