*** markvoelker has joined #openstack-ansible | 00:09 | |
*** yapeng has quit IRC | 00:23 | |
*** jmckind has joined #openstack-ansible | 00:27 | |
*** openstackgerrit has quit IRC | 00:46 | |
*** openstackgerrit has joined #openstack-ansible | 00:47 | |
*** andymccr has quit IRC | 00:54 | |
*** annashen has joined #openstack-ansible | 00:54 | |
*** andymccr has joined #openstack-ansible | 00:56 | |
*** annashen has quit IRC | 00:59 | |
*** jmckind has quit IRC | 01:05 | |
*** jmckind has joined #openstack-ansible | 01:07 | |
*** jmckind has quit IRC | 01:09 | |
*** annashen has joined #openstack-ansible | 01:55 | |
*** jlvillal has quit IRC | 01:58 | |
*** cloudnull is now known as cloudnull_afk | 01:58 | |
*** jlvillal has joined #openstack-ansible | 01:59 | |
*** annashen has quit IRC | 02:00 | |
*** jwagner_away has quit IRC | 02:07 | |
*** b3rnard0 has quit IRC | 02:07 | |
*** dolphm has quit IRC | 02:07 | |
*** d34dh0r53 has quit IRC | 02:07 | |
*** sigmavirus24 has quit IRC | 02:07 | |
*** bogeyon18 has quit IRC | 02:07 | |
*** eglute has quit IRC | 02:07 | |
*** cloudnull_afk has quit IRC | 02:07 | |
*** eglute has joined #openstack-ansible | 02:09 | |
*** dolphm has joined #openstack-ansible | 02:09 | |
*** dolphm has quit IRC | 02:09 | |
*** eglute has quit IRC | 02:09 | |
*** dolphm has joined #openstack-ansible | 02:10 | |
*** eglute has joined #openstack-ansible | 02:10 | |
*** cloudnull_afk has joined #openstack-ansible | 02:13 | |
openstackgerrit | Miguel Grinberg proposed stackforge/os-ansible-deployment: Keystone Federation Service Provider Configuration https://review.openstack.org/194395 | 02:15 |
*** d34dh0r53 has joined #openstack-ansible | 02:34 | |
jwitko | Hi, can anyone help me please? During the os_glance install of the setup-openstack.yml playbook I am erroring on the pip package install task. All of the pip packages install except glance. When I go to the glance container to execute “pip install glance” and see why it's failing, it starts to download and install the packages but eventually errors on “No matching distribution | 02:42 |
jwitko | found for urllib3<1.11,>=1.8.3 (from oslo.vmware<0.12.0,>=0.11.1->glance)” | 02:42 |
jwitko | I can even install it manually, pip list | grep urllib : urllib3 (1.11), but I still receive the same error when attempting to install glance from pip | 02:44 |
palendae | jwitko: Might want to look in your ~/.pip/links.d (I think that's it) directory and see where it's trying to pull from | 02:44 |
palendae | It might be that it's not being pulled into the repo server | 02:45 |
jwitko | palendae, inside my links.d/openstack_release.link, there is only one line | 02:45 |
jwitko | http://<vip_addr>:8181/os-releases/11.0.4/ | 02:45 |
jwitko | which I can visit manually, and see the list of packages | 02:46 |
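For reference, the check palendae suggests can be run straight from the deployment host; a minimal sketch, assuming the links file name jwitko mentions (the VIP address stays a placeholder):

    # Show where pip is configured to pull packages from:
    cat ~/.pip/links.d/openstack_release.link
    # Confirm the repo server actually publishes the needed package versions:
    curl -s http://<vip_addr>:8181/os-releases/11.0.4/ | grep -i urllib3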
*** daneyon_ has quit IRC | 02:47 | |
palendae | Evidently urllib3 was capped up until recently https://review.openstack.org/#/c/205155/2/requirements.txt | 02:50 |
palendae | Though I'm not familiar with why | 02:50 |
*** annashen_ has joined #openstack-ansible | 02:56 | |
*** annashen_ has quit IRC | 03:01 | |
jwitko | palendae, any idea how i can work around this? | 03:02 |
jwitko | i’m using 11.0.4 of the os-ansible deployment repo. it does not have urllib3 in the requirements file | 03:08 |
*** annashen has joined #openstack-ansible | 03:57 | |
*** annashen has quit IRC | 04:01 | |
*** markvoelker has quit IRC | 04:04 | |
*** daneyon has joined #openstack-ansible | 04:09 | |
*** daneyon has quit IRC | 04:22 | |
*** annashen has joined #openstack-ansible | 04:58 | |
*** annashen has quit IRC | 05:02 | |
*** markvoelker has joined #openstack-ansible | 05:05 | |
*** markvoelker has quit IRC | 05:10 | |
*** yapeng has joined #openstack-ansible | 05:53 | |
*** annashen has joined #openstack-ansible | 05:59 | |
*** javeriak has joined #openstack-ansible | 06:02 | |
*** annashen has quit IRC | 06:03 | |
*** javeriak has quit IRC | 06:07 | |
*** javeriak has joined #openstack-ansible | 06:11 | |
*** yapeng has quit IRC | 06:13 | |
*** javeriak has quit IRC | 06:16 | |
*** javeriak has joined #openstack-ansible | 06:17 | |
*** javeriak_ has joined #openstack-ansible | 06:23 | |
*** javeriak_ has quit IRC | 06:25 | |
*** javeriak has quit IRC | 06:26 | |
*** javeriak has joined #openstack-ansible | 06:26 | |
*** javeriak has quit IRC | 06:26 | |
-openstackstatus- NOTICE: zuul is stuck and about to undergo an emergency restart, please be patient as job results may take a long time | 06:44 | |
*** ChanServ changes topic to "zuul is stuck and about to undergo an emergency restart, please be patient as job results may take a long time" | 06:44 | |
*** annashen has joined #openstack-ansible | 07:00 | |
*** annashen has quit IRC | 07:05 | |
*** markvoelker has joined #openstack-ansible | 07:06 | |
*** sdake has joined #openstack-ansible | 07:07 | |
openstackgerrit | Serge van Ginderachter proposed stackforge/os-ansible-deployment: Ceph/RBD support https://review.openstack.org/181957 | 07:08 |
svg | mattt, d34dh0r53, git-harry ^^ | 07:09 |
svg | This is a WIP update; it contains my updates to the latest comments on rev 27, but still has some #TODOs for which I asked leseb for feedback | 07:09 |
*** markvoelker has quit IRC | 07:11 | |
mattt | svg: cool, i'll have a look at this this morning! thank you for updating | 07:24 |
svg | np, just a matter of squashing the WIP and pushing the review :) | 07:25 |
* mattt throws svg a bacon sandwich | 07:32 | |
svg | thx, but a bit early in the day for bacon :) | 07:33 |
mattt | WUT | 07:35 |
mattt | GET OUT | 07:35 |
mattt | GEEEET OUT | 07:35 |
* mattt hopes bacon doesn't mean something else in belgium | 07:36 | |
svg | I actually don't get most of those bacon refs :) | 07:45 |
*** javeriak has joined #openstack-ansible | 07:46 | |
svg | no clue what all the fuss about that is, to me it's something I might occasionally fry on a bbq | 07:48 |
*** sdake has quit IRC | 07:57 | |
mattt | svg: i largely agree w/ that sentiment | 07:59 |
*** ChanServ changes topic to "Weekly Meetings: https://wiki.openstack.org/wiki/Meetings/openstack-ansible" | 08:00 | |
-openstackstatus- NOTICE: zuul has been restarted and queues restored. It may take some time to work through the backlog. | 08:00 | |
*** annashen has joined #openstack-ansible | 08:01 | |
*** annashen has quit IRC | 08:05 | |
*** evrardjp has joined #openstack-ansible | 08:16 | |
*** annashen has joined #openstack-ansible | 09:01 | |
*** annashen has quit IRC | 09:06 | |
*** markvoelker has joined #openstack-ansible | 09:07 | |
*** markvoelker has quit IRC | 09:11 | |
*** javeriak has quit IRC | 09:16 | |
*** javeriak has joined #openstack-ansible | 09:20 | |
*** javeriak has quit IRC | 09:33 | |
evrardjp | good morning everyone | 09:47 |
odyssey4me | o/ evrardjp | 09:49 |
odyssey4me | evrardjp was it you who I discussed SSL brokenness with some time ago? | 09:51 |
*** annashen has joined #openstack-ansible | 10:02 | |
*** annashen has quit IRC | 10:07 | |
*** javeriak has joined #openstack-ansible | 10:07 | |
evrardjp | I discussed SSL on OpenStack and how it could be done, but I didn't complain about anything specific | 10:46 |
evrardjp | I had just a different view | 10:46 |
*** annashen has joined #openstack-ansible | 11:03 | |
*** annashen has quit IRC | 11:07 | |
*** markvoelker has joined #openstack-ansible | 11:08 | |
odyssey4me | evrardjp ah, I thought you might just like to know that we fixed a few SSL things along the way | 11:12 |
odyssey4me | Horizon SSL cert/key management: https://review.openstack.org/202977 | 11:12 |
*** markvoelker has quit IRC | 11:13 | |
odyssey4me | SSL support for haproxy: https://review.openstack.org/198957 | 11:13 |
odyssey4me | Keystone SSL cert/key management (still in review): https://review.openstack.org/194474 | 11:13 |
*** javeriak has quit IRC | 11:43 | |
*** javeriak_ has joined #openstack-ansible | 11:43 | |
*** annashen has joined #openstack-ansible | 12:04 | |
*** markvoelker has joined #openstack-ansible | 12:09 | |
*** annashen has quit IRC | 12:09 | |
*** javeriak has joined #openstack-ansible | 12:09 | |
*** javeriak_ has quit IRC | 12:09 | |
*** markvoelker has quit IRC | 12:14 | |
openstackgerrit | Jesse Pretorius proposed stackforge/os-ansible-deployment: Keystone Federation Service Provider Configuration https://review.openstack.org/194395 | 12:18 |
jwitko | Hi, can anyone help me please? During the os_glance install of the setup-openstack.yml playbook I am erroring on the pip package install task. All of the pip packages install except glance. When I go to the glance container to execute “pip install glance” and see why it's failing, it starts to download and install the packages but eventually errors on “No matching distribution | 12:24 |
jwitko | found for urllib3<1.11,>=1.8.3 (from oslo.vmware<0.12.0,>=0.11.1->glance)” | 12:24 |
jwitko | I can even install urllib3 v1.11 manually, pip list | grep urllib : urllib3 (1.11), but I still receive the same error when attempting to install glance from pip. | 12:25 |
odyssey4me | jwitko so that issue happened and we plugged the hole with https://review.openstack.org/#/c/204365/ , then removed it once it was fixed upstream | 12:27 |
odyssey4me | jwitko I'm confused - you're doing your testing with an AIO aren't you? | 12:27 |
jwitko | odyssey4me, no | 12:27 |
jwitko | just trying to install openstack | 12:27 |
odyssey4me | so you have multiple servers and are deploying on them? | 12:28 |
jwitko | yes, 3 controllers, 1 logging, 5 compute | 12:28 |
jwitko | i was actually able to install without issue last week, but i had to tear it down because we wanted to use different hardware | 12:28 |
*** markvoelker has joined #openstack-ansible | 12:29 | |
jwitko | odyssey4me, the confusing part to me about what you’re showing is that I don’t know how to implement this workaround? | 12:29 |
jwitko | do I just add this to the requirements.txt file in the top level of the repo? | 12:29 |
odyssey4me | well, yes - you simply add that line and then rebuild your repo | 12:30 |
jwitko | ah, rebuild the repo, ok thats what i was missing | 12:30 |
odyssey4me | you'll have to remove the installed pip packages from the glance containers as well | 12:30 |
jwitko | odyssey4me, correct me if I’m wrong here but to rebuild the repo I only need to execute “repo-server.yml” and “repo-build.yml” ? | 12:31 |
odyssey4me | we'll be tagging the next release in the next few days which will contain the updated kilo which contains this fix | 12:31 |
odyssey4me | jwitko just repo-build | 12:31 |
odyssey4me | repo-server sets the server up, repo-build builds the actual repo | 12:31 |
jwitko | oh ok cool. and when you say remove the pip packages, you’re just referring to the ones I installed manually right ? | 12:31 |
jwitko | or should I be destroying and rebuilding the containers | 12:32 |
odyssey4me | not entirely sure which packages - at least urllib/glance/glanceclient | 12:32 |
odyssey4me | otherwise yeah, you can lxc-stop & lxc-destroy the glance containers, then execute the host setup play to rebuild them - then go back to the glance install play | 12:33 |
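Pulled together, the workaround odyssey4me describes might look like the sketch below; the repo path and the exact urllib3 pin are assumptions (the real pin is in review 204365, and the bounds here are derived from the error message above):

    # On the deployment host: pin urllib3 so the repo build produces a
    # version oslo.vmware accepts (assumed pin):
    echo 'urllib3>=1.8.3,<1.11' >> /opt/os-ansible-deployment/requirements.txt
    # Rebuild the repo (repo-server.yml is only needed for initial setup):
    cd /opt/os-ansible-deployment/playbooks
    openstack-ansible repo-build.yml
    # Rebuild the glance containers, then re-run the glance install
    # (lxc-stop/lxc-destroy each glance container on its host first):
    openstack-ansible setup-hosts.yml
    openstack-ansible os-glance-install.yml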
*** jwagner has joined #openstack-ansible | 12:34 | |
*** jmckind has joined #openstack-ansible | 12:36 | |
jwitko | cd play | 12:44 |
jwitko | wrong window :) | 12:44 |
*** tlian has joined #openstack-ansible | 13:01 | |
*** annashen has joined #openstack-ansible | 13:05 | |
*** KLevenstein has joined #openstack-ansible | 13:08 | |
*** annashen has quit IRC | 13:09 | |
*** jmckind has quit IRC | 13:11 | |
*** TheIntern has joined #openstack-ansible | 13:13 | |
*** prad has joined #openstack-ansible | 13:15 | |
*** javeriak has quit IRC | 13:15 | |
*** Mudpuppy has joined #openstack-ansible | 13:44 | |
*** Mudpuppy has quit IRC | 13:44 | |
*** Mudpuppy has joined #openstack-ansible | 13:45 | |
*** yapeng has joined #openstack-ansible | 13:52 | |
*** bogeyon18 has joined #openstack-ansible | 13:52 | |
*** b3rnard0 has joined #openstack-ansible | 13:54 | |
*** fawadkhaliq has joined #openstack-ansible | 13:55 | |
*** richoid has quit IRC | 13:56 | |
*** sigmavirus24_awa has joined #openstack-ansible | 13:57 | |
*** spotz_zzz is now known as spotz | 13:57 | |
*** sigmavirus24_awa is now known as sigmavirus24 | 13:57 | |
jwitko | odyssey4me, that worked thank you | 13:59 |
odyssey4me | jwitko excellent :) | 14:00 |
*** annashen has joined #openstack-ansible | 14:06 | |
*** annashen has quit IRC | 14:11 | |
*** richoid has joined #openstack-ansible | 14:14 | |
*** jmckind has joined #openstack-ansible | 14:14 | |
odyssey4me | sigmavirus24 I've confirmed that Glance only fails against a Keystone v3 endpoint when using Swift as a store. When using a file store it's fine. | 14:21 |
sigmavirus24 | oh | 14:21 |
sigmavirus24 | I bet I know why | 14:21 |
odyssey4me | It seems that this is a known issue: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067381.html | 14:21 |
sigmavirus24 | odyssey4me: Let me test out https://review.openstack.org/#/c/193422/ to see if it fixes it | 14:22 |
odyssey4me | However, in digging a little more into it it seems to me that glance never gets to interact with swift (unless my swift proxy is misbehaving) because it fails in the token validation in the middleware | 14:22 |
sigmavirus24 | odyssey4me: yeah, I'm guessing it's related to | 14:23 |
sigmavirus24 | *related to the review I just pasted | 14:23 |
odyssey4me | sigmavirus24 yep, that looks like a sensible shortcut fix | 14:25 |
odyssey4me | jamie actually only went to bed an hour or so ago | 14:25 |
sigmavirus24 | odyssey4me: yeah, I've been meaning to find a good way to review that and this seems a perfect case | 14:26 |
odyssey4me | I'm trying to see if there's a workaround - something along the lines of having the service still use v2, but allowing the clients to use v3. | 14:27 |
odyssey4me | with the caveat that the project & user have to be in the default domain | 14:27 |
*** javeriak has joined #openstack-ansible | 14:28 | |
*** fawadk has joined #openstack-ansible | 14:30 | |
*** fawadkhaliq has quit IRC | 14:30 | |
*** yapeng has quit IRC | 14:30 | |
sigmavirus24 | odyssey4me: I'm not sure if that will be the best course of action | 14:30 |
*** fawadkhaliq has joined #openstack-ansible | 14:33 | |
*** fawadk has quit IRC | 14:33 | |
*** jmckind has quit IRC | 14:45 | |
*** jmckind has joined #openstack-ansible | 14:47 | |
*** daneyon has joined #openstack-ansible | 14:55 | |
*** sdake has joined #openstack-ansible | 14:55 | |
openstackgerrit | Darren Birkett proposed stackforge/os-ansible-deployment: set correct swift dispersion tenant https://review.openstack.org/206572 | 14:57 |
*** daneyon has quit IRC | 15:00 | |
openstackgerrit | Jesse Pretorius proposed stackforge/os-ansible-deployment: Enable Horizon to consume a Keystone v3 API endpoint https://review.openstack.org/206575 | 15:02 |
*** javeriak has quit IRC | 15:02 | |
*** ntpttr2 has quit IRC | 15:04 | |
*** ntpttr2 has joined #openstack-ansible | 15:05 | |
*** annashen has joined #openstack-ansible | 15:06 | |
*** daneyon has joined #openstack-ansible | 15:11 | |
*** annashen has quit IRC | 15:11 | |
*** TheIntern has quit IRC | 15:22 | |
*** TheIntern has joined #openstack-ansible | 15:25 | |
*** alop has joined #openstack-ansible | 15:27 | |
* sigmavirus24 waves to bgmccollum | 15:31 | |
bgmccollum | sigmavirus24 sups | 15:31 |
bgmccollum | doccing the issue... | 15:31 |
bgmccollum | sigmavirus24: https://github.com/rcbops/rpc-openstack/issues/294 | 15:33 |
*** sdake has quit IRC | 15:35 | |
sigmavirus24 | bgmccollum: cool, I might DM you for the creds to the test box you apparently have up | 15:35 |
bgmccollum | sigmavirus24: sure thing | 15:36 |
*** jwagner is now known as jwagner_away | 15:48 | |
*** javeriak has joined #openstack-ansible | 15:50 | |
*** logan2 has quit IRC | 15:55 | |
*** yaya has joined #openstack-ansible | 15:59 | |
*** b3rnard0 is now known as b3rnard0_away | 15:59 | |
*** erikwilson_ has joined #openstack-ansible | 16:00 | |
odyssey4me | hi everyone: cloudnull, mattt, andymccr, d34dh0r53, hughsaunders, b3rnard0, palendae, Sam-I-Am, odyssey4me, serverascode, rromans, mancdaz, dolphm, _shaps_, BjoernT, claco, echiu, dstanek, jwagner, ayoung | 16:01 |
odyssey4me | bug triage meeting for those who wish to attend | 16:01 |
Apsu | o/ | 16:02 |
stevelle | o/ | 16:02 |
serverascode | o/ | 16:02 |
odyssey4me | So it seems that we have quite a list to work through, so let's roll. | 16:03 |
odyssey4me | First one: https://bugs.launchpad.net/openstack-ansible/+bug/1478178 | 16:03 |
openstack | Launchpad bug 1478178 in openstack-ansible "MongoDB can't connect to server on AIO installation" [Undecided,New] | 16:03 |
odyssey4me | this is a duplicate I think | 16:03 |
palendae | odyssey4me: I think you filed a fix for those dev-stack.rst instructions | 16:04 |
odyssey4me | palendae yeah, I did: https://review.openstack.org/206016 | 16:05 |
odyssey4me | but that's not related unless I add a bit about disabling ceilometer | 16:07 |
palendae | Does run-playbooks install ceilometer now? | 16:07 |
openstackgerrit | Merged stackforge/os-ansible-deployment: set correct swift dispersion tenant https://review.openstack.org/206572 | 16:07 |
odyssey4me | where is the review for patching master & kilo to make mongo wait until it's up again | 16:07 |
*** annashen has joined #openstack-ansible | 16:07 | |
palendae | I thought that was just in the gating | 16:07 |
odyssey4me | palendae yeah, I'm in favour of making it default to not deploying, except in the gate as it requires an existing mongodb setup for multi-node environments | 16:08 |
odyssey4me | for the AIO it'll set itself up | 16:08 |
palendae | Oh, it's in bootstrap | 16:08 |
palendae | Yeah, IMO it should be on gate jobs only | 16:09 |
palendae | Change for making mongo wait in master https://review.openstack.org/#/c/200252/ | 16:09 |
odyssey4me | ok, so this was resolved unless the reporter has found something different | 16:10 |
*** annashen has quit IRC | 16:12 | |
odyssey4me | ok, updated the bug and commented | 16:12 |
odyssey4me | next | 16:13 |
odyssey4me | thanks palendae for finding that review | 16:13 |
odyssey4me | next: https://bugs.launchpad.net/openstack-ansible/+bug/1478110 | 16:13 |
openstack | Launchpad bug 1478110 in openstack-ansible trunk " Change to set the container network MTU" [Undecided,New] | 16:13 |
odyssey4me | so this looks to be auto-created based on the DocImpact tag | 16:13 |
odyssey4me | I wasn't expecting this. :/ Any thoughts on what to do with these bugs? | 16:14 |
palendae | Sam-I-Am, KLevenstein ^ | 16:15 |
KLevenstein | Sam-I-Am is in an airplane right now. looking. | 16:15 |
odyssey4me | KLevenstein effectively the DocImpact tags are creating another bug specifically for documentation. | 16:16 |
KLevenstein | right | 16:16 |
odyssey4me | both bugs in the same project, which isn't great - we seem to have quite a few of these | 16:16 |
stevelle | that's one way to make sure we don't just throw work over the wall I guess | 16:17 |
KLevenstein | this can be closed. | 16:17 |
KLevenstein | the only fix I can see it needing is additional annotation describing the container_mtu option, and that was included in https://review.openstack.org/#/c/204796/ anyway | 16:18 |
palendae | We should probably dig into the infra project and see if we can auto-add people to reviews/bugs instead of making a new one each time | 16:18 |
KLevenstein | it would be nice if instead of creating new bugs, docimpact would just assign the rpcdocs launchpad group | 16:19 |
palendae | Yeah | 16:19 |
palendae | I'm not sure about assign, since it might 'steal' it away from the person working on it, but yeah, fewer dupe bugs | 16:20 |
odyssey4me | ok, we're going to have to figure out how to make this work better - but now's not really the time | 16:20 |
*** TheIntern has quit IRC | 16:20 | |
palendae | Right | 16:20 |
odyssey4me | for now, shall I just assign the docimpact bugs to rpcdocs and move along? | 16:20 |
KLevenstein | yes | 16:21 |
odyssey4me | ok, great - let me find a non doc bug :) | 16:21 |
jwitko | Hey Guys, glance issue happening after what looked to be a successful osad deploy. Adding an image just “queues” and never moves any further. The glance logs don’t really show any errors on it. | 16:21 |
odyssey4me | next: https://bugs.launchpad.net/openstack-ansible/+bug/1477747 | 16:21 |
openstack | Launchpad bug 1477747 in openstack-ansible trunk "nova_console_endpoint not used correctly in Kilo" [Undecided,New] | 16:21 |
*** sdake has joined #openstack-ansible | 16:21 | |
odyssey4me | jwitko we're in the middle of a bug triage meeting - would you mind waiting 30 mins or so? | 16:22 |
jwitko | oh sorry my apologies. | 16:22 |
palendae | odyssey4me: I'll update that one - looks like we had a variable name change. | 16:22 |
odyssey4me | jwitko no problem - we'll be with you shortly :) | 16:23 |
odyssey4me | palendae ok, moving on then | 16:23 |
*** logan2 has joined #openstack-ansible | 16:24 | |
odyssey4me | https://bugs.launchpad.net/openstack-ansible/+bug/1475436 | 16:24 |
openstack | Launchpad bug 1475436 in openstack-ansible trunk "VLAN range issue in ml2_conf.ini" [Undecided,New] - Assigned to Evan Callicoat (diopter) | 16:24 |
odyssey4me | hmm, this is not going to be easy to try and fix for juno - but it seems from Apsu's message that this is not an issue in kilo/master | 16:25 |
odyssey4me | Any thoughts on how we should handle this? | 16:26 |
stevelle | Priority for OSAD doesn't need to dictate that it not get worked on. Rate as low? | 16:27 |
stevelle | but Juno is in support upstream so maybe higher? | 16:28 |
*** yaya has quit IRC | 16:29 | |
odyssey4me | stevelle yeah, I'm inclined to think that this is an edge case and we already have Kilo around with a better method | 16:29 |
odyssey4me | switching juno to use the kilo method is a bit of a forklift, so I'm inclined to leave it as-is unless someone figures out a patch that doesn't break syntax compatibility for upgrades | 16:30 |
odyssey4me | ok, next: https://bugs.launchpad.net/openstack-ansible/+bug/1474531 | 16:33 |
openstack | Launchpad bug 1474531 in openstack-ansible "Inventory group rabbitmq_all missing from Juno to Kilo upgrade" [Undecided,New] | 16:33 |
odyssey4me | this is similar to https://bugs.launchpad.net/openstack-ansible/+bug/1474992 unless I'm reading it wrong | 16:35 |
openstack | Launchpad bug 1474992 in openstack-ansible trunk "RabbitMQ cluster upgrade failing" [High,Fix committed] - Assigned to git-harry (git-harry) | 16:35 |
odyssey4me | palendae have you seen this? | 16:35 |
palendae | I have not...I'm not 100% sure it's related, because it sounds like the group is just gone from inventory | 16:36 |
palendae | Which wasn't happening with the restart fix, just needed to make sure they restarted in order | 16:38 |
palendae | I don't see BjoernT around on internal IRC | 16:38 |
odyssey4me | palendae it looks like it's there to me: https://github.com/stackforge/os-ansible-deployment/blob/master/etc/openstack_deploy/env.d/rabbitmq.yml#L19 | 16:38 |
*** annashen has joined #openstack-ansible | 16:38 | |
palendae | Right, but that's not as a result of an upgrade | 16:38 |
odyssey4me | ah, so we need more info for this? | 16:39 |
palendae | Still, I haven't heard more about it and Bjoern's not around to confirm/deny | 16:39 |
palendae | Yeah, i think so | 16:39 |
odyssey4me | right, next: https://bugs.launchpad.net/openstack-ansible/+bug/1440762 | 16:40 |
openstack | Launchpad bug 1440762 in OpenStack Compute (nova) kilo "Rebuild an instance with attached volume fails" [High,In progress] - Assigned to Matt Riedemann (mriedem) | 16:40 |
odyssey4me | hmm, this looks like it was added to openstack-ansible for reference - it seems more for a case of keeping track of an upstream bug? | 16:41 |
odyssey4me | unless there is some sort of workaround we can implement for this? | 16:41 |
palendae | Yeah, was being tracked here: https://bugs.launchpad.net/openstack-ansible/+bug/1400881. Basically we were going to make sure that the next SHA bump included the nova fix | 16:42 |
openstack | Launchpad bug 1400881 in openstack-ansible trunk "Cannot rebuild a VM created from a Cinder volume backed by NetApp" [Medium,In progress] - Assigned to David (david-alfano) | 16:42 |
odyssey4me | ok, it looks to me like we should remove that attachment and just follow the progress in the other bug we already have assigned | 16:43 |
palendae | I'm cool with that | 16:44 |
odyssey4me | there's not much we can do for the upstream bug other than test it and lobby for it to be backported, which I doubt will happen to juno but may happen to kilo | 16:44 |
odyssey4me | ok, that's it for new bugs that have yet to be classified | 16:46 |
odyssey4me | thank you all - I'll work through the doc bugs quickly | 16:46 |
*** b3rnard0_away is now known as b3rnard0 | 16:46 | |
Apsu | Thanks odyssey4me | 16:46 |
odyssey4me | does anyone have anything specific they want to raise? | 16:46 |
javeriak | hey guys, need a little help, ive noticed that every time the infras get rebooted the galera backend is always down and i have to go start mysql manually, problem is that now it wont start, and the galera error logs say 'failed to open gcomm backend connection: 98: error while trying to listen 'tcp://0.0.0.0:4567?socket.non_blocking=1', asio error 'Address already in use': 98 (Address already in use)' | 16:46 |
Apsu | javeriak: Address in use when binding to 0.0.0.0 means you're already listening on that port on a more specific address | 16:47 |
javeriak | whoops, did i just step into a meeting, sorry folks :) | 16:47 |
Apsu | ss -lntp sport = :4567 to identify what process is listening on a more specific address | 16:48 |
odyssey4me | javeriak also, you should only reboot one infra at a time, otherwise you have to get your cluster into a healthy state again :) | 16:48 |
stevelle | javeriak: we just wrapped the meeting, timing is fine | 16:48 |
odyssey4me | javeriak nope, we're done - ask away | 16:48 |
javeriak | odyssey4me: yes i usually wait for everything to come back up again, galera usually doesnt | 16:49 |
javeriak | Apsu : I'm getting users:(("mysqld",1272,11)) | 16:50 |
Apsu | javeriak: So you starting mysql manually, as you mentioned, is somehow binding to a specific address. Seems like a difference in config being used, or however you're starting manually being different than the automated mechanism's configuration | 16:51 |
javeriak | Is a background service taking care of the automated restarting? im starting with 'service mysql start --wsrep-new-cluster' for the first infra and simple service start for the rest | 16:52 |
*** weezS has joined #openstack-ansible | 16:52 | |
odyssey4me | javeriak yes, the services are set to start on reboot so they should already be running but the cluster may be in a broken state | 16:54 |
odyssey4me | so you may have to shut them all down, then bring them up one at a time | 16:54 |
odyssey4me | as I recall there's some detail about mariadb that requires them to be done in a particular order - you'll have to look that up | 16:54 |
odyssey4me | unless someone here knows how to do that in detail | 16:55 |
javeriak | odyssey4me: should i wait for services to fully come up on one before starting the next? | 16:55 |
*** Mudpuppy has quit IRC | 16:55 | |
odyssey4me | javeriak you'll have to check mariadb docs... I don't know the details personally | 16:55 |
Apsu | That's really not related to which address is being listened on. | 16:55 |
Apsu | javeriak: Should probably grep the configs for the IP the currently running mysqld is listening on | 16:56 |
Apsu | There's a difference here somewhere | 16:56 |
*** annashen has quit IRC | 16:57 | |
javeriak | Apsu: you're right, let me check, however it was fine some hours and restarts ago, i was usually able to get mysql started manually after every reboot, then something happened, and now it wont work on any power recycle | 16:57 |
Apsu | My guess is a config change happened :) | 16:58 |
*** annashen has joined #openstack-ansible | 16:59 | |
*** yaya has joined #openstack-ansible | 17:06 | |
*** jwagner_away is now known as jwagner | 17:11 | |
javeriak | so apparently all it needs is to kill the mysqld processes on the node that say 'Address already in use'.. | 17:14 |
Apsu | So it must have been running from prior to a config update | 17:21 |
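For anyone following along, the recovery sequence odyssey4me outlines roughly translates to the sketch below; the galera_all group name and the "most advanced node" choice are assumptions, and the MariaDB Galera docs cover how to pick that node:

    # Stop MariaDB on all Galera nodes first:
    ansible galera_all -m shell -a "service mysql stop"
    # On the most advanced node only, bootstrap a new cluster
    # (the same form javeriak uses above):
    service mysql start --wsrep-new-cluster
    # Then start the remaining nodes one at a time:
    service mysql start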
*** Mudpuppy has joined #openstack-ansible | 17:23 | |
openstackgerrit | Jesse Pretorius proposed stackforge/os-ansible-deployment: Enable Horizon to consume a Keystone v3 API endpoint https://review.openstack.org/206575 | 17:24 |
openstackgerrit | Jesse Pretorius proposed stackforge/os-ansible-deployment: Fix Keystone URI/URL defaults https://review.openstack.org/205192 | 17:25 |
openstackgerrit | Jesse Pretorius proposed stackforge/os-ansible-deployment: Fix Keystone URI/URL defaults https://review.openstack.org/205192 | 17:27 |
jwitko | Hey Guys, glance issue happening after what looked to be a successful osad deploy. Adding an image just “queues” and never moves any further. The glance logs don’t really show any errors on it even in debug mode. In fact in glance-registry I see “Successfully created image”. | 17:33 |
odyssey4me | jwitko did you add it using --file, --copy-from or --location ? | 17:34 |
jwitko | odyssey4me, location. i’m using it via horizon | 17:35 |
odyssey4me | jwitko I think Horizon actually does it with --copy-from | 17:36 |
odyssey4me | can your glance container reach the url that the image is on? | 17:36 |
jwitko | its an option, i choose location and then specify a url | 17:36 |
odyssey4me | and how big is the image? | 17:37 |
jwitko | yes, I was able to wget it without issue. However I have a proxy set on the container that I’m not sure horizon/glance is picking up on, but i would’ve expected an error from that | 17:37 |
jwitko | odyssey4me, its very small. its the cirros 64bit image | 17:37 |
odyssey4me | fyi glance logs are not very verbose unless you set glance-api.conf with debug=True | 17:37 |
jwitko | i have glance-api.conf debug=true, still no errors/issues | 17:37 |
odyssey4me | jwitko ok, then I'd suggest eliminating horizon first to determine where the issue is | 17:38 |
odyssey4me | get into the utility container, then: source /root/openrc and use the cli to add the image with the --copy-from option | 17:39 |
odyssey4me | if that works, try the --location option too | 17:39 |
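As a sketch, the utility-container test might look like this (image name and URL are illustrative):

    source /root/openrc
    # Server-side copy of a remote image:
    glance image-create --name cirros-test --disk-format qcow2 \
        --container-format bare \
        --copy-from http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
    # If that works, repeat with --location <url> instead of --copy-from.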
*** annashen has quit IRC | 17:42 | |
jwitko | odyssey4me, seems to just be hanging | 17:45 |
odyssey4me | jwitko with the cli add --debug before the command | 17:46 |
odyssey4me | to see where it hangs | 17:46 |
*** yaya has quit IRC | 17:46 | |
jwitko | odyssey4me, seems like it never gets past the initial curl | 17:46 |
Apsu | jwitko: The proxy is probably the issue. I assume you're setting your proxy in an environment variable? | 17:47 |
*** markvoelker has quit IRC | 17:47 | |
jwitko | yea, i just removed it though and now I’m getting a 401 unauthorized from keystone | 17:48 |
Apsu | My guess is that the curl glance calls runs in an environment that doesn't inherit the proxy variable | 17:49 |
Apsu | Since it's being called via popen, probably isn't passing that through | 17:49 |
Apsu | Perhaps there's a proxy config option in the conf | 17:49 |
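One hedged way to test Apsu's theory is to hand the proxy settings to the glance-api service itself; this assumes the container runs glance-api under upstart, and the proxy address is a placeholder:

    # In the glance container, add proxy variables via an upstart override:
    cat >> /etc/init/glance-api.override <<'EOF'
    env http_proxy=http://<proxy_addr>:3128
    env no_proxy=localhost,127.0.0.1,<internal_vip>
    EOF
    service glance-api restart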
*** yaya has joined #openstack-ansible | 17:52 | |
openstackgerrit | Jesse Pretorius proposed stackforge/os-ansible-deployment: Keystone Federation Service Provider Configuration https://review.openstack.org/194395 | 17:54 |
*** jmckind has quit IRC | 17:55 | |
*** fawadkhaliq has quit IRC | 17:55 | |
*** jmckind has joined #openstack-ansible | 17:59 | |
openstackgerrit | Jesse Pretorius proposed stackforge/os-ansible-deployment: Enable Horizon to consume a Keystone v3 API endpoint https://review.openstack.org/206575 | 18:00 |
*** fawadkhaliq has joined #openstack-ansible | 18:04 | |
*** fawadkhaliq has quit IRC | 18:05 | |
odyssey4me | miguelgrinberg fyi marekd is asking for testers for https://review.openstack.org/202176 (fernet token fix for federation) - perhaps you can give it a whirl? | 18:13 |
marekd | odyssey4me: ++! | 18:13 |
*** annashen has joined #openstack-ansible | 18:27 | |
jwitko | hey is there a tag or something i could use to push all proxy settings to containers/hosts | 18:35 |
*** fawadkhaliq has joined #openstack-ansible | 18:39 | |
*** markvoelker has joined #openstack-ansible | 18:42 | |
*** markvoelker_ has joined #openstack-ansible | 18:46 | |
*** markvoelker has quit IRC | 18:48 | |
*** TheIntern has joined #openstack-ansible | 18:57 | |
*** yaya has quit IRC | 19:00 | |
jwitko | Apsu, odyssey4me, so I am able to reach the VIP and I removed the proxy from the internal LB IP. Now when I curl I get “401 Unauthorized” | 19:10 |
*** fawadk has joined #openstack-ansible | 19:11 | |
*** fawadkhaliq has quit IRC | 19:12 | |
*** erikwilson_ has left #openstack-ansible | 19:16 | |
*** KLevenstein has quit IRC | 19:23 | |
jwitko | ah that was my token sorry | 19:24 |
jwitko | so if i go to the utility container and I execute the glance command line | 19:24 |
jwitko | it eventually (after a long time) errors with an output of “An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-48b00ec6-3e35-4716-8970-6e417854b0f4)” | 19:24 |
*** yaya has joined #openstack-ansible | 19:25 | |
jwitko | nothing output to the glance logs | 19:25 |
*** KLevenstein has joined #openstack-ansible | 19:28 | |
*** markvoelker_ has quit IRC | 19:29 | |
*** annashen has quit IRC | 19:30 | |
jwitko | oh actually i see in scrubber.log NotAuthenticated: Authentication required | 19:40 |
jwitko | apparently glance is not authenticating properly to keystone | 19:40 |
sigmavirus24 | jwitko: what version of osad are you using? | 19:41 |
jwitko | 11.0.4 | 19:41 |
jwitko | (kilo) | 19:42 |
jwitko | ok... very odd now. apparently all services are failing with Auth... just tried to do a keystone tenant-list and received the same | 19:46 |
jwitko | even from the keystone container itself after sourcing openrc | 19:46 |
sigmavirus24 | "received the same" | 19:46 |
sigmavirus24 | jwitko: did you accidentally the HEAD of kilo? | 19:46 |
jwitko | sigmavirus24, no, 100% positive I’m on 11.0.4, as I already made that mistake | 19:47 |
jwitko | DISTRIB_RELEASE="11.0.4" | 19:47 |
sigmavirus24 | jwitko: I assume you just checked out the right version and re-ran the playbooks? | 19:47 |
jwitko | Authorization Failed: An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-816d51ee-3f48-4cac-8178-a693e5fce70a) | 19:48 |
sigmavirus24 | Hm | 19:48 |
jwitko | sigmavirus24, that is correct. reran everything from scratch, destroyed the containers, etc. | 19:48 |
sigmavirus24 | You should look at the Keystone logs to see if you can find what's causing the 500 error | 19:48 |
sigmavirus24 | jwitko: that's something I haven't seen on 11.0.4 (500 errors like that) | 19:48 |
sigmavirus24 | Okay, I was concerned that you hadn't destroyed the containers | 19:48 |
sigmavirus24 | jwitko: can you confirm that there's nothing in your openrc that is pointing to Keystone v3? | 19:49 |
jwitko | ah ha! | 19:49 |
jwitko | sigmavirus24, Can't connect to MySQL server | 19:50 |
sigmavirus24 | jwitko: yeah, it's usually good to check the logs for stuff like that | 19:50 |
sigmavirus24 | :D | 19:50 |
sigmavirus24 | I assume you can determine why galera is failing, yes? | 19:50 |
jwitko | actually first time dealing with galera, but happy to give it a shot | 19:51 |
jwitko | sigmavirus24, to attempt to start is there anything to do besides /etc/init.d/mysql start ? | 19:52 |
javeriak | hey, does OSAD use keepalived for any services? | 19:52 |
palendae | javeriak: I don't think so | 19:53 |
sigmavirus24 | jwitko: I'd look at the galera containers logs to see what's going on | 19:54 |
sigmavirus24 | if anything's going on | 19:54 |
sigmavirus24 | and then try and restart using service mysql restart | 19:54 |
jwitko | wtf | 19:55 |
jwitko | “failed to open backend connection” | 19:55 |
javeriak | jwitko: you should check /openstack/log/infra1_galera_container-7b5eda29/galera_server_error.log | 20:03 |
javeriak | that should tell you why it can't start mysql | 20:04 |
jwitko | javeriak, thanks I found out how to restart the cluster after a bad shutdown | 20:04 |
jwitko | got it all up and running | 20:04 |
jwitko | appreciate your help and yours too sigmavirus24 | 20:04 |
jwitko | still trying to figure out glance though | 20:04 |
sigmavirus24 | jwitko: it's still angry? | 20:04 |
openstackgerrit | Tom Cameron proposed stackforge/os-ansible-deployment: [WIP] Add new role for router container https://review.openstack.org/203683 | 20:06 |
jwitko | sigmavirus24, yes when i restart the glance services scrubber.log still reports “Can not get scrub jobs from queue: Authentication required” | 20:06 |
jwitko | sigmavirus24, but from that same container that is failing glance i can successfully authenticate to keystone and do things like tenant-list | 20:10 |
jwitko | i am watching keystone logs and not seeing any errors come across | 20:11 |
jwitko | infact i see tons of 200 OKs | 20:11 |
sigmavirus24 | jwitko: yeah, make sure glance is configured with the correct credentials for mysql | 20:11 |
sigmavirus24 | oh | 20:12 |
jwitko | sigmavirus24, yes I can connect to mysql server from glance container no issue | 20:12 |
sigmavirus24 | actually | 20:12 |
jwitko | using credentials in glance-api.conf | 20:12 |
sigmavirus24 | this looks like it could be that glance-api can't authenticate to glance-registry | 20:12 |
sigmavirus24 | interesting | 20:12 |
jwitko | sigmavirus24, any ideas? | 20:14 |
sigmavirus24 | None yet | 20:17 |
sigmavirus24 | Did you configure glance-registry explicitly? | 20:17 |
jwitko | not sure what you mean by that | 20:18 |
*** javeriak has quit IRC | 20:18 | |
*** annashen has joined #openstack-ansible | 20:20 | |
jwitko | i didn’t do anything specific to that config file, no | 20:22 |
jwitko | sigmavirus24, so i have made a pastebin with the specifics of whats going on | 20:27 |
jwitko | http://paste.openstack.org/show/uK26ccvZjET9WBpAHhKx/ | 20:27 |
jwitko | the glance image-create has moved on to a 400 error instead of the auth failure. the scrubber.log still shows an auth failure, however it doesn’t seem to be a keystone issue | 20:28 |
sigmavirus24 | jwitko: i'll take a look in a second | 20:29 |
*** yaya has quit IRC | 20:30 | |
*** logan2 has quit IRC | 20:42 | |
sigmavirus24 | jwitko: what happens if you add --disk-format? | 20:43 |
sigmavirus24 | e.g., --disk-format bare | 20:43 |
jwitko | disk-format i have as qcow2, do you mean container-format ? | 20:43 |
jwitko | i just tried with --container-format bare, same error. 400 invalid URL | 20:44 |
*** logan2 has joined #openstack-ansible | 20:45 | |
jwitko | sigmavirus24, you can see that here: http://paste.openstack.org/show/mFbUVukpUGUDGc4noMKK/ | 20:45 |
sigmavirus24 | jwitko: what happens if you leave off --copy-from? | 20:46 |
jwitko | oh wow | 20:46 |
jwitko | it creates | 20:47 |
jwitko | I can access the url from the glance container with no issue though | 20:49 |
sigmavirus24 | jwitko: right | 20:49 |
jwitko | and i wouldn’t think an invalid URL would create a 400 bad request? | 20:49 |
sigmavirus24 | And you're using swift as a backend right? | 20:49 |
jwitko | no | 20:49 |
sigmavirus24 | filesystem? | 20:49 |
jwitko | yes, default_store = file | 20:50 |
jwitko | osad put my flavor as keystone+cachemanagement | 20:50 |
jwitko | [paste_deploy] | 20:50 |
jwitko | flavor = keystone+cachemanagement | 20:50 |
jwitko | scrubber.log seems to be outputting a slightly different error now on glance restart | 20:52 |
jwitko | http://paste.openstack.org/show/ES0PHA3J93CN42vlsgnC/ | 20:52 |
*** fawadk has quit IRC | 20:57 | |
jwitko | sigmavirus24, i was able to grab an ISO file from a local repository inside my network | 21:00 |
jwitko | using copy-from | 21:00 |
jwitko | so i think this is an issue with glance recognizing proxy settings | 21:00 |
sigmavirus24 | jwitko: could be that OR it could be redirects | 21:00 |
sigmavirus24 | can you download the cirros image locally and upload with --copy-from? | 21:01 |
jwitko | I don’t believe it redirects | 21:01 |
sigmavirus24 | Okay | 21:01 |
* sigmavirus24 couldn't remember honestly | 21:01 | |
jwitko | man that really sucks | 21:01 |
sigmavirus24 | jwitko: also to be clear, you don't actually configure glance-registry in a special way, right? | 21:02 |
jwitko | no, i do nothing to that file | 21:02 |
sigmavirus24 | odyssey4me: unrelated to this, that glance_store patch does fix your keystone v3 uri/url patch if we add the right config options to the glance_store config for swift | 21:02 |
jwitko | sigmavirus24, lol I spoke too soon. So doing an image-create on a local URL does not error but also does not create an image | 21:08 |
jwitko | it reports back all the details of the image, but then an “image-list” shows nothing | 21:08 |
sigmavirus24 | jwitko: so look in the logs for upload_utils something like glance.v1.images.upload_utils | 21:09 |
sigmavirus24 | And let me know if there are log statements from that module | 21:09 |
sigmavirus24 | specifically in glance-api.log | 21:09 |
jwitko | seems like permissions issues | 21:09 |
jwitko | glance-api.log:2015-07-28 16:59:53.484 863 ERROR glance.api.v1.upload_utils [req-b6a12990-22b4-47b4-961c-902d6dbfe784 f7eb0faede4e41c6aeb81321be2b4dec 7401109f377a476da18abecd92447747 - - -] Insufficient permissions on image storage media: Permission to write image storage media denied. | 21:09 |
sigmavirus24 | jwitko: so the directories glance is trying to write to are not owned by the correct user | 21:10 |
sigmavirus24 | probably something like /var/glance/images | 21:10 |
sigmavirus24 | That may be an os-a-d bug in which we create the directory but with the wrong user | 21:10 |
jwitko | yea, /var/lib/glance/images, what expected owner/perms ? | 21:11 |
sigmavirus24 | owner should probably be glance | 21:11 |
sigmavirus24 | jwitko: probably glance:glance | 21:11 |
sigmavirus24 | probably 755 as the permissions | 21:12 |
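A sketch of the fix sigmavirus24 points at, run inside each glance container (the path comes from the conversation above; ownership and mode are his guesses):

    # Reset ownership and permissions on the glance file store:
    chown -R glance:glance /var/lib/glance/images
    chmod 755 /var/lib/glance/images
    service glance-api restart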
jwitko | sigmavirus24, excellent that fixed it | 21:14 |
sigmavirus24 | jwitko: I have an AIO up right now running master and it had the right permissions | 21:14 |
sigmavirus24 | what were the permissions on your container? | 21:14 |
jwitko | dir was owned by “root:daemon” and subdir was some odd UIDs | 21:15 |
jwitko | and GIDs | 21:15 |
sigmavirus24 | jwitko: odd | 21:15 |
* sigmavirus24 wonders if there have been any fixes on master not backported to kilo but | 21:16 | |
sigmavirus24 | even so you wouldn't get those if you're sticking to 11.0.4 | 21:16 |
jwitko | yea, I’m not sure | 21:23 |
jwitko | its an nfs mount so maybe it came from there | 21:23 |
jwitko | but I’m still seeing those errors in the scrubber.log | 21:23 |
jwitko | it complains about config settings not being set that are actually set | 21:23 |
jwitko | as well as MissingCredentialError | 21:24 |
*** ntpttr2 has quit IRC | 21:50 | |
openstackgerrit | Tom Cameron proposed stackforge/os-ansible-deployment: [WIP] Add new role for router container https://review.openstack.org/203683 | 21:55 |
openstackgerrit | Tom Cameron proposed stackforge/os-ansible-deployment: [WIP] Add new role for router container https://review.openstack.org/203683 | 22:01 |
*** ntpttr2 has joined #openstack-ansible | 22:04 | |
sigmavirus24 | jwitko: sorry, got distracted | 22:05 |
sigmavirus24 | jwitko: did you sort out why glance's registry client thinks you're missing credentials? | 22:05 |
jwitko | no :( | 22:06 |
jwitko | i wish there was an easier way to tail all the logs on all the containers | 22:06 |
*** Mudpuppy has quit IRC | 22:11 | |
*** jmckind has quit IRC | 22:18 | |
openstackgerrit | Ian Cordasco proposed stackforge/os-ansible-deployment: Update glance_store configuration for Keystone v3 https://review.openstack.org/206743 | 22:18 |
openstackgerrit | Tom Cameron proposed stackforge/os-ansible-deployment: [WIP] Add new role for router container https://review.openstack.org/203683 | 22:19 |
sigmavirus24 | odyssey4me: if you want to test out your URI/URL change as a dependency of 206743 it should work. That said, that patch is a WIP and I am thoroughly opposed to merging it before the glance_store change merges | 22:20 |
*** ntpttr2 has quit IRC | 22:21 | |
palendae | jwitko: Do you want to tail the logs live? | 22:23 |
jwitko | palendae, yes that would be nice | 22:24 |
jwitko | but i have 3 containers load balanced for each service | 22:24 |
*** jwagner is now known as jwagner_away | 22:25 | |
palendae | Could do something like `ansible "hosts:all_containers" -m "shell" -a "tail <file>"` | 22:26 |
palendae | That won't necessarily be live, but it'll run a shell command across all the containers | 22:26 |
palendae | If you want a particular service, usually it would be "hosts:<service>_all" | 22:27 |
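Concretely, palendae's suggestion might look like this when run from the deployment host's playbooks directory (so the dynamic inventory is picked up); the group names and log path are assumptions:

    # Last 50 lines of the glance API log on every glance container:
    ansible glance_all -m shell -a "tail -n 50 /var/log/glance/glance-api.log"
    # Or across all containers:
    ansible all_containers -m shell -a "tail -n 20 /var/log/syslog"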
*** JRobinson__ has joined #openstack-ansible | 22:34 | |
*** spotz is now known as spotz_zzz | 22:34 | |
*** TheIntern has quit IRC | 22:42 | |
jwitko | thanks | 22:46 |
*** KLevenstein has quit IRC | 22:46 | |
palendae | If you want to run a command across just the physical host OS (not the containers they hold), `ansible "hosts:is_metal" -m "shell" -a "command"` | 22:49 |
*** tlian has quit IRC | 22:50 | |
jwitko | yup, looking for continuous though | 23:03 |
palendae | Sure. You could probably try the PYTHONUNBUFFERED environment variable for ansible, but you'll probably get things interleaved | 23:06 |
*** sigmavirus24 is now known as sigmavirus24_awa | 23:07 | |
*** ntpttr2 has joined #openstack-ansible | 23:11 | |
*** ntpttr2 has quit IRC | 23:19 | |
*** weezS has quit IRC | 23:26 | |
*** mattoliverau has quit IRC | 23:29 | |
*** mattoliverau has joined #openstack-ansible | 23:29 | |
*** jmckind has joined #openstack-ansible | 23:37 | |
jwitko | palendae, still around? | 23:40 |
*** annashen has quit IRC | 23:55 | |
*** sdake has quit IRC | 23:56 |