johnsom | from sqlalchemy.orm import joinedload | 00:01 |
rm_work | yeah i got it | 00:07 |
rm_work | it's ... | 00:07 |
rm_work | i dunno | 00:07 |
rm_work | i don't have many LBs here | 00:07 |
rm_work | so | 00:07 |
*** cpuga has quit IRC | 00:08 | |
rm_work | hmm so what I have noticed is, when it has to refresh policy, it does take a bit of extra time | 00:08 |
rm_work | but only like 1.5s | 00:10 |
rm_work | johnsom: it doesn't appear to make things ... worse? | 00:11 |
rm_work | not sure if better though | 00:11 |
rm_work | testing in staging and i only have like 1 LB | 00:11 |
johnsom | Your times didn't drop? | 00:11 |
rm_work | I can TRY to push it to prod | 00:12 |
rm_work | but it'd be easier if you had a patch up | 00:12 |
johnsom | Yeah, no I just wanted you to try it out. | 00:12 |
johnsom | It's a trade off. So, sqlalchemy (my fav) is the root of our slow get_all | 00:13 |
johnsom | That and our model stuff. | 00:13 |
rm_work | it doesn't break or make the times LONGER :P | 00:13 |
johnsom | So, without that, sqlalchemy makes like 10 calls to the DB for EACH LB. | 00:13 |
johnsom | Basically because we are building an octavia data model of the LB to then use for the API type object. | 00:14 |
johnsom | That change tells sqlalchemy, for that query, pull it all over at once. | 00:14 |
johnsom | My 400 LB system went from ~3 seconds to 0.43 with that change | 00:15 |
johnsom | The trade off is we are pulling more data over to the api process at one time, so the memory usage will be higher. | 00:15 |
johnsom | I looked at the query, SA will page it in 1000 row chunks. | 00:16 |
johnsom | Hmm, actually I should check what happens with over 1000 LBs | 00:16 |
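A minimal sketch of the eager-loading change under discussion — the model and relationship names (LoadBalancer, listeners, pools, vip) are illustrative stand-ins, not Octavia's exact classes:

    from sqlalchemy.orm import joinedload

    # Without eager loading, touching lb.listeners, lb.pools, etc. lazily
    # fires a separate SELECT per relationship per LB -- the ~10 extra DB
    # calls per load balancer mentioned above. joinedload() tells SQLAlchemy
    # to pull each relationship in via JOINs in the same query, so the whole
    # get_all becomes a single round trip to the database.
    lbs = (session.query(LoadBalancer)
           .options(joinedload(LoadBalancer.listeners),
                    joinedload(LoadBalancer.pools),
                    joinedload(LoadBalancer.vip))
           .all())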
*** gongysh has quit IRC | 00:17 | |
rm_work | hmmm | 00:18 |
rm_work | shouldn't that be OK? | 00:18 |
rm_work | it isn't going to be THAT MUCH memory, is it? | 00:18 |
rm_work | I mean ALL of the data for 1000 LBs should be what, like 1mb? lol | 00:19 |
rm_work | (of course it is never that simple...) | 00:19 |
rm_work | such objects, very overhead, wow | 00:19 |
johnsom | Oh, that is interesting.... | 00:21 |
johnsom | SA is limiting it to 1000 LBs | 00:21 |
johnsom | Period | 00:21 |
johnsom | I have 1,202 LBs, but I only get 1000 in a get_all | 00:22 |
johnsom | Maybe that is the pagination filter... | 00:23 |
johnsom | Yeah, ok, that was our limit. Cool | 00:24 |
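The 1,000-row cap turned out to be the API's own pagination default rather than anything in SQLAlchemy. Roughly, the effect is as if the repository layer did the following (the names and default value here are a hypothetical sketch):

    DEFAULT_PAGE_LIMIT = 1000  # assumed default page size

    def get_all(session, limit=DEFAULT_PAGE_LIMIT, offset=0):
        # With no explicit limit in the request, only the first page of
        # load balancers is returned -- hence 1,000 of the 1,202 LBs.
        return (session.query(LoadBalancer)
                .limit(limit)
                .offset(offset)
                .all())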
*** aojea has joined #openstack-lbaas | 00:32 | |
rm_work | yeah | 00:33 |
*** aojea has quit IRC | 00:36 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Force get_all to only make one DB call https://review.openstack.org/482363 | 00:44 |
johnsom | rm_work ^^^^ There is your patch | 00:44 |
rm_work | kk | 00:45 |
rm_work | i'll have it deployed to prod soonish | 00:52 |
rm_work | turnaround is about ... an hour | 00:52 |
rm_work | eh maybe 30m | 00:52 |
johnsom | Well, it's mostly to help you out, so I have no rush. It dropped my 1000 LB query from 3.5 seconds to 0.5, so I'm happy with it | 00:53 |
rm_work | k | 00:55 |
rm_work | yeah it seems fine | 00:55 |
rm_work | soon i'll have testing data for prod | 00:56 |
rm_work | though possibly delayed by a trip to the store | 00:56 |
rm_work | so I can review it tomorrow | 00:56 |
rm_work | but preliminarily, i'm seeing average request times ~0.9-1.0 | 01:12 |
rm_work | which is less than the previous minimum of 1.35 | 01:12 |
rm_work | and according to API logs, 0.6 of that was network latency anyway | 01:12 |
rm_work | so yes | 01:12 |
rm_work | seems to be an improvement | 01:12 |
rm_work | I'm going to +2 | 01:12 |
rm_work | as soon as tests finish | 01:12 |
rm_work | seems in general more consistent | 01:17 |
rm_work | johnsom: haven't seen any long calls yet either | 01:17 |
rm_work | err i lied | 01:17 |
*** tongl has quit IRC | 01:17 | |
rm_work | first few calls were long | 01:17 |
rm_work | i assume loading up policy? :/ | 01:17 |
johnsom | And/or connecting to the DB | 01:18 |
rm_work | ahhh yeah | 01:18 |
rm_work | johnsom: out of 5900 (6000, with the first 100 removed because they were weird): | 01:24 |
rm_work | Min. 1st Qu. Median Mean 3rd Qu. Max. | 01:24 |
rm_work | 0.7330 0.8550 0.9240 0.9994 1.1090 6.9770 | 01:24 |
rm_work | same thing, but for the previous 13000 datapoints: | 01:25 |
rm_work | Min. 1st Qu. Median Mean 3rd Qu. Max. | 01:25 |
rm_work | 1.319 1.854 2.104 2.219 2.343 34.861 | 01:25 |
johnsom | Hmm, I don't see those max pops | 01:26 |
rm_work | so yeah, marked improvement | 01:26 |
rm_work | i assume it's environmental | 01:26 |
rm_work | like DB slowness | 01:26 |
rm_work | err, network slowness to the DB | 01:26 |
rm_work | transient network weirdness | 01:27 |
rm_work | which was multiplied by the huge number of queries happening | 01:27 |
johnsom | The one downer is we are going direct to the SA module and not through oslo.db, but I didn't see a way to do it | 01:27 |
rm_work | err | 01:29 |
rm_work | but you only touched like one line i thought | 01:29 |
rm_work | and it was just to add a thing? | 01:29 |
rm_work | oh | 01:30 |
rm_work | you just mean for specifying this option | 01:30 |
rm_work | meh | 01:30 |
rm_work | we're still *using* oslo.db | 01:30 |
johnsom | Yeah | 01:30 |
johnsom | A quick search for that in openstack showed it in use, but everyone was using native SA. So, whatever | 01:31 |
rm_work | ugh | 01:35 |
rm_work | ok 12k queries finished | 01:35 |
rm_work | got some more bad data | 01:35 |
rm_work | Min. 1st Qu. Median Mean 3rd Qu. Max. | 01:35 |
rm_work | 0.729 0.856 0.920 1.018 1.110 33.305 | 01:35 |
rm_work | still BETTER but | 01:35 |
rm_work | hold on | 01:35 |
rm_work | 50% 90% 99% 99.9% 99.99% | 01:37 |
rm_work | 0.920000 1.271000 1.632010 6.730674 31.973880 | 01:37 |
rm_work | percentiles | 01:38 |
rm_work | that didn't paste great, but | 01:38 |
rm_work | so yeah that was 12k, going to do *120k* and go to the store | 01:42 |
rm_work | will post results later tonight / tomorrow | 01:42 |
rm_work | WOW what happened to zuul | 01:43 |
rm_work | your change was queued until JUST NOW | 01:43 |
rm_work | actually mostly still queued | 01:43 |
rm_work | looks like the nodes all dropped off and then just got SLAMMED | 01:43 |
rm_work | ah yep looks like infra noticed nothing had been building for a while and just fixed it recently :) | 01:46 |
rm_work | probably we'll end up needing to recheck later | 01:46 |
rm_work | once this calms down | 01:47 |
*** aojea has joined #openstack-lbaas | 02:32 | |
*** aojea has quit IRC | 02:37 | |
*** sanfern has joined #openstack-lbaas | 02:49 | |
*** yamamoto has joined #openstack-lbaas | 02:58 | |
*** links has joined #openstack-lbaas | 03:41 | |
*** wasmum has quit IRC | 04:14 | |
*** wasmum has joined #openstack-lbaas | 04:28 | |
korean101 | hi guys. | 04:32 |
korean101 | i use LBv2 agent (not octavia) in newton (CentOS 7) | 04:33 |
korean101 | but i got this ERROR (WARNING neutron_lbaas.drivers.haproxy.namespace_driver [-] Stats socket not found for loadbalancer 9329ba09-b017-422f-bca8-1c3ca044d66a) | 04:33 |
korean101 | periodically | 04:33 |
korean101 | any clues? | 04:33 |
johnsom | korean101 That is normal. During some states the haproxy process isn't running. | 04:36 |
korean101 | johnsom: but the lbaas namespace doesn't show up | 04:37 |
korean101 | lbaas-loadbalancer-create is OK | 04:38 |
korean101 | johnsom: | 71514b72-01f5-4403-8d87-23ed6a806730 | test-lb | 10.0.0.12 | ACTIVE | haproxy | | 04:38 |
korean101 | johnsom: but nothing about lbaas in network node | 04:39 |
johnsom | Hmm, well the warning message above means the agent could not connect to the haproxy stats socket. Usually that is because haproxy is not running, reloading, or a listener has not yet been deployed | 04:39 |
korean101 | johnsom: OH! listener... 1 minute... | 04:42 |
korean101 | johnsom: sorry about my quick temper | 04:45 |
*** cpuga has joined #openstack-lbaas | 04:51 | |
*** cpuga has quit IRC | 04:52 | |
*** cpuga has joined #openstack-lbaas | 04:53 | |
*** cpuga has quit IRC | 04:58 | |
rm_work | > summary(times_new) | 05:07 |
rm_work | Min. 1st Qu. Median Mean 3rd Qu. Max. | 05:07 |
rm_work | 0.722 0.854 0.913 1.056 1.090 42.244 | 05:07 |
rm_work | > quantile(times_new, c(.5,.9,.99,.999,.9999)) | 05:07 |
rm_work | 50% 90% 99% 99.9% 99.99% | 05:07 |
rm_work | 0.91300 1.24700 1.79804 12.56131 31.18927 | 05:07 |
rm_work | johnsom: ^^ 120000 requests, 12 parallel threads (10000 each) | 05:07 |
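For reference, the test shape described here (12 parallel workers, each timing 10,000 sequential GETs, then summarizing with quantiles like R's summary()/quantile()) can be reproduced with a sketch along these lines — the endpoint URL is a placeholder:

    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "http://octavia-api/v2.0/lbaas/loadbalancers"  # placeholder
    THREADS, PER_THREAD = 12, 10000

    def worker(count):
        # Time each GET individually, like the per-request samples above.
        samples = []
        for _ in range(count):
            start = time.monotonic()
            requests.get(URL)
            samples.append(time.monotonic() - start)
        return samples

    with ThreadPoolExecutor(max_workers=THREADS) as pool:
        times = [t for ts in pool.map(worker, [PER_THREAD] * THREADS) for t in ts]

    # Cut points comparable to quantile(times_new, c(.5, .9, .99, .999))
    cuts = statistics.quantiles(times, n=1000)
    print(cuts[499], cuts[899], cuts[989], cuts[998])  # 50%, 90%, 99%, 99.9%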
johnsom | Good, bad, otherwise? | 05:12 |
*** sanfern has quit IRC | 05:15 | |
*** sanfern has joined #openstack-lbaas | 05:16 | |
*** aojea has joined #openstack-lbaas | 05:33 | |
*** amotoki_away is now known as amotoki | 05:35 | |
*** armax has quit IRC | 05:39 | |
*** dlundquist has quit IRC | 05:40 | |
*** dlundquist has joined #openstack-lbaas | 05:40 | |
*** rcernin has joined #openstack-lbaas | 05:41 | |
*** dlundquist has quit IRC | 05:51 | |
*** dlundquist has joined #openstack-lbaas | 05:51 | |
*** dlundquist has quit IRC | 06:00 | |
*** sbalukoff_ has quit IRC | 06:00 | |
*** sbalukoff_ has joined #openstack-lbaas | 06:02 | |
*** dlundquist has joined #openstack-lbaas | 06:02 | |
*** pcaruana has joined #openstack-lbaas | 06:04 | |
*** aojea has quit IRC | 06:07 | |
*** aojea has joined #openstack-lbaas | 06:07 | |
*** aojea has quit IRC | 06:12 | |
*** aojea has joined #openstack-lbaas | 06:12 | |
*** aojea has quit IRC | 06:16 | |
rm_work | johnsom: i mean, it's ... similar to the previous data :P meaning, my dataset is probably big enough to be reliably accurate | 06:17 |
rm_work | but that i still get really bad queries sometimes :/ | 06:18 |
rm_work | and this is with only 20 LBs | 06:18 |
rm_work | i wonder if it sometimes has to renegotiate a connection to the DB? and that takes a while? | 06:18 |
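If stale or renegotiated DB connections are the culprit behind the occasional slow call, SQLAlchemy's standard pool options are the usual mitigation — these are real create_engine parameters, but whether they help here is speculation, and the URL is a placeholder:

    from sqlalchemy import create_engine

    engine = create_engine(
        "mysql+pymysql://user:pass@db-host/octavia",  # placeholder URL
        pool_pre_ping=True,  # test each connection before use, replacing dead ones
        pool_recycle=3600,   # proactively recycle connections older than an hour
        pool_size=10,        # keep a warm pool rather than reconnecting under load
    )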
openstackgerrit | huangshan proposed openstack/neutron-lbaas master: Repair using neutron CLI Gets the list of lbaas-l7policy Closes-Bug: #1702589 https://review.openstack.org/482416 | 06:22 |
openstack | bug 1702589 in octavia "Failed to get list of neutron-l7policy neutronclient CLI" [Undecided,Incomplete] https://launchpad.net/bugs/1702589 - Assigned to huangshan (huangshan) | 06:22 |
*** basilAB has quit IRC | 06:23 | |
*** vaishali has quit IRC | 06:24 | |
*** gtrxcb has joined #openstack-lbaas | 06:25 | |
*** basilAB has joined #openstack-lbaas | 06:28 | |
*** vaishali has joined #openstack-lbaas | 06:29 | |
*** gcheresh_ has joined #openstack-lbaas | 06:40 | |
*** kobis has joined #openstack-lbaas | 06:40 | |
*** sanfern has quit IRC | 06:48 | |
*** kobis has quit IRC | 06:51 | |
*** isantosp has joined #openstack-lbaas | 06:53 | |
*** tesseract has joined #openstack-lbaas | 07:02 | |
*** fnaval has quit IRC | 07:05 | |
*** fnaval has joined #openstack-lbaas | 07:06 | |
korean101 | is there any config to remove empty qlbaas-e0b32195-cbad-4e89-bb9f-b45d11354a6c namespaces? | 07:07 |
korean101 | lots of empty lbaas namespaces remain | 07:08 |
*** aojea has joined #openstack-lbaas | 07:15 | |
*** kobis has joined #openstack-lbaas | 07:26 | |
*** openstackgerrit has quit IRC | 07:33 | |
*** gtrxcb has quit IRC | 07:46 | |
*** fnaval has quit IRC | 07:59 | |
*** diltram has quit IRC | 08:06 | |
*** diltram has joined #openstack-lbaas | 08:17 | |
*** sanfern has joined #openstack-lbaas | 09:03 | |
*** links has quit IRC | 09:37 | |
*** links has joined #openstack-lbaas | 09:39 | |
*** links has quit IRC | 09:44 | |
*** links has joined #openstack-lbaas | 09:49 | |
*** yamamoto has quit IRC | 10:03 | |
*** links has quit IRC | 10:13 | |
*** yamamoto has joined #openstack-lbaas | 11:04 | |
*** yamamoto has quit IRC | 11:11 | |
*** yamamoto has joined #openstack-lbaas | 11:30 | |
*** openstackgerrit has joined #openstack-lbaas | 11:35 | |
openstackgerrit | Santhosh Fernandes proposed openstack/octavia master: Option to enable provisioning status to be sync with neutron db https://review.openstack.org/478385 | 11:35 |
*** atoth has joined #openstack-lbaas | 11:45 | |
*** links has joined #openstack-lbaas | 11:46 | |
*** links has quit IRC | 11:59 | |
*** slaweq has joined #openstack-lbaas | 12:11 | |
slaweq | hello, I have a question about neutron-lbaas and Ubuntu package | 12:11 |
slaweq | when I'm installing it on a host with neutron-server, it creates a new file /etc/neutron/neutron_lbaas.conf which should be added to the neutron-server process with the "--config-file" option, am I right? | 12:12 |
slaweq | and if yes, why isn't it added? | 12:13 |
*** yamamoto has quit IRC | 12:38 | |
*** openstackgerrit has quit IRC | 12:47 | |
*** openstackgerrit has joined #openstack-lbaas | 13:06 | |
openstackgerrit | German Eichberger proposed openstack/octavia master: ACTIVE-ACTIVE Topology: Initial Distributor Driver Mixin https://review.openstack.org/313006 | 13:06 |
*** yamamoto has joined #openstack-lbaas | 13:25 | |
*** cpuga has joined #openstack-lbaas | 13:31 | |
*** cpuga has quit IRC | 13:38 | |
*** cpuga has joined #openstack-lbaas | 13:38 | |
*** kobis has quit IRC | 13:54 | |
*** reedip_ has joined #openstack-lbaas | 14:05 | |
*** fnaval has joined #openstack-lbaas | 14:06 | |
*** yamamoto has quit IRC | 14:24 | |
*** fnaval has quit IRC | 14:28 | |
*** armax has joined #openstack-lbaas | 14:33 | |
*** gcheresh_ has quit IRC | 14:34 | |
*** armax has quit IRC | 14:37 | |
*** rcernin has quit IRC | 14:38 | |
*** yamamoto has joined #openstack-lbaas | 14:40 | |
*** rcernin has joined #openstack-lbaas | 14:40 | |
openstackgerrit | Bernard Cafarelli proposed openstack/octavia master: DIB: drop custom mirror elements https://review.openstack.org/482587 | 14:44 |
*** armax has joined #openstack-lbaas | 14:45 | |
*** openstackgerrit has quit IRC | 14:48 | |
*** slaweq has quit IRC | 14:53 | |
*** fnaval has joined #openstack-lbaas | 14:56 | |
*** openstackgerrit has joined #openstack-lbaas | 14:58 | |
openstackgerrit | sumitjami proposed openstack/neutron-lbaas master: fixed health monitor setting during tempest test, https://review.openstack.org/480233 | 14:58 |
xgerman_ | https://review.openstack.org/313006 passed - ship it! | 15:16 |
*** rcernin has quit IRC | 15:18 | |
*** catintheroof has joined #openstack-lbaas | 15:28 | |
openstackgerrit | Jason Niesz proposed openstack/octavia master: blueprint: l3-active-active https://review.openstack.org/453005 | 15:29 |
*** yamamoto has quit IRC | 15:58 | |
*** kobis has joined #openstack-lbaas | 15:59 | |
*** aojea has quit IRC | 16:03 | |
*** amotoki is now known as amotoki_away | 16:14 | |
*** armax has quit IRC | 16:17 | |
*** sshank has joined #openstack-lbaas | 16:27 | |
*** blogan has quit IRC | 16:45 | |
*** yamamoto has joined #openstack-lbaas | 16:59 | |
*** fnaval_ has joined #openstack-lbaas | 17:01 | |
*** sshank has quit IRC | 17:02 | |
*** fnaval has quit IRC | 17:04 | |
*** yamamoto has quit IRC | 17:04 | |
*** armax has joined #openstack-lbaas | 17:17 | |
*** tongl has joined #openstack-lbaas | 17:34 | |
*** cpuga has quit IRC | 17:39 | |
*** cpuga has joined #openstack-lbaas | 17:39 | |
*** aojea has joined #openstack-lbaas | 17:44 | |
*** aojea has quit IRC | 17:49 | |
*** yamamoto has joined #openstack-lbaas | 18:02 | |
*** yamamoto has quit IRC | 18:07 | |
*** kobis has quit IRC | 18:10 | |
*** sshank has joined #openstack-lbaas | 18:11 | |
*** tesseract has quit IRC | 18:25 | |
*** sshank has quit IRC | 18:44 | |
*** aojea has joined #openstack-lbaas | 18:45 | |
*** reedip_ has quit IRC | 18:49 | |
*** aojea has quit IRC | 19:04 | |
*** sanfern has quit IRC | 19:04 | |
*** sanfern has joined #openstack-lbaas | 19:07 | |
rm_work | o/ | 19:18 |
*** kobis has joined #openstack-lbaas | 19:20 | |
xgerman_ | o/ | 19:20 |
*** gcheresh_ has joined #openstack-lbaas | 19:26 | |
xgerman_ | rm_Work how do you feel about us releasing an amphora image for others to download: https://governance.openstack.org/tc/resolutions/20170530-binary-artifacts.html | 19:28 |
rm_work | hmmmm | 19:28 |
rm_work | actually that's interesting | 19:28 |
rm_work | I think it might be useful | 19:28 |
rm_work | like "here's the Pike image" | 19:28 |
xgerman_ | so do I | 19:28 |
rm_work | probably would save a lot of confusion | 19:28 |
xgerman_ | yep | 19:28 |
xgerman_ | since yesterday I am on docker-hub | 19:29 |
xgerman_ | johnsom is worried that if we don't automate it, like with a weekly job, they become stale, but we could have some addition to the diskimage script to publish after build | 19:30 |
rm_work | ^^ yeah i would worry too | 19:31 |
rm_work | but, this would only be for releases right? | 19:31 |
rm_work | or for changes to stable/* | 19:31 |
*** aojea has joined #openstack-lbaas | 19:31 | |
rm_work | otherwise, they SHOULD become stale -- you want the one for your version of octavia | 19:31 |
rm_work | if you're running master, you should build your own | 19:32 |
xgerman_ | yeah, we should limit to the release | 19:32 |
xgerman_ | agree, master you are on your own | 19:32 |
johnsom | I am just thinking of security fixes to the base OS. I mean, what is our goal here? A reference image? That could be built daily and should work with the stables, right? | 19:35 |
xgerman_ | ideally a reference image but I can see us just providing demo images to get you off the ground | 19:36 |
xgerman_ | maybe something for tomorrow’s agenda | 19:36 |
*** armax has quit IRC | 19:38 | |
*** aojea has quit IRC | 19:39 | |
*** slaweq has joined #openstack-lbaas | 19:40 | |
*** kobis has quit IRC | 19:45 | |
rm_work | johnsom: I was assuming when we cut a release for Pike and it moves to stable/pike we also build an image based on that | 19:52 |
rm_work | since ... that won't change frequently | 19:52 |
rm_work | and we could be nice and go back and make an image for ocata | 19:53 |
rm_work | and newton | 19:53 |
rm_work | based on the stable/* | 19:53 |
rm_work | doesn't that make sense? | 19:53 |
*** jniesz has joined #openstack-lbaas | 19:53 | |
johnsom | Well, I'm mostly thinking about the OS security patches. Do we want to have images out there that have kernel issues, etc. | 19:53 |
johnsom | The agent should be compatible with an older controller right? | 19:54 |
*** aojea has joined #openstack-lbaas | 19:58 | |
*** slaweq has quit IRC | 20:12 | |
*** armax has joined #openstack-lbaas | 20:13 | |
*** krypto has joined #openstack-lbaas | 20:17 | |
*** cpuga has quit IRC | 20:29 | |
*** cpuga_ has joined #openstack-lbaas | 20:29 | |
*** cpuga_ has quit IRC | 20:34 | |
*** gcheresh_ has quit IRC | 20:55 | |
rm_work | ah true | 20:57 |
rm_work | theoretically we have kept it compatible I believe | 20:57 |
rm_work | and yeah I guess we'd have to refresh it for ubuntu changes | 20:57 |
rm_work | so yeah maybe a weekly build | 20:57 |
rm_work | of just the latest stable? | 20:58 |
rm_work | and say "use this for any version"? | 20:58 |
rm_work | then we REALLY commit to being fully backwards compatible at all times | 20:58 |
*** catintheroof has quit IRC | 21:04 | |
*** pcaruana has quit IRC | 21:23 | |
*** aojea has quit IRC | 21:26 | |
*** aojea has joined #openstack-lbaas | 21:33 | |
*** aojea has quit IRC | 21:33 | |
*** aojea has joined #openstack-lbaas | 21:35 | |
*** fnaval_ has quit IRC | 21:42 | |
*** fnaval has joined #openstack-lbaas | 21:51 | |
*** fnaval has quit IRC | 21:52 | |
*** fnaval has joined #openstack-lbaas | 21:52 | |
*** fnaval has quit IRC | 21:53 | |
*** fnaval has joined #openstack-lbaas | 21:56 | |
*** sshank has joined #openstack-lbaas | 21:56 | |
*** jniesz has quit IRC | 22:04 | |
*** krypto has quit IRC | 22:07 | |
*** aojea has quit IRC | 22:12 | |
johnsom | rm_work FYI, the new OSC client plugin has been released. 1.1.0 (had to bump the middle due to a g-r update) | 22:44 |
rm_work | k | 22:45 |
*** cpuga has joined #openstack-lbaas | 23:02 | |
*** cpuga has quit IRC | 23:12 | |
rm_work | johnsom: i'm trying to look into whether our indexes are bad | 23:39 |
rm_work | johnsom: but i'm not sure with sqlalchemy we explicitly set indexes? | 23:40 |
johnsom | I don't think we have added any indexes beyond those that are automatically generated for the key columns | 23:40 |
johnsom | All of the primary key columns are automatically indexed. | 23:41 |
johnsom | We are probably missing an index on the project_id column. Let me look. | 23:42 |
johnsom | Yeah, we might at some point need to add them for the project_id column. | 23:43 |
johnsom | Do you think you have had enough churn to need to run an optimize on the mysql table???? | 23:46 |
*** sshank has quit IRC | 23:54 | |
rm_work | no | 23:58 |
rm_work | i'm looking at slow query logs right now to see if that will shed any light | 23:58 |
rm_work | johnsom: yeah i saw the same thing, i think project_id might be universally worth indexing | 23:58 |
rm_work | especially given it's used when doing LIST queries | 23:58 |
rm_work | right? | 23:58 |
johnsom | Not with admin, no | 23:59 |
rm_work | like | 23:59 |
rm_work | select * from <object_table> where project_id = <myproject> | 23:59 |
rm_work | is basically the query | 23:59 |
rm_work | ah yes but with user accounts it is | 23:59 |
rm_work | right? | 23:59 |
johnsom | Yes | 23:59 |
rm_work | i would imagine it's worth an index <_< | 23:59 |
rm_work | honestly i'm tempted to index basically everything we could ever query on | 23:59 |
johnsom | Don't | 23:59 |
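For the single project_id index discussed above (as opposed to indexing everything), the change would look roughly like this — the table name matches the discussion, but the migration itself is a hypothetical sketch:

    from sqlalchemy import Column, String

    # On the model: index the column that non-admin list calls filter on, i.e.
    #   SELECT * FROM load_balancer WHERE project_id = :project_id
    project_id = Column(String(36), index=True)

    # Or as an Alembic migration:
    # def upgrade():
    #     op.create_index('ix_load_balancer_project_id',
    #                     'load_balancer', ['project_id'])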