Wednesday, 2020-03-11

*** yamamoto has joined #openstack-lbaas00:01
*** yamamoto has quit IRC00:06
*** ianychoi has quit IRC00:07
*** ianychoi has joined #openstack-lbaas00:08
johnsomYeah, I can load old barbican containers fine, even up to master branch. So, not sure what is happening for Jorge00:11
*** armax has joined #openstack-lbaas00:16
*** yamamoto has joined #openstack-lbaas00:58
*** dmellado has quit IRC01:22
*** ianychoi has quit IRC01:30
*** ianychoi has joined #openstack-lbaas01:32
*** gthiemon1e has joined #openstack-lbaas01:43
*** gthiemonge has quit IRC01:43
*** dmellado has joined #openstack-lbaas01:46
*** happyhemant has quit IRC02:09
*** ianychoi has quit IRC02:10
*** gthiemonge has joined #openstack-lbaas02:15
*** gthiemon1e has quit IRC02:16
*** ianychoi has joined #openstack-lbaas02:18
*** hongbin has joined #openstack-lbaas02:20
*** yamamoto has quit IRC02:35
*** hongbin has quit IRC02:39
*** yamamoto has joined #openstack-lbaas02:43
dawzonI'm wondering how existing listeners should be migrated to the new configuration where every listener will have its own cipher string.  My first thought is to just set them to the built-in default (OWASP recommendations); would this cause any issues with existing deployments?  I suppose that existing deployments never chose their ciphers in the first place, so switching the ciphers may not be a big deal.03:23
*** psachin has joined #openstack-lbaas03:26
*** ianychoi has quit IRC03:35
*** ianychoi has joined #openstack-lbaas03:37
*** TrevorV has joined #openstack-lbaas03:57
*** armax has quit IRC04:13
*** gcheresh_ has joined #openstack-lbaas04:24
*** gcheresh_ has quit IRC04:45
*** ianychoi has quit IRC05:00
*** ianychoi has joined #openstack-lbaas05:03
*** TrevorV has quit IRC05:06
*** yamamoto has quit IRC05:34
*** yamamoto has joined #openstack-lbaas05:39
*** yamamoto has quit IRC05:43
*** rcernin has quit IRC05:58
*** ianychoi has quit IRC06:01
*** ianychoi has joined #openstack-lbaas06:03
*** gthiemonge has quit IRC06:05
*** gthiemonge has joined #openstack-lbaas06:05
*** ianychoi has quit IRC06:11
*** ianychoi has joined #openstack-lbaas06:13
rm_workdawzon: if the default is the same as whatever HAProxy does by default, then there will be no change -- not sure if that's exactly the same as OWASP, but that should be the setting for OLD stuff, I think, and NEW stuff can default to the OWASP list06:19
rm_workwould be awesome if they line up but not sure they will exactly06:19
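A quick way to check whether the two line up is to expand both strings with the openssl CLI and diff the results; a minimal sketch, assuming it is run on an amphora and that the OWASP-style string below is only an illustrative placeholder, not the actual recommended list:

    # Expand the local OpenSSL default cipher list (what haproxy falls back to
    # when nothing is configured or compiled in)
    openssl ciphers 'DEFAULT' | tr ':' '\n' | sort > /tmp/openssl_default
    # Expand a candidate OWASP-style string (placeholder, not the real list)
    openssl ciphers 'ECDHE+AESGCM:ECDHE+AES:!aNULL:!MD5' | tr ':' '\n' | sort > /tmp/owasp_candidate
    # Empty diff output means the two sets match on this OpenSSL build
    diff -u /tmp/openssl_default /tmp/owasp_candidate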
*** yamamoto has joined #openstack-lbaas06:25
*** ianychoi has quit IRC06:29
*** ianychoi has joined #openstack-lbaas06:31
*** ianychoi has quit IRC06:39
*** ianychoi has joined #openstack-lbaas06:41
*** ianychoi has quit IRC06:58
*** ianychoi has joined #openstack-lbaas07:05
*** emccormick has joined #openstack-lbaas07:25
emccormickis there any way to get a loadbalancer in error state to reset and retry?07:31
emccormickIt would be really painful to have to delete and recreate them as several are part of a heat stack07:31
emccormickthe amphorae went awol during an upgrade because of a config error. I can make new LBs but the old ones won't spawn again07:32
*** gcheresh_ has joined #openstack-lbaas07:44
*** maciejjozefczyk has joined #openstack-lbaas07:49
*** yamamoto has quit IRC07:59
*** yamamoto has joined #openstack-lbaas08:00
*** tesseract has joined #openstack-lbaas08:02
*** yamamoto has quit IRC08:04
*** yamamoto has joined #openstack-lbaas08:05
*** ccamposr has joined #openstack-lbaas08:06
emccormickok, failover was the magic trick08:10
emccormick(for anyone that reads this history and wants an answer to my stupidity)08:11
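For anyone landing here later, the reset was done with the load balancer failover command; a minimal sketch, where lb1 stands in for the real load balancer name or ID:

    # Ask Octavia to rebuild the amphorae for a load balancer stuck in ERROR
    openstack loadbalancer failover lb1
    # Then watch provisioning_status return to ACTIVE
    openstack loadbalancer show lb1 -c provisioning_status -c operating_status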
*** tkajinam has quit IRC08:14
*** psachin has quit IRC08:15
*** yamamoto_ has joined #openstack-lbaas08:18
*** yamamoto has quit IRC08:22
*** yamamoto_ has quit IRC08:35
*** sapd1_x has quit IRC08:36
*** yamamoto has joined #openstack-lbaas08:37
*** dulek has quit IRC08:49
*** vishalmanchanda has joined #openstack-lbaas08:56
*** rpittau|afk is now known as rpittau08:58
*** ccamposr has quit IRC09:01
*** ccamposr has joined #openstack-lbaas09:02
*** dulek has joined #openstack-lbaas09:03
*** yamamoto has quit IRC09:56
*** yamamoto has joined #openstack-lbaas09:57
*** yamamoto has quit IRC09:59
*** yamamoto has joined #openstack-lbaas10:20
-openstackstatus- NOTICE: The mail server for lists.openstack.org is currently not handling emails. The infra team will investigate and fix during US morning.10:26
*** lucadelmonte90 has joined #openstack-lbaas10:27
lucadelmonte90hello, i have a question regarding octavia amphora lb, probably a dumb one. i deployed a test openstack infrastructure and managed to deploy a loadbalancer, but when i try to add some servers to the pool or a listener to the lb, the operating status is Offline. i connected to the amphora instance and the backend servers are not reachable, also the10:27
lucadelmonte90amphora instance has only one network card connected to the management octavia network (amp_boot_network_list), no ips on the backend network, which i assigned when creating the load balancer. is that normal? Are backend servers reached by the amphora instance in some other way (router?)?10:27
*** yamamoto has quit IRC10:27
openstackgerritAnn Taraday proposed openstack/octavia-tempest-plugin master: TEST  https://review.opendev.org/704783  10:39
*** yamamoto has joined #openstack-lbaas11:03
*** yamamoto has quit IRC11:09
openstackgerritMerged openstack/python-octaviaclient stable/queens: Fix long CLI error messages  https://review.opendev.org/706353  11:17
*** maciejjozefczyk_ has joined #openstack-lbaas11:17
*** maciejjozefczyk has quit IRC11:18
openstackgerritMerged openstack/python-octaviaclient stable/stein: Fix long CLI error messages  https://review.opendev.org/706348  11:19
openstackgerritMerged openstack/python-octaviaclient stable/train: Fix long CLI error messages  https://review.opendev.org/706347  11:21
*** ianychoi has quit IRC11:24
cgoncalveslucadelmonte90, hi. you should have a network namespace named amphora-haproxy in the amphora. check with "ip netns exec amphora-haproxy ip a"11:26
lucadelmonte90cgoncalves thanks! you are right, and using that namespace to execute a curl i can reach the backend servers11:29
lucadelmonte90cgoncalves why is the operating status reporting Offline?11:30
cgoncalveslucadelmonte90, could you please clarify which object's operating status is reporting offline?11:31
cgoncalvesis it the load balancer? pool? health monitor? members?11:31
lucadelmonte90cgoncalves the loadbalancer, listener and pool11:32
lucadelmonte90the health monitor is instead reporting Online11:32
lucadelmonte90root@amphora-63d97a66-1909-45c0-ad1e-beadc51432bb:~# ip netns exec amphora-haproxy netstat -nlput11:33
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 192.168.1.238:80        0.0.0.0:*               LISTEN      1231/haproxy
lucadelmonte90but this command times out root@amphora-63d97a66-1909-45c0-ad1e-beadc51432bb:~# ip netns exec amphora-haproxy curl 192.168.1.23811:33
cgoncalveslucadelmonte90, OFFLINE operating status means the resource is administratively disabled. could you please share the output of "openstack loadbalancer status show $LB_ID"?11:37
lucadelmonte90[root@ostackkolla ~]# openstack loadbalancer status show lb211:37
{
    "loadbalancer": {
        "name": "lb2",
        "listeners": [
            {
                "pools": [
                    {
                        "name": "Pool1",
                        "provisioning_status": "ACTIVE",
                        "health_monitor": {
                            "name": "monitorHTTP",
                            "type": "HTTP",
                            "id": "f6333447-2249-4de5-9b32-3f14f3608fba",
                            "operating_status": "ONLINE",
                            "provisioning_status": "ACTIVE"
                        },
                        "members": [
                            {
                                "name": "Test",
                                "provisioning_status": "ACTIVE",
                                "address": "192.168.1.159",
                                "protocol_port": 80,
                                "id": "02fc6366-182f-4d54-89dd-9517b70aae2a",
                                "operating_status": "NO_MONITOR"
                            },
                            {
                                "name": "test2",
                                "provisioning_status": "ACTIVE",
                                "address": "192.168.1.225",
                                "protocol_port": 80,
                                "id": "c7d629bb-17f4-4c28-83bd-1a00f37c37df",
                                "operating_status": "NO_MONITOR"
                            }
                        ],
                        "id": "8c583f04-3684-42b0-ab49-86950b26e2f5",
                        "operating_status": "OFFLINE"
                    }
                ],
                "name": "HTTP",
                "id": "7967857b-f281-4934-88cc-86ed98d816d3",
                "operating_status": "OFFLINE",
                "provisioning_status": "ACTIVE"
            }
        ],
        "id": "8127ce81-7b94-42f1-a6d6-b401386acfa5",
        "operating_status": "OFFLINE",
        "provisioning_status": "ACTIVE"
    }
}
*** xakaitetoia has joined #openstack-lbaas11:38
*** nicolasbock has joined #openstack-lbaas11:38
lucadelmonte90https://pastebin.com/UbDuLSRU  11:38
cgoncalvesthanks for the pastebin. so the load balancer, listener and pool are in operating status OFFLINE. IIRC, this may indicate our network is not properly configured to allow the periodic health messages from the amphora to be received by the controller11:42
cgoncalveslucadelmonte90, please check that the network allows the amphora to reach the node where the octavia health manager service is running on port UDP/5555 and no firewall is dropping those packets11:43
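A sketch of how that reachability could be checked, assuming nc and tcpdump are available in these images and that 10.0.0.5 stands in for the health-manager address in this deployment:

    # On the controller node: confirm the health manager is listening on UDP 5555
    ss -unlp | grep 5555
    # On the controller node: watch for incoming health messages
    tcpdump -ni any udp port 5555
    # From the amphora (default namespace, i.e. the octavia management network
    # interface): send a throwaway UDP datagram towards the health manager
    echo test | nc -u -w1 10.0.0.5 5555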
lucadelmonte90cgoncalves i see, thanks for the help, i' ll investigate11:49
cgoncalvesoops, s/indicate our network/indicate the network/. sorry for the dyslexia :)11:50
lucadelmonte90no problem, :D11:50
lucadelmonte90my mistake is probably that the mgmt network that i used for testing the load balancer is an external network11:51
lucadelmonte90i deployed with kolla, and the control node where the octavia-health-manager container runs is not reachable on that subnet11:52
lucadelmonte90i see that the health_manager is bound to the mgmt network, which is not reachable from the network specified in amp_boot_network_list11:54
cgoncalvesI see. I'm not familiar with kolla-ansible deployments. maybe someone else here has experience and could help. best would be to contact #openstack-kolla11:55
lucadelmonte90the amphora sends the health checks to the controller via the default namespace right (so the network specified in amp_boot_network_list), and not from the amphora netns11:57
lucadelmonte90?11:57
cgoncalvescorrect11:58
*** rpittau is now known as rpittau|bbl11:58
lucadelmonte90perfect, thank you very much for the clarifications, you have been very helpful : )11:58
cgoncalvesit sends to all controller nodes listed under controller_ip_port_list config in a round-robin way11:58
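The relevant settings live in the [health_manager] section of octavia.conf on the controllers; a minimal sketch of what to look for (the path and addresses below are assumptions for this deployment):

    # On each node running octavia-health-manager
    grep -A 5 '^\[health_manager\]' /etc/octavia/octavia.conf
    # Expect something along these lines, with addresses reachable from the
    # amphora management network:
    #   [health_manager]
    #   bind_ip = 10.0.0.5
    #   bind_port = 5555
    #   controller_ip_port_list = 10.0.0.5:5555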
cgoncalvesglad we could help. feel free to reach out again anytime for octavia questions :)11:59
openstackgerritAnn Taraday proposed openstack/octavia master: Check  https://review.opendev.org/712091  12:03
openstackgerritAnn Taraday proposed openstack/octavia-tempest-plugin master: TEST  https://review.opendev.org/704783  12:16
*** lucadelmonte90 has quit IRC12:35
*** irclogbot_3 has quit IRC13:22
*** irclogbot_3 has joined #openstack-lbaas13:23
*** gcheresh_ has quit IRC13:24
*** gcheresh has joined #openstack-lbaas13:24
*** ataraday_ has joined #openstack-lbaas13:26
*** ataraday_ has quit IRC13:31
*** yamamoto has joined #openstack-lbaas13:36
*** maciejjozefczyk_ is now known as maciejjozefczyk13:41
*** yamamoto has quit IRC13:49
*** zigo has quit IRC13:49
*** servagem has joined #openstack-lbaas13:55
*** vishalmanchanda has quit IRC14:00
*** sapd1_x has joined #openstack-lbaas14:25
*** yamamoto has joined #openstack-lbaas14:25
johnsomSigh, yet another person struggling because they used kolla. I wish someone would go fix kolla to not be broken out of the box.14:27
johnsomcgoncalves: Thanks for helping them out.14:27
*** yamamoto has quit IRC14:31
*** armax has joined #openstack-lbaas14:38
*** vishalmanchanda has joined #openstack-lbaas14:41
rm_workI don't understand how it'd be possible to fix that? it TOTALLY depends on how your network is set up?15:00
rm_workthere's no automated way to make that happen AFAIU :/15:00
johnsomrm_work Kolla is the only deployment tool for OpenStack that doesn't set up the network for the user15:01
rm_worki have no idea how the other ones could possibly do it15:01
rm_worki assume they make assumptions and only work like 20% of the time or something15:02
* rm_work shrugs15:02
rm_worklike, conceptually it seems impossible to do, every deployment is completely different IME15:03
*** TrevorV has joined #openstack-lbaas15:09
*** yamamoto has joined #openstack-lbaas15:17
*** yamamoto has quit IRC15:25
*** gcheresh has quit IRC15:33
*** sapd1_x has quit IRC15:51
*** zigo has joined #openstack-lbaas15:52
*** ataraday_ has joined #openstack-lbaas15:57
*** yamamoto has joined #openstack-lbaas15:57
rm_work#startmeeting Octavia16:01
openstackMeeting started Wed Mar 11 16:01:05 2020 UTC and is due to finish in 60 minutes.  The chair is rm_work. Information about MeetBot at http://wiki.debian.org/MeetBot.16:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:01
*** openstack changes topic to " (Meeting topic: Octavia)"16:01
openstackThe meeting name has been set to 'octavia'16:01
rm_work#chair johnsom16:01
openstackCurrent chairs: johnsom rm_work16:01
rm_work#chair cgoncalves16:01
openstackCurrent chairs: cgoncalves johnsom rm_work16:01
cgoncalveshi16:01
gthiemongeHi16:01
rm_workhey, carlos can you run this? both johnsom and I are in meetings internally that are running over16:01
johnsomHi16:02
cgoncalvesI am on a call too :/16:02
johnsomDaylight time is fun!16:02
cgoncalves#topic Announcements16:02
*** openstack changes topic to "Announcements (Meeting topic: Octavia)"16:02
ataraday_hi16:03
cgoncalvesI have not prepared anything, sorry. is there anything relevant to share?16:03
*** yamamoto has quit IRC16:03
johnsomThe last release of octavia-lib date is coming up quickly16:04
johnsomWeek of March 3016:04
cgoncalvesthanks16:05
cgoncalvesAny other announcements this week?16:05
cgoncalves#topic Brief progress reports / bugs needing review16:06
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)"16:06
johnsom#link https://releases.openstack.org/ussuri/schedule.html  16:06
*** maciejjozefczyk has quit IRC16:07
ataraday_Highlight for #link https://review.opendev.org/#/c/709696/  16:07
johnsomOye, Ummmm, I'm down to just the network driver testing work. I have had some internal distractions that have slowed this down.16:07
johnsomI also needed to address a few other small bugs, you probably saw the patches for.16:07
johnsomPlan is to wrap failover as soon as possible and start the v2 driver work on it.16:08
ataraday_And wanted to raise attention for this bug #link https://storyboard.openstack.org/#!/story/2007340 - please share your thoughts.16:08
johnsomI saw it, but have not had time to read/understand it.16:09
cgoncalvesI have been busy internally and with tripleo. I did some work on the octavia tempest plugin side to add coverage for the allowed CIDRs. sadly the jobs fail occasionally; it is a shortcoming in Neutron where the SG rule doesn't get configured in the agent before Octavia completes its flow and returns, and tempest then checks whether the allowed CIDR is applied on the listener.16:10
johnsomataraday_ Are you keeping the priority review list up to date for the jobboard chain? I think in the next week or so we really need to prioritize getting the last base patches reviewed and merged. IMO, this is a *must* feature for U.16:12
ataraday_johnsom, yes, things are on track there. Just two changes left - one small I pointed earlier16:13
ataraday_and the main change16:13
johnsomOk, thank you16:14
johnsomAny other updates this week?16:15
johnsom#topic Open Discussion16:16
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)"16:16
johnsomOne item I wanted to mention today, the upstream periodic image building jobs are not working16:16
johnsomThey stopped early last month.16:16
johnsomIt appears the python "yaml" package issue is going on there.16:17
johnsomIf anyone has some time, it would be great if you could track that down.16:17
johnsomMaybe work with those DIB cores....  grin16:17
cgoncalveshttp://zuul.openstack.org/builds?project=openstack%2Foctavia&pipeline=periodic-stable&pipeline=periodic  16:17
* cgoncalves wears the Octavia hat at all times16:18
johnsomI think I had some patches up for new image building jobs, but it's been such chaos recently I'm not 100% sure if those are related or not16:18
cgoncalvesI will check what is going on there16:18
rm_workweird ok, we have not had issues with this internally and we build centos images from master :/16:19
rm_workhave not seen any build failures16:19
johnsomAh, mine is just a "test" not a publish, so different issues16:19
johnsomI *think* this may just be that DIB has not released a fixed package for this issue yet, but not sure16:20
johnsom#link https://review.opendev.org/#/c/706393/  16:20
johnsomThat is the "test" job patch16:20
cgoncalvesianw released DIB 2.43.0 today (?)16:20
johnsomAh, yes, maybe that will fix it????16:21
cgoncalvesI can check16:21
johnsomThank you16:22
johnsomAny other topics today?16:22
cgoncalvesthanks for monitoring the periodic jobs!16:22
johnsomOk, if there are not any more topics we can close it out.16:24
johnsomMaybe our PTL rm_work can scrub the priority list and bring a proposed list of features that are at risk of not making Ussuri next week.16:24
johnsomWe are pretty much up to the feature in/out time.16:25
rm_workthat would be awesome if i could do that16:25
rm_workyes, so I have time in my sprint this week/next for actually working on the multi-vip stuff16:25
rm_workbut I am worried that may not make it into U even though the lib change did16:25
rm_workwhich is fine16:25
rm_workI think our real bottleneck is legitimately review cycles16:25
rm_workand even if the work can get done, there's no cycles to get people to review it16:26
rm_workfor instance, STILL waiting on reviews for outstanding AZ code bugs that are making AZ support unusable16:26
rm_work(thanks carlos for the +2 on the first one)16:26
rm_workso I will try to take some time to scrub that and make some decisions about what we will most likely have to delay till next cycle, and what we need to focus on for this cycle16:27
johnsomOk, anything else today or onward?16:27
johnsomOk, thanks everyone!16:29
rm_worko/16:29
johnsom#endmeeting16:29
*** openstack changes topic to "Discussions for OpenStack Octavia | Priority bug review list: https://etherpad.openstack.org/p/octavia-priority-reviews"16:29
openstackMeeting ended Wed Mar 11 16:29:06 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:29
cgoncalveso/16:29
openstackMinutes:        http://eavesdrop.openstack.org/meetings/octavia/2020/octavia.2020-03-11-16.01.html  16:29
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/octavia/2020/octavia.2020-03-11-16.01.txt  16:29
openstackLog:            http://eavesdrop.openstack.org/meetings/octavia/2020/octavia.2020-03-11-16.01.log.html  16:29
rm_workwhelp, got out of my meeting in time to be here for 5 minutes of this one :D16:29
xgermanlol16:36
johnsomrm_work BTW, I tested the barbican client thing yesterday (with a master devstack) and it behaved as expected, so, not sure what is up with that story. Glad we stopped the revert.16:38
rm_workyes16:38
johnsomI think we need more information about what version they are running, etc.16:39
johnsomIt *seems* like they are passing a secret reference instead of the container into the listener create, but the story docs don't show that. If that is the case, the exception is valid.16:40
*** rpittau|bbl is now known as rpittau16:43
*** nicolasbock has quit IRC17:14
*** nicolasbock has joined #openstack-lbaas17:20
*** gthiemonge has quit IRC17:30
dawzonSo I did a little bit of digging and it seems like HAProxy doesn't have any of its own built-in defaults for ciphers (https://github.com/haproxy/haproxy/blob/751e5e21a9b9228dc035bd4c65fe65a043b31f77/Makefile#L237). Either a list of defaults is specified at compile time, or it uses the default cipher string provided by OpenSSL (i.e. there's not a master default we can rely on). So there's potential for17:58
dawzonit to change between distros and different versions of OpenSSL, although I can't say how much it actually varies in practice. Seems like in an ideal scenario there would be a way to query the cipher strings of each instance of haproxy, then insert that into the database, but is there even a way to perform a migration like that?17:58
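The variance is easy to see by asking OpenSSL directly on a couple of different images; a minimal sketch:

    # Print the OpenSSL version and the ciphers it enables for the 'DEFAULT'
    # string, i.e. roughly what haproxy ends up with when no cipher list is
    # configured or compiled in
    openssl version
    openssl ciphers -v 'DEFAULT' | head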
*** vishalmanchanda has quit IRC18:00
*** gthiemonge has joined #openstack-lbaas18:04
johnsomdawzon Well, I don't see ciphers, only protocols even being exposed in haproxy and I don't really like the idea of trying to poll haproxy for that information.18:07
johnsomI think we either need to have some type of "undefined", such as "None" implies today, or set a default in the DB at migration time and add a release note for operators to failover to pick up the new default.18:09
johnsomrm_work What do you think?18:09
*** gcheresh has joined #openstack-lbaas18:16
rm_workthe least possibly impactful would be a None that just doesn't set this18:16
rm_workfor legacy objects18:17
rm_work(and we set the default on all new objects moving forward)18:17
rm_workBUT that introduces ongoing overhead for maintainability18:17
rm_workpicking defaults from OpenSSL and backfilling everything with that seems sane, except that will have the side effect of deploying a technically new and unrelated config option on the next touch of the LB, which could lead to things like "i added a member and suddenly the LB is rejecting my SSL requests"18:19
rm_workas a maintainer I prefer option 2, but the safer option is 118:19
*** tesseract has quit IRC18:43
*** gcheresh has quit IRC19:31
*** rpittau is now known as rpittau|afk19:40
*** yamamoto has joined #openstack-lbaas20:33
*** yamamoto has quit IRC20:38
*** servagem has quit IRC20:38
*** gcheresh has joined #openstack-lbaas21:02
*** rcernin has joined #openstack-lbaas21:28
*** gcheresh has quit IRC21:50
*** spatel has joined #openstack-lbaas22:12
*** spatel has quit IRC22:17
*** jrosser has quit IRC22:31
*** rm_work has quit IRC22:31
*** andrein has quit IRC22:31
*** xakaitetoia has quit IRC22:33
*** rm_work has joined #openstack-lbaas22:33
*** jrosser has joined #openstack-lbaas22:36
*** andrein has joined #openstack-lbaas22:36
*** guilhermesp has quit IRC22:36
*** generalfuzz has quit IRC22:36
*** mnaser has quit IRC22:36
*** gmann has quit IRC22:36
*** fyx has quit IRC22:36
*** dawzon has quit IRC22:36
*** beisner has quit IRC22:36
*** dougwig has quit IRC22:36
*** luketollefson has quit IRC22:36
*** fyx has joined #openstack-lbaas22:38
*** dougwig has joined #openstack-lbaas22:38
*** mnaser has joined #openstack-lbaas22:39
*** beisner has joined #openstack-lbaas22:40
*** gmann has joined #openstack-lbaas22:40
*** guilhermesp has joined #openstack-lbaas22:41
*** luketollefson has joined #openstack-lbaas22:42
*** dawzon has joined #openstack-lbaas22:42
*** TrevorV has quit IRC22:44
*** generalfuzz has joined #openstack-lbaas22:44
*** irclogbot_3 has quit IRC22:47
*** irclogbot_0 has joined #openstack-lbaas22:48
*** openstackstatus has quit IRC22:48
*** stevenglasford has quit IRC22:50
*** xgerman has quit IRC22:50
*** nmickus has quit IRC22:50
*** coreycb has quit IRC22:50
*** ccamposr has quit IRC22:50
*** xgerman has joined #openstack-lbaas22:52
*** coreycb has joined #openstack-lbaas22:54
*** nmickus has joined #openstack-lbaas22:55
*** emccormick has quit IRC22:56
*** lxkong has quit IRC22:56
*** stevenglasford has joined #openstack-lbaas22:57
*** emccormick has joined #openstack-lbaas23:01
*** tkajinam has joined #openstack-lbaas23:01
*** lxkong has joined #openstack-lbaas23:02
*** ianychoi has joined #openstack-lbaas23:06
*** yamamoto has joined #openstack-lbaas23:47
*** KeithMnemonic has quit IRC23:51
*** KeithMnemonic has joined #openstack-lbaas23:51
*** yamamoto has quit IRC23:54

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!