ut2k3 | Ok thank you so much | 00:00 |
ut2k3 | I'm gonna sleep too! | 00:01 |
ut2k3 | Thank you, and sorry for asking so much :/ | 00:01 |
*** ut2k3 has quit IRC | 00:04 | |
*** luksky has quit IRC | 00:14 | |
sapd1 | Do you know what the use cases are for multiple VIPs on a load balancer? Could you guys tell me? | 01:26 |
*** ricolin has joined #openstack-lbaas | 01:36 | |
*** goldyfruit has quit IRC | 01:36 | |
*** goldyfruit has joined #openstack-lbaas | 02:06 | |
rm_work | sapd1: so let's say you want both IPv4 and IPv6 | 02:15 |
rm_work | :) | 02:15 |
rm_work | that requires one additional VIP | 02:15 |
rm_work | that is the best example | 02:15 |
rm_work | another would be if you want to be able to access it on a private AND a public network | 02:15 |
rm_work | depending on how routing works (maybe you have some nodes that have zero egress from their private subnet) | 02:16 |
rm_work | (assuming you can have a private and a public subnet on the same network, I guess, since it's limited to a single network with this design) | 02:16 |
rm_work | sapd1: if you have feedback about your own use cases for multi-VIP, I'd love to hear it, as I want to make the design fit as many cases as reasonably possible | 02:17 |
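For illustration, a sketch of what a dual-stack (IPv4 + IPv6) create could look like with the additional-VIP client option this design eventually grew; the option syntax and the subnet names are assumptions, since the patch was still WIP at this point in the log:

```console
# Hypothetical subnet names; --additional-vip per the multi-VIP design (WIP here)
openstack loadbalancer create \
  --name dual-stack-lb \
  --vip-subnet-id my-ipv4-subnet \
  --additional-vip subnet-id=my-ipv6-subnet
```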
sapd1 | rm_work, thank you for your answer. I will try it out after you finish your work :D | 02:24 |
*** irclogbot_3 has quit IRC | 02:26 | |
openstackgerrit | sapd proposed openstack/octavia master: Support create amphora instance from volume based. https://review.opendev.org/570505 | 02:27 |
rm_work | yeah this is eating up all my time right now, but as a bonus, I DID get my devstack back up and running | 02:28 |
rm_work | so maybe I can test your thing | 02:28 |
rm_work | if there are instructions on how to set it up... let me know | 02:29 |
*** irclogbot_2 has joined #openstack-lbaas | 02:29 | |
openstackgerrit | sapd proposed openstack/octavia master: Support create amphora instance from volume based. https://review.opendev.org/570505 | 02:30 |
sapd1 | rm_work, ah, about cinder volume, right? | 02:31 |
rm_work | yes | 02:32 |
sapd1 | :D Actually that patch was done by many contributors :) | 02:34 |
rm_work | yeah, but if you are testing it, you know how to enable it and use it and verify it in devstack | 02:34 |
rm_work | right? | 02:34 |
rm_work | I don't know what I'd be looking at | 02:34 |
sapd1 | ah, you have to configure volume_driver in the [controller_worker] section - volume_driver = volume_cinder_driver - and the authentication info in the [cinder] section. | 02:36 |
sapd1 | and set this variable (OCTAVIA_VOLUME_DRIVER) in local.conf for devstack | 02:37 |
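Put together, a minimal sketch of the configuration sapd1 describes. The [controller_worker] setting and the devstack variable come straight from the chat; the [cinder] option names below are assumptions, so check the octavia.conf reference for your release:

```ini
# octavia.conf -- enable booting amphorae from cinder volumes
[controller_worker]
volume_driver = volume_cinder_driver

[cinder]
# Assumed option names; verify against your release's config reference.
auth_section = service_auth
volume_size = 16

# devstack local.conf -- same driver in a devstack deployment
[[local|localrc]]
OCTAVIA_VOLUME_DRIVER=volume_cinder_driver
```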
*** goldyfruit has quit IRC | 04:06 | |
johnsom | We have a tempest gate for it. | 04:19 |
* johnsom passes through wondering why the fedora screen saver isn't working.... | 04:20 | |
*** gcheresh_ has joined #openstack-lbaas | 04:22 | |
*** luksky has joined #openstack-lbaas | 04:35 | |
*** luksky has quit IRC | 04:41 | |
*** luksky has joined #openstack-lbaas | 04:42 | |
*** luksky has quit IRC | 04:56 | |
openstackgerrit | sapd proposed openstack/octavia master: Support create amphora instance from volume based. https://review.opendev.org/570505 | 05:09 |
*** yboaron_ has quit IRC | 05:11 | |
cgoncalves | johnsom, better that and a crashing terminal than what happened to my partner's Apple laptop: the system updater kept crashing at boot, with no way to stop it from retrying at every boot | 06:11 |
cgoncalves | second time it happened in the past 3 months | 06:11 |
openstackgerrit | sapd proposed openstack/octavia master: Support create amphora instance from volume based. https://review.opendev.org/570505 | 06:23 |
*** gthiemonge has joined #openstack-lbaas | 06:38 | |
*** tesseract has joined #openstack-lbaas | 06:41 | |
*** yboaron_ has joined #openstack-lbaas | 06:43 | |
*** yboaron_ has quit IRC | 06:48 | |
*** yboaron_ has joined #openstack-lbaas | 06:49 | |
*** rcernin has quit IRC | 07:00 | |
*** ccamposr has joined #openstack-lbaas | 07:03 | |
*** pcaruana has joined #openstack-lbaas | 07:07 | |
*** pnull has quit IRC | 07:19 | |
*** rpittau|afk is now known as rpittau | 07:20 | |
*** ivve has joined #openstack-lbaas | 07:42 | |
*** trident has quit IRC | 08:04 | |
*** trident has joined #openstack-lbaas | 08:05 | |
openstackgerrit | Adit Sarfaty proposed openstack/neutron-lbaas stable/stein: Support URL query params in healthmonitor url_path https://review.opendev.org/660930 | 08:31 |
*** ricolin has quit IRC | 09:15 | |
*** luksky has joined #openstack-lbaas | 09:29 | |
*** ccamposr has quit IRC | 09:38 | |
*** ccamposr has joined #openstack-lbaas | 09:38 | |
*** ccamposr__ has joined #openstack-lbaas | 09:38 | |
*** ccamposr__ has quit IRC | 09:38 | |
*** pnull has joined #openstack-lbaas | 10:32 | |
openstackgerrit | Ann Taraday proposed openstack/octavia master: [Jobboard] Importable flow functions https://review.opendev.org/659538 | 10:32 |
*** sapd1_x has joined #openstack-lbaas | 10:37 | |
*** pnull has quit IRC | 11:06 | |
openstackgerrit | Merged openstack/octavia master: Document health monitor UDP-CONNECT type https://review.opendev.org/660364 | 11:21 |
openstackgerrit | Merged openstack/octavia-dashboard master: Fix devstack plugin python3 support https://review.opendev.org/660813 | 11:34 |
*** ccamposr has quit IRC | 12:07 | |
*** boden has joined #openstack-lbaas | 12:09 | |
*** luksky has quit IRC | 12:33 | |
*** ccamposr has joined #openstack-lbaas | 13:05 | |
*** goldyfruit has joined #openstack-lbaas | 13:16 | |
*** luksky has joined #openstack-lbaas | 13:25 | |
*** ricolin has joined #openstack-lbaas | 13:49 | |
*** ccamposr__ has joined #openstack-lbaas | 14:10 | |
*** pcaruana has quit IRC | 14:10 | |
*** ccamposr has quit IRC | 14:12 | |
*** lemko has joined #openstack-lbaas | 14:21 | |
*** pcaruana has joined #openstack-lbaas | 14:29 | |
*** gthiemonge has quit IRC | 14:34 | |
*** gthiemonge has joined #openstack-lbaas | 14:34 | |
*** Vorrtex has joined #openstack-lbaas | 14:52 | |
*** ivve has quit IRC | 15:06 | |
*** yboaron_ has quit IRC | 15:11 | |
*** luksky has quit IRC | 15:12 | |
*** gcheresh_ has quit IRC | 15:39 | |
*** rpittau is now known as rpittau|afk | 15:51 | |
openstackgerrit | Merged openstack/octavia master: Fix tox for functional py36 and py37 https://review.opendev.org/660227 | 15:54 |
openstackgerrit | Merged openstack/octavia master: Correct OVN driver feature matrix https://review.opendev.org/660202 | 15:54 |
*** pcaruana has quit IRC | 16:09 | |
*** ivve has joined #openstack-lbaas | 16:23 | |
*** gthiemonge has quit IRC | 16:30 | |
*** gthiemonge has joined #openstack-lbaas | 16:30 | |
*** sapd1_x has quit IRC | 16:54 | |
*** ricolin has quit IRC | 16:55 | |
*** pcaruana has joined #openstack-lbaas | 17:00 | |
*** goldyfruit has quit IRC | 17:03 | |
johnsom | cores - It would be nice to get this backport merged so we can cut a dashboard release: https://review.opendev.org/#/c/660769/ | 17:14 |
*** goldyfruit has joined #openstack-lbaas | 17:36 | |
*** lemko has quit IRC | 17:51 | |
*** gcheresh_ has joined #openstack-lbaas | 17:53 | |
*** luksky has joined #openstack-lbaas | 18:20 | |
*** gcheresh_ has quit IRC | 18:23 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Convert listener flows to use provider models https://review.opendev.org/660236 | 18:38 |
*** tesseract has quit IRC | 19:05 | |
openstackgerrit | Merged openstack/octavia-dashboard stable/stein: Fix 403 issue when creating load balancers https://review.opendev.org/660769 | 19:43 |
*** gcheresh_ has joined #openstack-lbaas | 19:56 | |
*** rouk has joined #openstack-lbaas | 20:08 | |
rouk | when an amphora fails to provision a listener due to OOM, and it takes down the entire LB (which is not good), what's the recommended way to get it running again, now that it's immutable and can't be scaled back down? | 20:09 |
rouk | since it can't be failed over because it's immutable | 20:12 |
johnsom | rouk The retries/repair process will time out and the controller working on the object will release the load balancer back to either ACTIVE or ERROR. | 20:26 |
rouk | so I'm in ACTIVE/ERROR status right now, with all listeners sitting in provisioning ERROR (because one listener OOM'd) | 20:27 |
rouk | what's step 1 to get the LB to not be dead? the amphorae are live. | 20:28 |
rouk | but... all the listeners have an ONLINE operating status and an ERROR provisioning status | 20:29 |
rouk | things are all over the place after an OOM | 20:29 |
johnsom | If it is in "ERROR" you can use the failover API to have the controller rebuild the LB. ERROR is not an immutable state for failover. | 20:33 |
rouk | well, it is in my version; we spoke about this in the past. | 20:34 |
johnsom | You can also delete one of the failed listeners | 20:34 |
johnsom | What version are you running? | 20:34 |
rouk | best way to get the version you're looking for? | 20:34 |
*** gcheresh_ has quit IRC | 20:35 | |
johnsom | pip list, or from the logs | 20:35 |
johnsom | i.e.: May 16 10:33:38 devstack octavia-worker[31892]: INFO octavia.common.config [-] /usr/local/bin/octavia-worker version 4.1.0.dev50 | 20:35 |
rouk | 3.0.2 | 20:37 |
johnsom | Ah, ok, yes, there was a bug fixed in 3.1.0 for the failover problem. | 20:39 |
johnsom | You should be able to delete one of the failed listeners. Have you tried that? | 20:40 |
rouk | it was immutable too; I had to delete the listener in the DB, then set the status of the LB to ACTIVE, then fail it over | 20:42 |
rouk | statuses are still broken, but at least it's functional again, I guess. | 20:42 |
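For reference, the two repair paths discussed here, as a sketch. On 3.1.0+ the failover API accepts an LB in ERROR; on 3.0.x, rouk's manual route looks roughly like the SQL below. The table and column names match the Octavia schema, but the exact statements are an assumption: back up the DB first, and note a listener row may have dependent rows (SNI, L7 policies, stats) that need cleanup too.

```console
# 3.1.0+: ERROR is not an immutable state for failover
openstack loadbalancer failover <lb_id>

# 3.0.x workaround, roughly as described above (assumed SQL; back up first)
mysql octavia -e "DELETE FROM listener WHERE id = '<listener_id>';"
mysql octavia -e "UPDATE load_balancer SET provisioning_status = 'ACTIVE' WHERE id = '<lb_id>';"
openstack loadbalancer failover <lb_id>
```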
johnsom | Are you using a bionic based amphora image? Or did you update the haproxy to a newer version? | 20:43 |
johnsom | I'm curious how it ran out of memory. | 20:45 |
rouk | 2GB of RAM, 16 listeners | 20:45 |
rouk | don't have the LB flavors code yet; one size fits all does not fit all | 20:45 |
johnsom | We know that newer haproxy versions allocate a lot more memory than the older versions do. It's on our list of things to fix. | 20:46 |
*** pcaruana has quit IRC | 20:47 | |
rm_work | so... even though I have a Depends-On for octavia-lib, this is what happens during tox testing: | 20:48 |
rm_work | Collecting octavia-lib===1.1.1 (from -c /home/zuul/src/opendev.org/openstack/requirements/upper-constraints.txt (line 131)) | 20:48 |
rm_work | Downloading http://mirror.regionone.limestone.openstack.org/pypifiles/packages/b2/5f/fd0da2ce699b8bf83570f6df6d006f68baf4e437337004bc6f0fb9865947/octavia_lib-1.1.1-py2.py3-none-any.whl | 20:48 |
rm_work | which kinda makes sense, the devstack inclusions don't affect tox.ini or requirements.txt | 20:48 |
rm_work | so how do we fix this? | 20:48 |
rm_work | do we change tox.ini to do some checks of devstack variables, and change what it installs? | 20:49 |
rm_work | this is going to be a problem in the future for any patch to octavia main repo that relies on a change to octavia-lib | 20:50 |
*** Vorrtex has quit IRC | 20:50 | |
johnsom | rm_work: I think I have an answer, I just need to make a sandwich. | 20:51 |
johnsom | Give me 5-10 | 20:51 |
rm_work | kk np, I'll do other stuff, back in 20 | 20:51 |
rm_work | will give you plenty of time, sandwiches are not to be rushed | 20:51 |
rm_work | (for real, sandwich making is serious business IMO) | 20:52 |
johnsom | One of those days where I've been working since before 6 and really didn't get much to eat until now. | 20:57 |
johnsom | So, I'm guessing the tempest gates work, but the unit/functional don't. Correct? | 20:58 |
rm_work | well, right now nothing works | 21:05 |
rm_work | so it's hard to say | 21:05 |
johnsom | Which patch? | 21:05 |
rm_work | :D | 21:06 |
rm_work | multivip | 21:06 |
johnsom | Yeah, so the scenario job failure is because your DB migration is messed up. | 21:07 |
rm_work | right | 21:07 |
johnsom | http://logs.openstack.org/39/660239/9/check/octavia-v2-dsvm-scenario/9e19e9e/job-output.txt.gz#_2019-05-22_23_36_48_267958 | 21:07 |
rm_work | but I think octavia-lib is installed fine there, from source | 21:07 |
johnsom | As for the others: https://github.com/openstack/openstack-zuul-jobs/blob/master/zuul.d/jobs.yaml#L758 | 21:07 |
rm_work | so yes, tox is the real problem | 21:07 |
johnsom | We probably need to set up those jobs | 21:07 |
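What johnsom is pointing at: Zuul only checks out a Depends-On project if the job lists it under required-projects, and the tox jobs then install that checkout as a "sibling" in place of the upper-constraints release. A minimal sketch of the kind of job variant that fixes this (the -tips name and the parent job are assumptions):

```yaml
# .zuul.yaml sketch: a unit-test variant that takes octavia-lib from source
- job:
    name: octavia-tox-py37-tips
    parent: openstack-tox-py37
    description: Run py37 unit tests with octavia-lib from master.
    required-projects:
      - openstack/octavia-lib
```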
openstackgerrit | Adam Harwell proposed openstack/octavia master: WIP: Allow multiple VIPs per LB https://review.opendev.org/660239 | 21:10 |
rm_work | there's the fix for the head issue | 21:10 |
rm_work | working on how I'm going to get the stuff to plug properly now... | 21:10 |
rm_work | though I have to run and get a thing notarized before the notary place near me closes today | 21:11 |
*** boden has quit IRC | 21:37 | |
colin- | having trouble tracking this down in the housekeeping controller, any guidance on it? | 21:37 |
colin- | octavia-housekeeping[25577]: WARNING urllib3.connectionpool [-] Connection pool is full, discarding connection | 21:38 |
johnsom | oslo.db? | 21:38 |
johnsom | Ah, urllib3, probably not | 21:39 |
colin- | around the time i saw it, new amps were having certificate data written to them | 21:40 |
johnsom | The only thing I can think of in housekeeping that would use urllib3 is the cert rotation | 21:40 |
colin- | wasn't much else going on as far as I could tell | 21:40 |
*** ut2k3 has joined #openstack-lbaas | 21:40 | |
ut2k3 | Hi johnsom how are you? I am still unlucky with my reachability issue :/ | 21:41 |
ut2k3 | I've set it up so that I am able to SSH into the amphora via the Octavia LXC container | 21:41 |
johnsom | colin- Were they new amps or old amps getting new certs? | 21:42 |
johnsom | ut2k3 Hi. Cool, let's jump into one of the amps, then "ip netns exec amphora-haproxy bash" to get a shell inside the network namespace. | 21:42 |
johnsom | Then from there let's do an "ifconfig" | 21:43 |
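Collected as one runnable sequence, the amphora-side checks johnsom walks through here and over the next steps:

```console
# On the amphora: get a shell inside the haproxy network namespace
ip netns exec amphora-haproxy bash

# Expect eth1 plus an eth1:0 alias carrying the VIP address
ifconfig

# Then watch for traffic while a client hits the VIP from outside
tcpdump -nli eth1
```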
ut2k3 | https://imgshare.io/image/screenshot-2019-05-23-234312.rVQlS that's the amphora I am jumping onto | 21:44 |
ut2k3 | http://paste.openstack.org/show/752007/ | 21:45 |
rm_work | ut2k3: not a fan of vehicles? definitely prefer ut2k4 myself :D | 21:45 |
johnsom | You should see eth1 and eth1:0, where eth1:0 will have your VIP IP | 21:45 |
johnsom | Ah, ok, a different image. That is fine, both IPs are there | 21:46 |
johnsom | There is no eth2? | 21:46 |
ut2k3 | http://paste.openstack.org/show/752008/ | 21:46 |
johnsom | If there is no eth2, are the members on the same subnet as 10.123.x.x? | 21:48 |
ut2k3 | Yep, all in the same subnet | 21:48 |
ut2k3 | eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450 | 21:49 |
ut2k3 | inet 10.123.0.19 netmask 255.255.0.0 broadcast 10.123.255.255 | 21:49 |
johnsom | Next we want to "tcpdump -nli eth1" then from another window, try connecting to the VIP / port as you would expect it to work | 21:49 |
ut2k3 | that's the eth0 from a pool member | 21:49 |
johnsom | Ok, cool, you are running a one-armed LB, that is fine. | 21:49 |
ut2k3 | That's from a host > curl: (7) Failed to connect to 10.123.0.9 port 6443: No route to host | 21:53 |
ut2k3 | tcpdump -nli eth1 is running on the amphora | 21:53 |
ut2k3 | I tried to connect from a k8s minion which was able to connect to it before the problems started | 21:54 |
johnsom | You should see output in the tcpdump if the packet is getting to the amphora | 21:54 |
ut2k3 | `21:55:38.811404 ARP, Request who-has 10.123.0.9 tell 10.123.0.19, length 28` | 21:56 |
ut2k3 | 10.123.0.19 is the k8s node I am currently making requests from, with: `watch "curl -v https://10.123.0.9:6443"` | 21:56 |
johnsom | You should see something like this: | 21:58 |
johnsom | 21:58:10.867009 IP 172.24.4.1.47586 > 10.0.0.44.6443: Flags [S], seq 234743365, win 29200, options [mss 1460,sackOK,TS val 2802105723 ecr 0,nop,wscale 7], length 0 | 21:58 |
johnsom | That is from a tcpdump I did with an LB that has a listener on 6443 like you have. | 21:59 |
johnsom | There are a lot more packets in that connection, but that is the first one. | 21:59 |
ut2k3 | That's the LB we are playing with: http://paste.openstack.org/show/752009/ | 21:59 |
*** ivve has quit IRC | 21:59 | |
johnsom | Yeah, If you don't see the packet in tcpdump, it's not making it to the amphora instance at all. As I said yesterday, I'm pretty sure the amphora is healthy. Let's try to prove that real quick. | 22:00 |
ut2k3 | http://paste.openstack.org/show/752010/ | 22:01 |
johnsom | Inside the amphora, in the netns, do a "ifconfig lo up", then curl the VIP from the amphora netns | 22:01 |
ut2k3 | In the `amphora-haproxy` netns yes? | 22:02 |
johnsom | yes | 22:02 |
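That local health check as a one-liner, for reference (VIP and port are from this conversation; -k is added on the assumption the k8s API cert won't match the VIP):

```console
# From inside the amphora: bring up loopback in the netns, then curl the VIP
ip netns exec amphora-haproxy bash -c 'ifconfig lo up; curl -kv https://10.123.0.9:6443'
```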
ut2k3 | Yep `curl -v https://10.123.0.9:6443` is working there. | 22:04 |
johnsom | Yeah, ok, so the amphora and load balancer are healthy. The problem is somewhere in nova/neutron such that the requests are not making it to the amphora. | 22:05 |
johnsom | from your controller, do a "openstack network agent list" and check that all of the neutron agents are "UP" with ":-)" next to them. | 22:06 |
johnsom | It could be that a neutron agent is down and the traffic isn't flowing to the instance correctly | 22:06 |
ut2k3 | http://paste.openstack.org/show/752012/ (the amphora is currently running on the virt0 host, as are the 4 k8s nodes) | 22:08 |
johnsom | da45c5d6-a68a-4853-a04a-f12ca413a07e | L3 agent | virt0 | de-kar-1b | XXX | UP | neutron-l3-agent | 22:08 |
johnsom | That sick l3-agent might be the problem | 22:09 |
johnsom | But there are a lot of them unhealthy.... It could be any one of them | 22:09 |
ut2k3 | Ok let me check/restart them | 22:09 |
johnsom | It's the neutron-linuxbridge-agent and neutron-l3-agent that matter for this scenario. The others aren't needed for this test | 22:10 |
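A sketch of the check-and-restart cycle being described here (the unit names assume a systemd-based install with the linuxbridge mech driver; adjust for your distro):

```console
# Find unhealthy agents on the compute host (XXX / DOWN entries)
openstack network agent list --host virt0

# Restart the two agents that matter for this data path (assumed unit names)
systemctl restart neutron-linuxbridge-agent neutron-l3-agent
```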
ut2k3 | http://paste.openstack.org/show/752013/ | 22:16 |
ut2k3 | I've cleaned things up, and also restarted neutron-linuxbridge on virt0 | 22:17 |
ut2k3 | That's our current setup > https://docs.openstack.org/security-guide/_images/1aa-network-domains-diagram.png | 22:17 |
johnsom | colin- Here is a hunch based on a quick google. https://github.com/openstack/octavia/blob/master/octavia/amphorae/drivers/haproxy/rest_api_driver.py#L400 and https://github.com/kennethreitz/requests/blob/master/requests/adapters.py#L92 | 22:17 |
ut2k3 | The old agents listed there seemed to be "zombies". | 22:17 |
johnsom | Hmm, ok, no L3 agent on virt0 now... I don't know if that is ok or not. | 22:18 |
johnsom | Does the FIP work now? | 22:18 |
ut2k3 | Hmm according to https://docs.openstack.org/security-guide/_images/1aa-network-domains-diagram.png it should be ok | 22:21 |
johnsom | colin- Maybe we are exceeding the 10 connections default, though that seems *odd* | 22:22 |
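Background for that hunch: requests' HTTPAdapter defaults to pools of 10 connections per host (the DEFAULT_POOLSIZE constant johnsom links), and urllib3 emits exactly this "Connection pool is full, discarding connection" warning when more connections are opened concurrently than pool_maxsize. A sketch of widening the pool on a plain requests session, as an illustration rather than what the Octavia driver actually does:

```python
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# pool_connections/pool_maxsize default to 10; raise them so bursts of
# concurrent per-host requests are not discarded when returned to the pool.
session.mount("https://", HTTPAdapter(pool_connections=10, pool_maxsize=100))
```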
ut2k3 | Still no luck after restarting all of neutron on the two network nodes as well as on the compute node | 22:24 |
johnsom | Hmmm, then I'm not sure why traffic isn't getting to that vm instance.... | 22:25 |
ut2k3 | For example, on virt0 I can reach the VM without problems via the floating IP, as well as private IP to private IP | 22:26 |
johnsom | So from the host you can hit the VIP? | 22:26 |
ut2k3 | Sorry for not being precise => the k8s nodes, which are in the same subnet, can reach each other via their private IPs. | 22:29 |
ut2k3 | That's from a k8s node to the VIP: http://paste.openstack.org/show/752015/ | 22:30 |
johnsom | Yeah, you shouldn't be able to ping the VIP, that is normal | 22:30 |
johnsom | This is troubling: (10.123.0.9) at <incomplete> on eth0 | 22:31 |
colin- | johnsom> colin- Were they new amps or old amps getting new certs? | 22:31 |
colin- | new amps | 22:31 |
ut2k3 | http://paste.openstack.org/show/752016/ | 22:31 |
johnsom | colin- Hmmm, not sure why housekeeping would be connecting to them... | 22:32 |
colin- | yeah good point | 22:32 |
colin- | wish I could recreate it reliably, don't want to just idle with debug on | 22:32 |
colin- | will look a bit more tomorrow | 22:32 |
johnsom | Ok, let me know | 22:32 |
colin- | will do, thx | 22:32 |
johnsom | ut2k3 You are getting into neutron debugging that I'm not an expert in. I would have expected detaching the FIP and re-attaching would have cleared things up if the network node was messed up. | 22:34 |
johnsom | ut2k3 I wonder what happens if you try to attach another FIP? | 22:34 |
ut2k3 | Sure can do that :) | 22:34 |
ut2k3 | sec | 22:34 |
*** goldyfruit has quit IRC | 22:38 | |
ut2k3 | Nope doesn't help | 22:40 |
johnsom | I don't know what is wrong. It's outside the VM though. The failovers have rebuilt those a few times now. We know the code inside the VM is working. We know packets don't arrive on the interface inside the VM. | 22:42 |
johnsom | We confirmed the port was in the security group, right? It should be, but I'm just double-checking. | 22:43 |
ut2k3 | NO SG => octavia-lb-acae625f-01ff-4bfc-9b74-df0df3f59be6 [10.123.0.9 fa:16:3e:8b:e4:e4 Octavia] DOWN | 22:50 |
ut2k3 | Has SG => octavia-lb-vrrp-e66c2f18-8b87-42c6-9b13-fa4485c79c86 [10.123.0.24 fa:16:3e:be:03:ea] UP | 22:50 |
johnsom | Yeah, that SG has the port open in it, right? | 22:50 |
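A quick sketch of how to verify that (the IDs are placeholders; you are looking for an ingress rule for TCP 6443, the listener port on this LB):

```console
# Which SGs are on the VIP/VRRP port?
openstack port show <vrrp_port_id> -c security_group_ids

# Does the SG actually open the listener port?
openstack security group rule list <sg_id>
```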
*** goldyfruit has joined #openstack-lbaas | 22:53 | |
ut2k3 | I think I found something. I think the wrong SG is attached to that. | 22:56 |
ut2k3 | Could it be that I got something wrong when we rebuilt the amphora? | 22:57 |
ut2k3 | You may remember I have 3 LBs, and it seems the SGs got mixed up.. | 22:57 |
ut2k3 | http://paste.openstack.org/show/752018/ | 23:04 |
ut2k3 | But the listener of that LB is => Listener: cluster-production-k8s-de-kar--4fo2nxngz7a6-api_lb-7erqpobqhuh3-listener-dwnqi4thczuz TCP 6443 | 23:05 |
*** sapd1_x has joined #openstack-lbaas | 23:06 | |
johnsom | ut2k3: ok, so we got the ports out of order when we recreated the amp records | 23:11 |
ut2k3 | Seems like it. Could that also explain the missing ARP entry here? Question is: can I swap the ports via database updates and then do a proper failover again? | 23:13 |
ut2k3 | From the SGs that are attached to these ports, I am able to spot which port belongs to which LB | 23:16 |
johnsom | Sure, that should work | 23:17 |
*** rcernin has joined #openstack-lbaas | 23:18 | |
ut2k3 | `UPDATE amphora SET load_balancer_id = 'CORRECT_LB_ID' WHERE vrrp_ip = 'PORT_IP'` | 23:21 |
ut2k3 | Then doing the failover, do you think that would work? | 23:22 |
*** goldyfruit has quit IRC | 23:22 | |
johnsom | Umm | 23:27 |
johnsom | Yeah, likely that will work | 23:28 |
ut2k3 | Hmm nope | 23:38 |
ut2k3 | So I'm gonna go to bed. I will follow that lead, update the amphora vrrp_port_id manually, and then do the failover | 23:41 |
ut2k3 | Thanks, I will let you know if that helped | 23:41 |
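Following up on that plan, a sketch of the corrected swap. The amphora table's vrrp_port_id and vrrp_ip columns are real Octavia schema, but the exact statement is an assumption: back up the DB and verify the port/LB pairing against neutron first.

```sql
-- Sketch: point the amphora record at the right VRRP port, then fail over.
-- Back up the octavia DB first; verify IDs against 'openstack port list'.
UPDATE amphora
   SET vrrp_port_id = '<correct_port_id>',
       vrrp_ip      = '<correct_port_ip>'
 WHERE id = '<amphora_id>';
-- then: openstack loadbalancer failover <lb_id>
```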
*** ut2k3 has quit IRC | 23:41 | |
*** goldyfruit has joined #openstack-lbaas | 23:42 | |
*** trident has quit IRC | 23:51 | |
*** trident has joined #openstack-lbaas | 23:53 |