Monday, 2019-02-11

openstackgerritMichael Johnson proposed openstack/octavia master: Add LXD support to Octavia devstack  https://review.openstack.org/63606600:34
openstackgerritMichael Johnson proposed openstack/octavia-tempest-plugin master: Add an lxd octavia scenario job  https://review.openstack.org/63606900:35
*** sapd1 has joined #openstack-lbaas01:38
dayoujohnsom, yes, is there anything I can help with?01:38
johnsomdayou Hi Mr. Dashboard wizard.01:44
dayou:-P01:45
johnsomdayou I was wondering if there was a chance you could add a flavor selector to the LB create page in the octavia-dashboard? It would be great if we had that for Stein.01:45
johnsomIt is the flavor ID here: https://developer.openstack.org/api-ref/load-balancer/v2/index.html?expanded=create-a-load-balancer-detail#id201:46
*** yamamoto has joined #openstack-lbaas01:46
johnsomI fixed the SDK to have it in this patch: https://review.openstack.org/#/c/633849/01:46
johnsomYou could present the user a list using this API: https://developer.openstack.org/api-ref/load-balancer/v2/index.html?expanded=list-flavors-detail#list-flavors01:47
johnsomWhich hasn't yet merged in SDK: https://review.openstack.org/#/c/634532/01:47
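For reference, a dashboard flavor selector would ultimately be driven by those same two API calls. A minimal curl sketch against the documented endpoints (the API host, token handling and UUIDs are placeholders, and the SDK/client support mentioned above may still be in review):

```
TOKEN=$(openstack token issue -f value -c id)
LB_API=http://<octavia-api-host>/load-balancer

# List the available flavors (the data a create-LB drop-down would present)
curl -s -H "X-Auth-Token: $TOKEN" "$LB_API/v2.0/lbaas/flavors" | python -m json.tool

# Create a load balancer with the chosen flavor_id
curl -s -X POST "$LB_API/v2.0/lbaas/loadbalancers" \
     -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
     -d '{"loadbalancer": {"name": "lb1", "vip_subnet_id": "<subnet-uuid>", "flavor_id": "<flavor-uuid>"}}'
```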
dayouCool, I'll start working on it this week01:47
johnsomIf you don't have time, that is ok too. I can attempt to fumble my way to making it happen.01:48
openstackgerritMichael Johnson proposed openstack/octavia master: Add LXD support to Octavia devstack  https://review.openstack.org/63606601:49
johnsomThank you sir. Let me know if I can help in any way.01:50
dayouYes, sir, it's my pleasure to help with it, I'll see whether I can come up with something in the next two weeks for review.01:52
johnsomdayou: Also, let me know if there are any priority dashboard patches we need to review. Feel free to ping me with those at any time.01:54
dayoujohnsom, got you01:55
sapd1johnsom: Hi02:02
sapd1you are working on lxd for octavia?02:02
sapd1We will run containers instead of VMs02:03
johnsomI am. I have it working locally, but need to get the gate working.  Though I have to say, I am not sure I would use it for production.02:09
johnsomNova-lxd seems to need some work.02:10
sapd1johnsom: why don't you use zun?02:24
johnsomI don’t think it supports lxc.02:44
*** yamamoto has quit IRC02:48
sapd1johnsom: Oh, why lxd? Why not docker?02:52
*** psachin has joined #openstack-lbaas03:05
*** ramishra has joined #openstack-lbaas03:27
sapd1seems like octavia does not support the ovn provider yet03:57
johnsomYes, there is an OVN driver04:04
sapd1johnsom: I'm trying to use ovn driver04:24
sapd1But I don't know how to configure it04:24
*** hongbin has joined #openstack-lbaas04:25
sapd1https://github.com/openstack/networking-ovn/blob/a4e69319ad/devstack/lib/networking-ovn#L59504:25
sapd1I followed this script but it does not work04:25
*** hongbin has quit IRC04:53
*** AlexStaf has quit IRC05:33
*** gcheresh has joined #openstack-lbaas06:12
openstackgerritMichael Johnson proposed openstack/octavia master: Add LXD support to Octavia devstack  https://review.openstack.org/63606606:20
*** gcheresh has quit IRC06:31
*** yamamoto has joined #openstack-lbaas06:46
*** yamamoto has quit IRC06:51
*** velizarx has joined #openstack-lbaas07:17
*** gcheresh has joined #openstack-lbaas07:20
openstackgerritMichael Johnson proposed openstack/octavia master: Add LXD support to Octavia devstack  https://review.openstack.org/63606607:25
*** yboaron has quit IRC07:35
*** velizarx has quit IRC07:43
*** psachin has quit IRC07:45
*** velizarx has joined #openstack-lbaas07:56
*** AlexStaf has joined #openstack-lbaas08:02
*** rpittau has joined #openstack-lbaas08:07
*** Emine has joined #openstack-lbaas08:18
cgoncalvessapd1, try this http://paste.openstack.org/show/744822/08:25
cgoncalvesit works for me08:25
sapd1cgoncalves: thank you08:34
sapd1I have run it successfully. :D08:35
*** Emine has quit IRC08:37
*** Emine has joined #openstack-lbaas08:42
*** yboaron has joined #openstack-lbaas08:48
*** yboaron_ has joined #openstack-lbaas08:53
cgoncalvesgreat08:54
cgoncalvessapd1, ah, make sure you have LIBS_FROM_GIT=python-octaviaclient08:54
cgoncalvesso that you get https://review.openstack.org/#/c/633562/08:54
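For anyone following along, that is a single devstack local.conf line; a minimal sketch (the commented variant also pulls diskimage-builder from master, which comes up later in this log):

```
# devstack local.conf fragment
LIBS_FROM_GIT=python-octaviaclient
# multiple libraries are comma-separated, e.g.:
# LIBS_FROM_GIT=python-octaviaclient,diskimage-builder
```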
sapd1cgoncalves: I'm trying to install it manually08:54
sapd1yeah08:55
*** yboaron has quit IRC08:55
*** ccamposr has joined #openstack-lbaas09:06
*** Emine has quit IRC09:29
*** Emine has joined #openstack-lbaas09:36
*** kobis1 has joined #openstack-lbaas09:43
*** kobis1 has left #openstack-lbaas09:44
openstackgerritNir Magnezi proposed openstack/octavia master: Encrypt certs and keys  https://review.openstack.org/62706410:06
*** salmankhan has joined #openstack-lbaas10:21
*** salmankhan1 has joined #openstack-lbaas10:31
*** salmankhan has quit IRC10:32
*** salmankhan1 is now known as salmankhan10:32
*** mkuf_ has joined #openstack-lbaas10:48
*** mkuf has quit IRC10:52
*** mkuf_ has quit IRC11:24
*** sapd1 has quit IRC11:37
*** Emine has quit IRC11:46
*** velizarx has quit IRC11:56
*** velizarx has joined #openstack-lbaas11:58
openstackgerritNir Magnezi proposed openstack/octavia master: Encrypt certs and keys  https://review.openstack.org/62706412:09
*** mkuf has joined #openstack-lbaas12:13
*** Emine has joined #openstack-lbaas12:35
*** yamamoto has joined #openstack-lbaas13:00
*** gcheresh has quit IRC13:01
*** gcheresh_ has joined #openstack-lbaas13:01
openstackgerritNir Magnezi proposed openstack/octavia master: WIP: CentOS with multiple fixed ips  https://review.openstack.org/63606513:01
*** velizarx has quit IRC13:04
*** yamamoto has quit IRC13:05
*** velizarx has joined #openstack-lbaas13:10
*** trown|outtypewww is now known as trown13:13
*** sapd1 has joined #openstack-lbaas13:34
*** yamamoto has joined #openstack-lbaas13:40
*** gcheresh has joined #openstack-lbaas13:43
*** gcheresh_ has quit IRC13:44
*** yamamoto has quit IRC13:44
nmagnezicgoncalves, are you able to locally build a centos based image?13:54
nmagnezicgoncalves, asking because I get the following: Cannot uninstall 'virtualenv'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.13:55
cgoncalvesnmagnezi, yes. make sure you use DIB from master13:58
nmagnezicgoncalves, aye.. should we bump our deps or something? (Assuming there's a newer release there..)13:58
cgoncalveswe need a new release of DIB13:59
cgoncalvesCI is okay because it pulls from master13:59
nmagnezicgoncalves, aye. Added to LIBS_FROM_GIT locally for now14:00
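A sketch of what that local workaround looks like outside of devstack, i.e. building the centos amphora image with diskimage-builder installed from master (the virtualenv, git URL and script flag are assumptions about a typical checkout):

```
# install diskimage-builder from master into a throwaway virtualenv
virtualenv dib-venv && . dib-venv/bin/activate
pip install git+https://git.openstack.org/openstack/diskimage-builder
# then build the centos-based amphora image with Octavia's helper script
cd octavia/diskimage-create
./diskimage-create.sh -i centos
```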
*** yboaron_ has quit IRC14:03
*** yboaron_ has joined #openstack-lbaas14:03
cgoncalvesthought of the day: running grenade locally is a PITA. countless issues I've run into14:48
openstackgerritboden proposed openstack/neutron-lbaas master: stop using common db mixin  https://review.openstack.org/63557014:56
cgoncalveskernel panic on cirros, great14:57
*** AlexStaf has quit IRC15:08
*** yboaron_ has quit IRC15:21
openstackgerritMargarita Shakhova proposed openstack/octavia master: Support create amphora instance from volume based.  https://review.openstack.org/57050515:21
*** yboaron_ has joined #openstack-lbaas15:21
*** fnaval has joined #openstack-lbaas15:38
*** yboaron_ has quit IRC15:42
*** yboaron_ has joined #openstack-lbaas15:42
*** hebul_ has joined #openstack-lbaas15:51
*** hebul_ has quit IRC15:54
*** hebul has joined #openstack-lbaas15:55
hebulHello All, who could help me with some octavia questions?15:57
johnsomhebul: you are in the right place. What can we help with?15:58
hebulThank you !15:59
*** sapd1 has quit IRC16:01
hebulQuestion 1. After the amphora VM was created (with a net id provided) I found out that it consumes 2 IPs in that network16:01
hebulone consumed IP is shown when we ask for the lb list through the CLI, but we can also see one more IP consumed when we ask for the vm list in openstack16:04
johnsomCorrect, each amphora has a base port with an IP and a secondary IP (allowed address pairs in neutron) that is used for the VIP and HA.16:05
*** ramishra has quit IRC16:09
hebul@johnsom Why not use a single one for all purposes?16:10
johnsomThe VIP is a special port, called  an allowed address pair port in neutron. It allows us to move the VIP address between VMs in the case of a failure. When running in Active/Standby topology, this can happen in around a second. In single mode, it takes a bit longer16:12
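A hand-rolled sketch of the neutron pattern being described, in plain CLI terms (network and port names are made up; Octavia's network driver does all of this for you):

```
# The VIP address lives on its own neutron port:
openstack port create --network tenant-net vip-port
# Each amphora base port is then allowed to answer for that VIP address
# (the "allowed address pair"):
openstack port create --network tenant-net \
    --allowed-address ip-address=<vip-ip> amphora-1-base-port
openstack port create --network tenant-net \
    --allowed-address ip-address=<vip-ip> amphora-2-base-port
# keepalived (VRRP) inside the amphorae decides which instance actually
# configures and answers on the VIP, so a failover is a VRRP transition
# rather than a port re-plug.
```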
colin-not crazy about the extra IPs either but the resilience he's describing made it a no-brainer for me16:13
*** AlexStaf has joined #openstack-lbaas16:17
*** fnaval has quit IRC16:17
johnsomYeah, in theory a single topology amp could use just one IP, but it would require a new network driver be written and no one has been motivated to do it. Pretty much all of us always run Active/Standby.16:17
colin-have found that works very well, fwiw16:18
colin-(to hebul primarily hope that helps!)16:18
johnsomcolin- Did you tune it or are you running with the defaults?16:18
colin-the threads, similar to hm? default for now16:19
johnsomcolin- It can be tuned to failover much faster than the defaults if you want it to.16:19
colin-oh i see, the freshness16:19
colin-gotcha16:19
colin-also default for now16:19
johnsomI mean the Active/Standby settings.  Just curious. I run defaults, but have demoed with it tuned16:19
hebulThank you!!! So as I see it, it is a true limitation for now (we can't move the VIP from one VM to another unless the VM has that extra port, or we would have to change the network driver)16:20
colin-like heartbeat_timeout?16:20
*** gcheresh has quit IRC16:20
johnsomhttps://docs.openstack.org/octavia/latest/configuration/configref.html#keepalived-vrrp16:20
johnsomThose first four settings16:21
*** Emine has quit IRC16:21
colin-oh no, wasn't familiar with this thanks16:22
johnsomHmm, forgot we dropped the vrrp_advert_int to 1, so that is good already.  We can go lower with the newer versions of keepalived, but I'm not sure it's really needed.16:22
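For reference, the settings being discussed live in the [keepalived_vrrp] section of octavia.conf; an illustrative excerpt (option names per the config reference linked above, values shown only as an example, not a recommendation):

```
# /etc/octavia/octavia.conf excerpt
[keepalived_vrrp]
vrrp_advert_int = 1        # VRRP advertisement interval (seconds)
vrrp_check_interval = 5    # interval of the keepalived check script (seconds)
vrrp_fail_count = 2        # failed checks before a failover is declared
vrrp_success_count = 2     # successful checks before recovery is declared
```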
colin-oh i have been meaning to ask this but never think of it when we are chatting, is there any frequency adjustment on health check failure by default?16:22
colin-old hw lbs i used to manage would sometimes increase the frequency of their checking after a failure and i always found that bothersome16:22
johnsomAh, so if it sees a failure it polls more often?16:23
colin-right16:23
colin-i'm assuming that is not true in our case16:23
johnsomHmm, I don't think we have that today. If HAproxy can do it, you would have to do a custom template for it.16:23
johnsomYeah, I don't think we do that.16:24
colin-no i actually prefer not to have it because it changes the expected behavior with all these parameters imo, just double checking16:24
johnsomI could see that being a bit annoying with the logs scrolling more.16:24
johnsomhebul Any other questions we can help with?16:25
hebulQuestion 2 is coming. How do you sort out the management network for octavia and the amphorae? What happens in a private network environment (e.g. VXLAN)?16:28
*** yboaron_ has quit IRC16:28
hebulA private network won't have connectivity to the external world (including the octavia mgmt net) by default16:29
johnsomSo, let me clarify and then answer.16:29
johnsomThere is the lb-mgmt-net, which is used for the controllers to talk to the amphora and the amphora to talk back to the controllers. It is typically a private network setup for this purpose, but it can be shared and/or routed. It is a TLS TCP connection to the amps, and UDP heartbeats back to the controllers. No tenant traffic crosses this network as it is isolated from the tenant traffic inside a network namespace in the amp.16:31
johnsomVIP and member networks, isolated inside the network namespace in the amphora, are hot-plugged into the amphora instance as the user configures their load balancer. We support any network that neutron supports for this, could be tenant private, could be a public external network.16:33
johnsomFundamentally the lb-mgmt-net is just a neutron network. The harder part is how to make it available for the controllers. There are many ways to do this. Provider networks, bridging it out to the controllers, setting up routes, etc.16:34
johnsomDid that help answer the question?16:34
openstackgerritMichael Johnson proposed openstack/octavia master: Add LXD support to Octavia devstack  https://review.openstack.org/63606616:35
hebulYes, it helped me get closer, but it's not fully clear yet :)16:40
hebulIf I understood correctly, we have to provide both a provider network for the amphorae and VXLAN for private tenant networks16:44
hebulfor the hypervisors16:44
johnsomprovider network is optional, that is just one way to setup the lb-mgmt-net.16:45
johnsomYour tenant networks would use whatever mechanism you use today for neutron networks on your compute hosts. VXLAN is fine, as Octavia only talks to neutron and nova APIs for it. How it works behind nova and neutron, we don't need to know.16:47
johnsomSo for OpenStack Ansible, they chose to use a provider network for the lb-mgmt-net. For Redhat OSP 13, they chose to not use provider networks but to bridge it out of neutron.16:48
johnsomIf your neutron is all VXLAN, you could probably even have the Octavia controllers participate in the VXLAN overlay directly if you wish.16:49
cgoncalvesfor OSP that is the default, yes, although one can create a neutron network and pass that in to the installer. the installer will see the network already exists and use it instead16:51
hebul"The harder part is how to make it available for the controllers. There are many ways to do this. Provider networks, bridging it out to the controllers, setting up routes, etc."-- bridging it out to the controllers - what does it mean ?16:53
hebulDoes it mean that I have to connect Octavia services to OVS - integration to the same internal OVS VLAN ID that is used for internal project network for amphorae ?16:55
johnsomWell the lb-mgmt-net is a neutron network. You create it with "openstack network create". At that point it lives in neutron and is connected to the amphora as needed. However, you need to also have a way for your controller processes (worker, health manager, housekeeping) to be able to access that network.16:55
johnsomThat is one option yes. That is how we do it in devstack: https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L35816:56
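A rough sketch of that, loosely modeled on the devstack plugin linked above (the names, CIDR and final plumbing step are placeholders to adapt to your deployment):

```
openstack network create lb-mgmt-net
openstack subnet create --network lb-mgmt-net \
    --subnet-range 172.31.0.0/16 lb-mgmt-subnet
# Give the controller host a presence on that network, e.g. a dedicated
# port the health manager can bind to:
openstack port create --network lb-mgmt-net octavia-health-manager-port
# ...then plug that port through to the host running the Octavia worker,
# health manager and housekeeping services (OVS plug, provider VLAN,
# or a routed path - whatever fits the environment).
```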
*** ccamposr has quit IRC16:57
*** fnaval has joined #openstack-lbaas16:59
*** velizarx has quit IRC17:01
hebulOk, johnsom, thanks a lot. I have a more or less basic understanding of the second question now. Are there any additional resources about octavia setup in different network scenarios that I could read more carefully and think about?17:03
johnsomNot really. It's a TODO item.17:06
hebulOk, question number 3 (short one): multiple lb-mgmt-nets - is that supported?17:11
hebulor is it intended that you create a very large network at the beginning?17:13
johnsomTechnically it is supported, but currently there is no way to have the controllers select different networks when booting the amphora.  Most of us use large subnets.17:15
hebulok, I see.17:18
hebulThank you, <johnsom>17:21
johnsomSure, let us know if we can help more.17:21
hebulDo you think the 2-3 hours weekly I have on weekends would help you start on that TODO item? Or is it too little? :)17:23
johnsomAny help is welcomed.17:24
*** fnaval has quit IRC17:25
*** fnaval has joined #openstack-lbaas17:28
*** hebul has quit IRC17:33
*** rpittau has quit IRC17:44
*** hebul has joined #openstack-lbaas17:44
*** hebul has left #openstack-lbaas17:44
*** hebul has joined #openstack-lbaas17:45
*** hebul has left #openstack-lbaas17:45
*** rpittau has joined #openstack-lbaas17:45
*** AlexStaf has quit IRC17:58
*** trown is now known as trown|lunch18:03
*** salmankhan has quit IRC18:10
*** rpittau has quit IRC18:12
*** openstackgerrit has quit IRC18:51
colin-xgerman: are you using the octavia ingress controller in your clusters atm?18:56
colin-can't recall if we've spoken about this18:56
xgermannope18:56
colin-ok18:57
xgermanyeah, see http://blog.eichberger.de/posts/yolo_cloud/19:00
colin-haha, you had me at yolo19:01
colin-will give that a read at lunch thx for sharing :)19:01
johnsomWell, there you go: http://logs.openstack.org/69/636069/5/check/octavia-v2-dsvm-scenario-lxd/8c1b5e8/testr_results.html.gz19:07
cgoncalvesyou just painted it all green, confess!19:08
johnsomDisable most of the security protections, ignore all of the errors being thrown, ignore the fact that the kernel tuning doesn't work.19:08
cgoncalvesgreat job!19:08
johnsomAnd you can have LXD amps19:08
johnsomcgoncalves What is up with the centos gate? These 2.5 hour timeouts are getting... old.19:09
johnsomTempted to pull centos out of the check gate altogether until it can be shown to be functional19:09
cgoncalvesjohnsom, systemd patch merged upstream today IIRC19:09
johnsomLook in zuul for that patch....19:10
cgoncalveshttps://bugzilla.redhat.com/show_bug.cgi?id=166661219:10
openstackbugzilla.redhat.com bug 1666612 in systemd "Rules "uname -p" and "systemd-detect-virt" kill the system boot time on large systems" [High,Post] - Assigned to jsynacek19:10
johnsomSo how long do we have to wait until it gets in centos?19:10
cgoncalvesyou guys complain of EL too much. either because it ships old versions or, now, because it ships latest versions. pick one, but just one! :)19:11
cgoncalvesdunno19:11
cgoncalvesRHEL/CentOS 7.7?19:11
johnsomI just like things to work....19:11
cgoncalveslol19:12
cgoncalvesI will ask around19:12
johnsomSo, that lxd run was tempest in serial mode as there was a strange nova error about things "in use" that turned out to be apparmor. After centos times out there I will push a patch that puts it back to tempest concurrency 2. So we will have an apples to apples time comparison.19:13
johnsomOh, and I'm not sure the UDP stuff works. That was another whole set of errors about conntrack modules19:14
*** trown|lunch is now known as trown19:20
*** openstackgerrit has joined #openstack-lbaas19:21
openstackgerritMichael Johnson proposed openstack/octavia-tempest-plugin master: Add an lxd octavia scenario job  https://review.openstack.org/63606919:21
johnsomoctavia-v2-dsvm-scenario-centos-7TIMED_OUT in 2h 33m 20s19:23
eanderssonHave you guys seen octavia-api VIRT memory growing to crazy amounts? Shouldn't be an issue, but we are seeing crazy cpu usage associated with that19:52
colin-still the same figures (~650 amps) we were discussing last week19:53
johnsomThere really shouldn't be much load on the API side... It's all event driven.  What release are you running?19:54
eanderssonYea - we restarted the api and load dropped by 2019:57
eanderssonRocky19:57
eanderssonI can't explain why VIRT would be at 26GB19:57
johnsomI have seen uwsgi go out to lunch and eat CPU, but typically when that happens nova is the first one that goes down19:57
eanderssonWe are using uwsgi, but so is everything else19:59
johnsomYeah, we are too. I have just had times where I found multiple of the uwsgi processes spinning for no apparent reason.20:00
eanderssonThe odd thing is that cpu usage is not even that high20:00
cgoncalvesI think I have seen that happening, yes. load increases with the number of amps created, never drops. not sure I still have the figure from grafana20:00
eanderssonbut for some reason restarting the octavia processes and load drops by 2020:00
*** jlaffaye has quit IRC20:01
eanderssonThe only thing that is odd that I can see is VIRT is at 26GB20:01
*** jlaffaye has joined #openstack-lbaas20:01
cgoncalvesthere is a known issue for the house keeping20:01
johnsomThat is crazy high for our API process. I mean, it doesn't do that much....20:01
cgoncalveshttps://review.openstack.org/#/c/627058/20:01
eanderssonIt feels like IO / memory pressure, but not sure how or why the api could cause that20:02
johnsomYeah, it pretty much is just sqlalchemy and rabbit20:02
cgoncalvesexactly, sqlalchemy...20:03
johnsomMaybe if you have one in that state, find the thread and connect a debugger to it and see what it's up to....20:03
cgoncalvesnot sqlalchemy's fault though but how we do db querying20:03
johnsom26GB though? Even if it was caching the whole octavia DB you would have a massive deployment for that20:04
cgoncalvesI've seen neutron-lbaas going waaaay above that20:05
johnsomPlus it shouldn't be burning CPU if it's not handling API calls20:05
colin-exactly20:06
cgoncalveshttps://cgoncalves.pt/trash/openstack/octavia/HSZxPMn.png20:06
colin-and the journal output suggests that transaction times are not abnormally long for the tasks it is performing20:06
colin-all 0.1 0.2s20:06
cgoncalvesthe load increase steps there are rally runs20:07
cgoncalves100 LBs IIRC20:07
eanderssonCan you check system load as well20:13
eanderssonLoad Avg20:13
cgoncalvesI don't have access to the system any longer, sorry. it was from mid December20:14
eanderssonpmap on an octavia-api process shows 2477 pages :D20:24
eanderssona busy nova process has 40020:24
eanderssonbusy neutron (with lbaas) has less than 40020:25
eanderssonNot sure why octavia would need to allocate so many pages20:25
johnsomI could see it if it was active, but idle no. There is still a stupid join in there, but that should purge after the request is done20:26
eanderssonYea - we do have a memory leak in neutron-lbaas as well, but it's different20:27
johnsomYeah, that is pretty much known20:28
eanderssonHonestly think my latest lbaas patch will fix that (or at the very least improve it a lot)20:28
eanderssonsince it does not have to do those crazy sql queries20:29
johnsomYeah, some of that craziness leaked over here with some patches attempting to reduce the number of round trips to mysql as it was emptying the connection pools/slots. Or something like that. Either way, they were bad patches20:31
johnsomWe have been working through fixing them.20:31
*** salmankhan has joined #openstack-lbaas20:32
*** salmankhan has quit IRC20:36
*** dmellado has quit IRC20:39
*** salmankhan has joined #openstack-lbaas20:53
nmagnezio/21:17
johnsomHi there21:18
nmagnezijohnsom, when you have a moment, please check my comment in https://storyboard.openstack.org/#!/story/2004112 so I can test my assumptions :)21:19
johnsomLooking21:20
*** salmankhan has quit IRC21:20
nmagnezijohnsom, thank you!21:21
johnsomCommented21:28
johnsomnmagnezi  I get your point, but I think the code is broken for IPv6 members. I think it writes out the first subnet and not the one specified.21:29
nmagnezijohnsom, so basically creating an LB with one member in subnet_a and another member in subnet_b will result in the member in subnet_b being unreachable? (Saying that so I can test my fix attempts)21:32
nmagneziDoes it happen only with IPv6? Or only with a mix of IPv4 and IPv6?21:32
johnsomThe test case I hit for that bug (sorry I didn't put it in the story) is boot up an LB, create tenant network, add IPv4 subnet, add IPv6 subnet, add an IPv6 member to the network. The interface file written out will be for the IPv4 subnet.21:34
johnsomThe IPv6 member will be unreachable21:34
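A hedged CLI version of that reproduction (names and addresses are placeholders, and it assumes a load balancer with an existing pool called pool1):

```
openstack network create member-net
openstack subnet create --network member-net \
    --subnet-range 10.10.10.0/24 member-v4
openstack subnet create --network member-net --ip-version 6 \
    --subnet-range fd00:10::/64 member-v6
openstack loadbalancer member create --subnet-id member-v6 \
    --address fd00:10::10 --protocol-port 80 pool1
# Buggy result per the story: the amphora writes the interface file for the
# first (IPv4) subnet on the network, leaving the IPv6 member unreachable.
```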
nmagneziAck21:36
nmagneziWill try it out21:36
nmagneziThat's all I needed to know21:36
nmagneziCalling it a day..21:36
johnsomYep, sorry for the poor story quality.21:36
nmagneziOne last thing is that I responded to most of your comments here, and followed up with some questions: https://review.openstack.org/#/c/627064/21:37
nmagneziNo worries21:37
johnsomYeah, saw that. Will reply today21:37
*** dmellado has joined #openstack-lbaas21:43
*** trown is now known as trown|outtypewww22:00
rm_worknmagnezi / johnsom re: mixed-members subnet issues -- i definitely ran into that recently, thought there was already movement somewhere on fixing it?22:39
rm_worki forget if i had a patch or someone else did22:39
rm_workcolin-: do you see anything like this in your API logs? `2019-02-06 20:30:25.839 2364 WARNING oslo.messaging._drivers.impl_rabbit [-] Unexpected error during heartbeart thread processing, retrying...: error: [Errno 110] Connection timed out` octavia-api.log22:39
colin-good question, let me see if i can spot that line. not immediately familiar22:40
johnsomI fixed it for ubuntu, but there is still an open bug for redhat.   Really that whole chain needs to be re-worked however....22:40
rm_workjust look for "error during heartbeart"22:40
rm_workah maybe all that happened was I filed a story for it <_<22:41
colin-don't see that from either octavia-api output over the past three hours, looking back farther22:42
rm_workhmm k22:43
rm_workit'd be before a restart22:43
rm_workit takes a while for that line to start showing up IME22:43
johnsomDoes it really say "heartbeart"?22:44
rm_workyes22:45
rm_workit's an oslo error22:45
rm_workfrom rabbit22:45
rm_workunrelated to our heartbeats22:45
johnsomRight, I get that it's a rabbit thing and either oslo or rabbit code22:46
rm_workI'm just wondering whether it's a bug in how we use the client (missing a close somewhere?) or in the Oslo side22:47
johnsomhttps://bugzilla.redhat.com/show_bug.cgi?id=154210022:47
openstackbugzilla.redhat.com bug 1542100 in python-oslo-messaging "Can't failover when rabbit_hosts is configured as 3 hosts" [High,Closed: wontfix] - Assigned to jeckersb22:47
rm_workWere we supposed to be closing connections somehow and we never got the memo? lol22:47
johnsomHere is the code logging that: https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/_drivers/impl_rabbit.py#L89722:51
rm_workThat bug was supposedly fixed in pike22:52
johnsomYeah, it seems like it throws that if a rabbit node goes down or the network drops22:52
johnsomWhat is the exception logged right after that?22:53
rm_workNot sure22:53
johnsomAh, I guess it needs debug logging....  sigh22:53
colin-broadened the scope to 12h and still haven't found that error so far rm_work22:53
rm_workHmm ok22:53
rm_workWell, thanks22:53
johnsomAh, it's the connection timeout message22:55
rm_workok so... in my deployment, those started happening more and more frequently, and my digging showed that was because there were more and more of those threads, and they basically weren't dying23:06
rm_workso they were building up23:06
rm_workuntil eventually there were so many the API process was no longer responsive23:06
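A quick way to check whether a deployment is heading toward the same thread build-up (the process match, log path and interval are assumptions about a typical install):

```
PID=$(pgrep -of octavia-api)        # oldest matching octavia-api/uwsgi process
watch -n 60 "ps -o nlwp= -p $PID"   # watch the thread count trend over time
grep -c "error during heartbeart" /var/log/octavia/octavia-api.log  # warning count so far
```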
colin-interesting23:07
colin-any chance that is happening and not logging that message?23:08
rm_workhmmm, what is your log level23:08
colin- default_log_levels is set to the default values23:12
colin-if you're referring to that list?23:12
*** icey has quit IRC23:46
*** yetiszaf has quit IRC23:46
*** fyx has quit IRC23:46
*** coreycb has quit IRC23:46
*** fnaval has quit IRC23:48
