Monday, 2018-07-09

00:03 *** threestrands has joined #openstack-lbaas
00:03 *** threestrands has quit IRC
00:03 *** threestrands has joined #openstack-lbaas
00:40 *** shananigans has quit IRC
00:40 *** shananigans has joined #openstack-lbaas
00:43 *** longkb has joined #openstack-lbaas
00:52 *** yamamoto has joined #openstack-lbaas
01:18 *** threestrands_ has joined #openstack-lbaas
01:18 *** threestrands_ has quit IRC
01:18 *** threestrands_ has joined #openstack-lbaas
01:19 *** threestrands_ has quit IRC
01:20 *** threestrands_ has joined #openstack-lbaas
01:20 *** hongbin has joined #openstack-lbaas
01:21 *** threestrands_ has quit IRC
01:21 *** phuoc has joined #openstack-lbaas
01:21 *** threestrands_ has joined #openstack-lbaas
01:21 *** threestrands_ has quit IRC
01:21 *** threestrands_ has joined #openstack-lbaas
01:21 *** threestrands has quit IRC
01:24 *** phuoc_ has quit IRC
01:52 *** phuoc_ has joined #openstack-lbaas
01:54 *** sapd_ has quit IRC
01:55 *** sapd has joined #openstack-lbaas
01:55 *** phuoc has quit IRC
02:02 *** phuoc has joined #openstack-lbaas
02:04 *** phuoc_ has quit IRC
02:16 *** phuoc_ has joined #openstack-lbaas
02:18 *** phuoc has quit IRC
02:22 *** sapd_ has joined #openstack-lbaas
02:22 *** sapd has quit IRC
02:30 *** annp has joined #openstack-lbaas
02:40 *** phuoc has joined #openstack-lbaas
02:44 *** phuoc_ has quit IRC
02:46 *** phuoc_ has joined #openstack-lbaas
02:49 *** phuoc has quit IRC
02:50 *** phuoc has joined #openstack-lbaas
02:52 *** phuoc_ has quit IRC
02:53 <lxkong> hi johnsom, does the octavia amphora image support fedora now?
02:53 <lxkong> i ran into a problem when i was building an amphora image with `-i fedora`
02:53 <lxkong> the error log: https://paste.ubuntu.com/p/mhKksYcQG6/
03:08 *** phuoc has quit IRC
03:17 *** sapd__ has joined #openstack-lbaas
03:18 *** sapd_ has quit IRC
03:20 *** ramishra has joined #openstack-lbaas
03:25 *** gans has joined #openstack-lbaas
03:27 *** gans has quit IRC
03:35 *** atoth has quit IRC
03:56 *** annp has quit IRC
03:56 *** annp has joined #openstack-lbaas
04:03 *** sapd has joined #openstack-lbaas
04:27 *** hongbin has quit IRC
04:34 *** phuoc has joined #openstack-lbaas
04:40 *** phuoc has quit IRC
05:00 *** links has joined #openstack-lbaas
05:10 *** AlexStaf has quit IRC
05:20 *** phuoc has joined #openstack-lbaas
05:34 <johnsom> lxkong: looks like a bug. I added that element to remove those packages recently
05:35 <johnsom> Since ubuntu's image size keeps growing.  That is on the stable branches though.
05:35 <johnsom> We don't gate on fedora, just centos on the redhat side
05:36 <johnsom> It must be a missing pkg-map.
05:46 <lxkong> johnsom: i am using stable queens
05:47 <johnsom> Yeah, those package exceptions are only on the stable branches. I solved it in a better way on master, but could not backport it due to policy.
05:49 <johnsom> You could build a master branch image if you really need fedora
06:13 *** yboaron_ has joined #openstack-lbaas
06:21 *** abaindur has quit IRC
06:24 *** ispp has joined #openstack-lbaas
06:26 *** longkb has quit IRC
06:26 *** sapd has quit IRC
06:28 *** longkb has joined #openstack-lbaas
06:30 *** annp has quit IRC
06:30 *** annp has joined #openstack-lbaas
06:37 *** longkb has quit IRC
06:38 *** longkb has joined #openstack-lbaas
07:05 *** peereb has joined #openstack-lbaas
07:06 *** tesseract has joined #openstack-lbaas
07:07 *** velizarx has joined #openstack-lbaas
07:08 *** rcernin has quit IRC
07:20 *** nmanos has joined #openstack-lbaas
07:25 *** tesseract has quit IRC
07:27 *** tesseract has joined #openstack-lbaas
07:30 *** kobis has joined #openstack-lbaas
07:32 *** ispp has quit IRC
07:35 <nmagnezi> lxkong, if it's possible for you, I think you'd better favor the centos amp over fedora (centos is tested in CI)
07:36 <lxkong> nmagnezi: thanks for your advice. But for some reasons (related to openssl) we can only use a fedora based image in our CI :-(
07:37 <lxkong> johnsom: can you point me to the patch? maybe i can backport it to our internal repo
07:37 *** velizarx has quit IRC
07:41 *** nmanos has left #openstack-lbaas
07:48 *** velizarx has joined #openstack-lbaas
07:54 *** ispp has joined #openstack-lbaas
07:57 *** rpittau has joined #openstack-lbaas
08:03 *** zigo has quit IRC
08:05 *** keithmnemonic[m] has quit IRC
08:05 *** zigo has joined #openstack-lbaas
08:12 *** threestrands_ has quit IRC
08:27 *** keithmnemonic[m] has joined #openstack-lbaas
08:28 *** yamamoto has quit IRC
08:39 *** ramishra has quit IRC
08:44 *** ispp has quit IRC
08:45 *** ramishra has joined #openstack-lbaas
08:49 <lxkong> johnsom: is it this one: https://review.openstack.org/#/c/559416/ ?
08:51 <lxkong> johnsom: btw, can i use a master (or even Rocky) amphora to work with queens octavia? I suppose the communication with the amphora is backward compatible, right?
09:12 *** yamamoto has joined #openstack-lbaas
09:15 *** ramishra has quit IRC
09:17 *** ramishra has joined #openstack-lbaas
09:20 <nmagnezi> lxkong, it *should* work, but I would advise against using master code in production (as opposed to stable versions); on the other hand, I know some folks are doing just that.
09:21 *** kobis has quit IRC
09:22 *** kobis has joined #openstack-lbaas
09:22 *** abaindur has joined #openstack-lbaas
09:26 *** kobis has quit IRC
09:46 *** ispp has joined #openstack-lbaas
09:50 *** kobis has joined #openstack-lbaas
10:04 <openstackgerrit> Nguyen Van Trung proposed openstack/octavia master: Follow the new PTI for document build  https://review.openstack.org/580983
10:10 *** longkb has quit IRC
10:50 *** rraja has joined #openstack-lbaas
11:29 <lxkong> nmagnezi: maybe I should add some context here. We are running Pike in production and I am testing Queens, so i need a fedora amphora image of queens for CI. johnsom said there is a packaging issue on the stable branch and it's impossible to backport the fix. What i'm thinking now is to either build a fedora image from the master branch or backport the fix to queens in our internal repo.
11:30 <lxkong> it should be "it's impossible to backport the fix upstream"
11:39 *** atoth has joined #openstack-lbaas
11:57 *** yamamoto has quit IRC
12:03 *** kobis has quit IRC
12:09 <openstackgerrit> ZhaoBo proposed openstack/octavia master: UDP jinja template  https://review.openstack.org/525420
12:09 <openstackgerrit> ZhaoBo proposed openstack/octavia master: WIP:UDP for [2]  https://review.openstack.org/529651
12:09 <openstackgerrit> ZhaoBo proposed openstack/octavia master: UDP for [3][5][6]  https://review.openstack.org/539391
12:15 *** kobis has joined #openstack-lbaas
12:16 *** yamamoto has joined #openstack-lbaas
12:16 *** yamamoto has quit IRC
12:17 *** amuller has joined #openstack-lbaas
12:18 *** yamamoto has joined #openstack-lbaas
12:44 *** yboaron_ has quit IRC
12:45 *** velizarx has quit IRC
12:45 *** yboaron_ has joined #openstack-lbaas
12:49 *** velizarx has joined #openstack-lbaas
13:18 *** wolsen has quit IRC
13:20 *** wolsen has joined #openstack-lbaas
13:29 <johnsom> lxkong: I think we can fix fedora on the stable branches, it just needs a pkg-map change.  This will still be different from the master fix, but should work. I will try to get a patch up today for you.
13:30 <johnsom> We just need to exclude those two package removals on fedora.
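[Editor's note: for context, diskimage-builder elements express distro-specific package names in a pkg-map JSON file, where mapping a name to an empty string skips that package on that distro. A fix along the lines johnsom describes might look roughly like the sketch below; the package names are placeholders, not the ones in the actual patch.]

    {
        "distro": {
            "fedora": {
                "some-removed-package": "",
                "other-removed-package": ""
            }
        },
        "default": {
            "some-removed-package": "some-removed-package",
            "other-removed-package": "other-removed-package"
        }
    }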
13:44 *** fnaval has joined #openstack-lbaas
14:18 *** links has quit IRC
14:26 <openstackgerrit> Allen proposed openstack/octavia master: [doc] Add the missing markup for the hyperlink title  https://review.openstack.org/581046
14:53 *** AlexStaf has joined #openstack-lbaas
15:01 *** yamamoto has quit IRC
15:03 *** peereb has quit IRC
15:07 *** AlexStaf has quit IRC
15:17 *** yamamoto has joined #openstack-lbaas
15:22 *** yamamoto has quit IRC
15:24 *** velizarx has quit IRC
15:27 * xgerman_ reading scrollback
15:37 *** rraja has quit IRC
15:42 *** kobis has quit IRC
15:44 *** ivve has joined #openstack-lbaas
15:46 *** yamamoto has joined #openstack-lbaas
15:46 *** yamamoto has quit IRC
16:02 <johnsom> lxkong https://review.openstack.org/581073
16:03 <johnsom> Or Pike: https://review.openstack.org/581074
16:13 *** ispp has quit IRC
16:15 *** ramishra has quit IRC
16:18 <openstackgerrit> Michael Johnson proposed openstack/octavia master: Implement provider drivers - Driver Library  https://review.openstack.org/571358
16:25 *** abaindur has quit IRC
16:38 *** velizarx has joined #openstack-lbaas
16:40 <openstackgerrit> Michael Johnson proposed openstack/neutron-lbaas master: Neutron-LBaaS to Octavia migration tool  https://review.openstack.org/578942
16:42 <openstackgerrit> Michael Johnson proposed openstack/neutron-lbaas master: Neutron-LBaaS to Octavia migration tool  https://review.openstack.org/578942
16:47 *** yamamoto has joined #openstack-lbaas
16:49 <openstackgerrit> Michael Johnson proposed openstack/neutron-lbaas master: Neutron-LBaaS to Octavia migration tool  https://review.openstack.org/578942
16:57 *** yamamoto has quit IRC
17:01 <jiteka> Hello, I ran into a problem today in my devstack lab: my first LB took longer than expected to spawn its amphora VM, probably due to the lack of a cached image since it was the first one
17:01 <jiteka> As a result, even though my amphora VM is now running, the amphora status is still "BOOTING", and the amphora VM is active and reachable via ping on its lb-mgmt-net interface
17:01 <jiteka> So I have 2 questions:
17:01 <jiteka> 1. Is it possible to force a refresh of the state to resume where it failed (or got stuck)?
17:01 <jiteka> 2. What needs to be cleaned in the octavia DB after deleting that stuck amphora VM? (or is there any other way to delete a PENDING_CREATE loadbalancer?)
17:12 *** velizarx has quit IRC
17:15 *** tesseract has quit IRC
17:21 *** sapd has joined #openstack-lbaas
17:30 <xgerman_> jiteka: this seems odd - the system should move it to ERROR after an hour or so
17:33 *** sapd has quit IRC
17:41 <openstackgerrit> German Eichberger proposed openstack/octavia master: [WIP] Switch amphora agent to use privsep  https://review.openstack.org/549295
17:41 *** sapd has joined #openstack-lbaas
17:44 <johnsom> jiteka So in octavia it says "BOOTING"?
17:44 *** sapd has quit IRC
17:48 <johnsom> I mean "BOOTING" only waits a bit over a minute before going to ERROR with the default settings.
17:48 <johnsom> Since nova sets it to active as soon as the process starts.
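[Editor's note: the wait johnsom describes is bounded by the amphora-activation options in octavia.conf; with defaults along these lines (from memory, so verify against your release), the worker polls nova roughly 10 x 10 seconds before marking the amphora ERROR.]

    [controller_worker]
    amp_active_retries = 10
    amp_active_wait_sec = 10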
17:49 *** rraja has joined #openstack-lbaas
17:51 *** AlexStaf has joined #openstack-lbaas
17:59 *** blake has joined #openstack-lbaas
18:12 <openstackgerrit> Michael Johnson proposed openstack/neutron-lbaas master: Neutron-LBaaS to Octavia migration tool  https://review.openstack.org/578942
18:23 *** abaindur has joined #openstack-lbaas
18:46 *** abaindur has quit IRC
18:46 *** abaindur has joined #openstack-lbaas
18:53 *** atoth has quit IRC
19:01 *** blake has quit IRC
19:02 *** blake has joined #openstack-lbaas
19:06 *** blake has quit IRC
19:14 <openstackgerrit> Michael Johnson proposed openstack/neutron-lbaas master: Neutron-LBaaS to Octavia migration tool  https://review.openstack.org/578942
19:16 *** blake has joined #openstack-lbaas
19:20 *** blake has quit IRC
19:24 *** amuller has quit IRC
19:31 *** rraja has quit IRC
19:37 *** blake has joined #openstack-lbaas
19:45 *** abaindur has quit IRC
19:48 *** kobis has joined #openstack-lbaas
19:53 <kobis> general question regarding the driver API: so the driver allocates the VIP port (OK, I can allocate it myself if I choose), but must I clean it up within the LB delete method?
19:53 <kobis> Is there a missing piece in the driver patch? or is that by design?
19:54 *** AlexStaf has quit IRC
19:55 <johnsom> Hmm, interesting question.
19:55 <johnsom> Our driver deletes the VIP when it is deleting the LB resources.
19:56 <kobis> in that case, would it make sense to add a delete callback, which in the amphora case will do nothing?
19:56 <kobis> and if that's not implemented, driver_lib will clean up?
19:57 <johnsom> Or we could just hook the status update callback: when you set the LB to DELETED, we make sure the VIP resources are cleaned up.
19:57 <kobis> that makes even more sense
19:58 <johnsom> Yeah, that way it's a double check that the resource gets cleaned up
19:58 <kobis> cool, so i'll comment on the patch just so we have that on record
19:58 <johnsom> Perfect, I might be able to add that today
19:59 <kobis> great :)
20:04 *** abaindur has joined #openstack-lbaas
20:05 <johnsom> kobis I have another issue to run by you. If the user specifies a vip address that is already in use, right now it bubbles up as a 500 driver error.  I was considering adding another driver exception for that case. Any concerns?
20:05 <johnsom> That assumes the driver is creating the vip
20:08 <kobis> Since exceptions are the only way the openstack client shows comprehensive error messages, I think it makes a lot of sense to have higher exception granularity than "driver error"
20:08 <johnsom> Yeah, the client will pass through the user string, but still, a 500 is not the right answer for a user mistake
20:08 <johnsom> kobis Also, on the delete thing.
20:09 <johnsom> I am thinking maybe we should just delete the VIP port on a "DELETED" callback if octavia allocated the VIP. That way, if the driver created the VIP and wants to reuse it in some way, they would have the option.  Thoughts?
20:10 <kobis> I don't see why the driver would want that, but it makes sense that whoever created a resource is responsible for cleaning it up
20:11 <johnsom> Yeah, I think that will be my plan. Are you doing port allocation or are you leaving it to Octavia?
20:12 <kobis> I leave it to Octavia. NLBaaS did it, so we try to stick to the old code :)
20:12 <johnsom> Ok, sounds good.  Thanks!
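[Editor's note: to make the proposal concrete, a provider driver's delete path under this scheme might look roughly like the sketch below. It assumes the interface from the "Implement provider drivers - Driver Library" patch linked above; the class, the _teardown_backend helper, and the exact status-dict shape are illustrative, not the actual patch.]

    # Sketch: the driver tears down its own backend, then reports
    # DELETED. Per the discussion above, Octavia would release the VIP
    # port on this callback only if Octavia allocated it, so a driver
    # that created its own VIP could keep or reuse the port.
    class ExampleProviderDriver(object):

        def __init__(self, driver_lib):
            self._driver_lib = driver_lib  # status/stats callback library

        def loadbalancer_delete(self, loadbalancer, cascade=False):
            self._teardown_backend(loadbalancer)  # hypothetical helper
            self._driver_lib.update_loadbalancer_status(
                {'loadbalancers': [
                    {'id': loadbalancer.loadbalancer_id,
                     'provisioning_status': 'DELETED'}]})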
20:12 <johnsom> Hopefully you folks are pretty close to done???
20:12 <kobis> Yeah, we have a working yet untested driver :)
20:13 <kobis> which means that our QE will find it broken in various ways
20:13 <johnsom> Nice. We do have a decent tempest plugin if you want....
20:13 <kobis> And I'm trying to think of a CI which will cover that as well
20:14 <kobis> Yeah, once the patches merge in Octavia and on our end, I'll start looking into tempest
20:16 <kobis> any other vendor drivers in progress?
20:17 <johnsom> I don't think so; one had a staffing change, so it is delayed
20:18 <johnsom> kobis Don't forget to update this page when you are ready: https://docs.openstack.org/octavia/latest/admin/providers.html
20:19 <kobis> Sure. But would it make sense to update it before the patches are merged in Octavia and vmware-nsx?
20:19 <johnsom> No, probably not. It's there when you are ready...
20:19 <kobis> OK
20:30 <xgerman_> mmh, this privsep is getting sillier and sillier
20:32 <xgerman_> so we fork the privsep server (A) as root, which then starts a receiver thread (B) as root again (why?), and then we start gunicorn, which forks a worker process, which successfully sends stuff to A; A sends stuff back, but B never receives it and blocks
20:32 <xgerman_> our worker process is shedding privileges
20:33 <johnsom> Yeah, the worker in gunicorn should shed privs, at least that is the goal
20:33 <xgerman_> yeah, I have that
20:34 <xgerman_> as I said, it's getting the results back where the whole thing bombs
20:35 <xgerman_> I *would* think the receiver process should get started under the worker
20:35 <xgerman_> or thread
20:35 <xgerman_> will pester the oslo people
20:36 <johnsom> +1
20:36 *** abaindur_ has joined #openstack-lbaas
20:36 *** abaindur has quit IRC
20:40 *** abaindur has joined #openstack-lbaas
20:42 <xgerman_> my money is on the lock not connecting properly, aka why would a process running with fewer privileges be allowed to read the memory of a more privileged process…
20:42 *** abaindur_ has quit IRC
20:42 *** abaindur_ has joined #openstack-lbaas
20:43 *** kobis has quit IRC
20:45 *** abaindur has quit IRC
20:46 *** abaindur has joined #openstack-lbaas
20:47 *** abaindu__ has joined #openstack-lbaas
20:48 *** abaindur_ has quit IRC
20:50 *** abaindur has quit IRC
20:53 *** abaindu__ has quit IRC
21:02 *** abaindur has joined #openstack-lbaas
21:27 <openstackgerrit> Michael Johnson proposed openstack/octavia master: Fixes unlimited listener connection limit  https://review.openstack.org/580724
21:32 *** abaindur has quit IRC
21:33 *** abaindur has joined #openstack-lbaas
21:37 <openstackgerrit> Michael Johnson proposed openstack/neutron-lbaas master: Neutron-LBaaS to Octavia migration tool  https://review.openstack.org/578942
21:44 <nmagnezi> johnsom, o/
21:44 <johnsom> nmagnezi o/
21:44 <nmagnezi> johnsom, that local.conf for the migration tool devstack
21:45 <nmagnezi> johnsom, can I have it? :D
21:45 <nmagnezi> I want to prep a node for testing
21:46 <johnsom> nmagnezi This is the one the gate is running: http://logs.openstack.org/42/578942/50/check/neutron-lbaas-to-octavia-migration/330d126/controller/logs/local_conf.txt
21:46 <johnsom> But I can give you a simpler one if you want
21:47 <nmagnezi> Please :)
21:47 <nmagnezi> I can clean it up, but.. if you already have it :)
21:48 <johnsom> http://paste.openstack.org/show/725371/
21:48 <johnsom> That is the one I am using locally
21:49 *** abaindur_ has joined #openstack-lbaas
21:49 <johnsom> The namespace provider is "haproxy" and the default is octavia
21:53 *** abaindur has quit IRC
21:57 <lxkong> johnsom: thanks so much for the patch!
21:57 <johnsom> lxkong Sure, sorry for the trouble!
21:57 <lxkong> johnsom: one more question, does octavia know how many active connections there are on a member?
21:58 <lxkong> we are still thinking about how to provide a connection draining feature doc to our customers
21:58 <openstackgerrit> Michael Johnson proposed openstack/octavia master: Fixes unlimited listener connection limit  https://review.openstack.org/580724
21:58 <lxkong> if, like you said, that's something the customer should do outside of octavia (using ansible or something else), they need to know when is a good time to remove the member
21:59 <johnsom> lxkong Yes, I think it does.  One minute
21:59 <lxkong> johnsom: sure, thanks for checking
21:59 *** AlexStaf has joined #openstack-lbaas
22:03 <johnsom> yes, we get that metric on the stats socket. We don't currently collect that information, however.
22:06 *** abaindur has joined #openstack-lbaas
22:07 *** abaindur_ has quit IRC
22:10 <lxkong> johnsom: you mean here: https://github.com/openstack/octavia/blob/8f3eeb5b2e1ddfdca8e22d04dae2d892c9bcef47/octavia/amphorae/backends/utils/haproxy_query.py#L77 ? This function is collecting everything, including the member stats?
22:11 <lxkong> but in here https://github.com/openstack/octavia/blob/8f3eeb5b2e1ddfdca8e22d04dae2d892c9bcef47/octavia/amphorae/backends/health_daemon/health_daemon.py#L133 we only care about `row['svname'] == 'FRONTEND'`
22:11 <johnsom> lxkong yes.
22:12 <johnsom> lxkong This is the output of the socket:
22:12 <johnsom> http://paste.openstack.org/show/725376/
22:12 <johnsom> Where 7ce0fb2d-fee6-45e5-9e3a-b8fdada3dea6 and a786a26a-019d-4791-a430-b52961db8d58 are members
22:13 <johnsom> The stot column is the total connection count for the member
22:14 *** numans has quit IRC
22:14 <johnsom> lxkong The bummer here is we don't really want to collect stats per member and store them in the DB, as that could be a lot of overhead
22:15 <lxkong> johnsom: ok, so we have the data source, it's just ignored. But back to the feature: I do think it would be more convenient if octavia could provide a connection draining api on members. How about we evaluate the connection number on the fly after the api is called?
22:15 <johnsom> lxkong I think we would need to make this an on-demand query
22:16 <lxkong> for `on-demand query`, you mean expose an API within the amphora?
22:16 *** blake has quit IRC
22:17 <johnsom> lxkong I guess how I would do it is add /v2.0/lbaas/pools/{pool_id}/members/{member-id}/stats to the API
22:17 <lxkong> yeah, that's one option
22:17 <johnsom> lxkong Then when called, make a call out to the amphora to query the current stot. We don't want to do the other stats, as failover would impact the accuracy of the other stats
22:18 <johnsom> FYI, here are the column descriptions: http://cbonte.github.io/haproxy-dconv/1.6/management.html#9.1
22:19 <lxkong> thanks for the link
22:19 *** numans has joined #openstack-lbaas
22:19 *** fnaval has quit IRC
22:19 <johnsom> lxkong oops, it's "scur" that you want
22:20 *** rcernin has joined #openstack-lbaas
22:20 <johnsom> lxkong Any of the cumulative columns will be inaccurate, as they get reset on any VRRP or amp failover.
22:20 *** threestrands_ has joined #openstack-lbaas
22:20 *** threestrands_ has quit IRC
22:20 *** threestrands_ has joined #openstack-lbaas
22:22 *** abaindur_ has joined #openstack-lbaas
22:23 <lxkong> yeah, that would be the restriction. but when the lb is in failover, we can not set weight on members anyway, so no need to check the stats; that's ok for the connection draining process
22:23 *** jappleii__ has joined #openstack-lbaas
22:23 <johnsom> lxkong I mean after the failover completes, the values reset.  So as long as you stick to only showing metrics that are "current" and not cumulative, you will be fine.
22:24 <johnsom> lxkong so "scur" would be fine.
22:24 <lxkong> ahh, yeah, thanks for the explanation
22:24 *** abaindur has quit IRC
22:24 <lxkong> that's what we care about
22:24 <johnsom> +1.
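[Editor's note: for anyone who wants to poke at this by hand inside an amphora, here is a small sketch of pulling "scur" per member off the haproxy stats socket. The socket path is an assumption; check the haproxy config in the amphora for the real one.]

    import csv
    import socket

    def member_scur(sock_path):
        """Return {server_name: scur} from a haproxy stats socket."""
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(sock_path)
        sock.sendall(b'show stat\n')
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
        sock.close()
        # The first line is a CSV header prefixed with "# "
        text = b''.join(chunks).decode('utf-8').lstrip('# ')
        scur = {}
        for row in csv.DictReader(text.splitlines()):
            # FRONTEND/BACKEND rows are aggregates; in Octavia the
            # per-server rows carry the member id as svname
            if row['svname'] not in ('FRONTEND', 'BACKEND'):
                scur[row['svname']] = int(row['scur'])
        return scur

    # the path below is illustrative; each listener has its own socket
    print(member_scur('/var/lib/octavia/some-listener-id.sock'))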
22:24 *** jappleii__ has quit IRC
22:25 *** jappleii__ has joined #openstack-lbaas
22:25 <lxkong> johnsom: may i ask why it is unlikely that we support a single api for that?
22:26 <johnsom> lxkong Not sure I follow, what do you mean by a single API? like having it be part of member status?
22:26 *** threestrands_ has quit IRC
22:27 <lxkong> johnsom: something like /v2.0/lbaas/pools/{pool_id}/members/{member-id}/drain
22:28 <johnsom> lxkong Wouldn't that be a duplicate? We already manage the weight via the member PUT API
22:28 <johnsom> We might as well have stats, in case we later add some of these other "current" stats
22:28 *** abaindur has joined #openstack-lbaas
22:28 <johnsom> Plus it is consistent with the listener and load balancer /stats path
22:30 *** abaindur_ has quit IRC
22:30 <lxkong> yeah, i understand. I am just thinking about how we could make the process easier for the user. For now, the process will be: 1. set weight 0 on the member; 2. watch the member stats (once we support it) or wait for a timeout; 3. remove the member from the pool
22:30 <johnsom> Yes, that is the process
22:31 <johnsom> That is the same process you go through with hardware load balancers
22:31 <lxkong> it's acceptable, i just want to understand the design here :-)
22:32 <lxkong> "That is the same process you go through with hardware load balancers" - i got the answer now
22:32 <johnsom> I am open to other ideas. I just have the perspective of trying to keep it consistent
22:32 *** KeithMnemonic has joined #openstack-lbaas
22:32 <lxkong> johnsom: i will create a story to track this feature request.
22:32 <johnsom> +1
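[Editor's note: a rough sketch of that three-step drain using the openstacksdk load balancer proxy. The stats polling is the piece that does not exist yet, so it is stubbed out; the cloud name is an assumption.]

    import time
    import openstack

    conn = openstack.connect(cloud='devstack')  # cloud name is an assumption

    def get_member_scur(pool_id, member_id):
        """Placeholder: a member /stats endpoint does not exist yet.

        In practice you would read "scur" from haproxy's stats socket
        (see the earlier sketch) or from your own monitoring.
        """
        return 0

    def drain_member(pool_id, member_id, timeout=300):
        # 1. stop sending new connections to the member
        conn.load_balancer.update_member(member_id, pool_id, weight=0)
        # 2. wait for existing connections to finish draining
        deadline = time.time() + timeout
        while time.time() < deadline and get_member_scur(pool_id, member_id):
            time.sleep(5)
        # 3. remove the drained member from the pool
        conn.load_balancer.delete_member(member_id, pool_id)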
22:33 <lxkong> johnsom: thanks for your answers. I will test your patch to create a fedora amphora image at the same time.
22:35 <johnsom> cgoncalves http://logs.openstack.org/73/581073/1/check/octavia-v1-dsvm-scenario-kvm-centos.7/523530e/logs/devstacklog.txt.gz#_2018-07-09_16_42_41_847
22:36 <johnsom> It looks like fedora uses dnf, which failed on the removal of those packages, but the centos yum returned 0
22:37 <lxkong> yeah, centos is ok
22:37 <johnsom> So removing a package that doesn't exist is an error with dnf, but with yum removing a non-existent package is fine.
22:37 <johnsom> That is why the centos gate didn't catch it.
22:39 <cgoncalves> interesting! yum has different behaviors in fedora and centos
22:39 <johnsom> cgoncalves Fedora is using dnf
22:39 <johnsom> I don't know if that is a DIB choice or other....
22:40 <cgoncalves> http://paste.openstack.org/show/725377/
22:40 <johnsom> cgoncalves lol, oye. I gave it the benefit of the doubt, but yeah, sad.....
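[Editor's note: the exit-code difference they are describing can be reproduced with a quick check like the one below; the package name is made up, and you would run it on a centos and a fedora host to compare.]

    import subprocess

    # dnf exits non-zero when asked to remove a package that is not
    # installed; classic yum exits 0. That difference is what let the
    # centos gate pass while the fedora image build failed.
    for tool in ('yum', 'dnf'):
        try:
            rc = subprocess.call([tool, '-y', 'remove', 'no-such-package-xyz'])
            print('%s exit code: %s' % (tool, rc))
        except FileNotFoundError:
            print('%s is not installed on this host' % tool)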
22:45 *** abaindur_ has joined #openstack-lbaas
22:46 <xgerman_> ok, this is the part where privsep is hanging: https://github.com/openstack/oslo.privsep/blob/master/oslo_privsep/comm.py#L134
22:47 *** abaindur has quit IRC
22:47 <cgoncalves> perhaps an 'ignore_nonexistent: true|false' option in packages-install would be worth adding...?
22:52 <openstackgerrit> Michael Johnson proposed openstack/neutron-lbaas master: Neutron-LBaaS to Octavia migration tool  https://review.openstack.org/578942
22:53 <johnsom> Ok, that seems to be passing the migration. I had just disabled the session persistence to figure out all the bugs in the namespace driver. This run should pass, then I will do one more commit changing it to a periodic job.
22:55 <cgoncalves> excellent! really great job!
22:57 *** abaindur has joined #openstack-lbaas
23:00 *** abaindur_ has quit IRC
23:10 <johnsom> Yeah, a week's work and 50 revisions to prove what I already tested locally...   Ugh. Well, it's done now
23:11 <johnsom> lxkong I just had a few more thoughts about the design to get that "current connections" metric per member.
23:11 <johnsom> We don't have a good way to do that right now with the driver model.....
23:12 <lxkong> why? i'm not familiar with the current driver model
23:12 <johnsom> I mean the driver side isn't bad, it's how it's handled in the driver
23:13 <johnsom> Right now the API processes don't have access to the amphora API. Everything goes over a message queue out to the workers....
23:14 <johnsom> Ugh, this is really where I wish I could take the time to re-write the status/stats handling code.
23:14 <johnsom> Yeah, so, ok, I guess we have to take the DB hit and pass it with the rest of the stats
23:17 <lxkong> johnsom: i don't understand why we can not send a request to the o-worker and then to the amphora to get the value?
23:18 <johnsom> It would be introducing a synchronous call across the queue, which we don't do today.
23:18 <lxkong> johnsom: do you mean that all the requests the o-worker is handling are asynchronous?
23:19 <johnsom> lxkong Correct, today everything the o-worker does is async
23:19 <lxkong> hmm...
23:19 <johnsom> We could add that, but we might be tipping the balance to where just adding it to the DB is good enough for now.....
23:21 <lxkong> johnsom: ok, fair enough. Then we could get the information as easily as getting the lb status
23:21 <johnsom> Yeah, it is just additional info in the message
23:21 *** ianychoi_ has joined #openstack-lbaas
23:22 *** jlaffaye_ has joined #openstack-lbaas
23:22 *** jlaffaye_ has quit IRC
23:22 *** jlaffaye_ has joined #openstack-lbaas
23:22 *** jlaffaye has quit IRC
23:25 *** ianychoi has quit IRC
23:26 <lxkong> johnsom: are you able to help do that?
23:26 <xgerman_> ok, solved the privsep mystery - I need to run the privsep piece as part of the worker
23:27 <johnsom> lxkong No, it would need to be a Stein activity if I do it.  We have too much going on for Rocky already, IMO.
23:27 <johnsom> In Stein we could probably get it in pretty quick
23:30 *** abaindur has quit IRC
23:35 <lxkong> johnsom: ok, good to know. thanks!
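[Editor's note: to make xgerman_'s conclusion concrete, here is a minimal oslo.privsep sketch. The context name, config section, and capability list are assumptions, not what the Octavia patch uses.]

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # Declare a privileged context. Its daemon is started lazily by
    # whichever process first invokes an entrypoint (forked directly
    # when the caller is root, or spawned via privsep-helper otherwise),
    # which is why it has to be exercised from the gunicorn worker and
    # not from the pre-fork parent.
    amphora_privsep = priv_context.PrivContext(
        'amphora_agent',
        cfg_section='amphora_agent_privsep',
        pypath=__name__ + '.amphora_privsep',
        capabilities=[capabilities.CAP_NET_ADMIN],
    )

    @amphora_privsep.entrypoint
    def interface_up(ifname):
        # Runs inside the privsep daemon with CAP_NET_ADMIN, even after
        # the calling worker process has shed its own privileges.
        import subprocess
        subprocess.check_call(['ip', 'link', 'set', ifname, 'up'])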
23:37 *** abaindur has joined #openstack-lbaas
23:37 <johnsom> We are just three weeks from feature freeze on Rocky...
23:50 *** abaindur has quit IRC
23:53 *** abaindur has joined #openstack-lbaas
