Friday, 2017-09-29

06:28 <openstackgerrit> Santhosh Fernandes proposed openstack/octavia master: [WIP] Adding exabgp-speaker element to amphora image  https://review.openstack.org/490164
14:04 <openstackgerrit> Lingxian Kong proposed openstack/octavia-tempest-plugin master: [WIP] Add basic loadbalancer functionality test  https://review.openstack.org/508516
14:47 <johnsom> Ouch, looks like we have some Zuulv3 issues: https://review.openstack.org/#/c/490164/
15:05 <dmellado> johnsom, I guess like everyone today
15:05 <dmellado> I've been hitting the 'out of stream' issue endlessly :\
15:13 <johnsom> Yep
16:32 <openstackgerrit> Michael Johnson proposed openstack/octavia-dashboard master: DO-NOT-MERGE: Gate test patch  https://review.openstack.org/508560
16:33 <openstackgerrit> Michael Johnson proposed openstack/python-octaviaclient master: DO-NOT-MERGE: Gate test patch  https://review.openstack.org/508561
17:17 <tongl> When we do redirect_to_url in octavia, what's the response code for redirect? Is it 301 by default?
17:18 <johnsom> Just a sec, I have to look
17:19 <johnsom> tongl: 302
17:20 <tongl> I see, so we just use 302 Found for a temporary redirect.
17:20 <tongl> Maybe we can add a redirect_code option to let the user set the response code if needed.
17:21 <tongl> The redirect may be permanent.
17:23 <johnsom> Yeah, that would be a feature enhancement to the API.
17:24 <johnsom> We technically support 301, 302, 303, 307 and 308, but currently it only offers 302.
17:25 <tongl> Let me draft a spec for that.
17:26 <tongl> Can you paste a link to where 302 is defined?
17:28 <johnsom> It's missing from our API-ref, but here are the root docs: http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4.2-redirect%20location
17:28 <tongl> @johnsom: what's the response code for the REJECT action?
17:28 <johnsom> https://www.irccloud.com/pastebin/QtIVqZew/
17:29 <tongl> Do we just use 501 for reject?
17:30 <johnsom> 403
17:34 <tongl> thx
17:35 <johnsom> I will put up an api-ref patch that specifies these.
17:38 <openstackgerrit> Michael Johnson proposed openstack/octavia master: L7 policy API-REF update for result codes  https://review.openstack.org/508575
17:40 <openstackgerrit> Michael Johnson proposed openstack/octavia master: L7 policy API-REF update for result codes  https://review.openstack.org/508575
17:40 <johnsom> Didn't like the wording; hopefully that is better.
17:41 <tongl> nice
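The result codes settled on in this exchange can be summarized in a short sketch. This is a hedged Python illustration only; the constant names and the `result_code` helper are hypothetical, not Octavia's actual identifiers.

```python
# Hypothetical summary of the L7 policy result codes discussed above.
L7_POLICY_RESULT_CODES = {
    "REDIRECT_TO_URL": 302,  # haproxy "redirect location" defaults to 302 Found
    "REJECT": 403,           # rejected requests get 403 Forbidden, not 501
}

# Codes haproxy itself can emit for a redirect, even though the API
# currently only issues 302.
SUPPORTED_REDIRECT_CODES = (301, 302, 303, 307, 308)

def result_code(action):
    """Return the HTTP status an L7 policy action produces."""
    return L7_POLICY_RESULT_CODES[action]
```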
18:07 <rm_work> So basically, we're not merging anything in the near future.
18:23 <johnsom> Yep
18:23 <johnsom> Things are getting better though.
18:25 <johnsom> This one has me scratching my head however: http://logs.openstack.org/75/508575/2/check/openstack-tox-py27/6208b6f/job-output.txt.gz#_2017-09-29_18_01_36_558322
18:33 <rm_work> lol, of course
18:33 <rm_work> because carlos
18:33 <johnsom> Do you see the issue?
18:33 <rm_work> or actually that might have been barclac
18:34 <rm_work> one sec
18:34 <rm_work> I just know that bit is really fragile because they're doing a bunch of low-level stuff.
18:35 <johnsom> Oh, I think I see it
18:35 <johnsom> hmm
18:36 <johnsom> https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/health_daemon/status_message.py#L72
18:36 <johnsom> digest is not ascii
18:36 <rm_work> So in py2 hmac doesn't have the compare method natively, so they do their own thing.
18:37 <johnsom> But why was this always working until now?
18:41 <rm_work> Probably they switched python versions slightly or something when they made the new backend nodes.
18:41 <rm_work> I'm trying to see if there's a version where it'll fail.
18:42 <johnsom> Thanks
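The py2/py3 wrinkle being chased here is the classic one: `hmac.compare_digest` only appeared in Python 2.7.7/3.3, so older code carried a hand-rolled constant-time loop, and bytes-vs-text digests trip it up. A minimal sketch (not Octavia's actual status_message.py code) of a comparison that sidesteps both issues by sticking to hex digests:

```python
import hashlib
import hmac

def hmac_hex(key, payload):
    """Hex HMAC-SHA256 digest; hex keeps the comparison ASCII-safe."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def digests_match(key, payload, expected_hex):
    # hmac.compare_digest is constant-time; its absence on old py2 is
    # why the fragile "low-level stuff" mentioned above existed at all.
    return hmac.compare_digest(hmac_hex(key, payload), expected_hex)
```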
18:47 <rm_work> I have some questions about performance testing today.
18:48 <rm_work> I have no idea what the numbers coming back from apachebench actually MEAN, or how to tell whether I've got the right bottleneck (the LB) or not.
18:49 <rm_work> I set up a very simple test scenario right now... I have two backends running the c10k Golang server, a LB, and two load-generator VMs with apachebench.
18:50 <johnsom> Yeah, so there is an art to this...
18:51 <johnsom> I will also mention that apache bench is not the best benchmark suite out there...
18:51 <rm_work> k
18:51 <rm_work> It seemed like the easiest (and was recommended in the c10k code).
18:51 <rm_work> But I am open to whatever.
18:51 <rm_work> I used Tsung in the past.
18:52 <rm_work> Was thinking about poking at that again.
18:52 <johnsom> Yeah, I wrote up an etherpad for the bluebox folks poking at this; not sure if I can find it.
18:53 <johnsom> Tsung is a nice multi-client suite.  For single-client I like weighttp: http://redmine.lighttpd.net/projects/weighttp/wiki
18:54 <rm_work> Will that generate enough load?
18:54 <johnsom> This describes the ab output: https://httpd.apache.org/docs/2.4/programs/ab.html#output
18:55 <johnsom> weighttp does, but single client.  It will beat the tar out of web servers.
18:58 <johnsom> This page has some useful reading: http://gwan.com/en_apachebench_httperf.html
18:59 <johnsom> gwan is a great static-content web server for benchmarking too: compiled, serves from memory, etc.
19:01 <johnsom> I like to compare with a LB-bypass route too.
19:02 <rm_work> Yeah, just direct to the member?
19:02 <johnsom> Yeah, baseline comparison.
19:02 <rm_work> Hmm, weighttp is getting a lot of "connection reset by peer"; wonder if that means the LB is maxed.
19:03 <johnsom> Or check your timed_waits and open file descriptors...
19:03 <johnsom> max conn settings for the LB
19:03 <johnsom> etc.
19:03 <rm_work> I did ulimit -n 100000.
19:03 <johnsom> Lots-o-knobs to turn
19:03 <rm_work> Ah right, the max connections option on the listener....
19:04 <johnsom> Even kernel conntrack might need tuning on the client/web server side.
19:04 <johnsom> We do some of that for the amps in our elements.
19:05 <johnsom> Actually, I think we disable conntrack in the amp, since it can dog the flows.
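To make the weighttp/ab numbers concrete, here is a toy load driver in Python showing what the -n (total requests) and -c (concurrency) knobs mean and how the summary figures are produced. `hammer` is a hypothetical helper for illustration, not a replacement for weighttp or Tsung, and nowhere near as fast.

```python
import http.client
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def hammer(host, port, path="/", total=200, concurrency=20):
    """Tiny weighttp-style driver: fire `total` GETs across `concurrency`
    worker threads and report a summary like weighttp's last two lines.
    Assumes total is divisible by concurrency."""
    ok = failed = 0
    lock = threading.Lock()

    def worker(n):
        nonlocal ok, failed
        conn = http.client.HTTPConnection(host, port)
        for _ in range(n):
            try:
                conn.request("GET", path)
                resp = conn.getresponse()
                resp.read()  # drain so the connection can be reused
                good = 200 <= resp.status < 400
            except (OSError, http.client.HTTPException):
                good = False
                conn.close()
                conn = http.client.HTTPConnection(host, port)
            with lock:
                if good:
                    ok += 1
                else:
                    failed += 1
        conn.close()

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(concurrency):
            pool.submit(worker, total // concurrency)
    elapsed = time.monotonic() - start  # wall-clock time, like "finished in"
    return {"done": ok + failed, "succeeded": ok, "failed": failed,
            "req_per_s": (ok + failed) / elapsed}
```

The "connection reset by peer" errors discussed above show up here as `failed` counts: the server (or LB) closing the socket mid-request.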
19:14 <rm_work> Yeah, I am wondering if I am hitting bandwidth issues or what.
19:14 <rm_work> Even the baseline without the LB involved is a little weird.
19:14 <rm_work> I'm doing like -n 10000 -c 10000.
19:14 <rm_work> No idea if that's good or not.
19:15 <rm_work> lol, taking -c down to 1000 makes it ... instant
19:15 <rm_work> lol
19:15 <rm_work> So obviously -c 10000 is not good.
19:21 <rm_work> Hmmm, 2.7.14 doesn't make that failure happen.
19:21 <rm_work> Need to look at what's on the box, I guess.
19:22 <xgerman_> Yep, Jason reported 10-16K; maybe you can ask how they tested.
19:23 <xgerman_> Met with blogan; they are trying to get rid of A10, too.
19:24 <xgerman_> Lol - since dougwig is driving that ;-)
19:25 <rm_work> lol
19:28 <rm_work> So basically what I can see is: dougwig leaves A10, everyone drops A10.
19:28 <rm_work> :P
19:59 <rm_work> I'm getting a lot of connection errors with the LB in the loop... but the amp seems to be underutilized still; haproxy isn't going above like 20% CPU and 10% RAM.
20:21 <xgerman_> Mmh, you fixed ulimit; it might be at the hypervisor - or is the backend without the LB faring better (aka doing twice the connections)? After all, you are going in and out
20:21 <xgerman_> in the amp.
20:22 <rm_work> yeah
20:22 <rm_work> Direct to LB doesn't get the connection errors.
20:23 <rm_work> eerrrr
20:23 <rm_work> Sorry, direct to member node.
20:23 <rm_work> To the LB is where the connect errors appear.
20:24 <rm_work> weighttp -n 100000 -c 6000 -t 12 -k http://$LB_IP/slow
20:24 <rm_work> finished in 163 sec, 265 millisec and 920 microsec, 612 req/s, 84 kbyte/s
20:24 <rm_work> requests: 100000 total, 100000 started, 100000 done, 99322 succeeded, 678 failed, 0 errored
20:24 <rm_work> weighttp -n 100000 -c 6000 -t 12 -k http://$MEMBER_IP/slow
20:24 <rm_work> finished in 52 sec, 310 millisec and 604 microsec, 1911 req/s, 265 kbyte/s
20:24 <rm_work> requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored
20:24 <rm_work> ^^ so that's not ideal
20:25 <rm_work> I don't know if my c/t numbers are way off, though.
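As a sanity check on those summary lines, the req/s figure is just total requests divided by elapsed wall-clock time; a quick reproduction of the two runs quoted above:

```python
def req_per_s(total_requests, sec, millisec, microsec):
    """Recompute weighttp's req/s figure from its 'finished in' line."""
    elapsed = sec + millisec / 1e3 + microsec / 1e6
    return total_requests / elapsed

# The two /slow runs quoted above (dominated by the backend's delay):
via_lb = req_per_s(100000, 163, 265, 920)  # ~612 req/s through the LB
direct = req_per_s(100000, 52, 310, 604)   # ~1911 req/s straight to the member
```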
20:25 <dayou_> johnsom: Could you do a pip list | grep futu on your devstack machine that has the problem restarting the health manager with the new patch?
20:26 <dayou_> See if the futures and futurist packages got updated recently; wondering if that was what broke it.
20:28 <dayou_> It's a change in futurist and funcsigs that I am suspecting.
20:37 <johnsom> 612 req/s????  Something is seriously wrong with that setup.....
20:38 <rm_work> :/
20:38 <johnsom> dayou_: Just a minute, will look
20:39 <johnsom> future (0.16.0)
20:39 <johnsom> futures (3.1.1)
20:39 <johnsom> futurist (1.4.0)
20:39 <johnsom> rm_work: I'm going to set up a devstack and give it a go too.
20:41 <rm_work> ^^ dayou_
20:42 <rm_work> If you want to look at performance data for a minute, I'm totally down to screenshare and walk through some tests / this testbed.
20:42 <rm_work> But if you want to get your own actual work done, I'm continuing on my own anyway.
20:47 <johnsom> Well, I will load up a clean stack and kick the tires, as I'm curious now.
20:47 <johnsom> Otherwise I'm mostly looking at zuulv3 stuff, but more dust needs to clear there IMO.
20:48 <rm_work> johnsom: load a clean stack for dayou_'s thing or for perf?
20:48 <rm_work> I assume perf is my env.
20:48 <johnsom> perf
20:49 <johnsom> The package answer was easy, as I still had that stack up and running.
20:49 <dayou_> Well, hopefully you can help test the health manager thing after perf.
20:50 <rm_work> I mean, I can just show you my env and run whatever you want, too.
20:50 <johnsom> dayou_: Sure, what is it you would like me to test?
20:51 <dayou_> Just load the health manager patch, try a restart, and see whether it works.
20:53 <dayou_> I see they upgraded the futurist requirement from >=0.11.0,!=0.15.0 to >=1.2.0 on Sep 7th.
20:54 <dayou_> I did recreate the stack on Sept 12th.
20:54 <dayou_> So that's why I am wondering if it might have been broken before then.
21:16 <johnsom> https://www.irccloud.com/pastebin/AEQGde4S/
21:16 <johnsom> rm_work ^^^^ On my lowly vmware workstation desktop devstack
21:16 <rm_work> hmmm
21:16 <rm_work> What command did you run?
21:16 <johnsom> ./weighttp -n 100000 -c 6000 -t 12 -k http://10.0.0.12
21:16 <rm_work> hmmm
21:16 <johnsom> Copied yours, though I think there is room for improvement there....
21:17 <rm_work> So is that with a LB or direct to member?
21:17 <johnsom> That is the LB VIP.
21:18 <rm_work> hmmmmm
21:18 <rm_work> oh, hold on
21:18 <rm_work> You're not using /slow.
21:18 <rm_work> Direct to member without /slow is:
21:18 <rm_work> finished in 3 sec, 794 millisec and 283 microsec, 26355 req/s, 3654 kbyte/s
21:18 <rm_work> requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored
21:18 <johnsom> Ok, that is more like it.
21:19 <rm_work> LB without /slow is:
21:19 <rm_work> finished in 15 sec, 152 millisec and 978 microsec, 6599 req/s, 915 kbyte/s
21:19 <rm_work> requests: 100000 total, 100000 started, 100000 done, 99986 succeeded, 14 failed, 0 errored
21:19 <rm_work> So still significantly worse.
21:19 <rm_work> Is that ... expected?
21:20 <johnsom> Not that low.
21:20 <rm_work> :/
21:20 <rm_work> haproxy is at ~80% CPU there,
21:20 <rm_work> still 7% memory.
21:21 <rm_work> lol, and systemd-journal is at 50%
21:22 <rm_work> finished in 72 sec, 304 millisec and 733 microsec, 11064 req/s, 1533 kbyte/s
21:22 <rm_work> requests: 800000 total, 800000 started, 800000 done, 799656 succeeded, 344 failed, 0 errored
21:22 <rm_work> ^^ increased request count
21:22 <johnsom> So /slow delays each of the go threads three seconds...
21:22 <rm_work> yes
21:22 <rm_work> To simulate "work" happening.
21:23 <johnsom> Well, that will tank the req/s.
21:23 <rm_work> Ah yeah, I guess so.
21:23 <rm_work> We're looking for max RPS.
21:23 <rm_work> So OK, not using /slow.
21:23 <rm_work> But why am I getting failures T_T
21:23 <johnsom> I only get them with /slow.
21:24 <rm_work> hmm
21:24 <johnsom> Likely because weighttp gives up waiting.
21:24 <rm_work> Well, I'm getting them here too.
21:24 <johnsom> https://www.irccloud.com/pastebin/Hl4OhTSc/
21:24 <rm_work> Lots of "Connection reset by peer".
21:25 <rm_work> If I take it down to 5000 concurrent, I don't see them, I think.
21:25 <rm_work> Ah no, still some, but fewer.
21:25 <rm_work> lol, and it finishes quicker.
21:26 <rm_work> Down to 4000 and again, finishes faster with fewer errors.
21:26 <rm_work> It always gets to 99% pretty quick and then sits...
21:26 <rm_work> Like the last few requests aren't finishing?
21:26 <rm_work> at -c 2000:
21:26 <rm_work> finished in 21 sec, 724 millisec and 755 microsec, 9206 req/s, 1276 kbyte/s
21:26 <rm_work> requests: 200000 total, 200000 started, 200000 done, 200000 succeeded, 0 failed, 0 errored
21:26 <rm_work> err sorry, 4000
21:27 <rm_work> Going below that seems to still be faster, lol.
21:27 <rm_work> at 2000 now:
21:27 <rm_work> finished in 19 sec, 180 millisec and 722 microsec, 10427 req/s, 1445 kbyte/s
21:27 <rm_work> requests: 200000 total, 200000 started, 200000 done, 200000 succeeded, 0 failed, 0 errored
21:27 <rm_work> Is that an issue on this side (client)?
21:27 <rm_work> Like, this machine sucks at generating so many requests?
21:27 <rm_work> Too many threads?
21:29 <rm_work> Seems like I am topping out around 10k req/s.
21:29 <johnsom> Yeah, you are asking for a lot of threads for cores you probably don't have.
21:29 <rm_work> Running like this:  weighttp -n 200000 -c 2000 -t 12 -k http://10.32.156.192
21:29 <rm_work> If I run on one box I get 10k.
21:30 <rm_work> If I run it simultaneously on two boxes, it takes twice as long on both and gets about 5k each.
21:31 <rm_work> Down to 8 and then 4 threads, and results are nearly identical on one box (around 10k rps).
21:31 <rm_work> Yeah, tweaking the -c and -t params around seems to pretty much always end with 10k rps,
21:32 <rm_work> and haproxy hovers around 80% ±10% CPU.
21:34 <rm_work> We looked at enabling haproxy on multi-core and decided it wasn't worth it? Or it caused problems? Or we would just leave it up to the deployer?
21:37 <johnsom> Well, long, long ago, the multi-proc model in haproxy had issues with peer sync and seamless restarts.  We never really went back and checked what all was fixed.
21:37 <johnsom> Frankly, we have done zero tuning.
21:37 <johnsom> It's all been about "make it work".
21:38 <rm_work> k
21:38 <rm_work> Anyway, unimportant.
21:38 <rm_work> I am thinking 10k is about the threshold
21:38 <rm_work> for what this env can handle.
21:38 <rm_work> That seems... a little low? But maybe that's what we get with this tuning level in our cloud.
21:38 <johnsom> It's odd that I'm seeing about the same.  You aren't running devstack, right?
21:38 <rm_work> No, this is production.
21:39 <rm_work> Real prod cloud here.
21:39 <johnsom> Yeah, so hmmm, seems like we are hitting something artificial.
21:39 <johnsom> Because, I mean, I'm running in vmware workstation on top of windows....
21:40 <rm_work> Reading through this guide and the previous two parts: https://medium.freecodecamp.org/how-we-fine-tuned-haproxy-to-achieve-2-000-000-concurrent-ssl-connections-d017e61a4d27
21:40 <rm_work> https://medium.freecodecamp.org/load-testing-haproxy-part-1-f7d64500b75d
21:40 <rm_work> and https://medium.freecodecamp.org/load-testing-haproxy-part-2-4c8677780df6
21:40 <rm_work> sockets?
21:40 <rm_work> (part 2)
21:41 <johnsom> I'm wondering about the cirros.
21:41 <rm_work> Well, I am not using cirros at all.
21:48 <johnsom> It looks like conntrack on my devstack host is playing a factor.
21:48 <rm_work> hmm
21:49 <rm_work> Ah, but given that I really am seeing about a 10k limit on the LB side, not the client...
22:04 <rm_work> eugh
22:04 <rm_work> Yeah, it's not the load-generating side that's bottlenecking now.
22:04 <rm_work> That's clear at least.
22:05 <johnsom> I think it might be in my case.  The amp isn't really sweating,
22:05 <johnsom> after I turn off logging, for example.
22:05 <rm_work> Ah, I wonder how much that affects things.
22:05 <rm_work> I did note that the logger was taking a lot of CPU.
22:06 <rm_work> 10k just seems like a low-ish number,
22:06 <rm_work> especially given that I can get better performance with direct-to-member.
22:06 <rm_work> I mean, is direct-to-member ALWAYS going to give better performance?
22:07 <rm_work> (in the case that what the member is doing is just statically returning one byte)
22:07 <johnsom> Yes, direct to member will always be higher.
22:09 <rm_work> k
22:09 <rm_work> I can get about 55k RPS direct-to-member, it seems.
22:09 <rm_work> The LB seems to really top out at 10k.
22:13 <rm_work> LOL, well,
22:13 <rm_work> if I remove all but one member,
22:13 <rm_work> the LB suddenly jumps to 25k RPS.
22:13 <rm_work> wtf
22:14 <rm_work> Did I have slow members throwing it off?
22:14 <rm_work> Actually, half the RPS makes sense, right? Because it has to do user-->haproxy<--member,
22:14 <rm_work> so it has to do twice the hops.
22:23 <rm_work> Adding a second member halves the RPS.
22:23 <rm_work> johnsom: ^^
22:23 <johnsom> Yeah, not sure on that.
22:23 <johnsom> Seems odd.
22:24 <rm_work> Yeah, both members test at 55k RPS.
22:25 <rm_work> Put one behind the LB, halve that to ~26k RPS.
22:25 <rm_work> Put a second, halve THAT to ~12k RPS.
22:26 <rm_work> I think it's the act of having to select a member at all?
22:26 <rm_work> Oh, maybe the algorithm?!?
22:26 <rm_work> Eh, this is ROUND_ROBIN, so nm, shouldn't be much calculation.
22:26 <rm_work> But I guess maybe there's SOME check that has to take place to decide where to send traffic?
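For what it's worth, round-robin member selection itself is trivially cheap; a toy Python sketch (not haproxy's implementation) of the rotation:

```python
from itertools import cycle

class RoundRobinPool:
    """Toy round-robin member selection: each pick is O(1), so the
    halving observed above is unlikely to come from the algorithm itself."""

    def __init__(self, members):
        self._rotation = cycle(members)  # endlessly repeats the member list

    def pick(self):
        return next(self._rotation)
```

With two members, picks simply alternate; any per-request cost is constant regardless of pool size.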
22:29 <rm_work> On the amp, I see this during load: http://paste.openstack.org/show/622353/
22:31 <rm_work> I guess "max concurrent connections" is something I wanted to test as well, which is maybe more meaningful than RPS.
22:31 <rm_work> But I'm not sure how to test that, besides noting how many I try to make before it starts giving connection errors?
22:31 <rm_work> Which I assume is the thing.
22:31 <johnsom> Anyone know the new cirros password?
22:31 <rm_work> Err, did it change?
22:31 <rm_work> It isn't cubswin;) anymore?
22:32 <johnsom> Trying to su in an image and getting nowhere.
22:32 <rm_work> errr
22:32 <rm_work> cubswin:)
22:32 <rm_work> I guess that's the cirros user password?
22:32 <rm_work> Can you not passwordless sudo?
22:33 <johnsom> Yeah, I'm trying to bump the ulimit with no luck.
22:33 <johnsom> Ah, yeah, never mind.  Just no bash alias there.
22:34 <rm_work> OH, yeah, that.
22:34 <rm_work> You have to sudo su -
22:34 <rm_work> and then do ulimit.
22:34 <rm_work> Or did you find another way?
22:34 <johnsom> Built myself a statically linked weighttp.
22:34 <rm_work> heh
22:35 <rm_work> My problem is definitely not client side.
22:35 <rm_work> It's definitely the LB limiting me to ~11k RPS.
22:36 <rm_work> I've been running weighttp from /dev/shm anyway.
22:36 <rm_work> I had to build my own because of course there's no CentOS package <_<
22:36 <johnsom> https://www.irccloud.com/pastebin/wquPhAUr/
22:37 <johnsom> Straight to the member web server.
22:37 <johnsom> So, something is fishy with cirros.
22:38 <rm_work> hmm
22:38 <rm_work> I mean,
22:38 <rm_work> load generation on the same box you're serving from...
22:38 <johnsom> No, I booted another.
22:39 <rm_work> Yeah, it's still on your devstack HV,
22:39 <rm_work> which is a piece of your windows machine <_<
22:39 <rm_work> box == your laptop
23:00 <johnsom> Yeah, so I hacked haproxy to just 503 (no members); still right around 10k.
23:00 <johnsom> Something outside is limiting it.
23:00 <rm_work> Going to try with Vegeta!
23:00 <rm_work> It has the appropriate power level for what I'm going for.
23:01 <johnsom> I'm thinking it's the security groups conntrack.
23:01 <rm_work> hmm
23:01 <rm_work> Well, I doubt that's the issue in my env?
23:01 <johnsom> Did you take the security group code out of octavia?
23:01 <johnsom> I mean, you are still running neutron, right?
23:02 <rm_work> no?
23:02 <rm_work> But like,
23:02 <rm_work> direct-to-member -> the member still has a security group,
23:03 <rm_work> and I get 55k to members.
23:03 <rm_work> They're all just VMs.
23:03 <rm_work> Also, I'm not limited to 10k;
23:03 <rm_work> my LB will do 25k with one member.
23:08 <rm_work> Vegeta seems to be a lot better tool, AFAICT.
23:17 <rm_work> Hmm, though I can't get the same request rate direct-to-member with it :/
23:19 <rm_work> Vegeta is showing me the same thing though -- about 11k is the best I can do
23:19 <rm_work> before everything goes to shit.
23:19 <johnsom> Yeah, my host running qemu is maxing out conntrackd.
23:20 <rm_work> hmm
23:22 <johnsom> -rw-------  1 root     root     599739930 Sep 29 16:22 conntrackd-stats.log
23:23 <johnsom> Well, that is probably part of it...
23:23 <johnsom> Sigh
23:23 <johnsom> Appears to be logging for each connection.
23:28 <rm_work> hmm
23:32 <rm_work> oh
23:32 <rm_work> How do we disable conntrack on the amps, exactly?
23:32 <rm_work> Isn't it slightly different on, like, centos7 vs ubuntu?
23:33 <rm_work> On my amp, if I do "lsmod" I definitely see nf_conntrack loaded.
23:34 <rm_work> johnsom: ^^
23:34 <johnsom> Yeah, I don't.
23:34 <johnsom> I don't think it's in the ubuntu cloud image.
23:35 <rm_work> Hmmmmm, crap.
23:35 <rm_work> I may need to disable it.
23:35 <johnsom> Yeah, try disabling it in one of your amps and see what you get.
23:36 <johnsom> I think you can just rmmod it and its friends.
23:36 <rm_work> ip_vs?
23:36 <rm_work> Let's see.
23:36 <johnsom> No, I don't think so.
23:36 <rm_work> I mean, YES, ip_vs is one.
23:37 <johnsom> Just do the conntrack one, and it should whine if there are friends.
23:37 <rm_work> Yes, ip_vs.
23:37 <johnsom> Hmm
23:38 <johnsom> Ugh, these ubuntu cloud images...  joydev module loaded, pata, iscsi, parport, sigh.
23:38 <rm_work> Time to add to our elements to do some blacklisting? :P
23:39 <johnsom> Well, it wastes so much disk space too.  Wish someone would pay me to just make a small OS image build system.  I have yet to see a distro that isn't bloated.
23:42 <rm_work> Removing the module appears to make zero difference :/
23:44 <johnsom> I think to get better numbers I would need to set up multiple VMs and pop the traffic out a provider network.  Get clients and servers out of the hands of neutron/qemu/etc.
23:56 <rm_work> Ahhh, ummmmmm,
23:56 <rm_work> I might have found a problem.
23:57 <rm_work> So haproxy seems to have a maxconn default??
23:57 <johnsom> Yeah, like 2000.
23:57 <rm_work> I added "maxconn 200000" to the config in global and defaults,
23:57 <rm_work> reloaded it,
23:57 <rm_work> and now it's getting a bit higher RPS,
23:57 <rm_work> but also ulimit errors.
23:57 <rm_work> I thought it was supposed to auto-adjust the process ulimit :/
23:57 <johnsom> I told you to increase your listener max conn.
23:57 <rm_work> Not sure how to fix it for haproxy.
23:58 <rm_work> The default listener max conn is *unlimited* according to our docs.
23:58 <johnsom> Yeah, I think I opened a bug against that.
23:58 <rm_work> So what am I supposed to increase it to <_<
23:58 <johnsom> A large number
23:58 <johnsom> grin
23:59 <rm_work> k
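A sketch of the haproxy knobs being discussed at the end here; the values are illustrative for a load test, not Octavia's shipped defaults. When run as root, haproxy derives its own file-descriptor limit from the global maxconn, so the ulimit errors above suggest that derivation was not keeping up.

```
# Hedged illustration only - not the Octavia-rendered configuration.
global
    maxconn 200000       # process-wide ceiling; the compiled-in default is ~2000
    # ulimit-n 400000    # explicit override if the maxconn-based auto-adjust misbehaves

defaults
    maxconn 200000       # per-proxy ceiling; frontends inherit this

frontend example_listener
    bind *:80
    maxconn 200000       # per-listener cap, the knob exposed via the listener API
```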

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!