Tuesday, 2020-09-22

00:34 *** jawad_axd has joined #openstack-monasca
00:38 *** jawad_axd has quit IRC
01:15 *** jawad_axd has joined #openstack-monasca
01:19 *** jawad_axd has quit IRC
01:36 *** jawad_axd has joined #openstack-monasca
01:41 *** jawad_axd has quit IRC
01:57 *** jawad_axd has joined #openstack-monasca
02:01 *** jawad_axd has quit IRC
03:03 *** mbindlish has joined #openstack-monasca
05:58 *** vishalmanchanda has joined #openstack-monasca
06:22 *** witek has joined #openstack-monasca
07:22 *** jawad_axd has joined #openstack-monasca
07:37 *** dougsz has joined #openstack-monasca
07:40 *** tosky has joined #openstack-monasca
09:32 *** k_mouza has joined #openstack-monasca
09:34 <openstackgerrit> kuang congxian proposed openstack/monasca-api master: Remove six.add_metaclass  https://review.opendev.org/753269
09:52 <openstackgerrit> kuang congxian proposed openstack/monasca-api master: Remove six.PY2 and six.PY3  https://review.opendev.org/753274
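These patches are part of the community-wide removal of the six compatibility library now that Python 2 support is gone. A minimal sketch of the kind of change involved (BaseRepo and list_metrics are hypothetical names, not taken from the actual diffs):

    import abc

    # Before: @six.add_metaclass(abc.ABCMeta) decorating the class, plus
    # "if six.PY2: ... else: ..." branches in the body.
    # After: the Python 3 metaclass keyword, with the PY2 branches dropped.
    class BaseRepo(metaclass=abc.ABCMeta):

        @abc.abstractmethod
        def list_metrics(self):
            """Concrete repositories must implement this."""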
10:02 *** adriancz has joined #openstack-monasca
10:35 *** k_mouza has quit IRC
10:44 <witek> adriancz, dougsz: could we get monasca-transform deprecation merged, please?
10:44 <witek> https://review.opendev.org/751984
10:44 *** k_mouza has joined #openstack-monasca
10:45 <adriancz> done
10:45 <witek> thanks
10:45 <openstackgerrit> Merged openstack/monasca-transform master: Remove project content on master branch  https://review.opendev.org/751984
10:48 *** k_mouza has quit IRC
10:48 *** k_mouza has joined #openstack-monasca
11:17 <openstackgerrit> Witold Bedyk proposed openstack/monasca-analytics master: Disable Zuul jobs  https://review.opendev.org/753288
11:18 <openstackgerrit> Witold Bedyk proposed openstack/monasca-analytics master: Add Monasca-analytics use-cases with Monasca  https://review.opendev.org/656917
11:19 <openstackgerrit> Witold Bedyk proposed openstack/monasca-analytics master: WIP Add fard disks failure detection example  https://review.opendev.org/692676
12:59 *** bandorf has joined #openstack-monasca
13:00 <witek> hello everyone
13:00 <bandorf> hello
13:00 <witek> hi Matthias
13:01 <witek> do we have any topics to discuss?
13:01 <witek> the agenda for today is empty
13:01 <bandorf> from my side, currently not. Did Adrian talk to you?
13:02 <witek> yes, but he hasn't mentioned any topics
13:02 <bandorf> I'm currently mainly working on internal org topics, thus no Monasca-related topics
13:03 <bandorf> from my side
13:03 <witek> no worries, anyone else around?
13:03 <bandorf> let me ping Adrian
13:04 <tosky> now that the monasca-transform content has been cleaned up, there are no jobs based on legacy-dsvm-base in master and (soon) victoria
13:04 <adriancz> hi
13:05 <tosky> there are still 3 other legacy jobs, which don't depend on devstack-gate, so they are less likely to suddenly break, but they should be ported at some point
13:05 <witek> these are the Java build and publish jobs, right?
13:05 <tosky> yep
13:06 <tosky> legacy-monasca-common-maven-build, legacy-monasca-persister-maven-build and legacy-monasca-persister-maven-build, last time I checked
13:06 <witek> also legacy-monasca-thresh-maven-build
13:07 <tosky> oops, right, wrong copy-and-paste
13:07 <tosky> I copied the same one twice
13:07 <witek> we should drop legacy-monasca-persister-maven-build, we've deprecated the Java implementation in monasca-persister
13:08 <witek> let me propose a change for that
13:09 <tosky> thanks!
13:10 <tosky> as I already said, from my point of view (the goal's point of view), as long as the legacy jobs disappear it's fine; either because they are ported, or because they are removed
13:10 <witek> does the goal include only DevStack-based jobs, or any legacy Zuul jobs?
13:11 <witek> I know we should migrate anyway, it's just for info
13:12 *** k_mouza has quit IRC
13:13 <witek> on the deprecation track: there are two documentation changes in the monasca-analytics repo which I would like to merge before retiring the repo
13:13 <witek> I wasn't able to merge them because the CI was failing, so I proposed disabling the Zuul jobs
13:14 <witek> https://review.opendev.org/753288
13:14 *** k_mouza has joined #openstack-monasca
13:14 <witek> adriancz: could we please merge? ^^
13:15 <adriancz> ok, I will review that
13:15 <adriancz> I can't merge this
13:15 <witek> thanks
13:15 <witek> oops
13:15 <tosky> witek: so, the goal is about all legacy jobs, but the jobs which depend on devstack-gate are more relevant, because they *will* break for sure
13:15 <adriancz> I'm not a core reviewer :(
13:16 <witek> thanks tosky
13:16 <witek> adriancz: indeed, the core list is very short
13:16 <tosky> so you don't strictly need to rush, but having native jobs usually means less maintenance burden
13:16 <tosky> it's for your own good
13:17 <witek> tosky: do you know if there are any other Java-related jobs existing?
13:18 <tosky> witek: other jobs which deal with Java in other repositories? Uhm, I don't know
13:19 <tosky> you may check if there are Java-based base jobs in opendev.org/zuul/zuul-jobs.git, and if that fails, well, there is codesearch :)
13:19 <witek> yep, thanks
13:20 <witek> any other topics for today?
13:22 <witek> if not, thanks and see you next week
13:22 <bandorf> bye, everybody
13:23 <adriancz> thanks
13:23 <openstackgerrit> Merged openstack/monasca-analytics master: Disable Zuul jobs  https://review.opendev.org/753288
13:30 *** bandorf has quit IRC
13:32 *** CBR09 has joined #openstack-monasca
13:34 <CBR09> hi witek, are you around?
13:34 <witek> yes
13:35 <CBR09> I have a question about Monasca performance
13:35 <CBR09> I'm wondering about the performance of Monasca at large scale, especially in validating tokens between monasca-api and Keystone
13:35 <CBR09> is there any performance benchmark of Monasca and Keystone? (I saw some presentations, but they had very little information)
13:36 <witek> it works like any other OpenStack service
13:36 <witek> normally memcached is configured to cache the tokens and speed things up
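For reference, token caching is configured in the keystone_authtoken section that monasca-api consumes via keystonemiddleware; a minimal sketch, with placeholder host names:

    [keystone_authtoken]
    www_authenticate_uri = http://keystone.example.com:5000
    auth_url = http://keystone.example.com:5000
    auth_type = password
    # cache validated tokens in memcached instead of asking Keystone
    # on every request
    memcached_servers = 127.0.0.1:11211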
13:36 <CBR09> yeah, I've set up memcached for caching
13:37 <CBR09> but I still get low performance
13:37 <CBR09> around 2000 requests/s
13:37 <CBR09> with 16 cores, 32G RAM
13:37 <CBR09> I tried increasing from 16 to 32 cores, but it's still around 2k requests/s
13:38 <witek> what's your setup? how many Monasca and Keystone nodes do you have?
13:38 <CBR09> I don't know which component is the bottleneck
13:38 <CBR09> I've tried on one node with 16 cores
13:38 <CBR09> using docker-compose to bootstrap Monasca
13:39 <CBR09> and set up memcached for caching + fernet tokens for Keystone
13:40 <witek> is Keystone the bottleneck, or the persister?
13:41 <CBR09> I don't know whether the bottleneck is monasca-api or Keystone
13:41 <CBR09> how can I identify that?
13:43 <CBR09> I don't think it's the persister, because posting metrics only involves monasca-api and Keystone
13:44 <witek> how do you measure the performance?
13:45 <CBR09> I wrote a Go program (using goroutines) that sends requests to monasca-api as fast as possible
13:46 <CBR09> and calculates the average requests/s
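The benchmark tool itself isn't shown in the log. A rough Python equivalent of such a load generator, for illustration only (the URL, token, and worker count are placeholders; monasca-api answers POST /v2.0/metrics with 204 on success):

    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    API = "http://monasca.example.com:8070/v2.0/metrics"  # placeholder
    TOKEN = "..."   # a valid Keystone token
    WORKERS = 300   # roughly matching the ~300 goroutines mentioned later
    DURATION = 30   # seconds

    def worker(_):
        session = requests.Session()
        count = 0
        deadline = time.time() + DURATION
        while time.time() < deadline:
            metric = {
                "name": "benchmark.metric",
                "dimensions": {"hostname": "bench-01"},
                "timestamp": int(time.time() * 1000),  # milliseconds
                "value": 42.0,
            }
            r = session.post(API, json=metric,
                             headers={"X-Auth-Token": TOKEN})
            count += r.status_code == 204
        return count

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        total = sum(pool.map(worker, range(WORKERS)))

    print("avg requests/s:", total / DURATION)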
13:46 <witek> does running multiple instances change the result?
13:49 <CBR09> I didn't try it, but assuming 3 instances
13:49 <CBR09> each doing 2k requests/s, the total would be 6k requests/s, which is still low
13:50 <CBR09> I think a node with 16 cores and 32G RAM should handle more than 2k requests/s
13:50 <witek> but it would indicate whether the bottleneck is the API or your test program
13:52 <CBR09> yes, I agree with that, it would indicate whether the bottleneck is the API or my client
13:54 <CBR09> ah, in the performance test I saw a strange thing
13:55 <CBR09> I've set the flag legacy_kafka_client_enabled=false
13:55 <CBR09> to use the new confluent-kafka client to improve performance
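That flag selects the Kafka client library used by monasca-api; a sketch of the corresponding monasca-api.conf entry (the [kafka] section name is an assumption based on recent releases):

    [kafka]
    # false = publish with the newer confluent-kafka client instead of
    # the legacy kafka-python client
    legacy_kafka_client_enabled = false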
13:56 <CBR09> but it decreased performance compared to before, I don't know why
14:02 <witek> you could try setting log.message.format.version=0.9.0.0 in Kafka's server.properties
14:04 <CBR09> my Kafka server is 2.4, and I should put log.message.format.version=0.9.0.0 into the Kafka server.properties?
14:08 <witek> that's because monasca-thresh still works with the old message format, or you could just skip deploying monasca-thresh
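A sketch of that broker-side setting in server.properties (only the one property comes from the log; the comment is an interpretation of witek's explanation):

    # keep messages on disk in the 0.9 wire format so that old
    # consumers such as monasca-thresh can still read them
    log.message.format.version=0.9.0.0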
14:09 <CBR09> for now I'm just focusing on requests/s when posting metrics to monasca-api
14:10 <CBR09> and I don't know why the new confluent-kafka client causes the performance to decrease
14:17 <witek> seems strange to me as well, it would be good to eliminate all other possible error sources
14:17 <witek> try to run only the API and Kafka and see how the results change
14:18 <CBR09> what do you mean, try to run only the API and Kafka?
14:19 <CBR09> do I need to stop monasca-thresh, persister and influxdb?
14:19 <witek> monasca-api and Kafka, without persister and thresh
14:19 <witek> correct
14:20 <CBR09> yeah, I will give it a try
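With a docker-compose based deployment, isolating the API and Kafka could look like the following; the service names are assumptions about the compose file, not confirmed in the log:

    # stop the downstream consumers so only monasca-api talks to Kafka
    docker-compose stop monasca-persister monasca-thresh influxdb

    # ... run the benchmark ...

    docker-compose start monasca-persister monasca-thresh influxdb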
14:24 <CBR09> ah, is there any way to disable Keystone auth when posting metrics?
14:25 <witek> no, but I think it would be a useful feature (for testing)
14:33 <CBR09> yes
14:35 <dougsz> sorry, I was head-down; CBR09, have you looked at the per-process CPU load?
14:36 <dougsz> is your Go program single-threaded and maxing out a core?
14:36 *** k_mouza has quit IRC
14:37 <CBR09> my Go program uses 8 cores, with around 300 goroutines running on those 8 cores
14:38 <dougsz> well, probably not that then :D
14:38 <CBR09> I've tuned the Keystone workers + Monasca gunicorn workers and it just increased the max to 2.5k :D
14:43 *** k_mouza has joined #openstack-monasca
14:45 <CBR09> why not, dougsz? :D
14:51 <dougsz> so 2.5k req/s with 100 metrics in each is 250k metrics/second. That is close to the limit of InfluxDB.
14:52 <CBR09> ah, my Go program just sends 1 metric in each request, so that means 2.5k requests/s ~ 2.5k metrics/s
14:53 <dougsz> understood, I was just thinking about real-world use cases
14:54 <CBR09> oh, I see, thanks for that point
14:54 <CBR09> can you share a link about the limits of InfluxDB
14:54 <CBR09> on the number of metrics/s?
14:55 <dougsz> https://docs.influxdata.com/influxdb/v1.8/guides/hardware_sizing/#single-node-or-cluster
14:56 <dougsz> 750k metrics/s claimed there
14:56 <dougsz> I've found that to get to ~100k metrics/s you need to run quite a few persisters
14:56 <dougsz> oh yeah, how many partitions do you have in Kafka?
14:56 *** jawad_axd has quit IRC
14:57 <dougsz> on the metrics topic?
14:57 <CBR09> let me check
14:59 <CBR09> but if we're close to the limit of InfluxDB, we need more time-series backends or something like load balancing
14:59 <CBR09> so we still need more performance when posting metrics to monasca-api :D
15:00 *** mbindlish has quit IRC
15:01 <CBR09> if it's just 2.5k req/s, that's acceptable for metrics, but for logs I think it's very low
15:01 <dougsz> how many nodes are you hoping to monitor?
15:01 <CBR09> to answer how many partitions on the metrics topic: metrics:64:1, so 64 partitions
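For reference, the partition count can be verified against the broker with the stock Kafka CLI (the broker address is a placeholder):

    # describe the metrics topic on a Kafka 2.4 broker
    kafka-topics.sh --describe --topic metrics \
        --bootstrap-server kafka.example.com:9092

The shorthand "metrics:64:1" reads as topic:partitions:replication-factor.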
15:02 <dougsz> we batch logs in Kolla Ansible and post every 30 seconds or so, so there is a lot of capacity
15:03 <dougsz> ok, sounds like you have plenty of partitions for sharing the work out
15:03 <dougsz> I was wondering if you just had 1 or something
15:04 <dougsz> and I assume you are using Fernet tokens?
15:04 <dougsz> (not UUID)
15:04 <CBR09> "how many nodes are you hoping to monitor?" => we want to monitor as many nodes as possible, around 10k requests/s
15:04 <CBR09> yes
15:04 <CBR09> I'm using Fernet tokens and memcached for caching
15:05 <dougsz> ok, that is good
15:06 <dougsz> and which processes are pinning cores? Sorry if I missed it
15:06 <dougsz> or rather, maxing out cores
15:07 <CBR09> we don't pin any processes to cores, how can I do that?
15:09 <CBR09> "we batch logs in Kolla Ansible and post every 30 seconds or so" => we have a log rate of ~40-50k req/s, so it's impossible to use Monasca
15:10 <CBR09> when using Kafka behind Monasca + Keystone, I saw the performance decrease too much
15:20 <dougsz> wow. Sorry, above I meant which processes are using 100% CPU. I guess you see Keystone?
15:20 <dougsz> you should be able to self-monitor to graph it
15:24 <CBR09> right now I'm using htop to view CPU usage, and I see the monasca-api gunicorn processes consuming the most CPU
15:26 <CBR09> I will set up self-monitoring to graph it
15:26 <CBR09> to see clearly the CPU usage of Keystone and monasca-api
15:28 <dougsz> sounds good. You've compared against Apache/wsgi in Kolla?
15:29 <CBR09> I'm using docker-compose to bootstrap Monasca
15:29 <CBR09> and I see Keystone runs with uwsgi
15:29 <CBR09> any recommendation about that?
15:29 <CBR09> ah, I see many warnings in the monasca-api logs
15:29 <CBR09> 2020-09-22 15:28:23,936.936 42 WARNING urllib3.connectionpool [req-a3ea5df2-d554-4e94-b707-5d7cb3872419 559fa14ea23345a19b7f08885bd6c9eb d419ea8d4b544e2cb7dec129ff241b77 - default default] Connection pool is full, discarding connection: keystone: queue.Full
15:30 <dougsz> ah, I wondered how gunicorn compares to wsgi in Kolla for the API, same for Keystone
15:31 <dougsz> hmm, I wonder if you hit max connections with memcached?
15:31 <dougsz> we saw something like that for Neutron
15:32 <CBR09> ah, I haven't benchmarked Python web servers like uwsgi and gunicorn
15:32 <CBR09> but I searched for blogs
15:32 <CBR09> https://www.appdynamics.com/blog/engineering/a-performance-analysis-of-python-wsgi-servers-part-2/
15:34 <dougsz> yeah, saw that, no time to read it carefully, but wondered if that was the consensus
15:34 <CBR09> "I wonder if you hit max connections with memcached?" => let me check
15:36 <CBR09> if Keystone is the bottleneck, next time we will try running Keystone behind Apache mod_wsgi, because uwsgi's performance is worse according to the blog above
15:36 <dougsz> https://pastebin.com/4BLYKwKL
15:36 <dougsz> you could try adding that setting in the Monasca API?
15:37 <dougsz> if you are seeing an issue with using all memcached connections
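The pastebin contents aren't preserved; given the later remarks about a "Python 2 only" warning in the sample config, the setting was most likely keystonemiddleware's advanced memcached pool. A sketch, with illustrative values:

    [keystone_authtoken]
    # pool memcached connections instead of opening one per thread;
    # the sample-config note claiming this is Python 2 only appears
    # to be outdated (see the discussion below)
    memcache_use_advanced_pool = true
    # grow the pool beyond the default of 10 if connections run out
    memcache_pool_maxsize = 50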
15:37 <CBR09> thank you, I will try it for the above errors
15:38 <dougsz> np
15:38 <CBR09> I see in the memcached logs: too many open files
15:38 <CBR09> I will increase that limit and set your config
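"Too many open files" usually means memcached has hit its file-descriptor and connection caps; a sketch of raising both (the numbers are arbitrary examples):

    # raise the file-descriptor limit for the memcached process
    ulimit -n 65536

    # -c caps simultaneous connections (default 1024); -m is cache
    # memory in MB
    memcached -c 8192 -m 1024 -u memcache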
15:39 <dougsz> interesting
15:56 *** witek has quit IRC
16:08 *** irclogbot_0 has quit IRC
16:09 *** irclogbot_2 has joined #openstack-monasca
16:17 <CBR09> that config only works on Python 2.x :(
16:21 <dougsz> CBR09: interesting
16:21 <dougsz> for Neutron on CentOS 8 + py3 it was fine, despite the warning
16:22 <dougsz> you see it cause a failure in the logs?
16:22 <CBR09> really? I will try it =)), I just saw the warning in the sample config file
16:23 <dougsz> yeah, we didn't fully investigate why the warning note is no longer valid, but it certainly seems to work on py3
16:25 *** k_mouza has quit IRC
16:27 <CBR09> thank you, I will try it
16:31 <dougsz> CBR09: thanks, please share your results, I am interested :)
16:32 <CBR09> tomorrow I will share the results with you, it's midnight in my country now :))
16:33 <CBR09> I guess I need to get some sleep
16:33 <dougsz> sleep is good! bye for now
16:33 <CBR09> thank you for your help, I will share them soon
16:33 <CBR09> bye
16:33 *** dougsz has quit IRC
16:34 *** k_mouza has joined #openstack-monasca
16:35 *** CBR09 has quit IRC
16:38 *** k_mouza has quit IRC
16:42 *** k_mouza has joined #openstack-monasca
16:46 *** k_mouza has quit IRC
16:51 *** k_mouza has joined #openstack-monasca
17:11 *** k_mouza has quit IRC
17:18 *** k_mouza has joined #openstack-monasca
17:23 *** k_mouza has quit IRC
17:37 *** k_mouza has joined #openstack-monasca
17:41 *** k_mouza has quit IRC
17:46 *** k_mouza has joined #openstack-monasca
17:51 *** k_mouza has quit IRC
18:58 *** vishalmanchanda has quit IRC
21:09 *** tosky_ has joined #openstack-monasca
21:09 *** k_mouza has joined #openstack-monasca
21:10 *** tosky is now known as Guest33200
21:10 *** tosky_ is now known as tosky
21:14 *** k_mouza has quit IRC
22:04 *** gmann has quit IRC
22:04 *** adriancz has quit IRC
22:06 *** gmann has joined #openstack-monasca
22:06 *** adriancz has joined #openstack-monasca
22:48 *** tosky has quit IRC
23:09 *** gokhani has quit IRC
