*** openstack has joined #openstack-ceilometer | 02:17 | |
*** openstackgerrit has joined #openstack-ceilometer | 02:24 | |
*** llu_linux is now known as llu | 02:41 | |
openstackgerrit | Fengqian.gao proposed a change to openstack/ceilometer: Add pagination support for sqlalchemy database https://review.openstack.org/35454 | 05:32 |
openstackgerrit | Fengqian.gao proposed a change to openstack/ceilometer: Change pagination related methods of mongodb and db2 https://review.openstack.org/41869 | 05:32 |
openstackgerrit | Jenkins proposed a change to openstack/ceilometer: Imported Translations from Transifex https://review.openstack.org/60154 | 06:04 |
*** SergeyLukjanov has joined #openstack-ceilometer | 07:12 | |
*** SergeyLukjanov has quit IRC | 07:41 | |
*** SergeyLukjanov has joined #openstack-ceilometer | 07:48 | |
openstackgerrit | ChangBo Guo proposed a change to openstack/ceilometer: Don't need session.flush in context managed by session https://review.openstack.org/60166 | 07:54 |
*** SergeyLukjanov has quit IRC | 08:36 | |
*** eglynn-afk has joined #openstack-ceilometer | 09:03 | |
openstackgerrit | Mehdi Abaakouk proposed a change to openstack/ceilometer: [WIP] replace oslo.rpc by oslo.messaging https://review.openstack.org/57457 | 09:29 |
openstackgerrit | Lianhao Lu proposed a change to openstack/ceilometer: Fixed various unit test cases for alarms https://review.openstack.org/60186 | 09:35 |
openstackgerrit | Mehdi Abaakouk proposed a change to openstack/ceilometer: [WIP] replace oslo.rpc by oslo.messaging https://review.openstack.org/57457 | 09:49 |
*** eglynn-afk is now known as eglynn | 10:16 | |
*** anteaya has joined #openstack-ceilometer | 10:59 | |
*** yfujioka has quit IRC | 12:27 | |
*** thomasem has joined #openstack-ceilometer | 12:37 | |
*** nsaje has joined #openstack-ceilometer | 12:45 | |
*** gordc has joined #openstack-ceilometer | 13:06 | |
openstackgerrit | Mehdi Abaakouk proposed a change to openstack/ceilometer: Replace oslo.rpc by oslo.messaging https://review.openstack.org/57457 | 13:09 |
sileht | jd__, if you can take a look to the oslo.messaging stuffs | 13:13 |
jd__ | sure | 13:13 |
sileht | jd__, https://review.openstack.org/#/c/57457/ and https://review.openstack.org/#/q/status:open+project:openstack/oslo.messaging+branch:master+topic:bp/notification-subscriber-server,n,z | 13:13 |
sileht | thx :) | 13:14 |
*** SergeyLukjanov has joined #openstack-ceilometer | 13:26 | |
*** gordc has quit IRC | 13:26 | |
thomasem | Hey everyone! | 13:32 |
thomasem | Hey, jd__, eglynn: this patch set appears ready to go, could we get some eyes on it? Wanting to get this piece in soon for the dependent patch set that's about to be up for review. :) https://review.openstack.org/#/c/42713/? | 13:42 |
jd__ | thomasem: definitely on my too long todo list :( | 13:43 |
eglynn | thomasem: I'll try to take a look at it before EoD | 13:43 |
jd__ | probably not today since I'm conferencing | 13:43 |
thomasem | jd__, eglynn: Thanks! I appreciate the help. | 13:44 |
thomasem | jd__, Understandable | 13:44 |
*** nprivalova has joined #openstack-ceilometer | 13:55 | |
nprivalova | jd__, hi! I have a question for you about several instances of the collector in perf tests. ping me if you have time | 13:56 |
jd__ | nprivalova: likely not today I think, but you can mail me if you want or we can try to chat tomorrow :) | 13:57 |
nprivalova | jd__, ok. I'll try to ask someone else :) And if there's no result I'll mail you | 13:58 |
jd__ | sure, I'm interested :) | 13:58 |
nprivalova | guys, I have a lab with 3 controllers and 200 computes. There is an HA MySQL installation on the controllers. I installed ceilometer only on one controller and on all computes, and I believe that if I install ceilometer on 2 more controllers, performance will be better. As I understand it, I may start 3 instances of the collector instead of 1. Am I right? | 14:04 |
thomasem | Can you clarify this statement: "I installed ceilometer only on one controller and on all computes"? | 14:12 |
thomasem | As long as the collectors are hitting the same queue (so it round-robins) I think that's the intended way for it to work. | 14:12 |
thomasem | by hitting I mean consuming from | 14:12 |
nprivalova | so I have ceilometer-api, ceilometer-agent-central, ceilometer-collector and ceilometer-agent-compute running on one controller, and ceilometer-agent-compute on all computes | 14:12 |
thomasem | Gotcha | 14:12 |
nprivalova | looks like if I run ceilometer-collector on 2 more controllers, they'll start to process more messages from the queue and the DB will be loaded more | 14:12 |
thomasem | Yeah, the DB needs to be able to keep up | 14:12 |
thomasem | with multiple connections | 14:12 |
thomasem | if you have 10 subsequent messages on the queue, and 3 collectors, this is the intended spread, I think: | 14:12 |
thomasem | collector_01 - 1,4,7,10 | 14:12 |
thomasem | collector_02 - 2,5,8 | 14:12 |
thomasem | collector_03 - 3,6,9 | 14:12 |
nprivalova | yep, I believe that db configured to work with multiple connections. Galera is used there | 14:12 |
thomasem | Cool. The addition of collectors (assuming there are always unconsumed messages in the queue) will likely increase the load on your DB, so you could shift the bottleneck. | 14:12 |
thomasem | if there is one | 14:12 |
thomasem | :) | 14:12 |
thomasem | But at some point the MQ service can't go any faster either | 14:12 |
thomasem | Anywho, to answer the question, I believe the collector is designed to share a queue with other collectors to process more messages faster. | 14:13 |
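The round-robin spread thomasem describes above can be sketched as a small, self-contained simulation. This is an illustration of how an AMQP broker typically distributes messages across consumers of a shared queue, not the actual ceilometer or broker API; the function and collector names are made up.

```python
from collections import defaultdict

def round_robin_dispatch(messages, collectors):
    """Assign each message to the next collector in turn, the way a
    broker spreads work across consumers sharing one queue.
    (Illustrative only -- a real broker does this internally.)"""
    assignments = defaultdict(list)
    for i, msg in enumerate(messages):
        assignments[collectors[i % len(collectors)]].append(msg)
    return dict(assignments)

# 10 subsequent messages and 3 collectors, as in the example above:
spread = round_robin_dispatch(
    list(range(1, 11)),
    ["collector_01", "collector_02", "collector_03"],
)
# collector_01 gets 1,4,7,10; collector_02 gets 2,5,8; collector_03 gets 3,6,9
```

This matches the intended spread described in the conversation: each additional collector consuming from the same queue takes an equal share of the messages.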
nprivalova | I just want to understand whether I can start working on the bp about "make getting the data from the db faster". At the HK summit there was a lot of discussion about the MQ bottleneck, so I was confused. "Should I start improving 'getting' performance before resolving this bottleneck?" - that is my concern | 14:26 |
nprivalova | now I've run a test with 200 instances up and a 5-second polling interval. It worked ok, about 9,000,000 entries in the db. Today I'm planning to run at least 1000 instances. But I didn't have alarms and events | 14:27 |
thomasem | Ah | 14:59 |
thomasem | So, nprivalova, you might want to have a look at some of the DB performance testing we're doing. We are looking to scale to 1,000,000 messages/day | 14:59 |
thomasem | https://etherpad.openstack.org/p/ceilometer-data-store-scale-testing | 15:00 |
thomasem | So we're taking a look at the drivers against various data stores and seeing what we come up with. This goes directly to the DB layer, so it's not looking at the collector speed. Though, I am interested about bottlenecks further up the stack. | 15:00 |
thomasem | The speed of my driver won't mean squat without the MQ and the collector keeping up. :) | 15:02 |
thomasem | collector(s) | 15:02 |
nprivalova | thomasem, I know about your investigations. My focus is mostly MQ performance. DB performance is a really interesting piece, so I hope you get interesting results :) btw, do you measure 'write-speed'? Because 'read-speed' depends very much on the implementation | 15:05 |
thomasem | All we're testing right now is write-speed. | 15:06 |
thomasem | We're going to worry about read-speed after that (which is lesser priority than writes). | 15:06 |
thomasem | Since writes HAVE to keep up with the flow of messages, queries can take a little more time. | 15:07 |
thomasem | nprivalova, Cool. I wasn't sure you were. :) | 15:07 |
thomasem | nprivalova, pretty much, I'm taking a look at various deployment configurations and backends and just hammering it with millions of generated events (from a pool, pseudo-random) and getting RAM, disk I/O, read speed, CPU utilization, etc. | 15:09 |
thomasem | to find out what needs to be tuned | 15:09 |
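The write-speed testing thomasem describes (hammering a backend with millions of pseudo-random generated events and measuring throughput) can be sketched roughly as below. The event fields, `write_fn` sink, and helper names here are illustrative assumptions, not ceilometer's actual storage-driver interface or the etherpad's test harness.

```python
import random
import string
import time

def generate_event(rng):
    # Pseudo-random event drawn from a small pool, loosely mimicking
    # the generated test events described above (fields are made up).
    return {
        "event_type": rng.choice(
            ["compute.instance.create", "compute.instance.delete"]
        ),
        "message_id": "".join(rng.choices(string.hexdigits, k=16)),
        "timestamp": time.time(),
    }

def measure_write_speed(write_fn, n_events, seed=42):
    """Time n_events calls to write_fn and return events per second.
    Events are pre-generated so only write time is measured."""
    rng = random.Random(seed)
    events = [generate_event(rng) for _ in range(n_events)]
    start = time.perf_counter()
    for ev in events:
        write_fn(ev)
    elapsed = time.perf_counter() - start
    return n_events / elapsed if elapsed > 0 else float("inf")

# Example with an in-memory sink standing in for a real storage driver:
sink = []
rate = measure_write_speed(sink.append, 10_000)
```

In a real run `write_fn` would be the storage driver's record method, and you would watch RAM, disk I/O, and CPU alongside the events/second figure, as described in the conversation.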
openstackgerrit | Alexei Kornienko proposed a change to openstack/ceilometer: Added profiler notification plugin https://review.openstack.org/60262 | 15:14 |
openstackgerrit | Alexei Kornienko proposed a change to openstack/ceilometer: Added profiler notification plugin https://review.openstack.org/60262 | 15:38 |
*** gordc has joined #openstack-ceilometer | 15:44 | |
*** SergeyLukjanov has quit IRC | 15:54 | |
*** kwhitney has quit IRC | 16:08 | |
openstackgerrit | John Herndon proposed a change to openstack/ceilometer: Event Storage Layer https://review.openstack.org/57304 | 16:12 |
*** gordc has quit IRC | 16:18 | |
*** nprivalova has quit IRC | 16:22 | |
*** jdob has joined #openstack-ceilometer | 16:28 | |
*** SergeyLukjanov has joined #openstack-ceilometer | 16:30 | |
*** SergeyLukjanov is now known as _SergeyLukjanov | 16:31 | |
*** _SergeyLukjanov is now known as SergeyLukjanov | 16:31 | |
*** SergeyLukjanov is now known as _SergeyLukjanov | 16:32 | |
*** _SergeyLukjanov is now known as SergeyLukjanov | 16:33 | |
*** gordc has joined #openstack-ceilometer | 16:33 | |
*** litong has joined #openstack-ceilometer | 16:40 | |
*** shadower has joined #openstack-ceilometer | 16:42 | |
shadower | is there documentation describing how to install ceilometer in a multi-node setup? I checked docs.openstack.org's architecture.html and install/manual.html but I'm still unclear where each service goes | 16:44 |
shadower | e.g. if I have a bunch of nova compute nodes and a single node with keystone, scheduler, the databases, etc. | 16:44 |
*** litong has quit IRC | 16:44 | |
shadower | I'd want to put the compute agent onto each compute node and the ceilometer-api on the controller node | 16:44 |
shadower | but what about the notification agent, collector and all the other services? | 16:44 |
*** litong has joined #openstack-ceilometer | 16:46 | |
*** kwhitney has joined #openstack-ceilometer | 17:02 | |
*** herndon has joined #openstack-ceilometer | 17:11 | |
openstackgerrit | litong01 proposed a change to openstack/ceilometer: add more test cases to improve the test code coverage #5 https://review.openstack.org/49802 | 18:06 |
openstackgerrit | litong01 proposed a change to openstack/ceilometer: test code should be excluded from test coverage summary https://review.openstack.org/60309 | 18:32 |
*** thomasm_ has joined #openstack-ceilometer | 19:13 | |
*** thomasem has quit IRC | 19:14 | |
*** eglynn has quit IRC | 19:41 | |
*** eglynn has joined #openstack-ceilometer | 20:20 | |
openstackgerrit | Monsyne Dragon proposed a change to openstack/ceilometer: Add configuration-driven conversion to Events https://review.openstack.org/42713 | 20:41 |
dragondm | gordc: Thanks for the review. I've uploaded a new patchset fixing your concerns. If you think we really need to add a new config group for those options I can add that too. | 20:44 |
*** eglynn has quit IRC | 20:45 | |
herndon | gordc: not sure what the error is with trait types... gates didn't catch anything. Couldn't we just fix the problem instead of reverting the whole patch?? | 21:11 |
*** eglynn has joined #openstack-ceilometer | 21:14 | |
gordc | whoops. didn't see messages. | 21:51 |
gordc | dragondm: i'm ok with not having config group... i'll +2 once jenkins pass | 21:51 |
gordc | herndon: i'm not sure what error jd__ sees. i'm ok with your patch though... just want to see if there was a reason for revert. | 21:52 |
herndon | the event patch can't go in if trait types is there :/ | 21:53 |
dragondm | gordc: Cool. I had to recheck on jenkins (looks like tempest is failing with a spurious glance issue) Hopefully that will go through soon. | 21:53 |
herndon | I commented on the review. I'm really surprised there is a problem as I tested the migrations with data in the db, and tested up->down->up migrations. This stuff is tricky :(. | 21:54 |
gordc | herndon: i didn't see any issues either when i tested with data. (saw issues elsewhere but not related to your patch) | 21:55 |
gordc | dragondm: yeah, i was going to recheck the patch. you beat me to it :) | 21:55 |
dragondm | Heh. I'm quick. Perhaps a little too quick if the typos in my documentation are any indication. ( brain.speed > finger.speed ) :P | 21:56 |
*** SergeyLukjanov has quit IRC | 21:58 | |
gordc | aside from typos the docs were really good though. i would've avoided your 2000 line patch if your docs weren't so damn clear. lol | 21:59 |
*** SergeyLukjanov has joined #openstack-ceilometer | 21:59 | |
dragondm | Heh. Thanks. Yah, since I'm basically defining a mini DSL, I figured folks would need to know what to do with it :> I actually wrote much of the documentation when I wrote the blueprint for the feature. | 22:01 |
*** jdob has quit IRC | 22:14 | |
*** prad has joined #openstack-ceilometer | 22:16 | |
*** litong has quit IRC | 22:17 | |
openstackgerrit | Eoghan Glynn proposed a change to openstack/ceilometer: Correct subscriber reconnection logic on QPID broker restart https://review.openstack.org/60371 | 22:27 |
*** thomasm_ has quit IRC | 22:29 | |
*** SergeyLukjanov has quit IRC | 22:37 | |
*** SergeyLukjanov has joined #openstack-ceilometer | 22:38 | |
*** SergeyLukjanov has quit IRC | 22:42 | |
*** openstackgerrit has quit IRC | 22:48 | |
*** openstackgerrit has joined #openstack-ceilometer | 22:48 | |
*** eglynn has quit IRC | 22:54 | |
*** robbybb111 has left #openstack-ceilometer | 22:55 | |
*** SergeyLukjanov has joined #openstack-ceilometer | 23:11 | |
*** SergeyLukjanov has quit IRC | 23:18 | |
*** prad has quit IRC | 23:25 | |
*** asalkeld has joined #openstack-ceilometer | 23:43 | |
*** openstackgerrit has quit IRC | 23:47 | |
*** openstackgerrit has joined #openstack-ceilometer | 23:47 | |
*** herndon has quit IRC | 23:52 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!