Thursday, 2014-09-18

02:01 *** charlesw has joined #magnetodb
02:01 *** openstackgerrit has quit IRC
02:22 *** openstackgerrit has joined #magnetodb
03:43 *** vnaboychenko has quit IRC
03:53 *** charlesw has quit IRC
04:20 *** vnaboychenko has joined #magnetodb
04:46 *** k4n0 has joined #magnetodb
05:16 *** rushiagr_away is now known as rushiagr
06:10 <openstackgerrit> Ajaya Agrawal proposed a change to stackforge/magnetodb: Added debug env to tox  https://review.openstack.org/120873
06:12 <openstackgerrit> Ajaya Agrawal proposed a change to stackforge/magnetodb: Added debug env to tox  https://review.openstack.org/120873
06:15 *** ajayaa has joined #magnetodb
06:16 <openstackgerrit> Oleksandr Minakov proposed a change to stackforge/magnetodb: (WIP) Adds table size in monitoring API  https://review.openstack.org/122330
06:40 *** vnaboychenko has quit IRC
07:41 <openstackgerrit> Andrei V. Ostapenko proposed a change to stackforge/magnetodb: (WIP) "ALL_OLD" return_values for put_item  https://review.openstack.org/121911
08:37 *** k4n0 has quit IRC
08:52 *** k4n0 has joined #magnetodb
09:22 *** k4n0 has quit IRC
09:34 *** k4n0 has joined #magnetodb
10:59 *** charlesw has joined #magnetodb
11:04 <openstackgerrit> Dmitriy Ukhlov proposed a change to stackforge/magnetodb: Use by default TokenAwarePolicy  https://review.openstack.org/122389
11:15 <ajayaa> Hi charlesw!
11:17 <ajayaa> charlesw, I enabled notifications to rabbitmq through the config file. While importing magnetodb.common.notifier.rpc_notifier it throws an ImportError.
11:18 <openstackgerrit> Dmitriy Ukhlov proposed a change to stackforge/magnetodb: Use by default TokenAwarePolicy  https://review.openstack.org/122389
11:18 <ajayaa> That is a runtime error and occurs at magnetodb/openstack/common/notifier/api.py(162)
11:19 <ajayaa> I think you have forgotten to add __init__.py in magnetodb/magnetodb/common/notifier/
11:20 <ajayaa> After adding an __init__.py file it runs fine.
11:21 <openstackgerrit> Andrei V. Ostapenko proposed a change to stackforge/magnetodb: (WIP) "ALL_OLD" return_values for put_item  https://review.openstack.org/121911
11:22 <ajayaa> Hi all. Am I missing something? Has anyone tried it and not faced a problem?
11:26 <charlesw> ajayaa, integration with rabbitmq or other messaging brokers has not been verified. I'll take a look.
11:27 <ajayaa> charlesw, I was just doing that and found out that after adding __init__.py in magnetodb/common/notifier/ it runs fine.
11:28 <ajayaa> charlesw, Shall I go ahead and file a bug about it?
11:28 <charlesw> yes please
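The failure ajayaa describes matches a missing package marker: in Python 2, a directory without __init__.py is not importable as a package, so loading the notifier driver by its dotted path fails with ImportError. A minimal reproduction sketch (the module paths follow the chat above; the loading mechanism shown here is illustrative, not MagnetoDB's exact code path):

    import importlib

    try:
        # The oslo-style notifier api loads drivers by dotted path; this is
        # roughly what happens when the rpc_notifier driver is configured.
        importlib.import_module('magnetodb.common.notifier.rpc_notifier')
    except ImportError as exc:
        # Raised while magnetodb/common/notifier/ lacks an __init__.py,
        # because Python 2 does not treat that directory as a package.
        print(exc)

    # The fix discussed above: add an empty package marker, e.g.
    #   touch magnetodb/common/notifier/__init__.py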
11:33 *** charlesw has quit IRC
11:51 *** vnaboychenko has joined #magnetodb
12:01 *** k4n0 has quit IRC
12:14 *** rushiagr is now known as rushiagr_away
12:24 *** rushiagr_away is now known as rushiagr
12:37 *** vnaboychenko has quit IRC
12:55 <isviridov> Hello everybody
12:56 <isviridov> ajayaa, thank you for checking the notifications, please paste the link to the bug here as well
12:57 <ajayaa> isviridov, https://bugs.launchpad.net/magnetodb/+bug/1371070
12:57 <ajayaa> np.
12:57 <achuprin_> Hi everyone!
12:57 <isviridov> Hey achuprin_
12:57 <isviridov> Is ominakov- around?
12:58 <ominakov-> Hi guys!
12:58 <achuprin_> Hi ominakov-
12:58 <isviridov> Hi ominakov-, just wanted to make sure you are here :)
12:59 <achudnovets> hi guys!
12:59 <openstackgerrit> Illia Khudoshyn proposed a change to stackforge/magnetodb: Add queued storage manager  https://review.openstack.org/122404
12:59 <ajayaa> Hi everyone!
12:59 <isviridov> Looks like everybody is joining for the weekly meeting.
12:59 <isviridov> It is taking place in #openstack-meetings
13:00 <isviridov> * -meeting
13:01 <openstackgerrit> Illia Khudoshyn proposed a change to stackforge/magnetodb: (WIP)Add queued storage manager  https://review.openstack.org/122404
13:02 <openstackgerrit> Illia Khudoshyn proposed a change to stackforge/magnetodb: (WIP)Add queued storage manager  https://review.openstack.org/122404
13:03 *** keith_newstadt has joined #magnetodb
13:05 *** miarmak has joined #magnetodb
13:09 *** miarmak has quit IRC
13:10 <openstackgerrit> Illia Khudoshyn proposed a change to stackforge/magnetodb: (WIP)Add queued storage manager  https://review.openstack.org/122404
13:15 *** charlesw has joined #magnetodb
13:19 <openstackgerrit> Aleksey Chuprin proposed a change to stackforge/magnetodb: Edited function fix_etc_hosts  https://review.openstack.org/122228
13:24 *** charlesw has quit IRC
13:25 <openstackgerrit> Illia Khudoshyn proposed a change to stackforge/magnetodb: (WIP)Add queued storage manager  https://review.openstack.org/122404
13:31 <openstackgerrit> Andrei V. Ostapenko proposed a change to stackforge/magnetodb: (WIP) "ALL_OLD" return_values for put_item  https://review.openstack.org/121911
13:40 *** charlesw has joined #magnetodb
14:00 <rushiagr> keith_newstadt: I wanted to know what the MDB team is planning to use for cassandra provisioning, managing and scaling
14:00 *** ajayaa has quit IRC
14:00 <rushiagr> I don't want to walk a path different from the one the MDB community is trying to take
14:01 <rushiagr> or are we leaving this question out of mdb completely?
14:02 <keith_newstadt> i think we should approach the subject of deployment the same way other openstack projects do, e.g. swift or nova
14:02 <keith_newstadt> at symantec, we'd ultimately like to be using heat for all our deployment
14:03 <keith_newstadt> also, i don't think mdb should be implemented specifically for scaling and managing cassandra
14:04 <keith_newstadt> as it will be possible to run mdb on top of other database technologies
14:04 <isviridov> rushiagr, we will try to reuse trove if possible, but only for deployments within a tenant
14:04 <rushiagr> keith_newstadt: okay, directly heat versus trove-managed (which might internally use heat for scaling)
14:05 <rushiagr> keith_newstadt: I agree with that
14:05 <rushiagr> keith_newstadt: and yes, when we go for an incubation request, we will be asked how many backends (databases) we support, etc
14:05 <rushiagr> keith_newstadt: my only concern is an example I found on AWS's DynamoDB page
14:06 <rushiagr> keith_newstadt: say a customer wants to migrate to mdb; in the first couple of hours, he'll dump his gigabytes of data into mdb
14:06 <rushiagr> and after that, he'll use MDB for a mostly read-heavy load
14:07 <rushiagr> in this case, heat autoscaling won't work...
14:07 <rushiagr> oh, are we just saying that scaling, or autoscaling, is better kept completely out of mdb?
14:07 <achudnovets> rushiagr: why won't autoscaling work?
14:08 <keith_newstadt> sorry, guys, i have to join my next meeting
14:08 <rushiagr> achudnovets: because the load doesn't show a predictable pattern
14:08 <keith_newstadt> i'll catch up in a bit...
14:08 <rushiagr> keith_newstadt: no issues. Sure.
14:08 <rushiagr> keith_newstadt: I'll be leaving soon too, but please write your opinion when you come back. I'll read it when I come back
14:08 <isviridov> rushiagr, IMO autoscaling for a database can be a risky thing and affect the SLA dramatically
14:09 <isviridov> rushiagr, that is why C* management and deployment are out of scope for now.
14:09 <rushiagr> isviridov: well, not right now, but eventually we'll need to, I guess
14:09 <rushiagr> isviridov: I agree with you, C* management can, and should, be left completely out of mdb
14:12 <rushiagr> isviridov: what is the problem with trove and a single tenant?
14:12 <achudnovets> rushiagr: partially agree. You can provide your own scaling metric, based on C* stats for example. But I totally agree with isviridov about autoscaling DBs :)
14:13 *** ikhudoshyn_ is now known as ikhudoshyn
14:13 *** ominakov- is now known as ominakov
14:14 <rushiagr> achudnovets: okay. I'll think about autoscaling more
14:14 <rushiagr> so mdb is not going to worry about managing C* nodes, am I right?
14:15 <isviridov> rushiagr, yeap
14:15 <achudnovets> +1 for not worrying about managing C* nodes
14:15 <rushiagr> isviridov: okay. Just wanted to know mdb's stance, so that we can make our decisions based on that
14:15 <rushiagr> isviridov: achudnovets: thanks for the input
14:16 <isviridov> rushiagr, always wellcome
14:16 <rushiagr> isviridov: :)
14:16 <isviridov> * welcome
14:16 <achudnovets> rushiagr: thanks for good questions and notes :)
14:17 <rushiagr> achudnovets: :)
14:24 <rushiagr> see you people around tomorrow. Bye for now :)
14:24 <isviridov> rushiagr, see you
14:26 <charlesw> Sorry I missed the party. It was not on my calendar somehow.
14:27 <charlesw> C* autoscaling is a big topic by itself. +1 for leaving it out of MDB.
14:28 <isviridov> charlesw, hello. Here are the meeting minutes: http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-18-13.01.html
14:28 <charlesw> Netflix has done some work on it. They have a project called Priam
14:28 <rushiagr> charlesw: yeah, right
14:29 <rushiagr> charlesw: it was delightful to see them running their comprehensive tests in just about two hours, and shutting off everything else right after that :)
14:29 <charlesw> https://github.com/Netflix/Priam
14:31 <rushiagr> charlesw: thanks for the link. Seems they've done a pretty comprehensive study of cassandra-on-cloud
14:31 <charlesw> Even though Priam is for AWS, I wonder if it can be adapted to work in OpenStack
14:34 <charlesw> Thanks isviridov. I saw the topic on get item count and table size. I believe C* doesn't maintain such numbers. You can only get ballpark figures. Will wait for ominakov's research.
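On the item count and table size question: Cassandra only exposes estimates, for example the per-table key estimate reported by nodetool cfstats. A rough sketch of pulling that ballpark number (the keyspace and table names are hypothetical, and the field name varies between Cassandra versions):

    import subprocess

    def estimated_key_count(keyspace, table):
        """Return Cassandra's estimated key count for a table, or None."""
        out = subprocess.check_output(
            ['nodetool', 'cfstats', '%s.%s' % (keyspace, table)])
        for line in out.splitlines():
            line = line.strip()
            # Field name as printed by Cassandra 2.x nodetool; may differ elsewhere.
            if line.startswith('Number of keys (estimate):'):
                return int(line.split(':', 1)[1])
        return None

    print(estimated_key_count('magnetodb', 'user_data_table'))  # hypothetical names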
14:51 <rushiagr> charlesw: I think I'll have a look at Priam, study it in some detail
14:51 *** vnaboychenko has joined #magnetodb
14:52 <charlesw> rushiagr: thanks, please share any findings/comments
14:52 <rushiagr> charlesw: sure I will
14:52 <rushiagr> but not today. See y'all tomorrow
14:54 <charlesw> rushiagr: sure, see you
14:56 *** rushiagr is now known as rushiagr_away
14:57 *** keith_newstadt has quit IRC
14:59 <ikhudoshyn> https://wiki.openstack.org/wiki/MagnetoDB/specs/async-schema-operations
15:01 *** vnaboychenko has quit IRC
15:03 *** keith_newstadt has joined #magnetodb
15:05 <openstackgerrit> Illia Khudoshyn proposed a change to stackforge/magnetodb: (WIP)Add queued storage manager  https://review.openstack.org/122404
15:27 <openstackgerrit> Illia Khudoshyn proposed a change to stackforge/magnetodb: Add queued storage manager  https://review.openstack.org/122404
15:28 <openstackgerrit> Dmitriy Ukhlov proposed a change to stackforge/magnetodb: Improve json schema validation for left requests  https://review.openstack.org/122445
15:31 <achudnovets> charlesw: btw, my congratulations :)!
15:33 *** openstackgerrit has quit IRC
15:44 <charlesw> Thanks! You guys have been very helpful.
16:03 *** openstackgerrit has joined #magnetodb
16:22 <openstackgerrit> A change was merged to stackforge/magnetodb: Use by default TokenAwarePolicy  https://review.openstack.org/122389
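The TokenAwarePolicy change that just merged concerns the DataStax Python driver's load-balancing policy: wrapping the child policy in TokenAwarePolicy routes each request to a replica that owns the partition key, avoiding an extra coordinator hop. A minimal sketch of that default when building a cluster connection directly (the contact point and keyspace are hypothetical; this is not MagnetoDB's actual connection code):

    from cassandra.cluster import Cluster
    from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy

    cluster = Cluster(
        contact_points=['127.0.0.1'],  # hypothetical Cassandra node
        # Token-aware routing wrapped around a plain DC-aware round-robin policy.
        load_balancing_policy=TokenAwarePolicy(DCAwareRoundRobinPolicy()),
    )
    session = cluster.connect('magnetodb')  # hypothetical keyspace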
16:26 <openstackgerrit> A change was merged to stackforge/magnetodb: Fixes bug with scan on hash key attribute  https://review.openstack.org/120654
16:47 *** vnaboychenko has joined #magnetodb
16:57 *** jeromatron has joined #magnetodb
17:05 *** jeromatron has quit IRC
17:08 *** jeromatron has joined #magnetodb
17:18 *** jeromatron has quit IRC
17:20 *** jeromatron has joined #magnetodb
17:22 *** jeromatron has quit IRC
17:25 *** jeromatron has joined #magnetodb
17:31 *** openstackgerrit has quit IRC
17:33 *** openstackgerrit has joined #magnetodb
17:35 *** jeromatron has quit IRC
17:35 *** jeromatron has joined #magnetodb
17:37 *** jeromatron has quit IRC
17:41 *** jeromatron has joined #magnetodb
17:45 *** jeromatron has quit IRC
17:46 *** jeromatron has joined #magnetodb
17:46 *** rushiagr_away is now known as rushiagr
17:50 *** jeromatron has quit IRC
17:50 *** keith_newstadt has quit IRC
17:53 *** jeromatron has joined #magnetodb
18:03 *** rushiagr is now known as rushiagr_away
18:15 *** rushiagr_away is now known as rushiagr
18:20 *** rushiagr is now known as rushiagr_away
18:20 *** rushiagr_away is now known as rushiagr
18:40 *** rushiagr is now known as rushiagr_away
18:49 *** rushiagr_away is now known as rushiagr
18:58 *** jeromatron has quit IRC
18:58 *** rushiagr is now known as rushiagr_away
19:00 *** jeromatron has joined #magnetodb
19:24 *** jeromatron has quit IRC
19:26 *** jeromatron has joined #magnetodb
19:45 *** openstackgerrit has quit IRC
20:29 *** jeromatron has quit IRC
20:31 *** jeromatron has joined #magnetodb
20:53 *** jeromatron has quit IRC
21:01 *** jeromatron has joined #magnetodb
21:09 *** openstackgerrit has joined #magnetodb
21:34 *** charlesw has quit IRC
22:49 *** charlesw has joined #magnetodb
23:07 *** jeromatron has quit IRC
23:07 *** jeromatron has joined #magnetodb
23:58 *** jeromatron has quit IRC
