Thursday, 2015-01-22

00:19 *** dmakogon_ has quit IRC
00:31 *** achanda has quit IRC
00:33 *** dmakogon_ has joined #magnetodb
00:37 *** achanda has joined #magnetodb
00:50 *** achanda has quit IRC
00:54 *** achanda has joined #magnetodb
01:13 *** rushiagr_away has quit IRC
01:14 *** rushiagr_away has joined #magnetodb
01:30 *** achanda has quit IRC
01:48 *** charlesw has joined #magnetodb
02:01 *** charlesw_ has joined #magnetodb
02:06 *** charlesw has quit IRC
02:06 *** charlesw_ is now known as charlesw
02:31 *** achanda has joined #magnetodb
02:36 *** achanda has quit IRC
03:33 *** achanda has joined #magnetodb
03:37 *** achanda has quit IRC
04:21 *** vivekd has joined #magnetodb
04:23 *** charlesw has quit IRC
04:32 *** ajayaa has joined #magnetodb
05:34 *** achanda has joined #magnetodb
05:39 *** achanda has quit IRC
06:47 *** ajayaa has quit IRC
06:53 *** achanda has joined #magnetodb
07:39 *** achanda has quit IRC
07:46 *** achanda has joined #magnetodb
07:50 *** romainh has joined #magnetodb
08:06 *** achanda has quit IRC
08:17 *** ajayaa has joined #magnetodb
08:20 *** achanda has joined #magnetodb
08:31 *** achanda has quit IRC
09:06 *** ygbo has joined #magnetodb
09:34 *** ajayaa has quit IRC
09:38 *** ajayaa has joined #magnetodb
09:43 *** ajayaa has quit IRC
11:21 <openstackgerrit> Illia Khudoshyn proposed stackforge/magnetodb: Add restore manager  https://review.openstack.org/146909
12:54 <openstackgerrit> Illia Khudoshyn proposed stackforge/magnetodb: (WIP) Add simple backup implementation  https://review.openstack.org/148963
12:58 *** dmakogon_ is now known as denis_makogon
13:19 *** charlesw has joined #magnetodb
13:39 <isviridov> Hello aostapenko
13:39 <aostapenko> Hello, isviridov
13:40 *** rushiagr_away is now known as rushiagr
13:43 *** [o__o] has quit IRC
13:44 *** [o__o] has joined #magnetodb
13:54 *** miqui has joined #magnetodb
13:57 *** charlesw has quit IRC
13:57 *** miqui_ has joined #magnetodb
13:58 <isviridov> Hello everybody
13:58 <isviridov> Anybody here for the meeting?
13:58 <ominakov> o/
14:00 <isviridov> Hello ominakov
14:01 <aostapenko> hello, everyone
14:01 <isviridov> ikhudoshyn aostapenko charlesw
14:01 <isviridov> dukhlov let us start
14:01 <isviridov> #startmeeting magnetodb
14:01 <openstack> Meeting started Thu Jan 22 14:01:32 2015 UTC and is due to finish in 60 minutes.  The chair is isviridov. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01 <openstack> The meeting name has been set to 'magnetodb'
14:01 <isviridov> Today agenda https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda#Agenda
14:01 <isviridov> o/
14:02 <ikhudoshyn> щ/
14:02 <ominakov> o/
14:03 <isviridov> ikhudoshyn cyrillic here
14:03 <ikhudoshyn> oh, really ?!
14:03 <isviridov> It looks like Boston is not with us today
14:03 <ikhudoshyn> gonna be fast?
14:03 <isviridov> Let us go through action items
14:04 <isviridov> ajayaa start mail discussion about TTL implementation
14:04 <isviridov> #topic Go through action items isviridov
14:05 <isviridov> So, the question was raised
14:05 <isviridov> Has everybody seen this email?
14:06 <isviridov> ikhudoshyn aostapenko?
14:06 <ikhudoshyn> hm
14:07 <ikhudoshyn> oh, yea
14:07 <ikhudoshyn> there was
14:07 <isviridov> #link http://osdir.com/ml/openstack-dev/2015-01/msg00944.html
14:07 <ikhudoshyn> isviridov: tnx
14:07 <ikhudoshyn> move on?
14:08 <isviridov> ikhudoshyn I expected to hear some opinions
14:09 * isviridov I know it is hard without dukhlov
14:09 <ikhudoshyn> i don't like the idea of only enabling ttl on insert
14:09 <isviridov> The question is how to implement update item with TTL
14:09 <isviridov> ikhudoshyn +1
14:10 <ikhudoshyn> i think we first should agree if Dima's native LSI are prod-ready, then switch to them
14:11 <ikhudoshyn> and then re-evaluate required efforts
14:11 <ikhudoshyn> from what i've seen our insert/update/delete logic becomes much cleaner with the latter
14:12 <isviridov> ikhudoshyn I think that we have adopted LSI already
14:12 <ikhudoshyn> we've merged them, it's not the same for me))
14:12 <isviridov> Here it is the same :)
14:13 <isviridov> Merged and switched our configuration
14:13 <ikhudoshyn> ok, so we could go with the 2nd step)
14:13 <ikhudoshyn> are we to keep our outdated stuff forever?
14:13 <ikhudoshyn> (like old LSI implementation)
14:14 <ikhudoshyn> if we think that new LSI is good enough lets get rid of the old
14:14 <isviridov> ikhudoshyn now that we have migration we can forget about it
14:14 <isviridov> I see no reason to support it anymore
14:14 <aostapenko> I don't think we should support the previous implementation
14:15 <ikhudoshyn> hurray
14:15 * ikhudoshyn loves throwing away old stuff
14:15 * isviridov not sure if ominakov has created a BP for migration
14:15 * isviridov loves it as well
14:15 <ikhudoshyn> getting back to ttl, let's re-check estimates
14:16 <ominakov> isviridov, i'll do it
14:16 <isviridov> The key problem is: C* doesn't support TTL per row
14:16 <ikhudoshyn> yep, we should emulate it
14:17 <ikhudoshyn> we may consider having it per item but thus we'd diverge from the AWS API
14:17 <isviridov> Means we have to update the whole row
14:18 <ikhudoshyn> isviridov: exactly
14:18 * isviridov is checking TTL support in AWS
14:18 *** charlesw has joined #magnetodb
14:18 <isviridov> ikhudoshyn is there any TTL in AWS?
14:19 <ikhudoshyn> hm, i guess there was
14:20 * ikhudoshyn seems to be wrong
14:20 <ikhudoshyn> http://www.datastax.com/dev/blog/amazon-dynamodb no ttl in dynamodb
14:20 <isviridov> ikhudoshyn it doesn't, according to the AWS API
14:21 <isviridov> Ok, let us return to TTL per row in the mdb api only :)
14:21 <ikhudoshyn> the only issue i could think of is a backend that wouldn't have native ttl
14:22 <ikhudoshyn> i mean, we could have it per attribute, but..
14:22 <isviridov> ikhudoshyn it is the responsibility of the driver author to implement it or not
14:22 * isviridov doesn't like per table TTL
14:22 <ikhudoshyn> if we'd want to support another backend w/o ttl, emulating ttl per item would be much more complex
14:24 <ikhudoshyn> so..
14:24 <isviridov> Do you mean that the TTL feature is not needed?
14:24 <ikhudoshyn> are we to have per item ttl?
14:25 <ikhudoshyn> isviridov: no, i just mean that emulating per-row ttl is easier than per item
14:25 <isviridov> #link https://blueprints.launchpad.net/magnetodb/+spec/row-expiration by Symantec
14:26 <ikhudoshyn> ok, let's not use C* native ttl))
14:26 <isviridov> Item == row
14:26 <ikhudoshyn> *easier than per-field
14:27 <isviridov> charlesw is TTL per field expected to be needed?
14:28 *** dukhlov_ has joined #magnetodb
14:28 <charlesw> I'd think so
14:28 <charlesw> until C* comes up with a solution
14:29 <ikhudoshyn> we can't have both at the same time
14:29 <isviridov> ikhudoshyn why?
14:29 <charlesw> I was reading C* may expose the row marker as a column, then we can set ttl on the row marker
14:29 <ikhudoshyn> usage seems to be far too complex
14:30 <isviridov> charlesw really interesting. Could you point us to where we can read about it?
14:30 <charlesw> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Setting-TTL-to-entire-row-UPDATE-vs-INSERT-td7595072.html
14:32 <ikhudoshyn> that's exactly what ajayaa said in his email
14:33 <isviridov> doesn't look like the row marker is available via the CQL api
14:34 <ikhudoshyn> "Wouldn't it be simpler if Cassandra just let us change the ttl on the row marker?" --> This is internal impl details, not supposed to be exposed as public API
14:34 <ikhudoshyn> that's from that thread
14:35 <isviridov> Better to say not exposed
14:35 <isviridov> #idea suggest exposing the row marker to the C* community
14:35 <ikhudoshyn> we could agree 'it's not exposed right now'
14:36 <isviridov> #idea overwrite all columns with the new TTL
14:36 <isviridov> Does it look correct?
14:36 <ikhudoshyn> #idea implement per-row ttl manually
14:37 <ikhudoshyn> using a dedicated ttl field
14:37 <charlesw> is ttl allowed on a primary key?
14:37 <ikhudoshyn> i doubt it
14:37 <charlesw> if not, setting ttl on all columns won't work
14:38 <isviridov> charlesw +1
14:38 <isviridov> But according to your #link http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Setting-TTL-to-entire-row-UPDATE-vs-INSERT-td7595072.html it should work
14:39 <isviridov> ikhudoshyn do you mean manually check TTL and manually delete it?
14:39 <ikhudoshyn> isviridov: exactly
14:40 <isviridov> Sounds like an async job plus a check on each read
14:40 <ikhudoshyn> isviridov: yup
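The per-row TTL emulation agreed on above (a dedicated ttl field, a lazy check on each read, plus an async reaper job) can be sketched roughly as follows. This is only an illustration of the idea under discussion; `TTLStore` and its methods are hypothetical names, not MagnetoDB's actual storage driver API:

```python
import time

class TTLStore:
    """Emulates per-row TTL on a backend without native row TTL
    (hypothetical sketch, not MagnetoDB's real driver)."""

    def __init__(self, clock=time.time):
        self._rows = {}    # key -> (value, expires_at or None)
        self._clock = clock

    def put(self, key, value, ttl=None):
        # a single dedicated expiry field per row; updating the row
        # resets it in one place, instead of rewriting every column's
        # native C* TTL
        expires_at = self._clock() + ttl if ttl is not None else None
        self._rows[key] = (value, expires_at)

    def get(self, key):
        row = self._rows.get(key)
        if row is None:
            return None
        value, expires_at = row
        if expires_at is not None and self._clock() >= expires_at:
            # lazy delete on read, as suggested in the discussion
            del self._rows[key]
            return None
        return value

    def reap(self):
        """Async-job counterpart: sweep rows whose ttl has passed."""
        now = self._clock()
        expired = [k for k, (_, exp) in self._rows.items()
                   if exp is not None and now >= exp]
        for key in expired:
            del self._rows[key]
```

This sidesteps the C* limitation raised above: native TTL is per column (and cannot be set on primary key columns), so a true per-row expiry has to be tracked manually.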
14:40 <isviridov> ikhudoshyn will you share your view in the ML?
14:40 <ikhudoshyn> ok will do
14:41 <isviridov> Great, I'll join
14:41 <isviridov> charlesw we would like to hear from you as well
14:41 <isviridov> Moving on?
14:41 <ikhudoshyn> +1
14:41 <charlesw> sure I'd love to join
14:42 <isviridov> Next action item
14:42 <isviridov> achudnovets update spec
14:43 <isviridov> achudnovets_ was going to drive to the clinic with his son
14:43 <isviridov> I think we can go ahead
14:43 <ikhudoshyn> lets do
14:43 <isviridov> #topic Discuss https://blueprints.launchpad.net/magnetodb/+spec/test-concurrent-writes aostapenko
14:44 <isviridov> aostapenko stage is yours
14:44 <aostapenko> Yes, I'm going to implement these scenarios with tempest
14:44 <isviridov> With conditional writes?
14:45 <aostapenko> I did not write the cases yet, working on the framework
14:45 <aostapenko> I will share a list of scenarios
14:46 <isviridov> I believe it is the only reasonable way to update the row
14:47 <aostapenko> So we will use it. We'll have negative cases too
14:47 <isviridov> charlesw any hints how aostapenko can do it?
14:48 * isviridov Paul would be useful here
14:49 <charlesw> I'll ask Paul. Andrei could you please write a doc so we are clear on our cases?
14:49 <isviridov> charlesw do you think the bp itself is a good place for this?
14:50 <charlesw> yes
14:50 <aostapenko> charlesw: sure
14:50 <isviridov> #action aostapenko charlesw isviridov brainstorm the scenario
14:50 <isviridov> aostapenko till then not approved
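The core of the concurrent-write scenario discussed above (many writers racing on one item, with a conditional write so that only one succeeds) can be sketched outside tempest as a plain concurrency check. `Table` and `put_item` here are hypothetical stand-ins, not MagnetoDB's real API or the eventual test framework:

```python
import threading

class Table:
    """Toy in-memory table with a conditional put, standing in for
    the backend (hypothetical, for illustration only)."""

    def __init__(self):
        self._items = {}
        self._lock = threading.Lock()

    def put_item(self, key, value, expected=None):
        """Write value; if `expected` is given, succeed only when the
        current value matches (a conditional write)."""
        with self._lock:
            if expected is not None and self._items.get(key) != expected:
                return False
            self._items[key] = value
            return True

def run_concurrent_writers(table, key, n=8):
    """Each writer tries the same conditional update; exactly one
    should win, the rest should get a condition failure."""
    table.put_item(key, 0)
    results = []
    def writer(i):
        results.append(table.put_item(key, i + 1, expected=0))
    threads = [threading.Thread(target=writer, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The negative cases mentioned in the meeting would be the mirror image: writers whose condition no longer holds must observe a clean failure rather than silently overwriting.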
14:50 *** idegtiarov_ is now known as idegtiarov
14:50 <isviridov> Moving on
14:51 <isviridov> #topic Open discussion isviridov
14:51 <isviridov> Anything for this topic?
14:51 *** cl__ has joined #magnetodb
14:51 <ikhudoshyn> guys pls review aostapenko's patch about swift and lets merge it
14:51 <isviridov> Link?
14:51 <charlesw> After discussion with Dima, I plan to refactor some of the notification code
14:52 <isviridov> charlesw great
14:52 <charlesw> moving notification from the storage manager layer to the API controller layer
14:52 <ikhudoshyn> https://review.openstack.org/#/c/146534/
14:52 <isviridov> Anything else critical for review?
14:52 <charlesw> what do you guys think
14:52 <ikhudoshyn> charlesw: why?
14:53 <charlesw> make storage manager code cleaner
14:53 <isviridov> #action dukhlov_ charlesw aostapenko isviridov review https://review.openstack.org/#/c/146534/
14:54 <isviridov> charlesw how will you measure table async task duration if so?
14:54 <ikhudoshyn> charlesw: ... and make API code messier?
14:55 <ikhudoshyn> i don't mind adding notifications to the API layer
14:55 <charlesw> And the request metrics collection can use the notification mechanism. So we won't have two sets of notifications (in API/middleware using StatsD, and other places using messaging)
14:55 <isviridov> ikhudoshyn the more notifications the more information we have about the system
14:56 <ikhudoshyn> isviridov: +1, i just wouldn't like to remove existing notifications from storage
14:56 <miqui_> ..hi...
14:56 <dukhlov_> ikhudoshyn: now we have unstructured notifications
14:57 <isviridov> miqui_ hello
14:57 <ikhudoshyn> dukhlov_: what d'you mean 'unstructured'
14:57 <ikhudoshyn> miqui_: hi
14:57 <charlesw> hi miqui
14:57 <dukhlov_> we are sending notifications somewhere
14:57 <isviridov> charlesw what do you mean two sets?
14:57 <dukhlov_> we don't have any strategy for when, where and what we need to send
14:58 <charlesw> in middleware/API controller, we send StatsD metrics; in storage, we use messaging
14:58 <dukhlov_> maybe it is because we don't have a customer's real usecase for that
14:59 <dukhlov_> but now we have the first one - integrate statsd into the notification mechanism
14:59 <dukhlov_> for this case we need request based notification
15:00 <dukhlov_> like request done or request failed and took some time for this job
15:00 <ikhudoshyn> dukhlov_: so lets consider ADDING notifications to the API
15:00 <isviridov> Let us return to the use case
15:00 <isviridov> 1. we need to send information to ceilometer
15:00 <charlesw> we plan to have a central event registry, it will describe each event: type, messaging or metrics event name, delivery mechanism (messaging/metrics/or both). And use one notifier to decide what to do based on the event description.
15:01 <dukhlov_> which information exactly?
15:01 * isviridov will listen a bit
15:01 <dukhlov_> if we just add notifications they will duplicate each other in storage and api
15:02 <charlesw> +1
15:02 <aostapenko> +1
15:02 <isviridov> The official meeting is finished
15:02 <isviridov> #endmeeting
15:02 <openstack> Meeting ended Thu Jan 22 15:02:52 2015 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:02 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/magnetodb/2015/magnetodb.2015-01-22-14.01.html
15:02 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/magnetodb/2015/magnetodb.2015-01-22-14.01.txt
15:02 <openstack> Log:            http://eavesdrop.openstack.org/meetings/magnetodb/2015/magnetodb.2015-01-22-14.01.log.html
15:03 <isviridov> dukhlov_ it depends what this notification describes
15:03 <dukhlov_> at the storage layer it is reasonable to have only notifications for async jobs
15:04 <dukhlov_> isviridov, sure
15:04 <isviridov> dukhlov_ agree, or specific for storage
15:04 <ikhudoshyn> dukhlov_: are we to limit our set of sent notifications to request_arrived/request_completed/request_failed?
15:05 <isviridov> charlesw dukhlov_ do we have a specific example right now?
15:07 <isviridov> I mean the notification we are going to move out of storage
15:11 <aostapenko> only from the manager as I understood
15:11 <ikhudoshyn> btw, we already have two APIs, are we to have notifications duplicated?
15:13 <charlesw> ikhudoshyn, we need to have a unified notifier and a central event registry; the notifier will inspect the event and decide how to deliver the message
15:13 <dukhlov_> actually yes
15:14 <dukhlov_> because at least they are different events - a call to ddb and mdb
15:14 <ikhudoshyn> dukhlov_: 'yes' to duplicate notification code? does not sound good to me
15:15 <dukhlov_> first of all it can be middleware
15:16 <ikhudoshyn> dukhlov_: 'middleware' sounds good ))
15:16 <charlesw> All API layer notifications can be moved to middleware
15:16 <ikhudoshyn> but only if we could rip ALL notification code from the rest of the codebase
15:16 <dukhlov_> and instead of a lot of notifier.notify in each method we can do try {do_request, notify_done} catch {notify.error}
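The try/notify_done/catch/notify_error pattern dukhlov_ describes fits naturally as a middleware decorator around each API call. A minimal sketch of that idea follows; `notified` and the event names are hypothetical, not the actual MagnetoDB notifier interface:

```python
import functools
import time

def notified(notifier, event):
    """Wrap a request handler so one done/error notification is
    emitted per call, instead of scattering notifier.notify()
    throughout the storage and API layers (sketch of the pattern
    proposed in the discussion above; names are hypothetical)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                result = func(*args, **kwargs)
            except Exception as exc:
                # request failed: report the error and how long it took
                notifier(event + ".error", error=str(exc),
                         duration=time.monotonic() - start)
                raise
            # request done: report duration for metrics (e.g. StatsD)
            notifier(event + ".done", duration=time.monotonic() - start)
            return result
        return wrapper
    return decorator
```

Because the wrapper sees every outcome and its duration, the same hook could feed both a messaging notifier and request metrics, which is the unification charlesw's central event registry aims at.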
15:17 <isviridov> charlesw do we have any now?
15:17 <charlesw> only async notifications will stay in the storage manager
15:17 <dukhlov_> we can, but for this we need to collect information in request.context in the api and storage layers
15:18 <ikhudoshyn> i think it would be nice to have a table of all our notifications with current location and where to move
15:18 <charlesw> in progress, I'll send out a new patch today hopefully
15:18 <dukhlov_> ikhudoshyn, agree
15:18 <charlesw> I'll put together a doc
15:19 <isviridov> charlesw yes, even without a patch
15:19 <ikhudoshyn> charlesw: that would be really helpful
15:19 <charlesw> will do soon
15:20 *** cl__ has quit IRC
15:23 *** vivekd has quit IRC
15:46 <ominakov> charlesw, ikhudoshyn tnx for comments on my patch (https://review.openstack.org/#/c/147162/4). I have one more question. When we do delete, in async-task-executor the table is already DELETING, so we can't determine whether it is a first or second delete
15:49 <charlesw> ominakov, why would we delete a table already in DELETING state? We can do delete for DELETE_FAILED.
15:51 <ominakov> charlesw, yep, but when the task executor picks the task from the queue the table is already in DELETING state
15:51 <ikhudoshyn> 'cos the manager puts it in DELETING state before passing it to the async executor))
15:52 <ikhudoshyn> i think we should refactor that
15:52 <ominakov> ikhudoshyn, thx
15:52 <ominakov> ikhudoshyn, +1
15:53 <ikhudoshyn> we need additional statuses like {DELETE, CREATE, whatsoever}_REQUEST_ACCEPTED when a request arrives
15:53 <ikhudoshyn> and use DELETING/CREATING only when the async exec actually performs the operation
15:56 <charlesw> what's the problem if you delete a table that is already deleted but whose table_info entry hasn't been removed?
16:05 *** charlesw has quit IRC
16:17 *** charlesw has joined #magnetodb
16:23 *** romainh has left #magnetodb
16:38 <ominakov> charlesw, in async-task-executor we don't know whether the table is already deleted but the table_info entry hasn't been removed, or it is an active table
16:42 <charlesw> Then you can just go ahead and delete again. Drop table if exists should work.
16:42 <charlesw> DELETE is supposed to be idempotent; response code 404 or 200 should both be ok
16:51 *** ygbo has quit IRC
17:08 *** charlesw has quit IRC
17:09 <ominakov> we have no problem with the response code, just async-task-executor can't make a decision about suppressing the exception or not (from the backend)
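The resolution charlesw suggests, treating delete as idempotent and suppressing the "already gone" error from the backend, can be sketched as follows. `Backend`, `TableNotFound`, and `execute_delete` are hypothetical illustrations, not the real async-task-executor code:

```python
class TableNotFound(Exception):
    """Raised by the backend when the table data is already gone."""

class Backend:
    """Toy backend whose drop raises if the table is absent
    (hypothetical stand-in for the real storage driver)."""
    def __init__(self):
        self.tables = {"users"}

    def drop_table(self, name):
        if name not in self.tables:
            raise TableNotFound(name)
        self.tables.discard(name)

def execute_delete(backend, name):
    """Async-executor step: make the delete idempotent by suppressing
    the not-found error, the equivalent of DROP TABLE IF EXISTS."""
    try:
        backend.drop_table(name)
    except TableNotFound:
        # table data was already dropped by a previous attempt;
        # the remaining table_info cleanup can still proceed safely
        pass
    return True
```

With this approach the executor no longer needs to distinguish a first delete from a retry, which is exactly the ambiguity ominakov raised; the additional `*_REQUEST_ACCEPTED` statuses ikhudoshyn proposed would remove the ambiguity at the status level instead.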
17:09 *** charlesw has joined #magnetodb
17:32 *** isviridov is now known as isviridov_away
17:49 *** achanda has joined #magnetodb
17:57 *** denis_makogon has quit IRC
17:58 *** ajayaa has joined #magnetodb
18:03 *** charlesw has quit IRC
18:03 *** charlesw has joined #magnetodb
18:12 <openstackgerrit> Alexander Chudnovets proposed stackforge/magnetodb: (WIP) Monitoring API URLs refactoring  https://review.openstack.org/145247
18:13 *** rushiagr is now known as rushiagr_away
18:32 *** ajayaa has quit IRC
19:13 *** achanda has quit IRC
19:15 *** achanda has joined #magnetodb
19:20 *** charlesw has quit IRC
19:51 *** achanda has quit IRC
19:57 *** achanda has joined #magnetodb
20:12 *** achanda has quit IRC
20:23 *** achanda has joined #magnetodb
20:38 <openstackgerrit> Andrei V. Ostapenko proposed stackforge/magnetodb: Migrates to oslo.context library  https://review.openstack.org/149393
20:51 *** romainh has joined #magnetodb
20:51 *** romainh has left #magnetodb
21:37 <openstackgerrit> Andrei V. Ostapenko proposed stackforge/magnetodb: Adds Swift support  https://review.openstack.org/146534
22:03 *** charlesw has joined #magnetodb
23:02 *** dukhlov_ has quit IRC
23:32 *** charlesw has quit IRC
23:50 *** dukhlov_ has joined #magnetodb

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!