Tuesday, 2013-10-08

*** vkmc has quit IRC00:07
*** dafter has joined #openstack-marconi00:16
*** dafter has quit IRC00:16
*** dafter has joined #openstack-marconi00:16
*** dafter has quit IRC00:20
*** malini is now known as malini_afk00:23
*** tedross has quit IRC00:26
*** amitgandhi1 has quit IRC00:35
*** amitgandhi has joined #openstack-marconi00:35
*** malini_afk is now known as malini00:46
*** alcabrera has joined #openstack-marconi00:56
alcabreramalini, oz_akan_: Evening, guys. :)00:56
oz_akan_hi alcabrera00:56
maliniEvening :)00:57
malinioz_akan: I'll update the test now00:57
oz_akan_malini: thanks00:57
alcabreraI just saw the hexadecimal email.00:57
alcabreraYeah, zyuan's change was merged today that enforces UUIDs for client-id.00:58
alcabreraoz_akan_, malini: https://review.openstack.org/#/c/49378/00:58
alcabreraThat one ^^00:58
alcabreraSo... I imagine test code will need some updates, possibly the tsung stuff, too.00:59
maliniyeap..I am on it00:59
malinioz_akan: do you just need tsung_simple1.xml ?00:59
oz_akan_no01:00
malinialcabrera: anything generated by uuid.uuid1() shud be good enuf, rt?01:00
*** amitgandhi has quit IRC01:00
alcabreramalini: yeah, that should be fine.01:00
oz_akan_I use mike-incremental.xml01:00
oz_akan_I am running it actually now01:00
alcabrerapass it as str(uuid.uuid1()) and you should be fine. :)01:01
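A minimal sketch of the approach discussed here, generating a client-id with `uuid.uuid1()` and passing it as a string (editorial example, not Marconi code):

```python
import re
import uuid

# Generate a Client-ID value as discussed: str(uuid.uuid1()).
client_id = str(uuid.uuid1())

# A UUID string renders as 32 hexadecimal digits in 8-4-4-4-12 groups.
UUID_RE = re.compile(
    r'^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$')
assert UUID_RE.match(client_id)

# For tests, a fixed ("fake") UUID works just as well:
fake_id = str(uuid.UUID(int=0))
assert fake_id == '00000000-0000-0000-0000-000000000000'
```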
oz_akan_I already switched code to earlier version to be able to run the test01:01
maliniI'll copy mike-incremental.xml & update it01:01
oz_akan_thanks01:02
malinioz_akan_: Plz try mike-incremental.bk.xml01:04
malinithanks alejandro!!01:04
oz_akan_thanks malini, I am waiting current running test to finish, then I will update the code again and run that test01:05
malinicool01:05
oz_akan_what is uuid ?01:06
alcabreraUniversally Unique Identifier, I believe.01:06
alcabreraLemme double check that, oz_akan_.01:06
oz_akan_can be anything?01:06
alcabreraoz_akan_: no, it follows a standardized format.01:07
oz_akan_and has to be unique for project-id ?01:07
maliniuuid doesnt have to be unique for the project..In our design we expect each client to have a unique id01:08
alcabreraoz_akan_: no. It determines the behavior of GET|POST /v1/queues/{queue}/messages. If it's the same client-id that was used to POST messages, and GET messages is called, then no messages will be shown that are associated with that client-id.01:08
alcabreraHere's a pretty succinct summary of the uuid format: http://en.wikipedia.org/wiki/Universally_unique_identifier#Definition01:09
alcabrera32 hexadecimal digits. :D01:09
alcabreraWe can totally fake uuids if we wanted.01:09
alcabreraFor tests and such.01:09
oz_akan_could you mail a few examples, so I can use them if I need to01:10
alcabrerasure thing01:10
oz_akan_why wouldn't the client that posted messages see its own messages?01:11
oz_akan_what is the requirement there?01:11
alcabreraoz_akan_: sent UUID examples01:12
oz_akan_tks01:12
alcabreraI keep forgetting the rationale behind it. I think the use case is in the pub-sub model, a publisher shouldn't see its own messages.01:13
alcabreraSince in pubsub, claims are never made.01:13
alcabreraThough for testing purposes, we provide a back door with ?echo=true.01:14
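An illustrative sketch (not Marconi's actual implementation) of the client-id semantics described above: listing messages hides the caller's own messages unless echo is requested:

```python
# Toy model of the behavior: GET messages excludes the requesting
# client-id's own messages, unless ?echo=true is passed.

def list_messages(messages, client_id, echo=False):
    """messages: list of dicts with a 'client_id' key."""
    if echo:
        return list(messages)
    return [m for m in messages if m['client_id'] != client_id]

msgs = [
    {'client_id': 'client-a', 'body': 'hi'},
    {'client_id': 'client-b', 'body': 'yo'},
]

# A publisher does not see its own messages...
assert list_messages(msgs, 'client-a') == [
    {'client_id': 'client-b', 'body': 'yo'}]
# ...unless it uses the echo back door.
assert list_messages(msgs, 'client-a', echo=True) == msgs
```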
oz_akan_ok01:15
oz_akan_lb - proxy - router - queues seems very slow01:19
oz_akan_so I will play the last card we have01:19
oz_akan_lb - router - proxy - router - queues01:19
alcabreraThat's going to help a lot, if the queues example tells us anything. ;)01:19
* alcabrera likes making optimistic hypotheses01:20
oz_akan_crossing fingers01:25
*** kgriffs_afk is now known as kgriffs01:36
alcabrerakgriffs: o/01:36
kgriffshi01:36
oz_akan_hi01:37
kgriffsre UUIDs it is to support duplex queues, as alcabrera said. But yeah, if you are only doing producer consumer it may appear to be less helpful. That being said, I think it will come in handy later since we can mine the data and get stats on how many clients are interacting with the service.01:38
kgriffsAnyway, it's an ongoing discussion and will evolve over time01:39
kgriffs</two-cents>01:39
oz_akan_makes sense, have we mentioned UUID in the documentation?01:40
kgriffsyes01:40
kgriffsit was late in coming in01:40
kgriffsbut Catherine and the client lib devs know about it now01:40
oz_akan_it may seem like a burden for the developer, so would be good to understand why it is required01:40
kgriffsyeah, I think the dev docs explain it somewhere. It may not be in the api spec. I can check.01:41
oz_akan_I was just curious01:41
oz_akan_thanks01:41
alcabreraHmm, good point, kgriffs. If it's not in the API spec, I can fix that up, with a paragraph explaining the rationale.01:41
kgriffsno worries; we've had others ask about it as well. I think it needs some refining01:41
kgriffsalcabrera: looks like there is a short blurb about it under "Common Headers" but it could use some elaboration or something. Well, the spec wasn't ever really meant to be an end-user document. But whatever. :D01:42
alcabreralol01:43
kgriffsI guess until we have something on RTD, it's all people have.01:43
kgriffs:p01:43
oz_akan_what RTD?01:44
kgriffsreadthedocs01:44
alcabrerakgriffs: the client-id explanation on the specs looks good.01:44
kgriffsalcabrera01:44
kgriffshttps://readthedocs.org/01:44
alcabreraoz_akan_: uwsgi hosts its documentation in that style: http://uwsgi-docs.readthedocs.org/en/latest/01:44
kgriffsI'd love to have a user guide and operator/admin guide on there01:44
alcabreraIt's autogenerated by parsing either .rst files or source code comments in a certain format.01:45
alcabrerakgriffs: +101:45
kgriffsI need to do that for Falcon too. :p01:45
alcabrerakgriffs: yup, I remember that issue. Falcon needs lots of docstring love. :P01:45
*** malini is now known as malini_afk01:56
*** kgriffs is now known as kgriffs_afk01:58
*** kgriffs_afk is now known as kgriffs02:03
*** amitgandhi has joined #openstack-marconi02:05
*** amitgandhi has quit IRC02:06
oz_akan_performance is better with a router in front of proxy but still adds significant slowness02:35
kgriffsyou mean, the proxy adds significant slowness?02:38
openstackgerritAlejandro Cabrera proposed a change to openstack/marconi: feat: proxy API improvements  https://review.openstack.org/5021302:40
alcabrerajenkins is gonna complain. I forgot to pep8 before submitting. :/02:40
alcabreraoz_akan_: How much better is the performance with a router in front? How much slower does the proxy make things? Is it still 5-10x slower?02:41
oz_akan_736 rps, 1200 ms (30 users, 100 seconds)02:41
oz_akan_with proxy02:41
oz_akan_933 rps, 30.54 ms (30 users, 100 seconds)02:41
oz_akan_without02:41
oz_akan_1603.8 rps, 52.21 ms (30 users, 100 seconds)02:41
oz_akan_without02:41
oz_akan_last one is 50 users02:42
openstackgerritAlejandro Cabrera proposed a change to openstack/marconi: feat: proxy API improvements  https://review.openstack.org/5021302:42
oz_akan_50 x 31 requests per second02:42
oz_akan_alcabrera: ^^02:42
oz_akan_1200 ms for 30 users with proxy02:42
alcabreraoz_akan_: thanks!02:42
oz_akan_1.2 seconds per request02:42
alcabrerathat's pretty slow. :/02:42
alcabreraMust be all the network hopping, though... hmm...02:43
oz_akan_for 15 users with proxy: 507.8 rps, 44.22 ms02:43
alcabreraThe proxy is much faster if most of the requests are cached locally in memcached02:43
oz_akan_so we can do ~600 rps at 50 ms02:43
kgriffsis that using async workers or sync?02:43
alcabreraThat way, the proxy avoids making the extra network hop to speak to mongodb.02:43
alcabreraI believe the tsung tests are biased towards mostly unique project/queues combos, which is the worst case for the proxy.02:44
oz_akan_I believe proxy caches all in memcached already02:44
oz_akan_test uses 20 queues02:44
alcabrerahmmm...02:44
kgriffsdoesn't tsung generate random queue names?02:44
oz_akan_I think it does, but there are 20 for the test I run02:44
oz_akan_we don't use threads02:45
oz_akan_in uwsgi02:45
kgriffsdoes it generate them all at the beginning, then use the same ones for the duration of the test?02:45
oz_akan_yes, these are predefined02:45
oz_akan_same queues all the time02:45
oz_akan_proxy slowness comes because of being in the middle02:45
alcabreraSame project-id, too?02:45
alcabreraI mean, is the project-id pre-generated?02:46
alcabreracaching decisions are made on the basis of project + queue.02:46
oz_akan_wouldn't matter if one of them is not set02:47
oz_akan_it would still cache, wouldn't it?02:47
alcabreraIt would cache correctly if project is None. :)02:48
alcabreraIn that case, the cache key is roughly '_/queue_name'.02:49
alcabreraoz_akan_: so, yes, you're right.02:49
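A sketch of the cache-key scheme described above, with '_' standing in for a missing project (the proxy's exact key format may differ; this follows the "roughly '_/queue_name'" description):

```python
# Cache keys are derived from project + queue; a missing project is
# replaced with '_' so caching still works when project-id is unset.

def cache_key(queue, project=None):
    return '{0}/{1}'.format(project if project is not None else '_', queue)

assert cache_key('my-queue') == '_/my-queue'
assert cache_key('my-queue', 'my-project') == 'my-project/my-queue'
```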
oz_akan_all in one box, sending 465 request max, w/ proxy 42.76 ms vs w/o proxy 11.95 ms02:49
oz_akan_over network, difference is much more02:49
oz_akan_736 rps, 1200 ms (30 users, 100 seconds) vs 933 rps, 30.54 ms (30 users, 100 seconds)02:50
alcabreraoz_akan_: It seems like the breakeven point of the proxy is at about that point - ~750 rps. So if the load on a single partition ever exceeded that, the proxy would outperform the single partition. It sounds like the load would have to be pretty fierce.02:51
alcabreraStill, 1.2s latency under load is not cool.02:51
oz_akan_alcabrera: 750 with 1200 ms latency02:52
oz_akan_that is 1.2 seconds per request02:52
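The throughput/latency figures above can be sanity-checked with Little's law (concurrency ≈ throughput × latency); editorial arithmetic, using the numbers quoted in the test results:

```python
# Little's law: requests in flight ~= throughput (rps) * latency (s).

def concurrency(rps, latency_ms):
    return rps * (latency_ms / 1000.0)

# Without the proxy: 933 rps at 30.54 ms is consistent with ~30 users.
assert abs(concurrency(933, 30.54) - 30) < 2

# With the proxy: 736 rps at 1200 ms implies ~880 requests in flight,
# far above the 30 tsung users -- consistent with requests queuing
# somewhere between the load generator and marconi.
assert concurrency(736, 1200) > 800
```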
alcabreraslowness02:52
oz_akan_with proxy we can't do more than 200-300 per partition02:52
oz_akan_let me run a light test to talk for sure02:52
alcabrerakk02:52
kgriffs43 vs 1202:53
kgriffsthat is surprising02:53
kgriffsseems like it shouldn't be that high02:53
oz_akan_oh, I had it already02:53
oz_akan_http://198.61.239.147:8000/log/20131008-0204/report.html02:53
oz_akan_this with partiton 15 x 31 request max02:54
oz_akan_44 ms02:54
oz_akan_507 rps02:54
kgriffssounds like there is a big bottleneck somewhere in the proxy02:54
kgriffsrequests?02:54
oz_akan_proxy code is fast02:55
oz_akan_I think it is the communication02:55
alcabrerakgriffs: I'd need to profile it. It's too bad I can't get lazy requests async forwarding using eventlet. :/02:55
alcabreras/lazy/easy02:55
kgriffsoz_akan_: localhost socket communication is taking an extra 30 ms?02:55
*** nosnos has joined #openstack-marconi02:55
oz_akan_not that much, it was somewhere between 10-15 ms02:56
oz_akan_it slows down each request, and under relatively high load, user experiences each request much slower02:56
oz_akan_as requests get queued02:57
alcabreraI'm going to go get some sleep, guys.02:57
kgriffshow many CPUs?02:57
kgriffsand how many uwsgi workers?02:57
kgriffsalcabrera: ttfn02:57
kgriffsg'night02:57
alcabreraGood luck with data gathering. I'll look forward to the data tomorrow morning and work from there. :)02:57
alcabreranight!02:57
*** alcabrera has quit IRC02:57
oz_akan_good night02:58
oz_akan_kgriffs: same amount of cpus for both tests02:58
oz_akan_I didn't change anything between w/ and w/o proxy tests02:58
kgriffshow many?02:59
oz_akan_4 cores on web servers, 8 on mongodb02:59
oz_akan_so ?02:59
kgriffs4 cores and how many workers?02:59
oz_akan_no workers03:00
oz_akan_we have standalone processes03:00
oz_akan_that is faster than having workers03:00
oz_akan_8 processes03:00
oz_akan_cpu load is around 50% during the test03:00
kgriffs8 separate uwsgi processes, using nginx in front of them?03:01
kgriffsI'm actually surprised that using one uwsgi "manager" and several workers behind it isn't faster. When I tested that with RSE it was extremely fast.03:02
oz_akan_we have uwsgi as router in front of these 8 processes03:03
oz_akan_uwsgi router is faster than nginx03:03
oz_akan_and these talk via sockets03:04
kgriffshmm03:04
kgriffsso you tried running uwsgi -p 803:05
kgriffsand that was slower than runing 8x uwsgi -p 1 with a router in front?03:05
oz_akan_yes, very slow03:06
kgriffsthat is really surprising03:06
kgriffsdoesn't sound right03:06
oz_akan_that is when we have the proxy03:06
kgriffsi mean, a preforking model is slower than going over a socket?03:07
oz_akan_when we do 8 workers, we saw that the master process uses most of the cpu03:07
oz_akan_might be something about it03:07
kgriffshmm.03:08
oz_akan_but I think the problem it solves is about connections, it might be faster to establish a connection using uwsgi router03:08
oz_akan_I am 100% sure router between proxy and queues is the fastest03:09
oz_akan_I am not sure if we need router if we only have queues03:09
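A rough sketch of the two setups being compared in this exchange (addresses and paths are hypothetical; the deployment's actual options may differ):

```ini
; (a) preforking: one uwsgi master with 8 workers
[uwsgi]
http = :8888
master = true
processes = 8

; (b) alternative: 8 standalone single-process uwsgi instances,
; each bound to its own socket, with uwsgi's fastrouter in front:
;   fastrouter = :8888
;   (each backend: uwsgi --socket /tmp/marconi-N.sock --processes 1)
```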
kgriffsbtw, have you tested with --gevent?03:09
oz_akan_505 rps, 27.26ms queues only03:11
oz_akan_513.3 rps, 11.95 ms uwsgi router + queues03:11
oz_akan_proxy code throws errors with gevent03:11
kgriffshow about queues?03:11
oz_akan_and for queues it was not any faster then not having it03:11
kgriffsoic03:12
oz_akan_I remember you asking this before and me pasting some benchmark results03:12
oz_akan_but I can't recall numbers, how fast slow etc..03:12
kgriffsyeah, rings a bell03:12
kgriffsok, thanks for the info03:13
kgriffsI guess we can sleep on this info and figure out what to do next in the morning03:13
kgriffsstill seems strange to me that the proxy would add so much overhead, even under load03:14
oz_akan_take care03:14
kgriffsttfn03:14
oz_akan_think about this03:14
oz_akan_I see 11 ms on marconi logs03:14
*** kgriffs is now known as kgriffs_afk03:14
oz_akan_but tsung reports 40 ms03:14
oz_akan_that is the network and queued requests overhead03:15
*** oz_akan_ has quit IRC04:02
*** dafter has joined #openstack-marconi04:31
*** dafter has quit IRC04:31
*** dafter has joined #openstack-marconi04:31
*** oz_akan_ has joined #openstack-marconi05:13
*** whenry has quit IRC05:17
*** oz_akan_ has quit IRC05:18
*** Alex_Gaynor has quit IRC07:17
*** ykaplan has joined #openstack-marconi07:48
*** flaper87|afk is now known as flaper8707:49
*** yassine has joined #openstack-marconi08:10
*** reed has joined #openstack-marconi08:29
*** Alex_Gaynor has joined #openstack-marconi09:07
*** ykaplan has quit IRC09:39
*** ykaplan has joined #openstack-marconi10:00
*** nosnos has quit IRC10:12
*** flaper87 is now known as flaper87|afk10:23
*** fifieldt has quit IRC10:37
*** oz_akan_ has joined #openstack-marconi11:12
*** oz_akan_ has quit IRC11:14
*** oz_akan_ has joined #openstack-marconi11:15
*** dafter has quit IRC11:22
*** dafter has joined #openstack-marconi11:24
openstackgerritChangBo Guo proposed a change to openstack/marconi: Replace decprecated method aliases in tests  https://review.openstack.org/5029711:27
*** alcabrera has joined #openstack-marconi11:44
alcabreraGood morning! :D11:44
*** tedross has joined #openstack-marconi11:58
openstackgerritAlejandro Cabrera proposed a change to openstack/marconi: fix: proxy mongodb storage fields overspecified  https://review.openstack.org/5030511:59
openstackgerritChangBo Guo proposed a change to openstack/marconi: Follow hacking rules about import  https://review.openstack.org/5030812:03
openstackgerritChangBo Guo proposed a change to openstack/marconi: Replace decprecated method aliases in tests  https://review.openstack.org/5029712:07
*** ykaplan has quit IRC12:10
openstackgerritChangBo Guo proposed a change to openstack/marconi: Follow hacking rules about import  https://review.openstack.org/5030812:12
openstackgerritChangBo Guo proposed a change to openstack/marconi: Replace decprecated method aliases in tests  https://review.openstack.org/5029712:15
openstackgerritAlejandro Cabrera proposed a change to openstack/marconi: fix: allow multi-update on partition storage  https://review.openstack.org/5031212:22
*** ykaplan has joined #openstack-marconi12:25
*** dafter has quit IRC12:35
openstackgerritAlejandro Cabrera proposed a change to openstack/marconi: fix: stream request data rather than loading it into memory  https://review.openstack.org/5032812:38
openstackgerritAlejandro Cabrera proposed a change to openstack/marconi: fix: invalidate partition cache entry on delete  https://review.openstack.org/5033312:46
*** ayoung has quit IRC13:02
openstackgerritAlejandro Cabrera proposed a change to openstack/marconi: fix: validation queue listing limits in proxy  https://review.openstack.org/5034213:11
openstackgerritAlejandro Cabrera proposed a change to openstack/marconi: fix: validate queue listing limits in proxy  https://review.openstack.org/5034213:11
*** jraim_ has joined #openstack-marconi13:16
*** jraim has quit IRC13:17
*** jraim_ is now known as jraim13:17
openstackgerritAlejandro Cabrera proposed a change to openstack/marconi: Log names of drivers being loaded  https://review.openstack.org/4984313:31
*** malini_afk is now known as malini13:32
alcabreramalini: good morning!13:32
malinigood morning!!13:32
alcabreraI've got soooo many tiny proxy fixes this morning, including the queue limits being ignored on GET /v1/queues. :D13:33
*** ykaplan has quit IRC13:38
*** jcru has joined #openstack-marconi13:39
*** amitgandhi has joined #openstack-marconi13:43
*** dafter has joined #openstack-marconi13:43
*** amitgandhi has quit IRC13:45
*** amitgandhi has joined #openstack-marconi13:45
*** amitgandhi has quit IRC13:56
*** amitgandhi has joined #openstack-marconi13:57
*** kgriffs_afk is now known as kgriffs14:02
*** ykaplan has joined #openstack-marconi14:03
*** vkmc has joined #openstack-marconi14:05
*** ayoung has joined #openstack-marconi14:08
*** ayoung_ has joined #openstack-marconi14:16
*** ayoung has quit IRC14:17
openstackgerritZhihao Yuan proposed a change to openstack/marconi: feat(test): queue context for proxy  https://review.openstack.org/4983014:29
*** jergerber has joined #openstack-marconi14:40
*** acabrera has joined #openstack-marconi14:57
zyuanalcabrera: where is the problem within the current proxy?14:57
zyuanand what kind of design can address this?14:58
acabrerazyuan: the problem is that the current proxy requires data lookup as well as an additional network hop.14:58
kgriffswe would like to remove the sharding from the data plane14:59
kgriffsthe fundamental design is flawed14:59
zyuanacabrera: but any proxy need data lookup and network forwarding...14:59
zyuankgriffs: then how to do sharding?14:59
kgriffsthe sharding should just pick which DB connection to use, and should not be piping any data14:59
*** alcabrera has quit IRC15:00
kgriffszyuan: the more traditional way15:00
*** acabrera is now known as alcabrera15:00
zyuanhmm15:00
kgriffsi.e., we have a layer that sits above the DB client library15:00
kgriffsand picks which DB client/connection to use15:00
*** ayoung_ is now known as ayoung15:00
kgriffsthis has been done with good success by a number of startups that have suddenly run into scaling problems15:00
zyuanthen how marconi server talks to end-user?15:01
kgriffsI think the meat of the current proxy code will be able to be used15:01
zyuandirectly?15:01
kgriffswe are pushing the routing logic down a layer15:01
kgriffsI will be out there later this week, and we can discuss15:01
kgriffsbrb15:03
malinikgriffs: did you hear from flaper87 yet ?15:05
*** dafter has quit IRC15:07
*** dafter has joined #openstack-marconi15:09
*** dafter has quit IRC15:09
*** dafter has joined #openstack-marconi15:09
kgriffsnope15:12
kgriffsstand by15:12
kgriffsmaking a minor change to the patch15:17
kgriffs(self.assertEquals ==> self.assertEqual)15:18
alcabreraback15:21
openstackgerritKurt Griffiths proposed a change to openstack/marconi: fix(mongo): Queue listing may include queues from other projects  https://review.openstack.org/5017615:21
alcabrerakgriffs: +215:22
kgriffsflaper87 is reviewing from mobile15:26
maliniwooot15:27
alcabreraawesome15:27
alcabreraflaper87|afk: Thanks!15:27
kgriffsgive me a few more mins15:28
kgriffsto finish travel arrangments15:28
alcabreraI'm taking a moment to try to fix the eventlet/requests problem. I have a unit test almost ready in the eventlet code base.15:28
alcabreraI'll disengage from that when you're ready, kgriffs.15:29
openstackgerritZhihao Yuan proposed a change to openstack/marconi: Replace deprecated method aliases in tests  https://review.openstack.org/5029715:34
openstackgerritZhihao Yuan proposed a change to openstack/marconi: feat(test): queue context for proxy  https://review.openstack.org/4983015:37
openstackgerritA change was merged to openstack/marconi: fix(mongo): Queue listing may include queues from other projects  https://review.openstack.org/5017615:41
kgriffsmalini: ^^^15:41
malinikgriffs:15:41
openstackgerritZhihao Yuan proposed a change to openstack/marconi: Replace deprecated method aliases in tests  https://review.openstack.org/5029715:43
openstackgerritZhihao Yuan proposed a change to openstack/marconi: feat(test): queue context for proxy  https://review.openstack.org/4983015:44
*** ykaplan has quit IRC15:53
*** yassine has quit IRC16:06
*** ykaplan has joined #openstack-marconi16:10
zyuankgriffs: ping16:14
*** ykaplan has quit IRC16:30
*** malini is now known as malini_afk16:40
*** malini_afk is now known as malini16:47
*** alcabrera is now known as alcabrera|afk16:55
*** oz_akan__ has joined #openstack-marconi17:04
*** alcabrera|afk is now known as alcabrera17:07
*** oz_akan_ has quit IRC17:07
*** mpanetta has joined #openstack-marconi17:29
*** mpanetta has quit IRC17:30
*** mpanetta has joined #openstack-marconi17:31
malinizyuan: looks like we dont validate client UUID for all requests ..is tht intentional ?17:36
zyuanmalini: it is17:36
maliniwhy ?17:36
zyuanmalini: we only validate it when the UUID is needed17:36
zyuanwe only read client id when it's needed17:36
maliniwhich APIs need the client-id?17:36
zyuanonly post message and listing message17:36
kgriffsLater if we want to, e.g., write it to logs we will need to always read it17:37
malinithx!!17:37
zyuanour policy was: user always submit client-id17:37
kgriffsbut, you know, YAGNI17:37
zyuanand my patch extend the policy to: user always submit correct client-id17:37
maliniok..thx!!17:37
zyuanbut we only check it to defend ourself.17:37
*** jcru has quit IRC17:58
*** dafter has quit IRC17:58
*** dafter has joined #openstack-marconi18:00
*** dafter has quit IRC18:05
*** dafter has joined #openstack-marconi18:20
*** dafter has quit IRC18:20
*** dafter has joined #openstack-marconi18:20
*** kgriffs is now known as kgriffs_afk18:33
*** dafter has quit IRC18:39
*** dafter has joined #openstack-marconi18:39
openstackgerritKurt Griffiths proposed a change to openstack/marconi: WIP  https://review.openstack.org/5043718:42
openstackgerritKurt Griffiths proposed a change to openstack/marconi: Use oslo.config directly instead of common.config  https://review.openstack.org/4955018:42
*** kgriffs_afk is now known as kgriffs18:42
*** tvb|afk has joined #openstack-marconi18:42
*** tvb|afk has quit IRC18:42
*** tvb|afk has joined #openstack-marconi18:42
kgriffsalcabrera: ^^^18:43
kgriffsJenkins will fail on it because I still have some refactoring to do18:43
kgriffsbut I wanted to put it up there so you can see where I'm headed18:43
alcabreraI'll check it out.18:44
alcabrerakgriffs: Thanks!18:44
kgriffsyw!18:44
*** dafter has quit IRC18:44
alcabrerakgriffs: comments inline18:50
alcabreraIt's a good start. I'll have a patch ready before I go home today with the structural groundwork for the schema changes/admin-API changes.18:51
alcabreraSo... in about an hour. :P18:51
kgriffskk18:53
alcabrerakgriffs: PUT /v1/shard/{name} {'weight': integer, 'location': string::url, 'options': {...}}18:55
alcabreraAlso18:55
alcabreraPATCH as above18:55
alcabreralocation looks like: mongodb://localhost:27010, redis://<address>/<port>. We can make location handling driver friendly by encapsulating it in a urlparse object or something like that (some common interface). StorageConnection.[type => 'mongodb', host, port, raw => as given]18:57
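A sketch of the StorageConnection idea floated here (names follow the discussion, not committed code): wrap the shard location URI in a small object exposing type/host/port plus the raw URI:

```python
import collections
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2

StorageConnection = collections.namedtuple(
    'StorageConnection', ['type', 'host', 'port', 'raw'])

def connection_from_uri(uri):
    # urlparse handles scheme://host:port for arbitrary schemes.
    parsed = urlparse(uri)
    return StorageConnection(parsed.scheme, parsed.hostname,
                             parsed.port, uri)

conn = connection_from_uri('mongodb://localhost:27010')
assert conn == StorageConnection('mongodb', 'localhost', 27010,
                                 'mongodb://localhost:27010')
```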
kgriffsoic19:00
kgriffsso we could parse redis uri out19:00
alcabrerayup19:00
kgriffsin the redis driver19:00
kgriffsand mongo could take it directly19:00
kgriffs+119:01
kgriffsalthough19:01
kgriffshmm19:01
kgriffswhat does that get us vs. just putting everything under options?19:01
alcabrerait gives us a common, guaranteed field.19:02
alcabreraoptions can be anything19:02
alcabreraby common/guaranteed, I can do stricter validation using jsonschema.19:02
kgriffsok19:02
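A hand-rolled sketch of the stricter validation discussed for PUT /v1/shards/{name} bodies (the real code would likely express this declaratively with jsonschema; field names follow the chat):

```python
# Validate the common, guaranteed fields: 'weight' (integer) and
# 'location' (a storage URI). 'options' is free-form and optional;
# drivers supply sane defaults.

def validate_shard(body):
    if not isinstance(body.get('weight'), int):
        return False
    location = body.get('location')
    if not isinstance(location, str) or '://' not in location:
        return False
    return isinstance(body.get('options', {}), dict)

assert validate_shard({'weight': 100,
                       'location': 'mongodb://localhost:27017'})
assert not validate_shard({'weight': 'heavy',
                           'location': 'mongodb://localhost:27017'})
assert not validate_shard({'weight': 1, 'location': 'not-a-uri'})
```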
alcabrerakgriffs: I've got some proxy fixes we can leverage if we get them reviewed by tomorrow morning.19:14
kgriffsok19:14
alcabreraThe links to those I think will help us were emailed earlier today. Would you prefer if I posted them on an etherpad? :)19:15
*** tvb|afk has quit IRC19:18
*** ayoung has quit IRC19:18
*** malini is now known as malini_afk19:25
kgriffsalcabrera: sure, etherpad19:28
*** dafter has joined #openstack-marconi19:31
*** dafter has quit IRC19:31
*** dafter has joined #openstack-marconi19:31
openstackgerritAlejandro Cabrera proposed a change to openstack/marconi: feat: storage sharding schema for marconi-queues  https://review.openstack.org/5045619:34
*** kgriffs is now known as kgriffs_afk19:34
alcabrerakgriffs_afk: https://etherpad.openstack.org/proxy-patches19:36
*** kgriffs_afk is now known as kgriffs19:52
openstackgerritKurt Griffiths proposed a change to openstack/marconi: WIP: Sharding  https://review.openstack.org/5043719:53
kgriffsalcabrera: thx19:54
*** dafter has quit IRC19:54
kgriffsthat WIP is now testing green19:54
kgriffsso, refactoring are OK19:54
kgriffshowever, the sharding isn't actually doing anything.19:54
kgriffs:p19:54
kgriffslet me make it just always return the same thing, and then add a test that turns it on19:55
alcabrerakgriffs: sweet. I'm starting to see the picture on your side of things. I've got some awesome scribbles here for the next steps on my end.19:57
alcabreraThe next step for me is to implement the sharding storage controller, which allows one to handle shard storage over on the admin side.19:57
alcabrera*shard registration19:58
*** ayoung has joined #openstack-marconi19:58
alcabrerathen, I'll tackle the shard admin resource, integrating it with the schema and the storage controller.19:58
*** tvb|afk has joined #openstack-marconi19:59
*** tvb|afk has quit IRC19:59
*** tvb|afk has joined #openstack-marconi19:59
kgriffsinteresting thought19:59
kgriffsyou could spin up a gazillion sqlites19:59
kgriffsand shard queues across them19:59
alcabreraAfter that, I've got: integrate storage_lookup with resources (/v1/queues/{queue}/*_), add caching, and test the whole thing.19:59
kgriffs:p19:59
alcabreralol19:59
alcabreramarconi gets crazier every day. :P20:00
alcabreraInnovation on demand.20:00
kgriffs"integrate storage_lookup with resources"20:00
kgriffscan you elaborate?20:00
alcabreraSure thing.20:00
kgriffsSeems like you just need to flesh out sharding.py20:00
alcabreraAll endpoints of the form given above (those involving a queue) need to figure out what storage connection to use. At the transport layer, it'll probably be as simple as self.sharder.lookup_storage(project, queue) => client; client.op().20:01
alcabreraat the storage/sharding layer, it'll take that fleshing out that you mention. :P20:02
kgriffsdid you see my latest patchset?20:02
alcabrerakgriffs: briefly - I've glanced.20:02
kgriffsI added the plumbing to the transport resources20:02
kgriffsthey will end up calling into sharding.Manager20:03
kgriffsso, you don't need to do anything in transport.wsgi20:03
alcabrerakgriffs: ah, I see now.20:04
alcabreraGotcha, cool.20:04
kgriffsin fact, all you have to do is address the two TODO's in storage.sharding20:04
alcabreraThat integration step is already taken care of.20:04
kgriffsyep20:04
kgriffskapow!20:04
alcabreraHaha. :P20:05
alcabreraI had another idea on storage-shard options.20:05
*** jcru has joined #openstack-marconi20:05
alcabreraoptions are 'optional'. I'm thinking storage drivers are required to provide sane defaults for those options, and passing in a new set of options is something that can bee done at PUT /v1/shards/{name} time, but isn't required.20:06
kgriffsyes20:06
kgriffsmy thought was the __init__ has kwargs with default options20:06
alcabreraThis'll reduce the amount of storage needed for options details for the common case: homogenous shards20:06
kgriffsthose would be overridden by "options"20:06
alcabrera*homogeneous20:06
alcabrerakgriffs: exatly20:06
alcabrera*exactly20:06
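A sketch of the defaults-overridden-by-options idea agreed on above: a driver's `__init__` declares sane defaults, and anything in the shard's 'options' dict wins (class and option names here are hypothetical):

```python
class FakeDriver(object):
    """Toy storage driver illustrating option merging."""

    DEFAULTS = {'max_pool_size': 10, 'ssl': False}

    def __init__(self, uri, **options):
        self.uri = uri
        # Start from sane defaults; shard-provided options override.
        self.conf = dict(self.DEFAULTS)
        self.conf.update(options)

# Homogeneous shards need store no options at all...
d1 = FakeDriver('mongodb://localhost:27017')
assert d1.conf == {'max_pool_size': 10, 'ssl': False}

# ...while a special shard overrides only what it needs.
d2 = FakeDriver('mongodb://localhost:27017', ssl=True)
assert d2.conf['ssl'] is True
assert d2.conf['max_pool_size'] == 10
```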
kgriffswhat do you think about replacing driver_name with uri20:06
alcabrerawe're on the same page on that, cool20:06
alcabrerakgriffs: +1 on that substitution. The name is no longer identifying. It's more like driver.type at this point.20:07
alcabreraAnd even that's redundant, given a proper URI20:07
kgriffsi mean, you could pull out the first part of the uri20:07
alcabreraredis://*, mongodb://*, leveldb://*20:07
kgriffsand give that to stevedore20:07
kgriffsthen pass the uri into the driver20:07
alcabrerayep, those were my thoughts. :D20:07
kgriffscool20:07
kgriffslet me do that20:07
alcabreraawesome20:08
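A short sketch of "pull out the first part of the uri and give that to stevedore": the scheme selects the driver entry point, and the full URI is passed through to the driver (the entry-point namespace mentioned in the comment is hypothetical):

```python
def driver_name_from_uri(uri):
    # The scheme (e.g. 'mongodb', 'redis') doubles as the driver name.
    scheme, sep, _rest = uri.partition('://')
    if not sep:
        raise ValueError('not a storage URI: %r' % uri)
    return scheme

assert driver_name_from_uri('mongodb://localhost:27017') == 'mongodb'
assert driver_name_from_uri('redis://localhost:6379') == 'redis'

# e.g., roughly:
#   stevedore.driver.DriverManager('marconi.queues.storage',
#                                  driver_name_from_uri(uri),
#                                  invoke_args=(uri,))
```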
alcabreraI'm wrapping up for the day.20:08
kgriffskk20:08
alcabreraAny final questions/thoughts/concerns before I head out?20:08
kgriffsthat's it20:08
kgriffsthanks!20:08
alcabreracool, cool. I'll handle sharding storage/resource tomorrow. See you! Safe travels. :)20:08
kgriffsttfn20:09
*** alcabrera has quit IRC20:09
*** tvb|afk has quit IRC20:45
*** malini_afk is now known as malini20:53
*** dafter has joined #openstack-marconi20:53
*** dafter has quit IRC20:53
*** dafter has joined #openstack-marconi20:53
amitgandhikgriffs: to get messages by ids21:02
amitgandhiis it by ids as a query string?21:02
kgriffsyes21:06
kgriffs /messages?ids=x,y,z21:06
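A tiny sketch of parsing the `?ids=x,y,z` form just described (editorial example):

```python
try:
    from urllib.parse import urlparse, parse_qs   # Python 3
except ImportError:
    from urlparse import urlparse, parse_qs       # Python 2

def message_ids(url):
    # parse_qs keeps 'x,y,z' as one value; split it into ids.
    query = parse_qs(urlparse(url).query)
    return query.get('ids', [''])[0].split(',')

assert message_ids('/v1/queues/q1/messages?ids=x,y,z') == ['x', 'y', 'z']
```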
*** malini is now known as malini_afk21:46
*** dafter has quit IRC21:54
*** tvb|afk has joined #openstack-marconi21:58
openstackgerritZhihao Yuan proposed a change to openstack/marconi: feat(validation): project id length  https://review.openstack.org/5049622:05
*** tvb|afk has quit IRC22:09
*** mpanetta has quit IRC22:26
*** oz_akan__ has quit IRC22:27
*** mpanetta has joined #openstack-marconi22:27
*** mpanetta has quit IRC22:31
*** amitgandhi has quit IRC22:35
*** jergerber has quit IRC22:36
*** jcru has quit IRC22:40
*** mpanetta has joined #openstack-marconi22:57
*** mpanetta has quit IRC23:08
openstackgerritKurt Griffiths proposed a change to openstack/marconi: feat: Storage sharding foundation  https://review.openstack.org/5043723:08
*** mpanetta has joined #openstack-marconi23:11
*** kgriffs is now known as kgriffs_afk23:20
*** vkmc has quit IRC23:43
*** mpanetta has quit IRC23:52
*** mpanetta has joined #openstack-marconi23:52
*** mpanetta has quit IRC23:57

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!