Tuesday, 2014-05-20

*** igor_ has joined #openstack-marconi00:00
*** igor_ has quit IRC00:04
*** igor_ has joined #openstack-marconi01:00
*** igor_ has quit IRC01:05
*** nosnos has joined #openstack-marconi01:47
*** malini is now known as malini_afk01:53
*** igor_ has joined #openstack-marconi02:01
*** igor_ has quit IRC02:05
*** vkmc has quit IRC02:37
*** whenry has quit IRC02:40
*** igor_ has joined #openstack-marconi03:02
*** nosnos has quit IRC03:07
*** igor_ has quit IRC03:07
*** mpanetta has joined #openstack-marconi03:34
*** mpanetta has quit IRC03:36
*** mpanetta has joined #openstack-marconi03:36
*** nosnos has joined #openstack-marconi03:47
*** nosnos has quit IRC04:01
*** prashanthr_ has joined #openstack-marconi04:10
*** prashanthr_ has quit IRC04:15
*** prashanthr_ has joined #openstack-marconi04:25
*** nosnos has joined #openstack-marconi04:30
*** mkoderer has joined #openstack-marconi04:34
*** haomaiwang has joined #openstack-marconi04:41
*** igor_ has joined #openstack-marconi05:04
*** prashanthr_ has quit IRC05:06
*** mpanetta has quit IRC05:08
*** igor_ has quit IRC05:08
*** prashanthr_ has joined #openstack-marconi05:10
*** igor has joined #openstack-marconi06:04
*** igor has quit IRC06:10
*** AAzza has joined #openstack-marconi06:10
*** igor has joined #openstack-marconi06:24
*** igor has quit IRC06:26
*** flaper87|afk is now known as flaper8707:10
*** igor has joined #openstack-marconi07:28
*** igor has quit IRC07:32
*** flwang has joined #openstack-marconi08:06
flwangflaper87: helllllllllllllllllllllllllllllo08:06
flaper87flwang: what's up????????????????????????????????????????????08:07
flwangflaper87: nooooooooooooo thing, just say hi08:07
flaper87flwang: how are you doing?08:07
flwangflaper87: good, packing stuff for the move08:07
flaper87haha, that's the not-so-fun part of moving08:08
flaper87:D08:08
flwangflaper87:  how about the summit?08:08
flwangwhat's the final name for Marconi?08:08
flaper87flwang: we haven't decided yet, we need to do that ASAP.08:10
flaper87flwang: any ideas?08:11
flaper87the summit was great, as usual! :D08:11
flwangflaper87: maybe we can follow the rename of Ceilometer08:13
*** rwsu has quit IRC08:26
*** jamie_h has joined #openstack-marconi08:27
*** ykaplan has joined #openstack-marconi08:27
*** igor has joined #openstack-marconi08:28
*** igor has quit IRC08:33
*** ykaplan has quit IRC08:59
*** ykaplan has joined #openstack-marconi09:04
*** igor has joined #openstack-marconi09:24
*** igor has quit IRC09:28
*** flaper87 has quit IRC09:41
*** ykaplan has quit IRC09:57
*** igor has joined #openstack-marconi09:59
*** flwang has quit IRC10:12
*** prashanthr_ has quit IRC10:18
*** ykaplan has joined #openstack-marconi10:32
*** haomaiwang has quit IRC10:39
*** haomaiwang has joined #openstack-marconi10:39
*** haomaiw__ has joined #openstack-marconi10:43
*** haomaiwang has quit IRC10:45
*** haomaiwang has joined #openstack-marconi10:54
*** haomaiw__ has quit IRC10:55
*** tedross has joined #openstack-marconi11:29
*** ykaplan has quit IRC11:33
*** prashanthr_ has joined #openstack-marconi11:37
*** AAzza has quit IRC11:59
*** AAzza has joined #openstack-marconi11:59
*** ykaplan has joined #openstack-marconi12:06
openstackgerritAlex Bettadapur proposed a change to openstack/marconi: V1 Tests JsonSchema  https://review.openstack.org/9421212:14
*** vkmc has joined #openstack-marconi12:16
*** vkmc has quit IRC12:16
*** vkmc has joined #openstack-marconi12:16
*** abettadapur has joined #openstack-marconi12:17
*** flaper87|afk has joined #openstack-marconi12:27
*** flaper87|afk is now known as flaper8712:27
*** sriram has joined #openstack-marconi12:36
*** sriram has quit IRC12:37
*** sriram has joined #openstack-marconi12:37
*** prashanthr_ has quit IRC12:45
*** flaper87 has quit IRC12:46
*** flaper87 has joined #openstack-marconi12:46
*** ChanServ sets mode: +o flaper8712:46
*** prashanthr_ has joined #openstack-marconi12:48
*** jchai has joined #openstack-marconi12:57
*** nosnos has quit IRC13:02
*** Obulpathi has joined #openstack-marconi13:10
*** mwagner_lap has quit IRC13:11
*** Obulpathi has quit IRC13:13
*** Obulpathi has joined #openstack-marconi13:13
*** AAzza has quit IRC13:15
*** jay-atl has quit IRC13:15
*** ykaplan has quit IRC13:33
*** ykaplan has joined #openstack-marconi13:39
*** LaceyKite has joined #openstack-marconi13:45
*** LaceyKite has left #openstack-marconi13:45
*** cpallares has joined #openstack-marconi13:46
*** prashanthr_ has quit IRC13:56
openstackgerritFlavio Percoco proposed a change to openstack/marconi: Revert "Disable Metadata write operations on v1.1"  https://review.openstack.org/9437414:15
*** AAzza has joined #openstack-marconi14:18
*** jmckind has joined #openstack-marconi14:22
*** oz_akan_ has joined #openstack-marconi14:28
*** alcabrera|afk is now known as alcabrera14:31
*** jay-atl has joined #openstack-marconi14:31
*** alcabrera is now known as alcabrera|afk14:38
*** LaceyKite has joined #openstack-marconi14:45
*** mpanetta has joined #openstack-marconi14:46
*** prashanthr_ has joined #openstack-marconi14:47
*** mwagner_lap has joined #openstack-marconi14:48
*** alcabrera|afk is now known as alcabrera14:54
alcabrerao/14:54
*** megan_w|afk is now known as megan_w14:54
mpanettaHi14:54
alcabreramarconi meeting in 1 minute? >.>14:59
alcabreraflaper87: ^14:59
*** kgriffs|afk is now known as kgriffs14:59
flaper87alcabrera: yup14:59
alcabreraready!14:59
*** mpanetta has quit IRC14:59
*** tjanczuk has joined #openstack-marconi15:00
*** whenry has joined #openstack-marconi15:00
*** mpanetta has joined #openstack-marconi15:00
*** balajiiyer has joined #openstack-marconi15:00
*** tjanczuk has left #openstack-marconi15:00
*** abettadapur has quit IRC15:02
alcabreravkmc, prashanthr_, cpallares: team meeting in #openstack-meeting-alt. Join us if you can. :)15:03
prashanthr_alcabrera: Good morning ! :)15:03
prashanthr_was just joining now15:03
*** malini_afk is now known as malini15:03
alcabreragood morning! :D15:03
*** sriram_p has joined #openstack-marconi15:04
*** mpanetta has quit IRC15:05
*** mpanetta has joined #openstack-marconi15:05
mpanettagrrr15:05
*** sriram_p has quit IRC15:08
*** sriram_p has joined #openstack-marconi15:09
*** jchai is now known as jchai_afk15:11
vkmco/15:12
balajiiyervkmc can you join us on #openstack-meeting-alt?15:12
*** prashanthr_ has left #openstack-marconi15:13
vkmcI'm there :) didn't want to interrupt15:13
*** prashanthr_ has joined #openstack-marconi15:16
*** flwang has joined #openstack-marconi15:21
*** ykaplan has quit IRC15:23
*** jchai_afk is now known as jchai15:27
*** ykaplan has joined #openstack-marconi15:42
openstackgerritFlavio Percoco proposed a change to openstack/marconi: Revert "Disable Metadata write operations on v1.1"  https://review.openstack.org/9437415:48
flaper87dinner, brb16:01
*** tjanczuk has joined #openstack-marconi16:01
kgriffsvkmc: prashanthr is going to do the redis driver?16:01
*** LaceyKite has quit IRC16:01
prashanthr_kgriffs : Yes i am planning for the redis and openstack swift drivers16:02
alcabreraI'm taking down the meeting minutes16:02
vkmckgriffs, yeah, he is already working in that direction16:02
prashanthr_just started with the coding today: https://github.com/PrashanthRaghu/marconi-redis and https://github.com/PrashanthRaghu/marconi-swift16:03
kgriffsprashanthr_: let's focus on redis first16:03
vkmckgriffs, if there is a storage backend that fits Marconi's future plans better, then I'd love to work on that16:03
balajiiyerkgriffs: FYI, Juno release schedule is here https://wiki.openstack.org/wiki/Juno_Release_Schedule  If you need to realign roadmap based on this. (and before sending an email to ML)16:03
prashanthr_kgriffs: Sure will do that.16:03
kgriffsvkmc: let's wait for flaper87 to get back from dinner. AMQP could be a good one for you to tackle, or riak, idk - we will need to see what he thinks16:04
kgriffsbalajiiyer: thanks, I'll take a look!16:04
*** LaceyKite has joined #openstack-marconi16:04
vkmckgriffs, sure :)16:04
tjanczukHere is where Rabbit's amqp 1.0 plug-in stands: https://www.rabbitmq.com/plugins.html. It is "experimental" and not among the supported set. A euphemism for a toy.16:05
vkmckgriffs, I added in that etherpad the ones that most developers were interested in when I did the initial research... I remember flaper87 talking about elasticsearch and flwang about redis16:06
kgriffstjanczuk: sounds like a toy at the moment. the question is, do they plan to mature it over the next 6 months or is it not being worked on?16:07
kgriffsvkmc: we heard kafka a lot at the summit16:07
kgriffsso, I added it16:07
kgriffs(for sake of discussion)16:07
vkmckgriffs, awesome, thanks16:07
tjanczukAFAIK they are not working on it. Given the history behind the 0.9-1.0 schism I would be surprised if Rabbit decided to ever productize it.16:08
kgriffsprashanthr_: do you have a minute to discuss the redis driver?16:08
prashanthr_kgriffs: Yes sure.16:11
kgriffstjanczuk: booh. that's too bad.16:11
tjanczukre rabbit and amqp 1.0: this note from 2011 is not very encouraging: https://groups.google.com/forum/#!topic/rabbitmq-discuss/9Hj0FzgyLQk/discussion. It is also telling that nothing really has changed in this picture in the last 3 years. So I would not put my money on AMQP 1.0 being supported by Rabbit any time soon.16:12
kgriffsprashanthr_: so, I've been thinking a lot about how we can make an operator's life easier when deploying Marconi16:12
kgriffsa couple of concerns they have16:13
kgriffsfirst is what to do about really long queues.16:14
maliniflaper87: ping me when you want to chat abt refactoring functional tests16:14
*** megan_w is now known as megan_w|afk16:14
kgriffssecond is what to do about really hot queues, meaning how do we ensure QoS without having to use conservative rate limits16:15
kgriffsoz_akan_, mpanetta: BTW - I think you will find this conversation interesting16:15
prashanthr_kgriffs: That's interesting. I had seen some info on these lines when i was reading the blueprints.16:16
kgriffsprashanthr_: Going forward, I think we need to start thinking of storage more in terms of a "pool"16:18
mpanettaAny issue that can fill a shard or take down queues I am interested in finding a way to fix.16:18
kgriffsmpanetta: I thought you might be interested. :)16:18
kgriffsso here is the thing16:18
mpanettaYeppers :)16:18
kgriffsthe less pool-like the storage, the less efficient and predictable16:19
prashanthr_mpanetta: :)16:19
prashanthr_kgriffs: Does "pool" mean an abstraction for storage ?16:19
kgriffsprashanthr_: think of "pool" in the sense of swift. It uses a massive pool of disks, and splits objects across those disks16:19
*** whenry has quit IRC16:20
kgriffstwo things you need to create an efficient pool16:20
kgriffsa large number of nodes16:20
kgriffsand a somewhat even distribution of messages across those nodes16:21
kgriffssay you only have 3 servers16:21
kgriffsyou assign a group of queues to server A, a second group to server B, etc.16:22
*** ykaplan has quit IRC16:22
kgriffsthe trouble is, you can't know in advance whether a given queue will be long and/or hot16:22
kgriffsso, you have to be very conservative in your capacity planning16:22
vkmcmaybe you can check the load on a server before assigning a queue to it... but we need to figure out how to do it without losing performance16:23
kgriffsi.e., you have to leave a lot of spare capacity on each server. If you don't and one queue becomes really hot, it will starve requests going to other queues16:23
kgriffsAnother issue: you can't break out of a single server. A queue can't "grow" beyond a single server. Less of a problem for disk-based storage, but a real issue for RAM stores16:24
*** haomaiwang has quit IRC16:24
*** ametts has quit IRC16:25
alcabreraproject-quotas are going to be critical, imo16:25
alcabrerae.g.16:25
kgriffsvkmc: you could, but it would only help a little bit. You would base it on which server is heavily loaded at the moment. But you still don't know what queues will be hot in the future16:25
alcabrera"you can only use X MB of memory for your PID"16:25
alcabreraor X storage16:25
mpanettaalcabrera: ++16:25
alcabreraguarantees at least fair use of a public resource across all tenants16:26
prashanthr_kgriffs: I get the point related to spare capacity.16:26
prashanthr_vkmc: Also i guess once the queue is assigned to the redis server migration is tough.16:26
kgriffsI think quotas and rate limits are always going to be needed. However, we want to be as generous as we can.16:26
kgriffsthe way you become more generous is thinking about a large pool of storage16:26
kgriffsbecause QoS problems and running out of space are mitigated somewhat, the larger the pool you have16:26
vkmckgriffs, makes sense16:27
mpanettaAlso don't forget we need to have better security on the admin endpoint...  speaking of managing queues...16:27
kgriffsmpanetta: noted16:27
vkmcprashanthr_, I wasn't thinking about migrating the queues but controlling the load of the server before assigning it... but yeah, in that case, that approach seems too expensive16:28
prashanthr_vkmc: That's correct. The problem is the non-deterministic nature of the load prediction.16:29
vkmcprashanthr_, totally16:29
kgriffsprashanthr_: at the summit we discussed some ways to do migration with zero downtime. one of the people there needs to write up a migration bp based on that discussion (either myself or someone else - I've got a note to followup)16:29
kgriffsso, let's think about each driver in turn16:29
kgriffs1. MongoDB16:29
kgriffshow would we make mongo act more like a pool?16:30
*** rwsu has joined #openstack-marconi16:30
prashanthr_kgriffs: Can I treat "10GB" of RAM as large? 'cause currently that is all the capacity I have to develop with.16:30
*** Obulpathi has quit IRC16:31
kgriffsmpanetta, alcabrera: BTW, I think we still need the concept of a "cell" or whatever on the backend - it can be useful to segregate groups of queues by network switch or something, but inside that group you want a really big pool16:31
mpanettaHmm16:31
mpanettaI wish pools could be dynamic...16:32
mpanettaWhy can't they be?16:32
alcabrerakgriffs: could you elaborate on cell?16:32
alcabreraI'm favorable towards moving away from the notion of queue <-> shard, since that enables partitioning messages across different shards (better load balancing)16:33
alcabrerathough it increases the lookup cost16:33
*** abettadapur has joined #openstack-marconi16:33
alcabrerait'll no longer be a single query16:33
mpanettaYeah, what exactly do you mean by cell?16:35
*** openstackgerrit has quit IRC16:35
*** Obulpathi has joined #openstack-marconi16:36
*** megan_w|afk is now known as megan_w16:36
*** openstackgerrit has joined #openstack-marconi16:36
*** tjanczuk has quit IRC16:36
vkmcbrb16:37
kgriffsre cell16:38
kgriffshmm16:38
kgriffsI'll probably have to draw some pictures16:38
kgriffsfor now, let's cross out cell and just say "pool"16:38
kgriffsin the code today we call these "shards"16:38
kgriffsbut I want to call them pools and get us thinking more along those lines16:39
kgriffswithin a pool you can shard a single queue across multiple nodes16:39
kgriffsmake sense so far?16:39
kgriffsfor example16:39
kgriffswith mongodb you could create a bunch of replica sets, and shard a single queue across multiple sets16:40
alcabreragotcha16:40
alcabreraso instead of a shard being a discrete entity16:40
alcabrerait is a set of entities16:40
alcabreraso we can at least mitigate lookup cost to that set16:40
*** rossk has joined #openstack-marconi16:41
alcabreraand gain the benefit of dynamic capacity within a cell16:41
kgriffswith Redis we will need to do the sharding ourselves. Redis Cluster is not going to work for time-series data (which a message queue is, basically)16:41
kgriffsalcabrera: right16:41
kgriffsa pool is still a ton of servers, but is constrained by your datacenter design and gear16:42
kgriffsI think we still want to be able to migrate between pools16:42
*** LaceyKite has quit IRC16:43
kgriffsbecause someone may need to decommission a region in a DC in order to get in and do hardware upgrades16:43
alcabrerayup16:43
alcabreragood point16:43
kgriffsprashanthr_: see my note about redis ^^^16:44
kgriffswith mongo, we may or may not be able to use its native sharding16:44
*** tjanczuk has joined #openstack-marconi16:44
kgriffswe have to see if there is an opportunity for race conditions when creating the marker16:44
prashanthr_kgriffs: Sure ! where can I find it ?16:44
kgriffsprashanthr_: "it"?16:45
prashanthr_kgriffs: "the notes" ?16:45
kgriffsoh, sorry, I just meant what I said here in IRC about having to do our own sharding16:46
kgriffswith AMQP, we can use qpid dispatch to scale out16:47
kgriffswith mongodb we *may* be able to use its native sharding, but I suspect our monotonic counter is going to give us trouble. We may end up just having to shard by queue name, and employ super-fast SSDs and quotas16:48
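
A minimal sketch of the shard-by-queue-name fallback mentioned above, assuming each queue is pinned to one replica set chosen by hashing its name; the set names and hash choice are illustrative only, not Marconi code.

    import hashlib

    # Hypothetical pool of mongo replica sets configured by the operator.
    REPLICA_SETS = ['rs0', 'rs1', 'rs2']

    def replica_set_for(project, queue):
        # Pin every message for a given queue to one replica set by hashing
        # the queue's fully qualified name; ordering stays trivial, but a
        # single hot queue is then limited to that one set.
        digest = hashlib.md5(('%s/%s' % (project, queue)).encode()).hexdigest()
        return REPLICA_SETS[int(digest, 16) % len(REPLICA_SETS)]
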
*** tjanczuk has quit IRC16:48
prashanthr_kgriffs I read it :). Was just trying to read a bit about Redis cluster from my book I have.16:49
prashanthr_I learnt about Redis data structures over the last week. Just the clustering part remaining.16:49
kgriffsprashanthr_: with redis we will need to do it in the driver (application-level sharding). Also we will need to do replication ourselves. Let's chat about that for a bit.16:49
kgriffsprashanthr_: ok, as you will find, redis just scatters incoming writes based on the key16:50
prashanthr_kgriffs: Sure.16:50
kgriffsthe trouble is, that makes us lose our ordering16:50
kgriffsit also means we have to query several members of the cluster every time when reading the queue16:51
kgriffsprashanthr_: so, here is my idea16:51
alcabrerabrb16:51
*** whenry has joined #openstack-marconi16:52
kgriffswe implement our own clustering but make use of the fact that messages are ordered16:52
kgriffsallow me to explain16:53
prashanthr_kgriffs:  Sure.16:53
kgriffswhen a message arrives, we choose one of N servers to write the message to16:53
kgriffsso, there is some setting of N that is configurable by the operator - you set it at deployment time and never change it16:54
kgriffsthe way you choose the server is critical16:54
kgriffsrather than just hashing with mmh3 or md5 or something, you keep a running counter of messages that have gone to that queue so far16:54
*** LaceyKite has joined #openstack-marconi16:55
prashanthr_kgriffs: Will that counter of messages be in the memory cache / in redis ?16:55
kgriffshmmm. I think you would have a HA set of Redis instances (3) for each pool that store the counters16:57
kgriffsthe alternative is to use a timestamp16:57
kgriffslet's come back to this in a moment16:57
prashanthr_kgriffs: Sure. I have a basic idea of it now.16:58
kgriffslet's call this a "clock" whether it is a shared counter or a timestamp16:58
kgriffsso, from the clock, you determine which server (0...N) the message should go through16:59
kgriffsthe idea is to batch up messages based on the clock16:59
kgriffsso for X number of clock "ticks" messages go to server 0. For the next X ticks, they go to server 1, etc.16:59
kgriffswrapping around to server 0 when you reach the end17:00
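
A minimal sketch of the batching scheme described above, assuming a small HA Redis instance holds the per-queue counter that serves as the "clock"; the key names, batch size X, and node count N are illustrative only.

    import redis

    TICKS_PER_BATCH = 1000   # "X": how many ticks stay on one node per batch
    NUM_NODES = 4            # "N": data nodes in the pool, fixed at deploy time

    # Shared counter store (the "clock"); a timestamp could be used instead.
    clock = redis.StrictRedis(host='counter-node', port=6379)

    def pick_node(project, queue):
        # INCR is atomic, so concurrent producers still see monotonic ticks.
        tick = clock.incr('clock:%s/%s' % (project, queue))
        # Batch consecutive ticks onto the same node, wrapping back to node 0
        # at the end, so coarse FIFO ordering is preserved across the pool.
        return (tick // TICKS_PER_BATCH) % NUM_NODES
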
kgriffsnow, when a request comes in to read messages17:00
kgriffsyou simply read them off in the same order17:00
*** abettadapur has quit IRC17:00
kgriffsthe way you know where the client last read was they give you a marker17:00
prashanthr_kgriffs: That's really interesting.Now it's pretty clear to me.17:01
kgriffsthe marker contains the server ID as well as the message ID last read (note the message ID must be monotonic, similar to mongodb's)17:01
kgriffsthat way, we don't have to keep track of client state - they do it for themselves17:01
flaper87kgriffs: is that what we talked during the summit? or is it based on a new idea?17:01
kgriffsflaper87: same thing17:02
flaper87kgriffs: cool, I just wanted to avoid reading the backlog. keep going17:02
kgriffsprashanthr_: first step, though, is to create a driver for just a single redis instance. don't do sharding or replicas or anything for the first iteration of the driver17:02
kgriffsflaper87: the backlog gives my justification for wanting to do this17:02
kgriffs(FWIW)17:03
kgriffsprashanthr_: so, we have two things left to sort out for marconi's redis sharding implementation17:03
flaper87kgriffs: do you think we should do this before scale-up? I'm of the idea that we should do scale-up first and then this. I think it'll give us the basis for the scale-out work17:03
kgriffsflaper87: I think migration of a queue needs to happen right away if possible17:04
prashanthr_kgriffs: Sure. Will try to get the basic redis operations working.17:04
kgriffsI was thinking j-2 or j-3 to look at mongodb native sharding, and redis sharding17:04
kgriffsprashanthr_: so, two things for the redis sharding17:05
flaper87kgriffs: agreed, I think j-2 would be better. These kinds of features could use a milestone for tests17:05
kgriffsfirst, what to do when a client does not give us a marker17:05
flaper87I'd be a bit afraid of releasing it in j-317:05
kgriffssecond, whether to use a counter or system clock17:06
flaper87also, it'd need to land before the feature freeze17:06
kgriffsmmm17:06
kgriffsgood point17:06
* flaper87 STFU and let kgriffs talk w/ prashanthr_17:06
malinidid I hear milestone for tests?17:06
kgriffswe should just spend all of j-3 for testing and tuning17:07
* kgriffs is trying to get on malini's good side17:07
malinikgriffs: you are making great progress!17:07
malinikeep up the good work ;)17:07
kgriffsprashanthr_: my hunch is that we can't simply start the client off at node "0" if they don't supply a marker17:07
kgriffsthat may not be the tail of the queue17:07
prashanthr_kgriffs: counters provide pretty good ordering I guess. What is the default behaviour currently when the marker is not provided ?17:08
maliniflaper87: are you going to update this https://review.openstack.org/#/c/87937/ ?17:08
maliniI can add CLI tests in Tempest once this lands17:08
flaper87malini: I will, most likely tomorrow17:09
kgriffswell, we just give them the first n messages from the queue, according to the limit they specified17:09
kgriffsthe trouble is that when you shard a single queue, now you have to figure out which node has those first n messages. :p17:09
maliniflaper87: cool!! I will ask again tomorrow ;)17:09
kgriffsprashanthr_: let's think about it and discuss more in a few weeks once you have the basic driver done17:10
kgriffsprashanthr_: oh, I just realized I didn't explain reading very well17:10
kgriffsyou would read from the last position (according to the marker supplied by the client).17:11
kgriffssay the client asks for 20 messages given a marker for Server 1 (s-1)17:11
kgriffsbut after querying s-1 you only get back 5 messages17:12
kgriffsthe simple thing to do is just return 5, since the API does not guarantee giving back the requested number anyway17:12
kgriffsthe slightly more complex thing to do is to then query s-2 and splice those messages on to the end of the ones you got from s-117:13
kgriffssince this boundary condition should happen only rarely, there shouldn't be much extra load on the overall storage pool if you read from 2 nodes to fulfill a single request17:13
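
A rough sketch of the read path just described, where the client-supplied marker encodes both the node and the last message ID read; fetch_from_node is a hypothetical helper standing in for the per-node query.

    def list_messages(nodes, marker, limit=20):
        # marker is (node_index, last_message_id), handed back to the client
        # on the previous read; message IDs are monotonic within a node.
        node_idx, last_id = marker
        messages = fetch_from_node(nodes[node_idx], after=last_id, limit=limit)

        # Boundary case: this node's batch is exhausted. Either return the
        # short list (the API never guarantees `limit` messages) or, as a
        # refinement, splice in messages from the next node in wrap order.
        if len(messages) < limit:
            next_idx = (node_idx + 1) % len(nodes)
            extra = fetch_from_node(nodes[next_idx], after=None,
                                    limit=limit - len(messages))
            if extra:
                messages += extra
                node_idx = next_idx

        new_marker = (node_idx, messages[-1]['id']) if messages else marker
        return messages, new_marker
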
*** abettadapur has joined #openstack-marconi17:14
*** alcabrera is now known as alcabrera|afk17:14
kgriffsprashanthr_: do we already have a redis bp?17:14
prashanthr_kgriffs: Hmm, that's true. More bookkeeping work in the application layer has to be done. Also with redis the operations will be faster.17:14
prashanthr_kgriffs: What does bp mean ?17:15
kgriffsblueprint17:15
kgriffslet me check on that17:15
vkmc there is one yeah :)17:15
vkmccurrently assigned to alcabrera17:15
prashanthr_kgriffs yes : https://blueprints.launchpad.net/marconi/+spec/redis-storage-driver17:15
*** tjanczuk has joined #openstack-marconi17:16
kgriffscan you assign that to yourself?17:16
prashanthr_kgriffs: Sure.17:16
vkmche cannot, the approver has to do it17:16
kgriffsoic17:16
kgriffsI'll do it then17:17
kgriffsbtw, here is the juno calendar: https://wiki.openstack.org/wiki/Juno_Release_Schedule17:17
prashanthr_kgriffs: Yes i do not have the permissions.17:17
kgriffsI'd like to have the basic redis driver (no replica, no sharding) done by end of june17:17
kgriffsso, I will schedule redis bp for juno-1 and it may bleed over to juno-2. Then Let's create a second blueprint for redis-pool17:18
kgriffsprashanthr_: I forgot to mention, we also need to handle replication ourselves for HA. 2 nodes should be enough, although I suppose it could be configurable17:18
kgriffsprashanthr_: the reason we can't use redis master-slave is that it ACKs *before* the write has propagated to the slave17:19
kgriffsso, my idea was, just send two writes to two nodes in the storage pool simultaneously (using async)17:19
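
A minimal sketch of that dual-write idea, assuming two redis-py clients and concurrent.futures for the async fan-out; the host names and key layout are illustrative, and the real driver could use whatever async mechanism it already has.

    import concurrent.futures

    import redis

    # Two independent nodes from the pool; the replica count would really be
    # configurable rather than hard-coded.
    replicas = [redis.StrictRedis(host='node-a'),
                redis.StrictRedis(host='node-b')]
    workers = concurrent.futures.ThreadPoolExecutor(max_workers=len(replicas))

    def replicated_set(key, value, ttl):
        # Fan the write out to every replica concurrently and only report
        # success once all of them have ACKed, unlike native master/slave
        # replication, which ACKs before the slave has the data.
        futures = [workers.submit(r.setex, key, ttl, value) for r in replicas]
        return all(f.result() for f in futures)
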
kgriffswe can talk about that some more in a few weeks when the basic driver is done17:20
prashanthr_kgriffs: Sure i can make the number of nodes configurable.17:20
prashanthr_I will also make a list of to-do's from the discussion today17:20
prashanthr_so that we can discuss again after the basic driver is up.17:21
kgriffssounds good. I'll register another blueprint and you can help me flesh out the wiki page for it17:21
kgriffsprashanthr_: btw, I was thinking that you could use a lua "stored procedure" for inserting messages17:22
kgriffsthat way you can generate a monotonic message ID and insert the message in a single, atomic transaction17:22
kgriffsWe have to do a really bad hack in the mongodb driver since you can't do transactions (or at least, before 2.6 - may be able to do it now)17:23
prashanthr_kgriffs: that's correct. No need to separate out the counting and insertion. Else it's tough to accommodate the counter in the redis transaction.17:24
prashanthr_thanks for the idea :)17:24
kgriffsprashanthr_: the lua thing works because only one can run at a time, so the counter you get won't be jumped by a different request in parallel17:24
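
A minimal sketch of that Lua approach using redis-py's register_script; the key layout and the sorted-set representation of a queue are assumptions for illustration only.

    import json

    import redis

    client = redis.StrictRedis(host='localhost', port=6379)

    # Counter bump + insert run inside one Lua script, which Redis executes
    # atomically, so the monotonic ID can't be "jumped" by a parallel request.
    ENQUEUE = client.register_script("""
        local id = redis.call('INCR', KEYS[1])      -- per-queue counter
        redis.call('ZADD', KEYS[2], id, ARGV[1])    -- messages scored by ID
        return id
    """)

    def post_message(project, queue, body, ttl):
        doc = json.dumps({'body': body, 'ttl': ttl})
        return ENQUEUE(keys=['counter:%s/%s' % (project, queue),
                             'msgs:%s/%s' % (project, queue)],
                       args=[doc])
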
kgriffsrock on17:24
vkmckgriffs, this work plan also applies to the storage I'll be working on, right?17:25
kgriffsprashanthr_: I'm showing two accounts in launchpad for "Prashanth Raghu"17:25
vkmckgriffs, now that flaper87 is around we can make the selection17:25
prashanthr_kgriffs: https://launchpad.net/~p-is-prashanth this is the account I use.17:26
kgriffsprashanthr_: good, I guessed right. :)17:27
kgriffsvkmc: yes, I think so. but, depends on whether we need to do extra work for sharding or the backend can do it natively for us17:27
kgriffsin any case you would want to deliver a basic driver by end of june, and then spend remainder of j-2 polishing/completing17:28
prashanthr_kgriffs :) Awesome.17:28
prashanthr_To just repeat our discussion:17:28
prashanthr_1. Implement a basic driver.17:28
prashanthr_2. Make a list of to-do's to accommodate Redis pools.17:28
prashanthr_3. Make the pooling work :)17:28
prashanthr_Is this right ?17:28
vkmckgriffs, great, I'll submit the corresponding blueprints then17:28
kgriffsflaper87: around?17:28
AAzzaby the way, can someone assign this blueprint to me? https://blueprints.launchpad.net/marconi/+spec/py3k-support17:30
kgriffsflaper87: when you have a minute, can you help vkmc come up with a shortlist (2-3) options for her driver, then we can make the final decision after that.17:30
prashanthr_kgriffs , vkmc , flaper87 , alcabrera , malini : Me retiring for the day . Have a great day ahead :)17:30
kgriffsAAzza: sure thing17:30
kgriffsprashanthr_: take care! Thanks for the chat!17:30
maliniprashanthr_: good night!17:31
vkmcprashanthr_, ttyt! enjoy the rest of your evening :)17:31
AAzzame is https://launchpad.net/~grafinya-uvarova17:31
prashanthr_thank you all ! :)17:31
*** prashanthr_ has quit IRC17:31
kgriffsAAzza: you're all set!17:32
AAzzakgriffs: many thanks)17:32
kgriffssriram: ping17:33
*** igor has quit IRC17:34
sriramkgriffs:pong17:35
kgriffshey good buddy17:35
*** igor_ has joined #openstack-marconi17:35
kgriffswe need to chart a course for marconi-bench17:35
sriramyep!17:36
kgriffsfor j-1 I'd like to basically take what we've done so far, create a "conductor" that will let us start each client remotely and collect the results, then dump them out to a gnuplot file17:37
kgriffsor, we could do JSON and use flot17:37
kgriffsI think our clients are mostly there17:38
kgriffsmine needs to post to graphite still so we can watch for anomalies, and we need to figure out how to measure and report "attempted req/sec"17:38
*** balajiiyer has quit IRC17:38
*** balajiiyer has joined #openstack-marconi17:38
kgriffswhere, "attempted req/sec" roughly translates to "how many simultaneous requests are we trying here?"17:39
kgriffshmm17:39
*** jmckind has quit IRC17:39
kgriffsI was going to try to add the "attempted req/sec" and graphite reporting to the producer. Then I will update the gist, and how about you pull that down and submit a patch with both to gerrit?17:40
sriramSure, I can do that.17:40
kgriffsok. I will make up a blueprint just for this first iteration17:40
sriramkgriffs: \m/17:42
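
A tiny sketch of what "post to graphite" could look like from the bench producer, using carbon's plaintext protocol; the host and metric names are made up for illustration.

    import socket
    import time

    CARBON_ADDR = ('graphite-box', 2003)   # hypothetical host; 2003 is the
                                           # default carbon plaintext port

    def report(metric, value):
        # Graphite's plaintext protocol: "<metric> <value> <timestamp>\n"
        sock = socket.create_connection(CARBON_ADDR)
        sock.sendall(('%s %f %d\n' % (metric, value, time.time())).encode())
        sock.close()

    # e.g. once per second inside the producer loop:
    #   report('marconi.bench.producer.attempted_rps', attempted_this_second)
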
*** reed has joined #openstack-marconi17:43
kgriffssriram: I don't think we need to enable keystone for our first test17:45
sriramyeah, I agree17:46
kgriffsare we now on HAProxy?17:46
sriramkgriffs: I had one setup, which didnt do that much.17:47
kgriffssriram: oh, is it not configured to balance to the api servers?17:48
sriramkgriffs: it can be setup, just a matter of updating a few conf files.17:48
sriramshouldnt take too long.17:48
sriramit was pointing to the old environment, but the old environment needs to be fixed.17:49
kgriffsok17:50
kgriffshow many web heads (api servers) are there?17:50
sriram417:51
kgriffs1. HAProxy load balancer17:52
kgriffs2. 4x web heads running uwsgi, with Keystone auth disabled (we will bench it enabled later)17:52
kgriffs3. 1x3 mongo replica set17:52
kgriffs4. 1x graphite box17:52
kgriffs3. Load generators (1x producer, 1x consumer, one conductor)17:52
kgriffsdoes that look correct?17:52
kgriffsoops. last one was a typo17:53
kgriffs5. 3x Load generator boxes (1x producer, 1x consumer, 1x conductor)17:53
sriramyes17:53
sriramThat looks perfect17:53
sriramthe graphite box is also setup.17:53
sriramI was able to send messages and create a graph the other day, but it was from the same machine.17:54
sriramhavent tried it from another box.17:54
kgriffssriram: what is your launchpad ID?17:56
sriramthesriram17:56
sriramI think17:56
*** alcabrera|afk is now known as alcabrera17:56
sriramyes thats correct.17:56
kgriffssriram: https://blueprints.launchpad.net/marconi/+spec/basic-benchmarking17:57
sriramwoot!17:58
kgriffswe can add more blueprints later... I don't want to waste time trying to predict the future *too* far in advance17:58
kgriffsbecause...17:58
kgriffspeople may become suspicious17:58
* kgriffs has to keep a low profile as a transdimensional being17:58
*** martyntaylor has joined #openstack-marconi18:00
*** abettadapur has quit IRC18:05
sriramheh18:05
*** jamie_h has quit IRC18:07
* flaper87 back18:20
flaper87vkmc: kgriffs so18:20
flaper87kgriffs: re the third-party CI. Are we going to do it?18:20
vkmchey :)18:22
malinijchai: ping18:22
jchaimalini: hello18:22
*** abettadapur has joined #openstack-marconi18:22
malinijchai: I am trying to figure out why https://review.openstack.org/#/c/87762/ is failing jenkins18:23
maliniBut I have no clue18:23
maliniCan you try pinging in #openstack-qa18:23
maliniSomebody there might have an idea18:23
flaper87malini: re functional tests, whenever you want18:24
maliniflaper87: It comes with a lot of downsides, which makes me wonder if it's worth it18:24
maliniBut right now we are duplicating effort18:25
flaper87kgriffs: malini https://review.openstack.org/#/c/94374/18:25
flaper87malini: mind sharing what the work is about?18:25
flaper87What does tempest expect us to do?18:25
malininobody has asked us to update our functional tests to use tempest libs18:26
maliniWe can continue as is if we choose to18:26
maliniBut we'll end up writing two sets of tests, for each version18:26
maliniwhich might not be sustainable18:26
maliniThis is from neutron https://review.openstack.org/#/c/72585/18:28
flaper87malini: ok but I still don't know what the change is about :)18:28
* flaper87 clicks18:28
peoplemerge@kgriffs: vkmc: flaper87: did I hear right there's interest in a riak backend?  Or did sb mistype and mean redis?18:28
maliniflaper87: can I ping you later to discuss this more? am in a meeting now18:28
flaper87peoplemerge: not exactly interest but it's still a valid option18:29
vkmcpeoplemerge, we already have someone working on redis... and we are also considering riak18:29
flaper87malini: oh sure, please, lets talk later18:29
flaper87vkmc: is your intership *just* about working on a storage driver?18:30
flaper87internship*18:30
peoplemergevkmc: flaper87: good to know, I did a 3 month project this year using riak18:30
*** martyntaylor has left #openstack-marconi18:30
vkmcflaper87, that would be just a part of it... I'd also like to contribute to other tasks18:30
flaper87peoplemerge: you could write a riak driver as an external plugin18:30
flaper87that would be cool18:30
*** martyntaylor has joined #openstack-marconi18:31
vkmcflaper87, but I guess that the central topic should be implementing the driver18:31
peoplemergeflaper87: will take a crack at it, let me familiarize myself with the code18:31
flaper87vkmc: ok, I'm asking because we've discussed how many drivers we want to keep in the code base and the number comes down to 2 more for now. That is, sqla+mongo+redis+amqp18:31
flaper87peoplemerge: please ask. I think there's a cookiecutter template for storage drivers18:31
* flaper87 tries to find it18:32
flaper87vkmc: ok18:32
flaper87vkmc: I guess it has to be merged into the codebase in order to be considered valid for the internship, right?18:32
vkmcflaper87, I see... well, prashanthr will be working on Redis and he also mentioned Swift18:32
peoplemergeflaper87: that should help18:33
vkmcflaper87, I guess so, not so sure how that works in GSoC18:33
vkmcflaper87, Maybe I could take AMQP?18:34
flaper87vkmc: sure, we'll need to work close on that one.18:36
flaper87vkmc: 2 secs18:36
*** AAzza has quit IRC18:37
vkmcflaper87, sure18:37
*** peoplemerge has quit IRC18:40
*** peoplemerge has joined #openstack-marconi18:40
*** oz_akan_ has quit IRC18:45
*** oz_akan_ has joined #openstack-marconi18:46
*** megan_w is now known as megan_w|afk18:49
*** tjanczuk has quit IRC18:51
*** AAzza has joined #openstack-marconi18:53
flaper87vkmc: back18:56
flaper87so, yeah, that sounds good to me18:56
vkmcflaper87, awesome18:56
flaper87we've been having lots of discussions as to whether support 0.9 or 1.018:56
flaper87the thing is that I think 1.0 is the way to go18:56
flaper87but probably having 0.9 makes sense too. So, you probably will work on one of those18:57
vkmcflaper87, so... we forget all about elasticsearch and rethinkdb?18:57
flaper87:D18:57
flaper87vkmc: thing is that, as of now, we don't want those in the codebase18:57
flaper87:/18:57
flaper87Unless people think otherwise18:57
flaper87By "we don't want them" I mean that we don't believe the Marconi community, as-is, will be able to maintain them all in the long run18:58
vkmcflaper87, sounds reasonable18:58
flaper87so, it's better to keep the codebase small and clean by letting other drivers live in external repositories18:58
*** peoplemerge has quit IRC18:58
*** peoplemerge has joined #openstack-marconi18:59
vkmcflaper87, well, in fact, I preferred to ask for feedback before proposing anything because I want to contribute something that fits the project plans18:59
vkmcflaper87, I agree that we should better focus on less things and make them count19:00
peoplemergeflaper87: hope I didn't miss your msg re: cookie-cutter persistence19:01
flaper87peoplemerge: you didn't, I did miss the thought in my head, though. :P19:02
flaper87lemme get that for you19:02
vkmcflaper87, so... back to amqp, do you think we should go for 0.9 or 1.0?19:03
*** kgriffs is now known as kgriffs|afk19:04
* peoplemerge grins19:06
flaper871.0 all the way :D19:06
flaper87peoplemerge: https://github.com/FlaPer87/cookiecutter-marconi-transport19:06
peoplemergeflaper87: thx19:06
flaper87damn, it's for transport19:06
flaper87:(19:06
peoplemergeah right19:06
flaper87I guess you could copy mongodb's :D19:06
flaper87or write a cookiecutter template for storage drivers :D19:06
* flaper87 is taking advantage of peoplemerge19:06
peoplemergeflaper87: probably easier than eating cookie dough19:06
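
A rough sketch of what an out-of-tree driver package could look like, assuming Marconi discovers storage drivers through setuptools/stevedore entry points; the group name, package name, and class path below are assumptions, so check Marconi's own setup.cfg for the real namespace.

    # setup.py for a hypothetical external driver package
    from setuptools import setup, find_packages

    setup(
        name='marconi-riak',             # hypothetical package name
        version='0.0.1',
        packages=find_packages(),
        entry_points={
            # The group name is an assumption; Marconi's setup.cfg defines
            # the namespace actually scanned for storage drivers.
            'marconi.queues.storage': [
                'riak = marconi_riak.driver:DataDriver',
            ],
        },
    )
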
*** abettadapur has quit IRC19:07
*** abettadapur has joined #openstack-marconi19:07
flaper87vkmc: so, re amqp1.019:09
flaper87mmh, where should I start?...19:09
flaper87mmh19:10
flaper87vkmc: btw, when does your internship start?19:10
vkmcflaper87, yesterday19:10
vkmclol19:10
flaper87holy crap19:10
vkmcthat's why I'm being such a pain in the neck :D19:11
flaper87so, my suggestion for you is to start reading about AMQP 0.9 and AMQP 1.019:11
flaper87Both protocols are completely different19:12
flaper87However, AMQP 1.0 is now a standard whereas 0.9 is not.19:12
alcabreraI'll start reading, too, so I can actually help with this. :D19:12
vkmccool, will do that19:12
vkmcI'm a bit lost though... because I was relating 'storage backend' with something more like mongo19:13
vkmcand AMQP is a queuing protocol19:13
*** tjanczuk has joined #openstack-marconi19:13
alcabrerathe idea is to use marconi as a medium to communicate messages to an amqp system19:14
flaper87vkmc: it's a protocol but it'll talk to a queuing broker19:14
mpanettaSo... we want to store our queues in a queue :P19:14
flaper87mpanetta: shhhhhhhhhhhhhh19:14
alcabrerampanetta: exactly -- queue-ception19:14
vkmcyes, that's the feeling mpanetta hahaha19:14
mpanettahaha19:14
flaper87one of marconi's goals is to support existing technologies19:14
mpanettaSounds good to me :)19:14
flaper87hence the will to support existing brokers19:14
flaper87mpanetta: no voodoo for ya'19:15
alcabreraflaper87: I've found two spec links19:15
alcabrera1.0: http://docs.oasis-open.org/amqp/core/v1.0/amqp-core-complete-v1.0.pdf19:15
alcabrera0.9.1: http://www.amqp.org/specification/0-9-1/amqp-org-download19:15
alcabreraare these good places to get into the details? or is that diving too deep?19:15
flaper87that's probably too detailed19:16
flaper870.9: http://en.wikipedia.org/wiki/AMQP19:16
flaper87erm, sorry19:17
flaper87that's for both19:17
alcabreraah, alright19:17
alcabrerathanks!19:17
vkmcthanks flaper87, alcabrera for the docs19:17
alcabreraI'll give that wiki page a read soon19:17
vkmcjust one nit... are we getting closer to what rabbitmq does by adding support for amqp?19:18
flaper870.9: http://www.rabbitmq.com/tutorials/amqp-concepts.html19:18
vkmcI thought that was something we wanted to keep away from19:18
flaper87vkmc: nope, we're supporting rabbitmq19:18
alcabreravkmc: no -- we'd in fact be running something like rabbit in the background19:18
alcabreraand marconi would delegate to it, but only in marconi's terms -- queue, claims, messages and their operations19:18
vkmccool19:19
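
A toy illustration of that delegation idea against an AMQP 0.9.1 broker using pika; it is not the proposed driver (the discussion below leans toward 1.0), and the naming scheme is just an assumption to make the mapping concrete.

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = conn.channel()

    def create_queue(project, queue):
        # Marconi queue -> broker queue; prefixing with the project keeps
        # tenants apart (naming scheme is hypothetical).
        channel.queue_declare(queue='%s.%s' % (project, queue), durable=True)

    def post_message(project, queue, body):
        # Marconi "post message" -> publish to the default exchange, routed
        # by the queue name declared above.
        channel.basic_publish(exchange='',
                              routing_key='%s.%s' % (project, queue),
                              body=body)
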
flaper87that rabbitmq.com link has enough information about 0.919:19
flaper87I'd recommend reading that one19:19
alcabrerathe rabbitmq docs are pretty thorough, it seems. :)19:19
flaper87these are the brokers supporting 1.0 http://en.wikipedia.org/wiki/AMQP#AMQP_1.0_broker_Implementations19:20
alcabrerait'd be fun to play with this: https://hackage.haskell.org/package/amqp19:20
alcabreraeven though it only supports rabbit/0.9.119:20
vkmcyes, I also found a comparison between 0.9 and 1.0 here https://www.rabbitmq.com/specification.html19:21
flaper87brb19:22
vkmcflaper87, kgriffs assign me to https://blueprints.launchpad.net/marconi/+spec/support-amq when you have a moment19:23
vkmcor alcabrera ^^ :)19:23
alcabreravkmc: assigned19:23
alcabrera:)19:23
vkmcyay thanks19:23
alcabreranp~19:23
alcabreraI'm going to head home for the day.19:24
*** jdprax has joined #openstack-marconi19:24
vkmcalcabrera, ttfn, enjoy the rest of your day! o/19:24
alcabrerathanks, and you too, vkmc!19:25
*** alcabrera is now known as alcabrera|afk19:25
flaper87vkmc: please, lets discuss the details of the amqp driver. it'll probably require changing some bits of the API too.19:25
vkmcflaper87, sure19:26
flaper87I'd recommend reading those docs so we can discuss it further in the next couple of days19:26
*** tjanczuk has left #openstack-marconi19:26
flaper87I need to write down the implementations details19:26
*** tjanczuk_ has joined #openstack-marconi19:26
flaper87I have put thoughts on that19:26
flaper87but I gtg now, for real this time :P19:27
flaper87bbib19:27
*** tjanczuk_ has left #openstack-marconi19:27
*** jdprax has quit IRC19:27
*** whenry has quit IRC19:28
vkmcflaper87, yeah that seems better to me too :)19:29
vkmcflaper87, I'll read the docs in the meantime19:29
vkmcflaper87, thanks for the pointers :) ttfn! o/19:30
*** tjanczuk_ has joined #openstack-marconi19:30
*** kgriffs|afk is now known as kgriffs19:35
*** whenry has joined #openstack-marconi19:43
*** tjanczuk_ has left #openstack-marconi19:43
*** tjanczuk has joined #openstack-marconi19:43
*** tjanczuk has left #openstack-marconi19:44
cpallaresvkmc: would you happen to know what command to use when tox gives you gibberish?19:49
cpallaresI should have asked flaper87 before he left19:49
kgriffswhat do you mean by "gibberish"?19:50
* flaper87 back19:54
flaper87cpallares: you probably want $ nosetest tests.unit.queues.....19:55
cpallareskgriffs: like it has a bunch of nonsense text like --> \xde\xcd\xe1\xa9\xb3)\x01@19:56
vkmccpallares, can you paste the output?19:56
vkmcoh :)19:56
cpallaresthanks flaper87!19:56
* cpallares writes it down19:56
vkmcI'm not familiar with tests yet... this is useful https://github.com/openstack/marconi/blob/master/tests/functional/README.rst19:57
*** oz_akan_ has quit IRC19:57
malinithat readme needs some love :(19:58
vkmcdidn't we run the tests with testr though? I added that to the wiki, hope it's not wrong19:58
flaper87vkmc: we use testr19:58
vkmcflaper87, but it is for unit tests right?19:59
flaper87vkmc: everything19:59
flaper87vkmc: functional and unit tests19:59
flaper87we use tsung for benchmarks, though.20:00
*** whenry has quit IRC20:00
*** tjanczuk_ has joined #openstack-marconi20:01
*** tjanczuk_ has left #openstack-marconi20:01
*** tjanczuk_ has joined #openstack-marconi20:02
*** oz_akan_ has joined #openstack-marconi20:02
vkmcflaper87, good to know :)20:03
*** balajiiyer1 has joined #openstack-marconi20:06
*** kgriffs is now known as kgriffs|afk20:06
*** balajiiyer has quit IRC20:06
*** balajiiyer1 has quit IRC20:07
*** balajiiyer has joined #openstack-marconi20:07
balajiiyerabettadapur: can you take care of this bp ? https://blueprints.launchpad.net/marconi/+spec/api-v1.1-header-changes20:09
abettadapurbalajiiyer: looking at it now20:09
*** whenry has joined #openstack-marconi20:12
*** whenry has quit IRC20:15
*** AAzza has left #openstack-marconi20:20
*** whenry has joined #openstack-marconi20:26
*** abettadapur has quit IRC20:32
*** martyntaylor has left #openstack-marconi20:32
vkmcmalini, I'm facing another problem with Postman, not sure what is missing this time http://paste.openstack.org/show/81019/20:39
vkmcoh,.. it's not on the paste, Postman is getting a 20420:40
*** sriram has quit IRC20:42
vkmcmaybe somebody else can give me a hint on that :) I'm trying to use Postman to interact with the API20:43
*** shakamunyi has joined #openstack-marconi20:46
*** sriram_p has quit IRC20:47
*** LaceyKite has quit IRC20:48
*** tedross has quit IRC20:54
*** jraim has quit IRC21:04
*** jraim has joined #openstack-marconi21:05
*** oz_akan_ has quit IRC21:07
*** whenry has quit IRC21:12
*** mwagner_lap has quit IRC21:19
*** Obulpathi has quit IRC21:27
balajiiyervkmc can you try removing additional headers in postman, perhaps? X-Project-ID21:40
vkmcI'll try that, thanks balajiiyer21:46
*** balajiiyer has left #openstack-marconi21:48
*** balajiiyer has joined #openstack-marconi21:49
*** jergerber has joined #openstack-marconi21:49
flaper87vkmc: btw, if you've any questions please, feel free to ask at any time, drop emails etc.21:54
*** balajiiyer has left #openstack-marconi21:55
vkmcflaper87, will do, thanks for that! :)21:55
*** mpanetta has quit IRC22:02
*** Obulpathi has joined #openstack-marconi22:13
*** mwagner_lap has joined #openstack-marconi22:26
*** Obulpathi has quit IRC22:30
*** jchai has quit IRC22:51
*** flaper87 is now known as flaper87|afk23:15
*** malini is now known as malini_afk23:43

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!