Monday, 2011-05-23

00:45 *** cloudgroups has joined #openstack-dev
00:46 *** cloudgroups has left #openstack-dev
00:59 *** dovetaildan has quit IRC
01:05 *** galthaus has joined #openstack-dev
01:35 *** galthaus has quit IRC
02:00 *** cloudgroups has joined #openstack-dev
02:33 *** lorin1 has joined #openstack-dev
03:03 *** alekibango has quit IRC
03:08 *** cloudgroups has left #openstack-dev
03:32 *** Binbin has joined #openstack-dev
03:32 *** clayg has quit IRC
03:33 *** clayg_ has joined #openstack-dev
03:38 *** clayg_ is now known as clayg
03:51 *** Binbin is now known as Binbin_afk
04:37 *** lorin1 has left #openstack-dev
04:49 *** Binbin_afk is now known as Binbin
04:56 *** foxtrotdelta has quit IRC
05:10 *** Binbin has quit IRC
05:35 *** Zangetsue has joined #openstack-dev
05:38 *** Arminder has quit IRC
06:08 *** Arminder has joined #openstack-dev
06:08 *** Arminder has quit IRC
06:11 *** Arminder has joined #openstack-dev
09:49 *** zigo-_- has joined #openstack-dev
10:32 *** elasticdog has quit IRC
10:44 *** Zangetsue has quit IRC
10:45 *** Zangetsue has joined #openstack-dev
11:15 *** Joelio has left #openstack-dev
11:21 *** elasticdog has joined #openstack-dev
11:58 *** jaypipes has joined #openstack-dev
12:12 *** thatsdone has joined #openstack-dev
12:23 *** thatsdone has quit IRC
12:23 *** dovetaildan has joined #openstack-dev
12:36 *** exlt has joined #openstack-dev
12:38 *** cloudnod has joined #openstack-dev
12:40 <cloudnod> hi
12:52 <jaypipes> cloudnod: hi
12:57 <ttx> diablo-1 feature needs some review love: https://code.launchpad.net/~morita-kazutaka/nova/snapshot-volume/+merge/61071
12:58 <jaypipes> sandywalsh: around?
12:59 <cloudnod> i'm going to start hacking on some python so i can contribute
13:00 <notmyname> cloudnod: great :-)
13:00 <cloudnod> hear me roar. heh
13:00 <cloudnod> if it was written in perl or php I could crush it.... but probably good that it isn't. :)
13:02 <cloudnod> i have one more beta invite to irccloud if someone wants it... awesome browser-based irc client that stays connected even when you close browser... like using screen but slicker
13:04 <sandywalsh> jaypipes, hey!
13:04 <jaypipes> sandywalsh: hi! :)
13:04 <jaypipes> sandywalsh: flew over your little town when I headed over to Greece. Sorry, couldn't spot your house ;)
13:05 <jaypipes> sandywalsh: wanted to ask you a couple questions about the zone scheduler stuff..
13:05 <sandywalsh> I was waving ... didn't you see me?!
13:05 <jaypipes> sandywalsh: hehe
13:05 <jaypipes> sandywalsh: is the distributed scheduler only working (or meant to work) with the OS API?
13:06 *** yamahata_lt has joined #openstack-dev
13:06 <sandywalsh> jaypipes, yup. at this point
13:06 <sandywalsh> jaypipes, once we get it working end-to-end we can abstract out the OS API calls into something that works with both
13:06 <jaypipes> sandywalsh: does the dist scheduler even a future with the EC2 API?
13:06 <jaypipes> even have...
13:07 <sandywalsh> jaypipes, the issue is, it's easier for us to extend the OS API ... not so much with the EC2 API
13:07 <jaypipes> sandywalsh: I'm wondering if the code belongs under /nova/api/openstack...
13:07 <sandywalsh> jaypipes, well, it doesn't affect the other operation at this point
13:08 <sandywalsh> jaypipes, and it *can* support EC2, it just doesn't currently
13:08 <jaypipes> sandywalsh: Oh, I'm not saying it should. I'm just curious as to why, since all the other stuff that only affects the OS API is under that directory.
13:08 <dabo> jaypipes: right now the main dist-scheduler branch passes all the ec2 tests
13:08 <dabo> that's not to say that it would run flawlessly, but that's at least a good sign
13:09 <jaypipes> dabo, sandywalsh: sorry, I'm not sure I understand how it *can* support the EC2 API since we don't have any control over the EC2 API. Do you mean as a sort of "super scheduler" that could route requests for new instances that come in on *either* API?
13:09 <dabo> the goal is that it *will* support ec2
13:09 <dabo> don't see why it would have to be "super", but yes
13:10 <sandywalsh> jaypipes, embrace and extend would be the only choice. So long as we only add functionality and not break existing EC2 stuff
13:10 <dabo> once the call is handled by the api, the rest is identical
13:10 <jaypipes> dabo: "super" as in "above the OS API and the EC2 API"...
13:11 <dabo> jaypipes: then no
13:11 <dabo> the scheduler works independently of the origin of the request. Why would it have to be different?
13:11 <jaypipes> sandywalsh, dabo: OK, I guess I'll have to revisit the EC2 API docs for RunInstances... I didn't remember a whole lot of distributed scheduler/request functionality there...
13:12 <dabo> jaypipes: the only difference I saw was the 'availability zone' concept, which could easily be folded into the general host filtering stuff
13:12 <sandywalsh> jaypipes, let's do a quick inventory: ZoneManager, HostFilter and CostScheduler could all work with EC2.
13:13 <jaypipes> dabo: the main difference (AFAIK) is that in the EC2 API there is no way to query an availability zone for a set of hosts that may be available.
13:13 <sandywalsh> jaypipes, the routing stuff in compute.api() could need to support EC2 (extending their API, but not busting it)
13:13 <sandywalsh> jaypipes, nor can OS API, we added a new /zones/* REST resoruce
13:13 <sandywalsh> *resource
13:14 <sandywalsh> jaypipes, we could do the same with EC2
13:14 <sandywalsh> again, "embrace and extend"
13:14 <jaypipes> sandywalsh: right, that's kind of what I meant :) We *have* the ability to do that with the OS API :)
13:14 <dabo> jaypipes: this really sucks over IRC
13:14 <jaypipes> dabo: hehe :)
13:15 <sandywalsh> jaypipes, I don't think we have any more ability than EC2 ... wrt 1.0 and 1.1
13:15 <dabo> I only consider supporting ec2 api to mean that an external request on that api is correctly handled. It doesn't mean that the inter-zone communication is handled via the ec2 api
13:15 <sandywalsh> which are meant to be backwards compatible with RS API
13:15 <sandywalsh> 2.0 however
13:16 <sandywalsh> dabo, to a degree, if a customer wants zones, but doesn't want to stand up an OS API server, they're hosed
13:16 <jaypipes> sandywalsh: right
13:17 <dabo> sandywalsh: that gets back to my distinction between the use of novaclient as an external client, and novaclient as an internal glue between zones
13:17 <dabo> right now we're cramming both sets of functionality into a single tool
13:17 <jaypipes> sandywalsh: but *adding* a resource like /zones doesn't break 1.0 or 1.1, AFAICT.
13:18 <sandywalsh> I don't have a problem with that since it's controlled by --enable_admin_api
13:18 <sandywalsh> jaypipes, correct
13:18 <jaypipes> sandywalsh: now, changing things like returning a reservation ID for POST /servers would break it, of course.
13:18 <sandywalsh> jaypipes, that's what I mean by extend vs. change
13:18 <jaypipes> sandywalsh: gotcha.
13:19 <sandywalsh> likewise with my ML post this morning ... adding num_instances is no risk
13:19 <sandywalsh> but changing the return type from Instance ID to Reservation ID is a no-no
13:19 <sandywalsh> and then we need new commands to list all instances given a Reservation ID
13:20 <sandywalsh> (which should be a new command)
13:20 <sandywalsh> My concern is having a parallel universe where all these extra commands are available
13:22 <jaypipes> sandywalsh: yup, agreed.
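The "extend, don't change" rule sandywalsh and jaypipes settle on above can be sketched in a few lines. This is a toy illustration with invented names (DB, create_servers, list_by_reservation), not Nova's actual code: the existing create call keeps its return type, while num_instances and a reservation lookup are purely additive.

```python
# Toy sketch of an additive API extension (all names invented).
import uuid

DB = []  # stand-in for the instance store: list of instance dicts

def create_servers(num_instances=1):
    """Existing call, extended: still returns a single instance ref
    (backwards compatible), but tags every created instance with a
    reservation id that a *new* command can query."""
    reservation_id = uuid.uuid4().hex
    instances = [{'id': uuid.uuid4().hex, 'reservation_id': reservation_id}
                 for _ in range(num_instances)]
    DB.extend(instances)
    return instances[0]  # unchanged return type: one instance ref

def list_by_reservation(reservation_id):
    """New, purely additive command: list all instances for a reservation."""
    return [i for i in DB if i['reservation_id'] == reservation_id]
```

Old clients never see the new field semantics change anything; new clients opt in by calling the new command.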
13:24 <sandywalsh> jaypipes, but I see your point about "what can live under api.openstack" ... lemme stew on that some more. My guts says they're common enough ... for now
13:24 <sandywalsh> s/guts/gut
13:24 <jaypipes> sandywalsh: that was me just being curious, nothing more.
13:25 <sandywalsh> still, good observation
13:25 <jaypipes> sandywalsh: was just wondering about it, but I can understand why you want it outside the openstack folder.
13:25 <jaypipes> sandywalsh: added a comment on the merge prop about some completely different things...
13:25 <sandywalsh> cool ... thanks!
13:25 <jaypipes> feels good to finally be back to work.
13:25 <jaypipes> heh, that sounds bad... ;)
13:25 <sandywalsh> where in greece were you?
13:25 <jaypipes> sandywalsh: mykonos.
13:26 <sandywalsh> awesome ... beautiful area
13:26 <sandywalsh> get over to turkey?
13:26 <sandywalsh> I spent nearly a month on Ios when I graduated ... too much fun.
13:26 <jaypipes> sandywalsh: my wife did for 6 days before meeting the rest of us in Greece. She had a blast.
13:27 <jaypipes> sandywalsh: though the hot air balloon she and her friend crashed pretty hard into the side of a small cliff!
13:27 <sandywalsh> wow ... no way!
13:27 <jaypipes> sandywalsh: they were rattled but ok :)
13:27 <jaypipes> sandywalsh: had a great time overall. great to get away.
13:27 <sandywalsh> almost as dangerous as the scooters over there :)
13:28 <sandywalsh> for sure ... I have to get my wife over there sometime.
13:28 <sandywalsh> well ... welcome back! The thing in front of you is called a "keyboard"
13:29 <jaypipes> sandywalsh: lol :) thx
13:29 <jaypipes> sandywalsh: and I hear you about the scooters. people are crazy on the road in mykonos.
13:30 <sandywalsh> + a cliff if you screw up
13:30 <jaypipes> sandywalsh: we had two cars (there were 9 of us staying at the villa) but I really liked bombing around the island on a 4-wheeler ATV most days.
13:31 <sandywalsh> awesome
13:32 *** foxtrotgulf has joined #openstack-dev
14:07 *** ameade has joined #openstack-dev
14:09 *** dprince has joined #openstack-dev
14:28 *** _0x44_ is now known as _0x44
14:37 *** jkoelker has joined #openstack-dev
14:38 <mtaylor> jaypipes: ola!
14:39 <jaypipes> mtaylor: hi! :)
14:48 <markwash> so--distributed sched guys. . I still don't get the necessity of N instances per create request
14:48 <markwash> putting aside for a moment jaypipes' point about all-or-nothing requests
14:48 *** rods has joined #openstack-dev
14:50 <soren> markwash: Just responded on the mailing list about that.
14:50 <soren> Oh.
14:50 <jaypipes> markwash: it's the difference between treating the request as a unit or not.
14:50 <soren> Didn't make it to jay's response yet. Sounds like that same thing.
14:50 <markwash> basically, the scheduler's job is to map a pair of (current state, N <number requested>) -> a list of N hosts (possibly with repeats)
14:51 *** rods has quit IRC
14:51 <markwash> couldn't we make it so that taking the concatenation of repeating this mapping N times with one instance per request produces the same list?
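markwash's mapping can be written down directly. In this toy greedy least-loaded scheduler (invented for illustration, not Nova's actual scheduler), scheduling N instances at once and concatenating N one-instance requests, feeding the updated state forward each time, produce the same host list:

```python
# Toy scheduler: map (host loads, n) -> list of n hosts,
# greedy least-loaded with a deterministic tie-break by host name.
def schedule(state, n):
    state = dict(state)  # don't mutate the caller's view of current state
    picks = []
    for _ in range(n):
        host = min(sorted(state), key=lambda h: state[h])
        picks.append(host)
        state[host] += 1  # account for the placement we just made
    return picks

# The concatenation property markwash asks about holds here because the
# scheduler is deterministic and only depends on the evolving state:
state = {'a': 0, 'b': 1}
one_at_a_time = []
working = dict(state)
for _ in range(3):
    (host,) = schedule(working, 1)
    one_at_a_time.append(host)
    working[host] += 1
assert one_at_a_time == schedule(state, 3)
```

The property breaks as soon as the scheduler is non-deterministic or sees state it doesn't update itself, which is exactly the race the rest of the discussion worries about.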
14:51 *** rods has joined #openstack-dev
14:51 <jaypipes> markwash: hmm, I kind of think of it like this: the scheduler's job is to create a task for the entire request, then ask around the zones and hosts to see if the request can be fulfilled, then ask them to fulfill it.
14:52 <markwash> jaypipes: this is a transactionality problem, and I get it that it sucks for it to be the client's job to manage that
14:53 <markwash> jaypipes: but I wonder if it might be okay for it to remain a client problem for now
14:53 <markwash> also sorry for the unduly mathematical description
14:54 <jaypipes> markwash: no, it's not just an atomicity problem. it's that it's a synchronicity problem, as I hinted at in my ML post. Without reservations, you don't have true async operations. Without dealing with the entire thing as a singular request unit, you have no reservation ID.
14:54 <markwash> jaypipes: I buy that--but I wonder if we could just fix instance id to be like that?
14:56 *** sirp__ has joined #openstack-dev
14:56 <jaypipes> markwash: Not until instance ID becomes non-zone-aware. Because while instance ID remains tied to a particular zone DB or URI structure, that part of the operation must by nature remain synchronous in order to return the instance ID through the original API call.
14:56 <markwash> jaypipes: that's what I thought we might fix
14:56 <jaypipes> markwash: if instance ID was a UUID, I suppose you could use instance ID as the reservation ID. But you'd still be left with the atomicity problems.
14:57 <markwash> does the scheduler talk to other zones through the rest api? or something else?
14:57 <jaypipes> markwash: via messaging layer (rpc)
14:57 <dabo> jaypipes: no
14:57 <dabo> all inter-zone communication is via http calls
14:58 <dabo> queues don't span zones
14:58 <jaypipes> dabo: sorry, my mistake.
14:58 <jaypipes> markwash: ^^
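The routing rule dabo describes (rpc inside a zone, HTTP between zones, because queues don't span zones) amounts to a dispatch like the following sketch. All names here (dispatch, rpc_cast, http_post, the URL shape) are invented for illustration:

```python
# Sketch of intra- vs inter-zone routing (all names invented).
def dispatch(target_zone, local_zone, payload, rpc_cast, http_post):
    """Route payload over rpc inside the local zone, over HTTP otherwise,
    since the message queue does not span zone boundaries."""
    if target_zone == local_zone:
        rpc_cast('scheduler', payload)  # intra-zone: AMQP queue
    else:
        # inter-zone: hit the child zone's public API over HTTP
        http_post('http://%s/zones/boot' % target_zone, payload)
```

Passing the transports in as callables keeps the routing decision testable without a broker or an HTTP server.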
14:59 *** dragondm has joined #openstack-dev
14:59 *** rods has quit IRC
14:59 <dabo> markwash: I suppose the api could create N UUIDs and return those to the caller, and then pass them down to the zones, but that seems rather clumsy compared to a single reservation ID
15:00 <markwash> I think the trouble with pushing uuids (as I'm proposing) is that there is no sensible way to allow the instance id to be set by the http api client
15:00 <markwash> thus, the distributed scheduler wouldn't have a mechanism for pushing the instance ids
15:00 <dabo> markwash: of course. Instance IDs are database-generated, and the client has no way to talk to the appropriate DB
15:01 <dabo> if we switched to UUIDs for instance IDs, then yes, the scheduler could pre-allocate and pass those down
15:01 * jaypipes goes off to pre-generate 1 billion UUIDs.
15:02 <markwash> I meant more that even in the case where the api is generating UUIDs for instance ids, when it hands them to the scheduler and the scheduler goes to talk to the http api of a child zone. . .
15:02 <notmyname> be careful! you don't want to exhaust the supply
15:03 <markwash> I think there will not be a way to pass the uuid into the create call to the child zone
15:03 <markwash> because on the api side we would not want to allow our clients to require certain uuids since it could create problems
15:04 <markwash> not sure if I'm being very clear. . . :-)
15:04 <jaypipes> notmyname :)
15:04 <notmyname> just make sure to save some for everybody else ;-)
15:04 <jaypipes> markwash: hey, is jorge on IRC?
15:04 *** antonyy has quit IRC
15:05 <dabo> markwash: if you used the pure OS API, then yes. But we are already adding "internal use only" calls to novaclient to provide the glue to enable inter-zone communication
15:05 <jaypipes> markwash: not sure I quite understand that last point...
15:05 <markwash> jaypipes: doesn't look like it, and I can't seem to get a hold of him
15:05 <jaypipes> markwash: what is his nick?
15:06 <markwash> dabo: that would cover it
15:06 <markwash> jaypipes: without him on line, I have no tab completion to tell me :-)
15:06 <jaypipes> markwash: hehe, ok, thought you might know it ;)
15:07 *** yamahata_lt has quit IRC
15:07 <markwash> I will reiterate, I think it would be straightforward to add res ids and multiple instances to a request to the api--I guess I am just worried about changing the api
15:08 <markwash> and I guess that's Jorge's concern as well?
15:08 <jaypipes> markwash: I suppose. But, then again, I wasn't really talking about 1.1. :) I was brainstorming for a 2.0 API.
15:09 <markwash> jaypipes: oh--somehow I missed that completely
15:09 <jaypipes> markwash: :)
15:10 <jaypipes> markwash: the idea that you and jorge are talking about (using UUIDs or similar) and returning the "reservation ID" as the instance ID is perfectly fine for 1.X IMHO. I was prognosticating about a future, cleaner, atomic request-centric 2.0 API
15:11 <markwash> jaypipes: I like that prognostication
15:12 <dabo> markwash: the new dist-scheduler code has the plumbing to return resID and accept N instances; it's just commented out for now. We still return a single instance ref.
15:12 <jaypipes> markwash: and I'm not just considering it for Nova, but for Glance as well :) For instance, we're discussing a "builder" API that would allow people to construct images from pieces of other images or application stacks (think: rPath's rBuilder API). It would be best to have that use a reservation/fully async API
15:12 *** cp16net has joined #openstack-dev
15:15 *** Zangetsue has quit IRC
15:16 *** rnirmal has joined #openstack-dev
15:24 *** jorgew has joined #openstack-dev
15:29 *** redbo has joined #openstack-dev
15:29 *** ChanServ sets mode: +v redbo
15:33 <jaypipes> that was a short visit from our friend jorgew :)
15:33 <markwash> enriching
15:34 <jaypipes> markwash: :)
15:35 <ttx> Nova core reviewers: please give https://code.launchpad.net/~morita-kazutaka/nova/snapshot-volume/+merge/61071 some review love, looks like it's one of the few features that can actually make the diablo-1 milestone :)
15:39 <openstackjenkins> Project nova build #929: SUCCESS in 2 min 47 sec: http://jenkins.openstack.org/job/nova/929/
15:39 <openstackjenkins> Tarmac: Fixes euca-attach-volume for iscsi using Xenserver
15:39 <openstackjenkins> Minor changes required to xenapi functions to get correct format for volume-id, iscsi-host, etc.
15:47 <BK_man> somebody changed Jenkins job for tarballs: http://nova.openstack.org/tarballs/?C=M;O=D
15:50 *** foxtrotdelta has joined #openstack-dev
15:52 *** foxtrotgulf has quit IRC
15:57 *** antonyy has joined #openstack-dev
16:04 <soren> BK_man: Erk, thanks for the heads up.
16:09 *** zigo-_- has quit IRC
16:12 <soren> BK_man: Excellent timing there. Thanks.
16:25 <blamar_> soren: I have a branch dependent on a deb packaging change...how does that normally play out?
16:26 <soren> blamar_: Depends. Which one?
16:26 <blamar_> https://code.launchpad.net/~rackspace-titan/nova/libvirt-firewall-breakout
16:26 <soren> blamar_: Generally, we just let the change land on trunk and then fix the build afterwards.
16:27 *** Tv has joined #openstack-dev
16:27 <blamar_> soren: k, it's the ajaxterm patch, pretty simple fix if my branch goes through
16:28 <soren> Packages aren't kept indefinitely, so if one doesn't work, it's not a big deal.
16:29 <soren> Also, if the package simply fails to build, no one will ever be affected by it.
16:30 <soren> ...if the build succeeds, but the packages are broken afterwards, the distribution format provides us with a straightforward way to fix it (i.e. just fix it and let apt-get suck down the fix).
16:30 <soren> So, in summary, go ahead and break it. No biggie. :)
16:41 *** pyhole has quit IRC
16:45 <vishy> is anyone around who is interested in no-db-messaging? I'd like to discuss options based on markwash's last comment
16:46 *** pyhole has joined #openstack-dev
16:47 <soren> blamar_: We need to stop shipping ajaxterm anyway. It's not our job (and we suck at it, too).
16:48 <blamar_> soren: couldn't agree more
16:48 <blamar_> :)
16:48 <soren> blamar_: So really the patch we carry should be applied to nova proper and we should yank ajaxterm.
16:48 * soren files a bug, so we don't forget.
16:48 <blamar_> much appreciated
16:49 <vishy> does anyone actually use ajaxterm?
16:49 <vishy> it was useful before we had vnc...
16:49 <blamar_> i'll admit it's handy for dashboards...
16:49 *** bcwaldon has joined #openstack-dev
16:49 <blamar_> but that shouldn't be in nova
16:50 <soren> We don't ship noVNC, do we?
16:51 * soren sure hopes not
16:52 <jaypipes> markwash: :P
16:54 *** foxtrotdelta has quit IRC
16:55 <vishy> soren: no, we have a separate version that people have to grab manually
16:55 <soren> vishy: O_o
16:55 <soren> vishy: That's even worse!
16:55 <vishy> ajaxterm should really be in dashboard if it is anywhere :)
16:56 <vishy> soren: :)
16:56 <soren> Not kidding!
16:56 <soren> Where is it?
16:57 *** zaitcev has joined #openstack-dev
16:57 <vishy> soren: https://github.com/openstack/noVNC
16:58 * soren heads to dinner
16:58 <soren> Will have to cry later.
17:02 * vishy looks forward to soren's tears
17:02 * jaypipes gets soren a bucket.
17:05 *** mgius has joined #openstack-dev
17:08 *** cp16net has quit IRC
17:16 *** foxtrotgulf has joined #openstack-dev
17:17 *** anotherjesse has joined #openstack-dev
17:19 <vishy> anotherjesse: ohai!
17:19 <westmaas> o/
17:19 <westmaas> anotherjesse: sounds like you are doing some work on authn integration?
17:20 <anotherjesse> westmaas: yeah, working on the future ;)
17:20 <vishy> westmaas: so we have some high speed deliverables that sort of depend on the first version of this, but we don't necessarily need/want to do it all
17:20 <anotherjesse> westmaas: so far I've only had to push changes to dash/keystone
17:21 <anotherjesse> westmaas: I pushed into keystone a shim (lazy user/project provisioning) and paste config
17:21 <anotherjesse> my changes to dash are because I haven't figured out where best to put the token authentication
17:22 <westmaas> anotherjesse: cool. does this cover what you needed in the short term?
17:22 <westmaas> or do you still have some additional work to do?
17:23 <anotherjesse> westmaas: not 100% sure - what is your team working on w.r.t. this for next couple weeks?
17:23 <westmaas> anotherjesse: we are going to decide in 40 minutes :)
17:23 <anotherjesse> I was heads down in hacking it to work - now I'm to the point of fixing how it works
17:25 <anotherjesse> westmaas: high level goal is to make it work well - large issues left are:
17:25 <blamar_> anotherjesse: annoying, but can you / are you working against github "issues" or is it too early and the code changing too quickly?
17:25 <anotherjesse> blamar_: for keystone?
17:25 <blamar_> ya
17:26 <anotherjesse> blamar_: it was changing quickly but I think we should use either the issues or the bzr project bugs
17:26 <anotherjesse> http://bugs.launchpad.net/keystone
17:27 <anotherjesse> the project used issues before the summit - where it was decided to not use issues
17:27 <blamar_> anotherjesse: k, been trying to follow a lot of the bugs and pull requests and just want to make sure we have discussion much like nova on a lot of this because it's so easy to change a new project
17:27 <anotherjesse> thus far my additions have been taking the existing auth_token and creating a new shim
17:28 <anotherjesse> https://github.com/khussein/keystone/blob/master/keystone/auth_protocols/nova_auth_token.py
17:29 <blamar_> anotherjesse: yeah saw that, good stuff :)
17:29 <anotherjesse> blamar_: it is probably all !@#!@$
17:29 <anotherjesse> since I hadn't used paste, the api, or django before
17:30 <anotherjesse> plus the copy&paste ware for the non-shim middleware is because I needed a file that works when put inside of nova
17:30 <anotherjesse> eg, importing from other keystone files = break
17:30 <blamar_> anotherjesse: Logic-wise though, there is a lot I want to bring up in the project to make things less coupled and more clean...but if we get the big picture nailed down for the migration to keystone it will be easier
17:31 <anotherjesse> blamar_: the ugliest part is currently in the dashboard imho
17:31 <anotherjesse> we need to add hooks to the openstack.compute library to allow dashboard to be the UI for authentication but then only store the token in the session
17:32 <anotherjesse> plus keystone doesn't currently return the service catalog
17:32 <blamar_> yeah, that's on my list somewhere
17:33 <anotherjesse> blamar_: awesome
17:33 <anotherjesse> blamar_: since both our teams are going to be touching the code we should probably share on the mailing list
17:33 <blamar_> EC2 support non-existent as well from what I understand
17:33 <anotherjesse> blamar_: short term I'm focusing on making the keystone+dash+nova (and then swift+glance) work together
17:34 <anotherjesse> interop with aws calls is next after all that goodness lands - since you can use it without the keystone middleware if you care about it now
17:35 <blamar_> anotherjesse: k, I'm not certain what we're going to be working on, but I'm mostly concerned with making certain keystone is going to be acceptably similar to other openstack projects for official inclusion
17:36 <vishy> markwalsh: + whoever is interested. No-db-messaging
17:37 <anotherjesse> vishy: seriously - this is the morning of keystone ;)
17:37 <anotherjesse> not your keystone-light solutions
17:37 *** anotherjesse has quit IRC
17:37 <blamar_> dammit, I'm not original..I have a branch of keystone called keystone-light :(
17:37 <vishy> markwalsh: so your concerns about updates being missed are valid
17:38 <vishy> markwalsh: I'm concerned that the alternate solution is too complex, but if we are willing to sta
17:38 <vishy> figure out the best approach and implement it I'm ok with it
17:39 *** cloudnod has quit IRC
17:39 <blamar_> vishy: I'm not liking the whole 'multicall' solution, but I'm having trouble putting it into words :)
17:40 <markwash> vishy: I guess the complexity of having a permanent listener for volume updates is in the deployment details
17:40 <vishy> the main sticking points are: how do we make it work with all of the services
17:40 <vishy> blamar_: I was essentially trying to find the minimal amount of changes to make
17:41 <markwash> i.e. how many are there and where do they live? and how do they play with the other gazillion listeners we'd need to expand this approach
17:41 <vishy> markwalsh: yes
17:41 <vishy> wow
17:41 <vishy> why do i keep putting an l in there
17:41 <vishy> markwash: ^^
17:41 <blamar_> haha, we've been debating letting you know
17:41 <markwash> I take it as a compliment, assuming you are comparing me with sandywalsh
17:41 <westmaas> interestingly mark's full name is Markl
17:42 <westmaas> so you just had the l in the wrong place
17:42 <markwash> westmaas: no one is named that
17:42 <blamar_> oh westmaaas...
17:42 *** jesse_ has joined #openstack-dev
17:42 <sandywalsh> markwash ... I'd think twice about that association :)
17:43 <jesse_> westmaas / blamar_ - back - network is FLAKEY today :(
17:43 *** jesse_ is now known as anotherjesse
17:43 <vishy> markwash: yes that is the issue right now, so we could just start a greenthread in every api worker listening for updates, but I think this gets clunky with updates for all the services
17:44 <markwash> I'm more worried about where we run those workers. right now you can run as many instances of the api and manager as you want
17:44 <vishy> markwash: I think what we really need is a new binary called nova-writer, but I hesitate to make it even harder to install
17:44 <vishy> markwash: if we are pushing it into a queue, it doesn't really matter if we have multiple running
17:45 <vishy> the first "writer" to grab the message can update the db and go on
17:45 <vishy> unless we're discussing getting rid of the db altogether, in which case we need to do something like sandywalsh's fanout
17:45 <markwash> I'm a bit unfamiliar with our messaging setup, can we rely on message ordering across multiple consumers?
17:45 <vishy> ordering I don't think so, but does the ordering matter?
17:46 <markwash> well, in the volume example, it seemed like there were multiple status updates feeding back to the listener possibly rapidly
17:46 <vishy> markwash: hmm, that is a good point, yes we don't want to update the status back to something else from available
17:46 <markwash> so it would be unfortunate if the db got stuck in an intermediate status because ordering was broken
17:47 <vishy> this is why something like the multicall solution is simpler. There is a particular thread that is owning one set of updates.
17:47 <markwash> but right now, we seem to have only one authoritative source for the information, so I think a timing approach for ordering on the consumer side could be workable
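The "timing approach for ordering on the consumer side" markwash floats could look like a last-write-wins guard keyed on a timestamp stamped by the single authoritative source, so a late-arriving stale update can't clobber a newer status. A sketch with invented names, using a dict in place of the DB:

```python
# Sketch (names invented): a writer applies status updates with a
# last-write-wins guard on the update's source timestamp, so messages
# replayed out of order cannot regress a newer status.
def apply_update(db, update):
    """Apply update to db (a dict keyed by id) unless it is stale."""
    row = db.get(update['id'])
    if row is not None and update['updated_at'] < row['updated_at']:
        return False  # stale: a newer update already landed
    db[update['id']] = update
    return True

# Out-of-order delivery: the late 'creating' update is dropped and the
# volume stays 'available'.
db = {}
apply_update(db, {'id': 'v1', 'status': 'creating', 'updated_at': 1})
apply_update(db, {'id': 'v1', 'status': 'available', 'updated_at': 3})
apply_update(db, {'id': 'v1', 'status': 'creating', 'updated_at': 2})
```

This only works while there really is one clock behind the updates; with multiple sources you are back to the vector-clock territory discussed below this point in the log.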
17:47 <anotherjesse> anyone else trying to do a clean install recently on maverick?
17:47 <anotherjesse> I run into TypeError: f(ile) should be int, str, unicode or file, not <open GreenPipe '<fd:4>', mode 'wb' at 0x2a2bb90>
17:48 <anotherjesse> because python-eventlet from the ppa won't install - and the version that does doesn't have the patch
17:48 <markwash> hmm yeah that's pretty much my worry, along with cluttering the install
17:49 <markwash> but I do get a warm and fuzzy cqrs feeling when I think about permanently running listeners as autonomous components
17:49 <anotherjesse> probably just need to hit the ppa (hopefully will see soren later today :) )
17:50 <markwash> maybe we only need one nova-writer at a time in any given zone
17:50 <markwash> that could help too
17:50 <vishy> markwash: cqrs? Sure, but that gives us another SPOF...
17:51 <markwash> I guess you could have multiple but only one running? it's the kind of service that is easy to stand back up after it falls down and starts repairing missing updates immediately. . .
17:51 <markwash> but I'm not totally convinced on that point. .
17:51 <vishy> markwash: an alternate solution would be to have api get the state of all the volumes on boot
17:51 <vishy> markwash: and any that are not in available, it could start a polling thread with a timeout
17:53 <markwash> I'm guessing we could solve the timing issues with vector clocks, but I couldn't find a good library to help with that so I'm hesitant to suggest it seriously
17:54 <vishy> markwash: yes, and we're adding a hell of a lot of complexity if we go down that route...
17:55 <markwash> agreed
17:55 <blamar_> yeah, let's throw CQRS and vector clocks right out :)
17:55 <markwash> blamar_: you would
17:56 <bcwaldon> . . .
17:56 <vishy> blamar_: do you have an alternative suggestion?
17:56 <markwash> sorry vishy these guys are really enjoying making fun of me right now :-)
17:56 <vishy> blamar_: do we create nova-writer and just have it collect updates from the services and write them to the db?
17:56 <blamar_> markwash: So replace all db.update_volume with rpc.cast('update_volume', volume_ref) first? what we kind of talked about earlier?
17:57 <markwash> blamar_: in the manager that is
17:57 <blamar_> right
17:57 <vishy> as a side note I'm very annoyed that we don't have an easy way to deserialize from dates back to datetimes
17:57 <vishy> s/dates/date strings/
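For the date-string round-trip vishy mentions, pinning one format string in both directions is enough in the simple case. A sketch; the format below is an assumption for illustration, not necessarily what Nova serialized with (the %f microseconds directive needs Python 2.6+):

```python
# Sketch: round-trip a datetime through a string with one pinned format.
# FMT is an invented choice, not necessarily Nova's wire format.
import datetime

FMT = "%Y-%m-%dT%H:%M:%S.%f"

def strtime(dt):
    """Serialize a datetime to a string."""
    return dt.strftime(FMT)

def parse_strtime(s):
    """Deserialize a string produced by strtime back to a datetime."""
    return datetime.datetime.strptime(s, FMT)
```

The annoyance in practice is that values crossing the rpc layer as JSON come back as plain strings, so every consumer has to know to call the parse side itself.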
blamar_and then have a rpc.call('volume_updated', volume_ref) for logic which needs to happen after successful creation17:58
*** antonyy_ has joined #openstack-dev17:58
vishyblamar_: hmm, who would own that?17:58
*** antonyy has quit IRC17:58
*** antonyy_ is now known as antonyy17:58
markwashvishy: I'd like to regroup around this a little bit and try to come up with a proposal that handles ordering17:58
markwashalso I have a sprint meeting like nowish17:59
vishyblamar_: I'd rather have that part use the nofification code17:59
vishyblamar_: rather than have a separate channel...17:59
*** markwash is now known as markwash-away17:59
vishymarkwash: ok17:59
vishymore discussion later17:59
*** antonyy has quit IRC18:02
*** antonyy_ has joined #openstack-dev18:02
*** dprince has quit IRC18:04
*** dprince has joined #openstack-dev18:07
*** antonyy_ has quit IRC18:15
*** antonyy has joined #openstack-dev18:15
*** anotherjesse has quit IRC18:34
*** cloudnod has joined #openstack-dev18:36
cloudnodah, back.18:41
cloudnoddownside to irccloud is that when they go down, you go down.18:41
*** anotherj1sse has joined #openstack-dev18:42
*** anotherj1sse has joined #openstack-dev18:44
*** anotherj1sse has joined #openstack-dev18:45
cloudnodhey jesse18:45
*** anotherj1sse has quit IRC18:48
*** anotherj1sse has joined #openstack-dev18:48
dabois there anyone available who is familiar with volume creation who could answer a few questions? I'm looking into extending that code to work across a nested zone structure, and I need a reality check.19:07
sorenanotherj1sse: What version from the ppa won't install?19:09
sorenanotherj1sse: pastebin or it didn't happen. :)19:19
*** Daviey has joined #openstack-dev19:22
DavieyHi. Are we allowed to use python 2.7 features in Nova?19:22
sorenNo :(19:23
jk0Daviey: we're supporting 2.619:23
Davieybum,19:23
Davieythanks19:23
sorenSome would say we should be glad we get to use 2.6 features.19:23
Davieytrue :)19:24
sorenI don't agree with them, but there you go :)19:24
sorenI am sort of glad that I don't have to maintain python 2.7 for the Lucid PPA.19:25
DavieyWell i am glad we aren't yet using 3.0 :)19:25
sorendoko wants us to :)19:25
sorenUnfortunately, a bunch of the libraries we use are on the python3 hall of shame.19:26
*** galstrom has joined #openstack-dev19:27
*** galstrom has left #openstack-dev19:27
vishydabo: I am familiar19:27
vishydabo: but I am about to eat lunch19:27
dabovishy: can you ping me when you're done? Maybe we can do a quick skype chat19:28
sorenanotherj1sse: E-mail me your response. I've got some stuff I need to do this evening, so I'll be (attempting to be) ignoring IRC.19:29
anotherj1ssesoren: will do - thx!19:30
sorenanotherj1sse: fwiw, I just installed python-eventlet from the ppa on a freshly started maverick instance in the rackspace cloud. Worked great.19:30
anotherj1ssesoren: I'm using lxc - maybe that is it?19:31
sorenanotherj1sse: if you could just share the apt-get output, I could probably tell you quite quickly.19:31
anotherj1ssesoren: http://pastie.org/193481419:31
sorenNothing to do with lxc.19:32
sorenanotherj1sse: apt-get update and try again.19:32
sorenanotherj1sse: ...and it's complaining about python, not python2.6.19:32
anotherj1ssesoren: i'm doing this from fresh lxc maverick containers19:32
anotherj1ssesoren: also I was looking at this really late friday ... so I could be really wrong :-/19:33
sorenanotherj1sse: If you could give me access to it, I could probably sort it out in a flash.19:33
anotherj1sseahh: python is older than python2.6!19:33
sorenWell...19:33
sorenpython2.6 is the actual python.19:33
anotherj1sseii  python                                 2.6.6-2ubuntu1                         interactive high-level object-oriented language (default version)19:33
anotherj1sseii  python2.6                              2.6.6-5ubuntu1                         An interactive high-level object-oriented language (version 2.6)19:34
sorenpython is not the actual interpreter.19:34
anotherj1sseright19:34
anotherj1sseafter apt-get update install still says:19:34
anotherj1sse python-eventlet : Depends: python (>= 2.6.6-2ubuntu2~) but 2.6.6-2ubuntu1 is to be installed19:34
sorenIt's also not strictly a metapackage, but sort of.19:34
sorenanotherj1sse: Pastebin your sources.list, please.19:35
anotherj1sseI'm on a lxc maverick container - lxc-create -t maverick; then adding nova-core-trunk-maverick.list19:36
anotherj1ssehttp://pastie.org/196272919:36
sorenThat's your problem.19:36
sorenYou need maverick-updates.19:36
sorenAdd:19:36
sorendeb http://archive.ubuntu.com/ubuntu maverick main universe19:36
sorenWhoops19:36
sorendeb http://archive.ubuntu.com/ubuntu maverick-updates main universe19:36
sorenapt-get update19:36
sorenand win.19:36
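[The dependency error above (`python-eventlet : Depends: python (>= 2.6.6-2ubuntu2~)`) occurs because the PPA package was built against the newer python package from maverick-updates, which the stock lxc maverick template does not enable. A sketch of the sources.list contents soren suggests — the pocket and component names are from the conversation; the exact file path inside the container is an assumption:]

```
# /etc/apt/sources.list (or a file under /etc/apt/sources.list.d/)
# release pocket, as already present in the container:
deb http://archive.ubuntu.com/ubuntu maverick main universe
# updates pocket, missing from the lxc template, needed for python 2.6.6-2ubuntu2:
deb http://archive.ubuntu.com/ubuntu maverick-updates main universe
```

[Then run `apt-get update` and retry the install.]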
anotherj1sseshould we send a patch to lxc in natty - their template doesn't include maverick-updates19:37
sorenHmm...19:37
sorenNot sure.19:37
sorenIt's valid not to have it enabled.19:37
soren...but sure.19:37
anotherj1ssekinda weird you can have different versions of python/python2.6 ...  anyway I'll patch my shell script to add it19:38
anotherj1ssethanks!19:38
sorenSure thing.19:39
*** bcwaldon has quit IRC19:58
vishydabo: ping20:01
dabowanna jump on skype? Probably easier than typing everything20:02
daboI'm ed.leafe20:02
vishysure20:02
*** BinaryBlob has joined #openstack-dev20:06
*** zaitcev has quit IRC20:06
anotherj1ssewestmaas: how did your meeting about keystone go?20:06
*** dprince has quit IRC20:06
blamar_anotherj1sse: we're going to knock out service directory support more than likely20:08
westmaas^^20:09
uvirtbotwestmaas: Error: "^" is not a valid command.20:09
westmaasuvirtbot: quiet20:09
uvirtbotwestmaas: Error: "quiet" is not a valid command.20:09
*** zaitcev has joined #openstack-dev20:10
*** antonyy_ has joined #openstack-dev20:13
*** antonyy has quit IRC20:15
*** antonyy_ is now known as antonyy20:15
*** bcwaldon has joined #openstack-dev20:17
*** salv-orlando has joined #openstack-dev20:25
openstackjenkinsProject swift build #265: SUCCESS in 28 sec: http://jenkins.openstack.org/job/swift/265/20:32
openstackjenkinsTarmac: swift-ring-builder: Added list_parts command which shows common partitions for a given list of devices.20:32
*** markwash-away is now known as markwash20:34
anotherj1sseblamar_: awesomeness - that will allow the integration with dashboard not to be as hacky20:46
anotherj1sseblamar_: what approach are you guys going to take / eta?20:46
anotherj1sse;)20:46
anotherj1ssepvo: know if anyone is working on admin apis for openstack api?20:47
westmaasanotherj1sse: admin apis haven't been started yet.  there is a bp for it though.20:52
pvoanotherj1sse: yea, I was just looking the bp up.20:53
pvohttps://blueprints.launchpad.net/nova/+spec/admin-account-actions20:53
pvo?20:53
pvowestmaas: thats the one?20:53
westmaaspvo: yep20:53
westmaasthat's the one we said someone at rackspace would do; it's been assumed it would be titan, but not for sure20:53
pvoI'm thinking anotherj1sse may want ti sooner.20:53
anotherj1ssewestmaas / pvo - we'll probably play with it - i'll update the list as we work on it20:54
pvoit sooner20:54
westmaassweet20:54
anotherj1ssepvo: since I know you guys have reqs that will need to be handled20:54
pvoyea, I think we're going to find out real soon what all is needed.20:54
anotherj1ssepvo: if you have any newer thinking on it, feel free to email - otherwise we can meet in the mailing list20:54
pvosure thing20:54
westmaasanotherj1sse: I think that bp covers a lot of it, glenc did the research there20:55
westmaashttp://wiki.openstack.org/NovaAdminAPI20:55
westmaasthat wiki page does, rather20:55
glencanotherj1sse, let me know if I can be of any help20:57
vishymarkwash: done with your scrum?  New idea which i like20:58
markwashvishy: I'm here20:58
vishytermie made a suggestion which I think could solve a lot of our problems20:58
vishymarkwash: ^^ -> task queue20:59
vishymarkwash: the problem with having a separate writer/durable queue/vector clocks is we are basically creating our own distributed database20:59
markwashheh agreed21:00
vishymarkwash: which I think will lead to a whole slew of new problems21:00
vishyso here is the new proposal21:00
vishywhen we get a create_volume request in api, we create a task (new entity)21:00
vishythat task gets marked completed when the calls are finished.21:01
Davieyvishy, I'm duplicating (!) some of your code - which includes a NOTE(vish): <--- is it okay to keep your name there?21:01
Daviey(in my dupe)21:01
vishywe can have volume_apis check for unfinished tasks and handle them21:01
vishyDaviey: fine by me21:01
vishy:)21:01
markwashhmm, so basically we are calling instead of casting, but we're doing it asynchronously?21:01
Davieycool21:01
markwasherr, in a different thread?21:02
anotherj1ssevishy: we should just switch to zookeeper21:02
vishymarkwash: yes the idea being that the owner of the task is responsible for writing data to the db21:02
blamar_anotherj1sse: determining approach now, will try to get ETA out there21:03
vishymarkwash: we make tasks very easy to restart by making all of the steps in create_volume idempotent21:03
vishymarkwash: so we don't have any complicated logic in restart, we essentially just send the worker another create_volume with the same data21:04
markwashvishy: so if something dies in the middle, and the db looks off to the client, they can just resubmit?21:04
vishyright21:04
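[A toy sketch of the task-queue idea vishy outlines above: the API records a task row before dispatching the work, the task is marked completed when the calls finish, and a periodic sweep resubmits anything that has been pending too long — safe only because the worker's steps are idempotent. All names here (Task dict, `TASK_TIMEOUT`, `_cast_create_volume`) are illustrative, not actual Nova code:]

```python
import time
import uuid

TASK_TIMEOUT = 60  # seconds before an unfinished task is retried

tasks = {}  # stand-in for the database table of task entities


def submit_create_volume(args):
    """API side: persist a task record, then cast the work to a worker."""
    task_id = str(uuid.uuid4())
    tasks[task_id] = {"args": args, "done": False, "started": time.time()}
    _cast_create_volume(task_id, args)
    return task_id


def _cast_create_volume(task_id, args):
    # In Nova this would be an rpc cast to the volume worker; because the
    # worker's steps are idempotent, resending the same request is harmless.
    pass


def complete(task_id):
    """Called when the worker reports all subtasks finished."""
    tasks[task_id]["done"] = True


def restart_stale_tasks(now=None):
    """Resend any task that has been pending longer than TASK_TIMEOUT."""
    now = time.time() if now is None else now
    restarted = []
    for task_id, task in tasks.items():
        if not task["done"] and now - task["started"] > TASK_TIMEOUT:
            task["started"] = now
            _cast_create_volume(task_id, task["args"])
            restarted.append(task_id)
    return restarted
```

[This is the recovery property discussed below: if the api dies mid-create, the unfinished task record survives and the sweep simply resubmits the same create_volume.]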
*** ameade has quit IRC21:05
vishymarkwash: there would have to be some logic to figure out when a worker has died21:05
vishy* api has died that is21:05
markwashpart of this I don't know much about. . . when a manager responds to a call, how does the actual return message make its way back?21:06
vishy* or we could just timeout tasks after a certain amount of time and restart21:06
markwashlike what is the topic or queue id for the return message?21:06
vishymsg_id21:07
vishya new queue is created based on the msg_id21:07
markwashgotcha21:08
markwashso with multicall, how do we know when the last return arrives?21:09
anotherj1sseblamar_: a phased approach where v0 has a hardcoded catalog from config would allow moving forward with how dashboard / cli works with it21:09
markwashI'm guessing we'd use multicall in the task queue? or maybe I'm misunderstanding how that would work21:10
vishymarkwash: we get back a none21:10
vishymarkwash: multicall is a nice little addition that means we don't have to break up into multiple api calls for each subtask21:12
vishymarkwash: but it would be possible to do the same thing without it21:12
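[A minimal illustration (not Nova's actual rpc code) of the multicall pattern described above: each call gets its own reply queue keyed by a fresh msg_id, the worker streams one message per subtask into it, and a terminating `None` is the sentinel telling the caller the last return has arrived:]

```python
import queue
import uuid

reply_queues = {}  # msg_id -> per-call reply queue


def call_worker(worker, args):
    """Create a reply queue named after a new msg_id and invoke the worker."""
    msg_id = str(uuid.uuid4())
    reply_queues[msg_id] = queue.Queue()
    worker(msg_id, args)
    return msg_id


def worker_volume_steps(msg_id, args):
    """Worker side: publish one message per subtask, then the sentinel."""
    q = reply_queues[msg_id]
    for step in ("created", "exported", "attached"):
        q.put(step)
    q.put(None)  # end-of-stream marker: "we get back a none"


def multicall_results(msg_id):
    """Caller side: yield results until the None sentinel arrives."""
    q = reply_queues[msg_id]
    while True:
        item = q.get()
        if item is None:
            break
        yield item
```

[Resubmitting a call creates a brand-new msg_id and queue, which is why stale replies from a dead consumer are simply ignored, as noted later in the conversation.]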
blamar_anotherj1sse: a service catalog seems like an optional middleware loaded by keystone at deployment if that makes sense...the content of the service catalog could come from a static config for a first version (restating for my own sake)21:13
markwashvishy: no argument against multicall21:14
vishymarkwash: I'm thinking of prototyping the task queue idea21:14
markwashvishy: that all sounds like it would work.. for some reason I'm having a hard time wrapping my head around it all now21:15
markwashvishy: I blame my sprint meeting21:15
vishymarkwash: Probably be clearer with code21:15
markwashvishy: somewhat separately, I'm wondering if we should really be worrying about ordering at all21:15
vishymarkwash: it is making things more complicated, but I think the ability to recover from failures will be worth the complication...21:15
anotherj1sseblamar_: awesomeness - not sure if it can be optional - since the compute endpoint is only exposed via auth endpoint?21:16
markwashvishy: I mean, I think that even the current approach could suffer from the same type of ordering problem technically21:16
anotherj1sseblamar_: but we can discuss defaults later ;)21:16
markwashvishy: by current approach I meant the one you merge propped21:16
vishymarkwash: hmm, really, how?21:16
blamar_anotherj1sse: Yeah, optional is pretty big for me, not wanting to incur the wrath of other people not wanting to use a service directory but still use Keystone21:17
vishymarkwash: On the implementation level ordering is important21:17
vishymarkwash: but idempotency is a great way to handle ordering of operations21:17
markwashvishy: well, each return from a single multicall is sent to the same queue21:17
markwashvishy: so if rabbit burps, then it could be out of order. . I guess it only really matters if its the last one that gets out of order in the bigger view21:18
markwashvishy: agree with idempotency a lot--I like that we can just resubmit with no ill consequences21:18
markwashvishy: because ordering and other hiccups are probably very very rare21:19
vishymarkwash: I think messages stay in order in rabbit21:19
vishymarkwash: ordering is a problem if you have multiple readers21:19
markwashvishy: ah okay got it21:19
vishymarkwash: but i could be wrong21:20
vishymarkwash: hmm looks like you can in crazy failure situations get out of order messages21:21
markwashvishy: yeah that's what I was thinking21:22
markwashvishy: but it seems pretty rare21:22
vishyit would require one message to not get acked21:22
vishymarkwash: so with the writer and reader each in its own process, I think it actually can't happen21:22
markwashvishy: I guess its something to worry about when we recover from a failure--if the update consumer dies, it very well could die in the middle of processing and wouldn't send an ack21:23
markwashvishy: but even in that case, you could just say 'weird' and resubmit21:24
markwashand idempotency should handle it21:24
markwashI think?21:24
vishyyes, so basically the old messages in the queue would be totally ignored21:25
vishywe have to time them out somehow21:25
vishybut you are resubmitting the call/multicall, so there would be a totally new msg_id21:25
*** bcwaldon has quit IRC21:27
markwashvishy: is it explicitly a goal of this approach that only the api servers write to the db?21:27
markwashor would it make sense for the managers to write to the db and have the api servers only reading from it?21:27
vishymarkwash: the goal is to have the api servers be the only writers21:28
vishy(although it is not api specifically), just basically to minimize the number of writers21:28
vishyfor security and scalability, we don't want every host in the system writing to the db21:28
markwashvishy: again, I'm sorry if my brain is just not working right now, but what does the task queue approach buy us over what you have in no-db-messaging now?21:31
vishyrecovery21:32
markwashvishy: so some persistent record of the task is kept?21:32
vishyyes, it is a db object21:32
markwashvishy: until it is completed21:32
vishyso if we can check for uncompleted tasks periodically and restart them21:33
markwashvishy: okay cool, I think I've got it a lot better now!21:33
vishythat was the main concern with this approach right? What happens if the api dies before the create is finished?21:33
vishyI'm trying to solve that problem without writing our own DDS21:34
markwashvishy: yes that was my main concern21:34
markwashvishy: I didn't get the db record of the task part earlier for some reason :-)21:35
vishyah yeah21:36
vishyI don't think i ever specifically mentioned it21:36
markwashvishy: this might even be a simple addition to your current prop. . if the delayed create just sent some annotation about the queue_id and what it was listening for to the db when it started working, and then deleted it when it is done21:40
markwashvishy: we are sort of just tacking on a little bit of persistence to that task, giving us a handle for coming back to revisit it if its sticking around21:40
vishyyes.  +making sure the volume code is actually idempotent21:41
markwashvishy: but I won't talk your ear off. . I'll be happy to look at any more code you prop and I'll probably keep thinking about this a bit on my end as well21:41
vishycool21:41
vishymarkwash: we'll need a little bit of hacking to the scheduler to allow for making sure to resend a request to the same host21:48
Davieyjaypipes, around?21:56
*** foxtrotgulf has quit IRC22:01
*** BinaryBlob has quit IRC22:21
*** foxtrotgulf has joined #openstack-dev22:28
*** dragondm has quit IRC22:30
*** dragondm has joined #openstack-dev22:33
*** statik has quit IRC22:35
*** rnirmal has quit IRC23:00
*** adiantum has joined #openstack-dev23:38
*** Tv has quit IRC23:44
*** yamahata_lt has joined #openstack-dev23:47
*** jkoelker has quit IRC23:49
*** antonyy has quit IRC23:55