*** cloudgroups has joined #openstack-dev | 00:45 | |
*** cloudgroups has left #openstack-dev | 00:46 | |
*** dovetaildan has quit IRC | 00:59 | |
*** galthaus has joined #openstack-dev | 01:05 | |
*** galthaus has quit IRC | 01:35 | |
*** cloudgroups has joined #openstack-dev | 02:00 | |
*** lorin1 has joined #openstack-dev | 02:33 | |
*** alekibango has quit IRC | 03:03 | |
*** cloudgroups has left #openstack-dev | 03:08 | |
*** Binbin has joined #openstack-dev | 03:32 | |
*** clayg has quit IRC | 03:32 | |
*** clayg_ has joined #openstack-dev | 03:33 | |
*** clayg_ is now known as clayg | 03:38 | |
*** Binbin is now known as Binbin_afk | 03:51 | |
*** lorin1 has left #openstack-dev | 04:37 | |
*** Binbin_afk is now known as Binbin | 04:49 | |
*** foxtrotdelta has quit IRC | 04:56 | |
*** Binbin has quit IRC | 05:10 | |
*** Zangetsue has joined #openstack-dev | 05:35 | |
*** Arminder has quit IRC | 05:38 | |
*** Arminder has joined #openstack-dev | 06:08 | |
*** Arminder has quit IRC | 06:08 | |
*** Arminder has joined #openstack-dev | 06:11 | |
*** zigo-_- has joined #openstack-dev | 09:49 | |
*** elasticdog has quit IRC | 10:32 | |
*** Zangetsue has quit IRC | 10:44 | |
*** Zangetsue has joined #openstack-dev | 10:45 | |
*** Joelio has left #openstack-dev | 11:15 | |
*** elasticdog has joined #openstack-dev | 11:21 | |
*** jaypipes has joined #openstack-dev | 11:58 | |
*** thatsdone has joined #openstack-dev | 12:12 | |
*** thatsdone has quit IRC | 12:23 | |
*** dovetaildan has joined #openstack-dev | 12:23 | |
*** exlt has joined #openstack-dev | 12:36 | |
*** cloudnod has joined #openstack-dev | 12:38 | |
cloudnod | hi | 12:40 |
jaypipes | cloudnod: hi | 12:52 |
ttx | diablo-1 feature needs some review love: https://code.launchpad.net/~morita-kazutaka/nova/snapshot-volume/+merge/61071 | 12:57 |
jaypipes | sandywalsh: around? | 12:58 |
cloudnod | i'm going to start hacking on some python so i can contribute | 12:59 |
notmyname | cloudnod: great :-) | 13:00 |
cloudnod | hear me roar. heh | 13:00 |
cloudnod | if it was written in perl or php I could crush it.... but probably good that it isn't. :) | 13:00 |
cloudnod | i have one more beta invite to irccloud if someone wants it... awesome browser-based irc client that stays connected even when you close browser... like using screen but slicker | 13:02 |
sandywalsh | jaypipes, hey! | 13:04 |
jaypipes | sandywalsh: hi! :) | 13:04 |
jaypipes | sandywalsh: flew over your little town when I headed over to Greece. Sorry, couldn't spot your house ;) | 13:04 |
jaypipes | sandywalsh: wanted to ask you a couple questions about the zone scheduler stuff.. | 13:05 |
sandywalsh | I was waving ... didn't you see me?! | 13:05 |
jaypipes | sandywalsh: hehe | 13:05 |
jaypipes | sandywalsh: is the distributed scheduler only working (or meant to work) with the OS API? | 13:05 |
*** yamahata_lt has joined #openstack-dev | 13:06 | |
sandywalsh | jaypipes, yup. at this point | 13:06 |
sandywalsh | jaypipes, once we get it working end-to-end we can abstract out the OS API calls into something that works with both | 13:06 |
jaypipes | sandywalsh: does the dist scheduler even have a future with the EC2 API? | 13:06
sandywalsh | jaypipes, the issue is, it's easier for us to extend the OS API ... not so much with the EC2 API | 13:07 |
jaypipes | sandywalsh: I'm wondering if the code belongs under /nova/api/openstack... | 13:07 |
sandywalsh | jaypipes, well, it doesn't affect the other operation at this point | 13:07 |
sandywalsh | jaypipes, and it *can* support EC2, it just doesn't currently | 13:08 |
jaypipes | sandywalsh: Oh, I'm not saying it should. I'm just curious as to why, since all the other stuff that only affects the OS API is under that directory. | 13:08 |
dabo | jaypipes: right now the main dist-scheduler branch passes all the ec2 tests | 13:08 |
dabo | that's not to say that it would run flawlessly, but that's at least a good sign | 13:08 |
jaypipes | dabo, sandywalsh: sorry, I'm not sure I understand how it *can* support the EC2 API since we don't have any control over the EC2 API. Do you mean as a sort of "super scheduler" that could route requests for new instances that come in on *either* API? | 13:09 |
dabo | the goal is that it *will* support ec2 | 13:09 |
dabo | don't see why it would have to be "super", but yes | 13:09 |
sandywalsh | jaypipes, embrace and extend would be the only choice. So long as we only add functionality and not break existing EC2 stuff | 13:10 |
dabo | once the call is handled by the api, the rest is identical | 13:10 |
jaypipes | dabo: "super" as in "above the OS API and the EC2 API"... | 13:10 |
dabo | jaypipes: then no | 13:11 |
dabo | the scheduler works independently of the origin of the request. Why would it have to be different? | 13:11 |
jaypipes | sandywalsh, dabo: OK, I guess I'll have to revisit the EC2 API docs for RunInstances... I didn't remember a whole lot of distributed scheduler/request functionality there... | 13:11 |
dabo | jaypipes: the only difference I saw was the 'availability zone' concept, which could easily be folded into the general host filtering stuff | 13:12 |
sandywalsh | jaypipes, let's do a quick inventory: ZoneManager, HostFilter and CostScheduler could all work with EC2. | 13:12 |
jaypipes | dabo: the main difference (AFAIK) is that in the EC2 API there is no way to query an availability zone for a set of hosts that may be available. | 13:13 |
sandywalsh | jaypipes, the routing stuff in compute.api() would need to support EC2 (extending their API, but not busting it) | 13:13
sandywalsh | jaypipes, nor can OS API, we added a new /zones/* REST resource | 13:13
sandywalsh | jaypipes, we could do the same with EC2 | 13:14 |
sandywalsh | again, "embrace and extend" | 13:14 |
jaypipes | sandywalsh: right, that's kind of what I meant :) We *have* the ability to do that with the OS API :) | 13:14 |
dabo | jaypipes: this really sucks over IRC | 13:14 |
jaypipes | dabo: hehe :) | 13:14 |
sandywalsh | jaypipes, I don't think we have any more ability than EC2 ... wrt 1.0 and 1.1 | 13:15 |
dabo | I only consider supporting ec2 api to mean that an external request on that api is correctly handled. It doesn't mean that the inter-zone communication is handled via the ec2 api | 13:15 |
sandywalsh | which are meant to be backwards compatible with RS API | 13:15 |
sandywalsh | 2.0 however | 13:15 |
sandywalsh | dabo, to a degree, if a customer wants zones, but doesn't want to stand up an OS API server, they're hosed | 13:16 |
jaypipes | sandywalsh: right | 13:16 |
dabo | sandywalsh: that gets back to my distinction between the use of novaclient as an external client, and novaclient as an internal glue between zones | 13:17 |
dabo | right now we're cramming both sets of functionality into a single tool | 13:17 |
jaypipes | sandywalsh: but *adding* a resource like /zones doesn't break 1.0 or 1.1, AFAICT. | 13:17 |
sandywalsh | I don't have a problem with that since it's controlled by --enable_admin_api | 13:18 |
sandywalsh | jaypipes, correct | 13:18 |
jaypipes | sandywalsh: now, changing things like returning a reservation ID for POST /servers would break it, of course. | 13:18 |
sandywalsh | jaypipes, that's what I mean by extend vs. change | 13:18 |
jaypipes | sandywalsh: gotcha. | 13:18 |
sandywalsh | likewise with my ML post this morning ... adding num_instances is no risk | 13:19 |
sandywalsh | but changing the return type from Instance ID to Reservation ID is a no-no | 13:19 |
sandywalsh | and then we need new commands to list all instances given a Reservation ID | 13:19 |
sandywalsh | (which should be a new command) | 13:20 |
sandywalsh | My concern is having a parallel universe where all these extra commands are available | 13:20 |
jaypipes | sandywalsh: yup, agreed. | 13:22 |
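A minimal sketch of the "extend, don't change" rule being discussed. This is hypothetical code, not actual Nova source: adding an optional parameter (or a new resource like /zones) leaves existing 1.0/1.1 clients untouched, while changing an existing return type would break them.

```python
# Hypothetical sketch of "extend vs. change"; no names here come from
# the real Nova codebase.

def spawn_instance(body):
    """Stub standing in for the real scheduler/compute path."""
    return 'i-%06x' % (hash(str(body)) & 0xffffff)  # fake instance ID

def create_server(body):
    # Extending: a new optional parameter with a backwards-compatible
    # default. Clients that never send num_instances see no change.
    num_instances = int(body.get('num_instances', 1))
    instance_ids = [spawn_instance(body) for _ in range(num_instances)]
    # Changing (the no-no): returning a reservation ID here instead of
    # the instance ID would break existing 1.0/1.1 clients, which is
    # why listing instances by reservation ID belongs in a new command.
    return {'server': {'id': instance_ids[0]}}
```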
sandywalsh | jaypipes, but I see your point about "what can live under api.openstack" ... lemme stew on that some more. My gut says they're common enough ... for now | 13:24
jaypipes | sandywalsh: that was me just being curious, nothing more. | 13:24
sandywalsh | still, good observation | 13:25 |
jaypipes | sandywalsh: was just wondering about it, but I can understand why you want it outside the openstack folder. | 13:25 |
jaypipes | sandywalsh: added a comment on the merge prop about some completely different things... | 13:25 |
sandywalsh | cool ... thanks! | 13:25 |
jaypipes | feels good to finally be back to work. | 13:25 |
jaypipes | heh, that sounds bad... ;) | 13:25 |
sandywalsh | where in greece were you? | 13:25 |
jaypipes | sandywalsh: mykonos. | 13:25 |
sandywalsh | awesome ... beautiful area | 13:26 |
sandywalsh | get over to turkey? | 13:26 |
sandywalsh | I spent nearly a month on Ios when I graduated ... too much fun. | 13:26 |
jaypipes | sandywalsh: my wife did for 6 days before meeting the rest of us in Greece. She had a blast. | 13:26 |
jaypipes | sandywalsh: though the hot air balloon she and her friend crashed pretty hard into the side of a small cliff! | 13:27 |
sandywalsh | wow ... no way! | 13:27 |
jaypipes | sandywalsh: they were rattled but ok :) | 13:27 |
jaypipes | sandywalsh: had a great time overall. great to get away. | 13:27 |
sandywalsh | almost as dangerous as the scooters over there :) | 13:27 |
sandywalsh | for sure ... I have to get my wife over there sometime. | 13:28 |
sandywalsh | well ... welcome back! The thing in front of you is called a "keyboard" | 13:28 |
jaypipes | sandywalsh: lol :) thx | 13:29 |
jaypipes | sandywalsh: and I hear you about the scooters. people are crazy on the road in mykonos. | 13:29 |
sandywalsh | + a cliff if you screw up | 13:30 |
jaypipes | sandywalsh: we had two cars (there were 9 of us staying at the villa) but I really liked bombing around the island on a 4-wheeler ATV most days. | 13:30 |
sandywalsh | awesome | 13:31 |
*** foxtrotgulf has joined #openstack-dev | 13:32 | |
*** ameade has joined #openstack-dev | 14:07 | |
*** dprince has joined #openstack-dev | 14:09 | |
*** _0x44_ is now known as _0x44 | 14:28 | |
*** jkoelker has joined #openstack-dev | 14:37 | |
mtaylor | jaypipes: ola! | 14:38 |
jaypipes | mtaylor: hi! :) | 14:39 |
markwash | so--distributed sched guys. . I still don't get the necessity of N instances per create request | 14:48 |
markwash | putting aside for a moment jaypipes' point about all-or-nothing requests | 14:48
*** rods has joined #openstack-dev | 14:48 | |
soren | markwash: Just responded on the mailing list about that. | 14:50
soren | Oh. | 14:50 |
jaypipes | markwash: it's the difference between treating the request as a unit or not. | 14:50 |
soren | Didn't make it to jay's response yet. Sounds like the same thing. | 14:50
markwash | basically, the scheduler's job is to map a pair of (current state, N <number requested>) -> a list of N hosts (possibly with repeats) | 14:50
*** rods has quit IRC | 14:51 | |
markwash | couldn't we make it so that taking the concatenation of repeating this mapping N times with one instance per request produces the same list? | 14:51
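A toy illustration of the mapping markwash describes, assuming a greedy most-free-RAM scheduler (entirely hypothetical): batching N into one request and issuing N single-instance requests only agree if the state is refreshed between the single calls, which is exactly the consistency question being raised.

```python
RAM_PER_INSTANCE = 2048

def schedule(state, n):
    """Map (current state, n) -> list of n hosts, greedily by free RAM."""
    free = dict(state)  # host -> free RAM, copied so we can decrement
    picks = []
    for _ in range(n):
        best = max(free, key=free.get)
        picks.append(best)
        free[best] -= RAM_PER_INSTANCE
    return picks

state = {'host1': 5000, 'host2': 4096}
print(schedule(state, 3))                         # ['host1', 'host2', 'host1']
# N single-instance calls against a *stale* state diverge from the batch:
print([schedule(state, 1)[0] for _ in range(3)])  # ['host1', 'host1', 'host1']
```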
*** rods has joined #openstack-dev | 14:51 | |
jaypipes | markwash: hmm, I kind of think of it like this: the scheduler's job is to create a task for the entire request, then ask around the zones and hosts to see if the request can be fulfilled, then ask them to fulfill it. | 14:51 |
markwash | jaypipes: this is a transactionality problem, and I get it that it sucks for it to be the clients job to manage that | 14:52 |
markwash | jaypipes: but I wonder if it might be okay for it to remain a client problem for now | 14:53 |
markwash | also sorry for the unduly mathematical description | 14:53 |
jaypipes | markwash: no, it's not just an atomicity problem. it's that it's a synchronicity problem, as I hinted at in my ML post. Without reservations, you don't have true async operations. Without dealing with the entire thing as a singular request unit, you have no reservation ID. | 14:54
markwash | jaypipes: I buy that--but I wonder if we could just fix instance id to be like that? | 14:54 |
*** sirp__ has joined #openstack-dev | 14:56 | |
jaypipes | markwash: Not until instance ID becomes non-zone-aware. Because while instance ID remains tied to a particular zone DB or URI structure, that part of the operation must by nature remain synchronous in order to return the instance ID through the original API call. | 14:56 |
markwash | jaypipes: that's what I thought we might fix | 14:56 |
jaypipes | markwash: if instance ID was a UUID, I suppose you could use instance ID as the reservation ID. But you'd still be left with the atomicity problems. | 14:56 |
markwash | does the scheduler talk to other zones through the rest api? or something else? | 14:57 |
jaypipes | markwash: via messaging layer (rpc) | 14:57 |
dabo | jaypipes: no | 14:57 |
dabo | all inter-zone communication is via http calls | 14:57 |
dabo | queues don't span zones | 14:58 |
jaypipes | dabo: sorry, my mistake. | 14:58 |
jaypipes | markwash: ^^ | 14:58 |
*** dragondm has joined #openstack-dev | 14:59 | |
*** rods has quit IRC | 14:59 | |
dabo | markwash: I suppose the api could create N UUIDs and return those to the caller, and then pass them down to the zones, but that seems rather clumsy compared to a single reservation ID | 14:59 |
markwash | I think the trouble with pushing uuids (as I'm proposing) is that there is no sensible way to allow the instance id to be set by the http api client | 15:00 |
markwash | thus, the distributed scheduler wouldn't have a mechanism for pushing the instance ids | 15:00 |
dabo | markwash: of course. Instance IDs are database-generated, and the client has no way to talk to the appropriate DB | 15:00 |
dabo | if we switched to UUIDs for instance IDs, then yes, the scheduler could pre-allocate and pass those down | 15:01 |
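A sketch of the pre-allocation dabo mentions, under the assumption that instance IDs become UUIDs (at the time they were DB-generated integers); the child-zone client object and its method are hypothetical.

```python
import uuid

def create_instances(num_instances, child_zone_client):
    # The top-level API mints the IDs up front...
    instance_ids = [str(uuid.uuid4()) for _ in range(num_instances)]
    # ...and passes them down to the child zone, so nothing has to wait
    # synchronously on a zone-local database to learn the IDs.
    for iid in instance_ids:
        child_zone_client.create_server(instance_id=iid)  # hypothetical call
    return instance_ids  # usable immediately as reservation handles
```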
* jaypipes goes off to pre-generate 1 billion UUIDs. | 15:01 | |
markwash | I meant more that even in the case where the api is generating UUIDs for instance ids, when it hands them to the scheduler and the scheduler goes to talk to the http api of a child zone. . . | 15:02 |
notmyname | be careful! you don't want to exhaust the supply | 15:02 |
markwash | I think there will not be a way to pass the uuid into the create call to the child zone | 15:03 |
markwash | because on the api side we would not want to allow our clients to require certain uuids since it could create problems | 15:03 |
markwash | not sure if I'm being very clear. . . :-) | 15:04 |
jaypipes | notmyname :) | 15:04 |
notmyname | just make sure to save some for everybody else ;-) | 15:04 |
jaypipes | markwash: hey, is jorge on IRC? | 15:04 |
*** antonyy has quit IRC | 15:04 | |
dabo | markwash: if you used the pure OS API, then yes. But we are already adding "internal use only" calls to novaclient to provide the glue to enable inter-zone communication | 15:05 |
jaypipes | markwash: not sure I quite understand that last point... | 15:05 |
markwash | jaypipes: doesn't look like it, and I can't seem to get a hold of him | 15:05 |
jaypipes | markwash: what is his nick? | 15:05 |
markwash | dabo: that would cover it | 15:06 |
markwash | jaypipes: without him on line, I have no tab completion to tell me :-) | 15:06 |
jaypipes | markwash: hehe, ok, thought you might know it ;) | 15:06 |
*** yamahata_lt has quit IRC | 15:07 | |
markwash | I will reiterate, I think it would be straightforward to add res ids and multiple instances to a request to the api--I guess I am just worried about changing the api | 15:07 |
markwash | and I guess that's Jorge's concern as well? | 15:08 |
jaypipes | markwash: I suppose. But, then again, I wasn't really talking about 1.1. :) I was brainstorming for a 2.0 API. | 15:08 |
markwash | jaypipes: oh--somehow I missed that completely | 15:09 |
jaypipes | markwash: :) | 15:09 |
jaypipes | markwash: the idea that you and jorge are talking about (using UUIDs or similar) and returning the "reservation ID" as the instance ID is perfectly fine for 1.X IMHO. I was prognosticating about a future, cleaner, atomic request-centric 2.0 API | 15:10 |
markwash | jaypipes: I like that prognostication | 15:11 |
dabo | markwash: the new dist-scheduler code has the plumbing to return resID and accept N instances; it's just commented out for now. We still return a single instance ref. | 15:12 |
jaypipes | markwash: and I'm not just considering it for Nova, but for Glance as well :) For instance, we're discussing a "builder" API that would allow people to construct images from pieces of other images or application stacks (think: rPath's rBuilder API). It would be best to have that use a reservation/fully async API | 15:12 |
*** cp16net has joined #openstack-dev | 15:12 | |
*** Zangetsue has quit IRC | 15:15 | |
*** rnirmal has joined #openstack-dev | 15:16 | |
*** jorgew has joined #openstack-dev | 15:24 | |
*** redbo has joined #openstack-dev | 15:29 | |
*** ChanServ sets mode: +v redbo | 15:29 | |
jaypipes | that was a short visit from our friend jorgew :) | 15:33 |
markwash | enriching | 15:33 |
jaypipes | markwash: :) | 15:34 |
ttx | Nova core reviewers: please give https://code.launchpad.net/~morita-kazutaka/nova/snapshot-volume/+merge/61071 some review love, looks like it's one of the few features that can actually make the diablo-1 milestone :) | 15:35 |
openstackjenkins | Project nova build #929: SUCCESS in 2 min 47 sec: http://jenkins.openstack.org/job/nova/929/ | 15:39 |
openstackjenkins | Tarmac: Fixes euca-attach-volume for iscsi using Xenserver | 15:39 |
openstackjenkins | Minor changes required to xenapi functions to get correct format for volume-id, iscsi-host, etc. | 15:39 |
BK_man | somebody changed Jenkins job for tarballs: http://nova.openstack.org/tarballs/?C=M;O=D | 15:47 |
*** foxtrotdelta has joined #openstack-dev | 15:50 | |
*** foxtrotgulf has quit IRC | 15:52 | |
*** antonyy has joined #openstack-dev | 15:57 | |
soren | BK_man: Erk, thanks for the heads up. | 16:04 |
*** zigo-_- has quit IRC | 16:09 | |
soren | BK_man: Excellent timing there. Thanks. | 16:12 |
blamar_ | soren: I have a branch dependent on a deb packaging change...how does that normally play out? | 16:25 |
soren | blamar_: Depends. Which one? | 16:26 |
blamar_ | https://code.launchpad.net/~rackspace-titan/nova/libvirt-firewall-breakout | 16:26 |
soren | blamar_: Generally, we just let the change land on trunk and then fix the build afterwards. | 16:26 |
*** Tv has joined #openstack-dev | 16:27 | |
blamar_ | soren: k, it's the ajaxterm patch, pretty simple fix if my branch goes through | 16:27 |
soren | Packages aren't kept indefinitely, so if one doesn't work, it's not a big deal. | 16:28 |
soren | Also, if the package simply fails to build, no one will ever be affected by it. | 16:29
soren | ...if the build succeeds, but the packages are broken afterwards, the distribution format provides us with a straightforward way to fix it (i.e. just fix it and let apt-get suck down the fix). | 16:30
soren | So, in summary, go ahead and break it. No biggie. :) | 16:30
*** pyhole has quit IRC | 16:41 | |
vishy | is anyone around who is interested in no-db-messaging? I'd like to discuss options based on markwash's last comment | 16:45 |
*** pyhole has joined #openstack-dev | 16:46 | |
soren | blamar_: We need to stop shipping ajaxterm anyway. It's not our job (and we suck at it, too). | 16:47 |
blamar_ | soren: couldn't agree more | 16:48 |
blamar_ | :) | 16:48 |
soren | blamar_: So really the patch we carry should be applied to nova proper and we should yank ajaxterm. | 16:48 |
* soren files a bug, so we don't forget. | 16:48 | |
blamar_ | much appreciated | 16:48 |
vishy | does anyone actually use ajaxterm? | 16:49 |
vishy | it was useful before we had vnc... | 16:49 |
blamar_ | i'll admit it's handy for dashboards... | 16:49 |
*** bcwaldon has joined #openstack-dev | 16:49 | |
blamar_ | but that shouldn't be in nova | 16:49
soren | We don't ship noVNC, do we? | 16:50 |
* soren sure hopes not | 16:51 | |
jaypipes | markwash: :P | 16:52 |
*** foxtrotdelta has quit IRC | 16:54 | |
vishy | soren: no, we have a separate version that people have to grab manually | 16:55 |
soren | vishy: O_o | 16:55 |
soren | vishy: That's even worse! | 16:55 |
vishy | ajaxterm should really be in dashboard if it is anywhere :) | 16:55 |
vishy | soren: :) | 16:56 |
soren | Not kidding! | 16:56 |
soren | Where is it? | 16:56 |
*** zaitcev has joined #openstack-dev | 16:57 | |
vishy | soren: https://github.com/openstack/noVNC | 16:57 |
* soren heads to dinner | 16:58 | |
soren | Will have to cry later. | 16:58 |
* vishy looks forward to soren's tears | 17:02 | |
* jaypipes gets soren a bucket. | 17:02 | |
*** mgius has joined #openstack-dev | 17:05 | |
*** cp16net has quit IRC | 17:08 | |
*** foxtrotgulf has joined #openstack-dev | 17:16 | |
*** anotherjesse has joined #openstack-dev | 17:17 | |
vishy | anotherjesse: ohai! | 17:19 |
westmaas | o/ | 17:19 |
westmaas | anotherjesse: sounds like you are doing some work on authn integration? | 17:19 |
anotherjesse | westmaas: yeah, working on the future ;) | 17:20 |
vishy | westmaas: so we have some high speed deliverables that sort of depend on the first version of this, but we don't necessarily need/want to do it all | 17:20 |
anotherjesse | westmaas: so far I've only had to push changes to dash/keystone | 17:20 |
anotherjesse | westmaas: I pushed into keystone a shim (lazy user/project provisioning) and paste config | 17:21 |
anotherjesse | my changes to dash are because I haven't figured out where best to put the token authentication | 17:21 |
westmaas | anotherjesse: cool. does this cover what you needed in the short term? | 17:22 |
westmaas | or do you still have some additional work to do? | 17:22 |
anotherjesse | westmaas: not 100% sure - what is your team working on w.r.t. this for next couple weeks? | 17:23 |
westmaas | anotherjesse: we are going to decide in 40 minutes :) | 17:23 |
anotherjesse | I was heads down in hacking it to work - now I'm to the point of fixing how it works | 17:23 |
anotherjesse | westmaas: high level goal is to make it work well - large issues left are: | 17:25
blamar_ | anotherjesse: annoying, but can you / are you working against github "issues" or is it too early and the code changing too quickly? | 17:25
anotherjesse | blamar_: for keystone? | 17:25 |
blamar_ | ya | 17:25 |
anotherjesse | blamar_: it was changing quickly but I think we should use either the issues or the bzr project bugs | 17:26 |
anotherjesse | http://bugs.launchpad.net/keystone | 17:26 |
anotherjesse | the project used issues before the summit - where it was decided to not use issues | 17:27 |
blamar_ | anotherjesse: k, been trying to follow a lot of the bugs and pull requests and just want to make sure we have discussion much like nova on a lot of this because it's so easy to change a new project | 17:27 |
anotherjesse | thus far my additions have been taking the existing auth_token and creating a new shim | 17:27 |
anotherjesse | https://github.com/khussein/keystone/blob/master/keystone/auth_protocols/nova_auth_token.py | 17:28 |
blamar_ | anotherjesse: yeah saw that, good stuff :) | 17:29 |
anotherjesse | blamar_: it is probably all !@#!@$ | 17:29 |
anotherjesse | since I hadn't used paste, the api, or django before | 17:29 |
anotherjesse | plus the copy&paste ware for the non-shim middleware is because I needed a file that works when put inside of nova | 17:30 |
anotherjesse | eg, importing from other keystone files = break | 17:30 |
blamar_ | anotherjesse: Logic-wise though, there is a lot I want to bring up in the project to make things less coupled and more clean...but if we get the big picture nailed down for the migration to keystone it will be easier | 17:30 |
anotherjesse | blamar_: the ugliest part is currently in the dashboard imho | 17:31 |
anotherjesse | we need to add hooks to the openstack.compute library to allow dashboard to be the UI for authentication but then only store the token in the session | 17:31 |
anotherjesse | plus keystone doesn't currently return the service catalog | 17:32 |
blamar_ | yeah, that's on my list somewhere | 17:32 |
anotherjesse | blamar_: awesome | 17:33 |
anotherjesse | blamar_: since both our teams are going to be touching the code we should probably share on the mailing list | 17:33 |
blamar_ | EC2 support non-existent as well from what I understand | 17:33 |
anotherjesse | blamar_: short term I'm focusing on making the keystone+dash+nova (and then swift+glance) work together | 17:33 |
anotherjesse | interop with aws calls is next after all that goodness lands - since you can use it without the keystone middleware if you care about it now | 17:34
blamar_ | anotherjesse: k, I'm not certain what we're going to be working on, but I'm mostly concerned with making certain keystone is going to be acceptably similar to other openstack projects for official inclusion | 17:35 |
vishy | markwalsh: + whoever is interested. No-db-messaging | 17:36 |
anotherjesse | vishy: seriously - this is the morning of keystone ;) | 17:37 |
anotherjesse | not your keystone-light solutions | 17:37 |
*** anotherjesse has quit IRC | 17:37 | |
blamar_ | dammit, I'm not original..I have a branch of keystone called keystone-light :( | 17:37 |
vishy | markwalsh: so your concerns about updates being missed are valid | 17:37 |
vishy | markwalsh: I'm concerned that the alternate solution is too complex, but if we are willing to figure out the best approach and implement it I'm ok with it | 17:38
*** cloudnod has quit IRC | 17:39 | |
blamar_ | vishy: I'm not liking the whole 'multicall' solution, but I'm having trouble putting it into words :) | 17:39 |
markwash | vishy: I guess the complexity of having a permanent listener for volume updates is in the deployment details | 17:40 |
vishy | the main sticking points are: how do we make it work with all of the services | 17:40 |
vishy | blamar_: I was essentially trying to find the minimal amount of changes to make | 17:40 |
markwash | i.e. how many are there and where do they live? and how do they play with the other gazillion listeners we'd need to expand this approach | 17:41
vishy | markwalsh: yes | 17:41 |
vishy | wow | 17:41 |
vishy | why do i keep putting an l in there | 17:41 |
vishy | markwash: ^^ | 17:41 |
blamar_ | haha, we've been debating letting you know | 17:41 |
markwash | I take it as a compliment, assuming you are comparing me with sandywalsh | 17:41 |
westmaas | interestingly mark's full name is Markl | 17:41 |
westmaas | so you just had the l in the wrong place | 17:42 |
markwash | westmaas: no one is named that | 17:42 |
blamar_ | oh westmaaas... | 17:42 |
*** jesse_ has joined #openstack-dev | 17:42 | |
sandywalsh | markwash ... I'd think twice about that association :) | 17:42 |
jesse_ | westmaas / blamar_ - back - network is FLAKEY today :( | 17:43 |
*** jesse_ is now known as anotherjesse | 17:43 | |
vishy | markwash: yes that is the issue right now, so we could just start a greenthread in every api worker listening for updates, but I think this gets clunky with updates for all the services | 17:43 |
markwash | I'm more worried about where we run those workers. right now you can run as many instances of the api and manager as you want | 17:44 |
vishy | markwash: I think what we really need is a new binary called nova-writer, but I hesitate to make it even harder to install | 17:44 |
vishy | markwash: if we are pushing it into a queue, it doesn't really matter if we have multiple running | 17:44 |
vishy | the first "writer" to grab the message can update the db and go on | 17:45 |
vishy | unless we're disucssing getting rid of the db altogether, in which case we need to do something like sandywalsh's fanout | 17:45 |
markwash | I'm a bit unfamiliar with our messaging setup, can we rely on message ordering across multiple consumers? | 17:45 |
vishy | ordering I don't think so, but does the ordering matter? | 17:45 |
markwash | well, in the volume example, it seemed like there were multiple status updates feeding back to the listener possibly rapidly | 17:46 |
vishy | markwash: hmm, that is a good point, yes we don't want to update the status back to something else from available | 17:46 |
markwash | so it would be unfortunate if the db got stuck in an intermediate status because ordering was broken | 17:46 |
vishy | this is why something like the multicall solution is simpler. There is a particular thread that is owning one set of updates. | 17:47 |
markwash | but right now, we seem to have only one authoritative source for the information, so I think a timing approach for ordering on the consumer side could be workable | 17:47 |
anotherjesse | anyone else trying to do a clean install recently on maverick? | 17:47 |
anotherjesse | I run into TypeError: f(ile) should be int, str, unicode or file, not <open GreenPipe '<fd:4>', mode 'wb' at 0x2a2bb90> | 17:47 |
anotherjesse | because python-eventlet from the ppa won't install - and the version that does doesn't have the patch | 17:48 |
markwash | hmm yeah that's pretty much my worry, along with cluttering the install | 17:48 |
markwash | but I do get a warm and fuzzy cqrs feeling when I think about permanently running listeners as autonomous components | 17:49
anotherjesse | probably just need to hit the ppa (hopefully will see soren later today :) ) | 17:49 |
markwash | maybe we only need one nova-writer at a time in any given zone | 17:50 |
markwash | that could help too | 17:50 |
vishy | markwash: cqrs? Sure, but that gives us another SPOF... | 17:50 |
markwash | I guess you could have multiple but only one running? it's the kind of service that is easy to stand back up after it falls down and starts repairing missing updates immediately. . . | 17:51
markwash | but I'm not totally convinced on that point. . | 17:51 |
vishy | markwash: an alternate solution would be to have api get the state of all the volumes on boot | 17:51 |
vishy | markwash: and any that are not in available, it could start a polling thread with a timeout | 17:51 |
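A rough sketch of that alternative, assuming eventlet green threads (which Nova uses); the db helper names are made up for illustration.

```python
import eventlet

def resume_pending_volumes(db, context):
    # On API boot: find every volume stuck in a transient state and
    # watch each one with its own green thread and timeout.
    for volume in db.volume_get_all(context):  # hypothetical helper
        if volume['status'] != 'available':
            eventlet.spawn(_poll_volume, db, context, volume['id'])

def _poll_volume(db, context, volume_id, interval=5, timeout=600):
    waited = 0
    while waited < timeout:
        if db.volume_get(context, volume_id)['status'] == 'available':
            return
        eventlet.sleep(interval)
        waited += interval
    db.volume_update(context, volume_id, {'status': 'error'})  # gave up
```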
markwash | I'm guessing we could solve the timing issues with vector clocks, but I couldn't find a good library to help with that so I'm hesitant to suggest it seriously | 17:53 |
vishy | markwash: yes, and we're adding a hell of a lot of complexity if we go down that route... | 17:54 |
markwash | agreed | 17:55 |
blamar_ | yeah, lets throw CQRS and vector clocks right out :) | 17:55 |
markwash | blamar_: you would | 17:55 |
bcwaldon | . . . | 17:56 |
vishy | blamar_: do you have an alternative suggestion? | 17:56 |
markwash | sorry vishy these guys are really enjoying making fun of me right now :-) | 17:56 |
vishy | blamar_: do we create nova-writer and just have it collect updates from the services and write them to the db? | 17:56 |
blamar_ | markwash: So replace all db.update_volume with rpc.cast('update_volume', volume_ref) first? what we kind of talked about earlier? | 17:56 |
markwash | blamar_: in the manager that is | 17:57 |
blamar_ | right | 17:57 |
vishy | as a side note I'm very annoyed that we don't have an easy way to deserialize from date strings back to datetimes | 17:57
blamar_ | and then have a rpc.call('volume_updated', volume_ref) for logic which needs to happen after successful creation | 17:58 |
*** antonyy_ has joined #openstack-dev | 17:58 | |
vishy | blamar_: hmm, who would own that? | 17:58 |
*** antonyy has quit IRC | 17:58 | |
*** antonyy_ is now known as antonyy | 17:58 | |
markwash | vishy: I'd like to regroup around this a little bit and try to come up with a proposal that handles ordering | 17:58 |
markwash | also I have a sprint meeting like nowish | 17:59 |
vishy | blamar_: I'd rather have that part use the nofification code | 17:59 |
vishy | blamar_: rather than have a separate channel... | 17:59 |
*** markwash is now known as markwash-away | 17:59 | |
vishy | markwash: ok | 17:59 |
vishy | more discussion later | 17:59 |
*** antonyy has quit IRC | 18:02 | |
*** antonyy_ has joined #openstack-dev | 18:02 | |
*** dprince has quit IRC | 18:04 | |
*** dprince has joined #openstack-dev | 18:07 | |
*** antonyy_ has quit IRC | 18:15 | |
*** antonyy has joined #openstack-dev | 18:15 | |
*** anotherjesse has quit IRC | 18:34 | |
*** cloudnod has joined #openstack-dev | 18:36 | |
cloudnod | ah, back. | 18:41 |
cloudnod | downside to irccloud is that when they go down, you go down. | 18:41 |
*** anotherj1sse has joined #openstack-dev | 18:42 | |
*** anotherj1sse has joined #openstack-dev | 18:44 | |
*** anotherj1sse has joined #openstack-dev | 18:45 | |
cloudnod | hey jesse | 18:45 |
*** anotherj1sse has quit IRC | 18:48 | |
*** anotherj1sse has joined #openstack-dev | 18:48 | |
dabo | is there anyone available who is familiar with volume creation who could answer a few questions? I'm looking into extending that code to work across a nested zone structure, and I need a reality check. | 19:07 |
soren | anotherj1sse: What version from the ppa won't install? | 19:09 |
soren | anotherj1sse: pastebin or it didn't happen. :) | 19:19 |
*** Daviey has joined #openstack-dev | 19:22 | |
Daviey | Hi. Are we allowed to use python 2.7 features in Nova? | 19:22 |
soren | No :( | 19:23 |
jk0 | Daviey: we're supporting 2.6 | 19:23 |
Daviey | bum, | 19:23 |
Daviey | thanks | 19:23 |
soren | Some would say we should be glad we get to use 2.6 features. | 19:23 |
Daviey | true :) | 19:24 |
soren | I don't agree with them, but there you go :) | 19:24 |
soren | I am sort of glad that I don't have to maintain python 2.7 for the Lucid PPA. | 19:25 |
Daviey | Well i am glad we aren't yet using 3.0 :) | 19:25 |
soren | doko wants us to :) | 19:25 |
soren | Unfortunately, a bunch of the libraries we use are on the python3 hall of shame. | 19:26 |
*** galstrom has joined #openstack-dev | 19:27 | |
*** galstrom has left #openstack-dev | 19:27 | |
vishy | dabo: I am familiar | 19:27 |
vishy | dabo: but I am about to eat lunch | 19:27 |
dabo | vishy: can you ping me when you're done? Maybe we can do a quick skype chat | 19:28 |
soren | anotherj1sse: E-mail me your response. I've got some stuff I need to do this evening, so I'll be (attempting to be) ignoring IRC. | 19:29 |
anotherj1sse | soren: will do - thx! | 19:30 |
soren | anotherj1sse: fwiw, I just installed python-eventlet from the ppa on a freshly started maverick instance in the rackspace cloud. Worked great. | 19:30 |
anotherj1sse | soren: I'm using lxc - maybe that is it? | 19:31 |
soren | anotherj1sse: if you could just share the apt-get output, I could probably tell you quite quickly. | 19:31 |
anotherj1sse | soren: http://pastie.org/1934814 | 19:31 |
soren | Nothing to do with lxc. | 19:32 |
soren | anotherj1sse: apt-get update and try again. | 19:32 |
soren | anotherj1sse: ...and it's complaining about python, not python2.6. | 19:32 |
anotherj1sse | soren: i'm doing this from fresh lxc maverick containers | 19:32 |
anotherj1sse | soren: also I was looking at this really late friday ... so I could be really wrong :-/ | 19:33 |
soren | anotherj1sse: If you could give me access to it, I could probably sort it out in a flash. | 19:33 |
anotherj1sse | ahh: python is older than python2.6! | 19:33 |
soren | Well... | 19:33 |
soren | python2.6 is the actual python. | 19:33 |
anotherj1sse | ii python 2.6.6-2ubuntu1 interactive high-level object-oriented language (default version) | 19:33 |
anotherj1sse | ii python2.6 2.6.6-5ubuntu1 An interactive high-level object-oriented language (version 2.6) | 19:34 |
soren | python is not the actual interpreter. | 19:34 |
anotherj1sse | right | 19:34 |
anotherj1sse | after apt-get update install still says: | 19:34 |
anotherj1sse | python-eventlet : Depends: python (>= 2.6.6-2ubuntu2~) but 2.6.6-2ubuntu1 is to be installed | 19:34 |
soren | It's also not strictly a metapackage, but sort of. | 19:34 |
soren | anotherj1sse: Pastebin your sources.list, please. | 19:35 |
anotherj1sse | I'm on a lxc maverick container - lxc-create -t maverick; then adding nova-core-trunk-maverick.list | 19:36 |
anotherj1sse | http://pastie.org/1962729 | 19:36 |
soren | That's your problem. | 19:36 |
soren | You need maverick-updates. | 19:36 |
soren | Add: | 19:36 |
soren | deb http://archive.ubuntu.com/ubuntu maverick main universe | 19:36 |
soren | Whoops | 19:36 |
soren | deb http://archive.ubuntu.com/ubuntu maverick-updates main universe | 19:36 |
soren | apt-get update | 19:36 |
soren | and win. | 19:36 |
anotherj1sse | should we send a patch to lxc in natty - their template doesn't include maverick-updates | 19:37 |
soren | Hmm... | 19:37 |
soren | Not sure. | 19:37 |
soren | It's valid not to have it enabled. | 19:37 |
soren | ...but sure. | 19:37 |
anotherj1sse | kinda weird you can have different versions of python/python2.6 ... anyway I'll patch my shell script to add it | 19:38 |
anotherj1sse | thanks! | 19:38 |
soren | Sure thing. | 19:39 |
*** bcwaldon has quit IRC | 19:58 | |
vishy | dabo: ping | 20:01 |
dabo | wanna jump on skype? Probably easier than typing everything | 20:02 |
dabo | I'm ed.leafe | 20:02 |
vishy | sure | 20:02 |
*** BinaryBlob has joined #openstack-dev | 20:06 | |
*** zaitcev has quit IRC | 20:06 | |
anotherj1sse | westmaas: how did your meeting about keystone go? | 20:06 |
*** dprince has quit IRC | 20:06 | |
blamar_ | anotherj1sse: we're going to knock out service directory support more than likely | 20:08 |
westmaas | ^^ | 20:09 |
uvirtbot | westmaas: Error: "^" is not a valid command. | 20:09 |
westmaas | uvirtbot: quiet | 20:09 |
uvirtbot | westmaas: Error: "quiet" is not a valid command. | 20:09 |
*** zaitcev has joined #openstack-dev | 20:10 | |
*** antonyy_ has joined #openstack-dev | 20:13 | |
*** antonyy has quit IRC | 20:15 | |
*** antonyy_ is now known as antonyy | 20:15 | |
*** bcwaldon has joined #openstack-dev | 20:17 | |
*** salv-orlando has joined #openstack-dev | 20:25 | |
openstackjenkins | Project swift build #265: SUCCESS in 28 sec: http://jenkins.openstack.org/job/swift/265/ | 20:32 |
openstackjenkins | Tarmac: swift-ring-builder: Added list_parts command which shows common partitions for a given list of devices. | 20:32 |
*** markwash-away is now known as markwash | 20:34 | |
anotherj1sse | blamar_: awesomeness - that will allow the integration with dashboard not to be as hacky | 20:46 |
anotherj1sse | blamar_: what approach are you guys going to take / eta? | 20:46 |
anotherj1sse | ;) | 20:46 |
anotherj1sse | pvo: know if anyone is working on admin apis for openstack api? | 20:47 |
westmaas | anotherj1sse: admin apis haven't been started yet. there is a bp for it though. | 20:52 |
pvo | anotherj1sse: yea, I was just looking the bp up. | 20:53 |
pvo | https://blueprints.launchpad.net/nova/+spec/admin-account-actions | 20:53 |
pvo | ? | 20:53 |
pvo | westmaas: thats the one? | 20:53 |
westmaas | pvo: yep | 20:53 |
westmaas | that's the one we said someone at rackspace would do it, its been assumed it would be titan, but not for sure | 20:53 |
pvo | I'm thinking anotherj1sse may want it sooner. | 20:53
anotherj1sse | westmaas / pvo - we'll probably play with it - i'll update the list as we work on it | 20:54 |
westmaas | sweet | 20:54 |
anotherj1sse | pvo: since I know you guys have reqs that will need to be handled | 20:54 |
pvo | yea, I think we're going to find out real soon what all is needed. | 20:54 |
anotherj1sse | pvo: if you have any newer thinking on it, feel free to email - otherwise we can meet in the mailing list | 20:54 |
pvo | sure thing | 20:54 |
westmaas | anotherj1sse: I think that bp covers a lot of it, glenc did the research there | 20:55 |
westmaas | http://wiki.openstack.org/NovaAdminAPI | 20:55 |
westmaas | that wiki page does, rather | 20:55 |
glenc | anotherj1sse, let me know if I can be of any help | 20:57 |
vishy | markwash: done with your scrum? New idea which i like | 20:58 |
markwash | vishy: I'm here | 20:58 |
vishy | termie made a suggestion which I think could solve a lot of our problems | 20:58 |
vishy | markwash: ^^ -> task queue | 20:59 |
vishy | markwash: the problem with having a separate writer/durable queue/vector clocks is we are basically creating our own distributed database | 20:59 |
markwash | heh agreed | 21:00 |
vishy | markwash: which I think will lead to a whole slew of new problems | 21:00 |
vishy | so here is the new proposal | 21:00 |
vishy | when we get a create_volume request in api, we create a task (new entity) | 21:00 |
vishy | that task gets marked completed when the calls are finished. | 21:01 |
Daviey | vishy, I'm duplicating (!) some of your code - which includes a NOTE(vish): <--- is it okay to keep your name there? | 21:01 |
Daviey | (in my dupe) | 21:01 |
vishy | we can have volume_apis check for unfinished tasks and handle them | 21:01 |
vishy | Daviey: fine by me | 21:01 |
vishy | :) | 21:01 |
markwash | hmm, so basically we are calling instead of casting, but we're doing it asynchronously? | 21:01 |
Daviey | cool | 21:01 |
markwash | err, in a different thread? | 21:02 |
anotherj1sse | vishy: we should just switch to zookeeper | 21:02 |
vishy | markwash: yes the idea being that the owner of the task is responsible for writing data to the db | 21:02 |
blamar_ | anotherj1sse: determining approach now, will try to get ETA out there | 21:03 |
vishy | markwash: we make tasks very easy to restart by making all of the steps in create_volume idempotent | 21:03 |
vishy | markwash: so we don't have any complicated logic in restart, we essentially just send the worker another create_volume with the same data | 21:04 |
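A sketch of what those "idempotent steps" could look like for create_volume. All of this is hypothetical: the module-level sets are toy stand-ins for real LVM/iSCSI driver state, but the get-or-create shape is the point.

```python
_ALLOCATED = set()   # toy stand-in for LVM state
_EXPORTED = set()    # toy stand-in for iSCSI export state

def create_volume(context, volume_id, size):
    # Every step checks before it acts, so resending the exact same
    # request after a crash repairs a half-finished volume instead of
    # duplicating work -- no special recovery logic needed.
    if volume_id not in _ALLOCATED:
        _ALLOCATED.add(volume_id)   # real code: lvcreate, etc.
    if volume_id not in _EXPORTED:
        _EXPORTED.add(volume_id)    # real code: set up the iSCSI target
    return {'id': volume_id, 'status': 'available'}
```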
markwash | vishy: so if something dies in the middle, and the db looks off to the client, they can just resubmit? | 21:04 |
vishy | right | 21:04 |
*** ameade has quit IRC | 21:05 | |
vishy | markwash: there would have to be some logic to figure out when a worker has died | 21:05 |
vishy | * api has died that is | 21:05 |
markwash | part of this I don't know much about. . . when a manager responds to a call, how does the actual return message make its way back? | 21:06 |
vishy | * or we could just timeout tasks after a certain amount of time and restart | 21:06 |
markwash | like what is the topic or queue id for the return message? | 21:06 |
vishy | msg_id | 21:07 |
vishy | a new queue is created based on the msg_id | 21:07 |
markwash | gotcha | 21:08 |
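Simplified pseudocode of the reply path just described (not the actual nova.rpc implementation; the connection object and its methods are hypothetical):

```python
import uuid

def call(connection, topic, msg):
    # The caller tags the message with a fresh msg_id, declares a queue
    # named after it, and waits there; the worker publishes its result
    # back onto that msg_id queue.
    msg_id = uuid.uuid4().hex
    msg['_msg_id'] = msg_id
    reply_queue = connection.declare_queue(msg_id)   # hypothetical API
    connection.publish(topic, msg)                   # hypothetical API
    return reply_queue.get(timeout=60)
```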
markwash | so with multicall, how do we know when the last return arrives? | 21:09 |
anotherj1sse | blamar_: a phased approach that v0 has a hardcoded catalog from config would allow moving forward with how dashboard / cli works with it | 21:09
markwash | I'm guessing we'd use multicall in the task queue? or maybe I'm misunderstanding how that would work | 21:10 |
vishy | markwash: we get back a none | 21:10 |
vishy | markwash: multicall is a nice little addition that means we don't have to break up into multiple api calls for each subtask | 21:12 |
vishy | markwash: but it would be possible to do the same thing without it | 21:12 |
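And a sketch of the consumer side of multicall as described: results stream onto the msg_id queue until a None terminator arrives, which is how the caller knows the last return has landed (queue object hypothetical, as before).

```python
def multicall_results(reply_queue):
    # Yield each intermediate result; a None on the queue marks the end
    # of the call.
    while True:
        result = reply_queue.get()
        if result is None:
            return
        yield result
```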
blamar_ | anotherj1sse: a service catalog seems like an optional middleware loaded by keystone at deployment if that makes sense...the content of the service catalog could come from a static config for a first version (restating for my own sake) | 21:13 |
markwash | vishy: no argument against multicall | 21:14 |
vishy | markwash: I'm thinking of prototyping the task queue idea | 21:14 |
markwash | vishy: that all sounds like it would work.. for some reason I'm having a hard time wrapping my head around it all now | 21:15 |
markwash | vishy: I blame my sprint meeting | 21:15 |
vishy | markwash: Probably be clearer with code | 21:15 |
markwash | vishy: somewhat separately, I'm wondering if we should really be worrying about ordering at all | 21:15 |
vishy | markwash: it is making things more complicated, but I think the ability to recover from failures will be worth the complication... | 21:15 |
anotherj1sse | blamar_: awesomeness - not sure if it can be optional - since the compute endpoint is only exposed via auth endpoint? | 21:16 |
markwash | vishy: I mean, I think that even the current approach could suffer from the same type of ordering problem technically | 21:16 |
anotherj1sse | blamar_: but we can discuss defaults later ;) | 21:16 |
markwash | vishy: by current approach I meant the one you merge propped | 21:16 |
vishy | markwash: hmm, really, how? | 21:16 |
blamar_ | anotherj1sse: Yeah, optional is pretty big for me, not wanting to incur the wrath of other people not wanting to use a service directory but still use Keystone | 21:17 |
vishy | markwash: On the implementation level ordering is important | 21:17 |
vishy | markwash: but idempotency is a great way to handle ordering of operations | 21:17 |
markwash | vishy: well, each return from a single multicall is sent to the same queue | 21:17 |
markwash | vishy: so if rabbit burps, then it could be out of order. . I guess it only really matters if it's the last one that gets out of order in the bigger view | 21:18
markwash | vishy: agree with idempotency a lot--I like that we can just resubmit with no ill consequences | 21:18 |
markwash | vishy: because ordering and other hiccups are probably very very rare | 21:19 |
vishy | markwash: I think messages stay in order in rabbit | 21:19 |
vishy | markwash: ordering is a problem if you have multiple readers | 21:19 |
markwash | vishy: ah okay got it | 21:19 |
vishy | markwash: but i could be wrong | 21:20 |
vishy | markwash: hmm looks like you can in crazy failure situations get out of order messages | 21:21 |
markwash | vishy: yeah that's what I was thinking | 21:22 |
markwash | vishy: but it seems pretty rare | 21:22 |
vishy | it would require one message to not get acked | 21:22 |
vishy | markwash: so with the writer and reader each in its own process, I think it actually can't happen | 21:22 |
markwash | vishy: I guess its something to worry about when we recover from a failure--if the update consumer dies, it very well could die in the middle of processing and wouldn't send an ack | 21:23 |
markwash | vishy: but even in that case, you could just say 'weird' and resubmit | 21:24 |
markwash | and idempotency should handle it | 21:24 |
markwash | I think? | 21:24 |
vishy | yes, so basically the old messages in the queue would be totally ignored | 21:25 |
vishy | we have to time them out somehow | 21:25 |
vishy | but you are resubmitting the call/multicall, so there would be a totally new msg_id | 21:25 |
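One hedge against the reordering worry above is to make the status write itself idempotent and monotonic, so a stale update can never clobber a later one. A hypothetical sketch (db helper names invented):

```python
STATUS_ORDER = {'creating': 0, 'exporting': 1, 'available': 2}

def apply_status_update(db, context, volume_id, new_status):
    # Only move forward: an out-of-order or duplicated "creating"
    # message arriving after "available" is silently dropped.
    current = db.volume_get(context, volume_id)['status']
    if STATUS_ORDER[new_status] <= STATUS_ORDER[current]:
        return
    db.volume_update(context, volume_id, {'status': new_status})
```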
*** bcwaldon has quit IRC | 21:27 | |
markwash | vishy: is it explicitly a goal of this approach that only the api servers write to the db? | 21:27 |
markwash | or would it make sense for the managers to write to the db and have the api servers only reading from it? | 21:27 |
vishy | markwash: the goal is to have the api servers do the writes | 21:28
vishy | (although it is not api specifically), just basically to minimize the number of writers | 21:28 |
vishy | for security and scalability, we don't want every host in the system writing to the db | 21:28 |
markwash | vishy: again, I'm sorry if my brain is just not working right now, but what does the task queue approach buy us over what you have in no-db-messaging now? | 21:31 |
vishy | recovery | 21:32 |
markwash | vishy: so some persistent record of the task is kept? | 21:32 |
vishy | yes, it is a db object | 21:32 |
markwash | vishy: until it is completed | 21:32 |
vishy | so we can check for uncompleted tasks periodically and restart them | 21:33
markwash | vishy: okay cool, I think I've got it a lot better now! | 21:33 |
vishy | that was the main concern with this approach right? What happens if the api dies before the create is finished? | 21:33 |
vishy | I'm trying to solve that problem without writing our own DDS | 21:34 |
markwash | vishy: yes that was my main concern | 21:34 |
markwash | vishy: I didn't get the db record of the task part earlier for some reason :-) | 21:35 |
vishy | ah yeah | 21:36 |
vishy | I don't think i ever specifically mentioned it | 21:36 |
markwash | vishy: this might even be a simple addition to your current prop. . if the delayed create just sent some annotation about the queue_id and what it was listening for to the db when it started working, and then deleted it when it is done | 21:40 |
markwash | vishy: we are sort of just tacking on a little bit of persistence to that task, giving us a handle for coming back to revisit it if its sticking around | 21:40 |
vishy | yes. +making sure the volume code is actually idempotent | 21:41 |
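Putting markwash's annotation idea together with the periodic restart vishy mentioned, a sketch (all db/rpc helpers here are hypothetical):

```python
import time

def delayed_create(db, rpc, context, request):
    # Persist a small task row before starting the work...
    task = db.task_create(context, {'request': request,
                                    'started_at': time.time()})
    # ...consume updates until the multicall's None terminator...
    for update in rpc.multicall(context, 'volume',
                                {'method': 'create_volume',
                                 'args': request}):
        db.volume_update(context, request['volume_id'], update)
    # ...and only then drop the task row.
    db.task_delete(context, task['id'])

def restart_stale_tasks(db, rpc, context, max_age=600):
    # Periodic recovery: anything still uncompleted after max_age gets
    # resubmitted; idempotent create_volume makes the blind resend safe.
    for task in db.task_get_all(context):
        if time.time() - task['started_at'] > max_age:
            rpc.cast(context, 'volume', {'method': 'create_volume',
                                         'args': task['request']})
            db.task_update(context, task['id'],
                           {'started_at': time.time()})
```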
markwash | vishy: but I won't talk your ear off. . I'll be happy to look at any more code you prop and I'll probably keep thinking about this a bit on my end as well | 21:41 |
vishy | cool | 21:41 |
vishy | markwash: we'll need a little bit of hacking to the scheduler to allow for making sure to resend a request to the same host | 21:48 |
Daviey | jaypipes, around? | 21:56 |
*** foxtrotgulf has quit IRC | 22:01 | |
*** BinaryBlob has quit IRC | 22:21 | |
*** foxtrotgulf has joined #openstack-dev | 22:28 | |
*** dragondm has quit IRC | 22:30 | |
*** dragondm has joined #openstack-dev | 22:33 | |
*** statik has quit IRC | 22:35 | |
*** rnirmal has quit IRC | 23:00 | |
*** adiantum has joined #openstack-dev | 23:38 | |
*** Tv has quit IRC | 23:44 | |
*** yamahata_lt has joined #openstack-dev | 23:47 | |
*** jkoelker has quit IRC | 23:49 | |
*** antonyy has quit IRC | 23:55 |