Wednesday, 2011-05-25

00:02 *** dragondm has quit IRC
00:20 *** Tv has quit IRC
00:22 *** antonyy has joined #openstack-dev
00:50 *** Binbingone is now known as Binbin
00:55 *** jdurgin has quit IRC
00:57 *** mgius has quit IRC
00:59 *** Tv_ has joined #openstack-dev
01:06 *** cloudgroups has joined #openstack-dev
01:25 *** cloudgroups has left #openstack-dev
02:06 *** Binbin has quit IRC
02:10 *** Arminder-Office has joined #openstack-dev
02:13 *** Arminder has quit IRC
04:40 *** Tv_ has quit IRC
05:37 *** sirp__ has quit IRC
05:51 *** zaitcev has quit IRC
05:58 *** Binbin has joined #openstack-dev
06:23 <anotherj1sse> hmm qemu powered libvirt is broken on maverick vms on rackspace cloud servers?
06:23 <anotherj1sse> ls: cannot access /sys/devices/virtual/dmi: No such file or directory
06:29 *** cloudgroups has joined #openstack-dev
06:29 <soren> anotherj1sse: When do you see that error?
06:50 <anotherj1sse> soren: the error I see is that virsh list fails
06:50 <anotherj1sse> soren: as does python-libvirt
06:52 <anotherj1sse> soren: http://pastie.org/1969950 <- also I private messaged you root access
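
For context, the failure anotherj1sse pastes can be reproduced from Python as well as from virsh; a minimal connectivity check, assuming the python-libvirt bindings are installed:

    # Minimal libvirt connectivity check. On the broken guests described
    # above, libvirt.open() itself fails, which matches "virsh list"
    # failing too.
    import libvirt

    try:
        conn = libvirt.open("qemu:///system")
        print("running domains: %s" % conn.listDomainsID())
        conn.close()
    except libvirt.libvirtError as exc:
        print("libvirt is not usable here: %s" % exc)
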
06:54 *** antonyy has quit IRC
07:42 *** openpercept has joined #openstack-dev
07:59 *** cloudgroups has left #openstack-dev
08:12 *** mancdaz has joined #openstack-dev
10:10 *** mattray has joined #openstack-dev
10:30 *** BinaryBlob has joined #openstack-dev
11:06 *** mattray has quit IRC
11:42 <sandywalsh> soren, just replied to your awesome flavor-queue idea.
11:42 *** BinaryBlob has quit IRC
11:53 <soren> sandywalsh: I have a bunch of notes on the subject from way back.. Trying to find them now.
11:55 <soren> sandywalsh: Weird. I remember stumbling upon them less than a month ago, but now I can't find them.
11:57 <sandywalsh> soren, I know the feeling
12:10 <soren> Found it!
12:11 <soren> Just where it should be, but apparantly I can't type.
12:11 * soren pastebings
12:11 <soren> See? I can't type properly.
12:11 <sandywalsh> a piece of crumpled up napkin behind the waste basket?
12:11 *** BinaryBlob has joined #openstack-dev
12:12 <soren> No, in my $HOME/Reference/OpenStack folder. There's only half a dozen things in there, not sure how I missed it first time I looked.
12:12 <sandywalsh> "Note on scheduler idea: make it awesome. <details to follow>"
12:12 <sandywalsh> :)
12:16 *** BinaryBlob has quit IRC
12:24 *** adiantum has joined #openstack-dev
12:30 *** BinaryBlob has joined #openstack-dev
12:30 *** Arminder-Office has quit IRC
12:50 <jaypipes> *yawn*
12:52 *** thatsdone has joined #openstack-dev
13:01 *** Binbin is now known as Binbingone
13:05 *** thatsdone has quit IRC
13:09 *** ameade has joined #openstack-dev
13:24 <ttx> http://wiki.openstack.org/reviewslist/ now features appropriate prioritization of stuff targeted to the next milestone
13:25 <ttx> next time you don't know what you should be reviewing, use it ^^
13:30 <ameade> :)
13:31 *** foxtrotgulf has joined #openstack-dev
13:39 *** BinaryBlob has quit IRC
13:46 *** dprince has joined #openstack-dev
13:55 *** dprince has quit IRC
14:02 *** openpercept has quit IRC
14:07 <jaypipes> ttx: I can hardly keep up with your prolific bug management the last few days :)
14:08 <ttx> jaypipes: Phase 1: fix obvious bad status, Phase 2: touch new/undecided bugs, Phase 3: refresh Incomplete bugs...
14:08 <jaypipes> ttx: step 4: inundate Jay with emails :P
14:09 <ttx> yep :)
14:23 *** troytoman-away is now known as troytoman
14:24 *** jkoelker has joined #openstack-dev
14:26 *** Arminder has joined #openstack-dev
14:35 * soren runs some errands
14:52 *** pyhole has quit IRC
14:52 *** pyhole has joined #openstack-dev
15:00 *** dragondm has joined #openstack-dev
15:11 *** troytoman is now known as troytoman-away
15:14 <annegentle> can anyone troubleshoot/fix http://eavesdrop.openstack.org?
15:18 *** sirp__ has joined #openstack-dev
15:19 *** openpercept has joined #openstack-dev
15:20 *** openpercept is now known as Guest57742
15:22 <dabo> Got a quick, simple (1 line), but critical bug fix merge prop. Can I get some nova-core love for https://code.launchpad.net/~ed-leafe/nova/lp785843/+merge/62321?
15:30 <markwash> tr3buchet: there?
15:33 <markwash> dabo: looking now
15:34 <dabo> thx
15:34 *** Tv_ has joined #openstack-dev
15:34 <markwash> dabo: I presume you were hitting the error case and getting the wrong exception, and that's what led to finding this problem?
15:35 <dabo> markwash: exactly. The exception being raised itself caused a different exception.
15:38 *** rnirmal has joined #openstack-dev
15:42 *** johnpur has joined #openstack-dev
15:42 *** ChanServ sets mode: +v johnpur
15:49 <openstackjenkins> Project nova build #936: SUCCESS in 2 min 45 sec: http://jenkins.openstack.org/job/nova/936/
15:49 <openstackjenkins> Tarmac: The code for getting an opaque reference to an instance assumed that there was a reference to an instance obj available when raising an exception. I changed this from raising an InstanceNotFound exception to a NotFound, as this is more appropriate for the failure, and doesn't require an instance ID.
15:58 <openstackjenkins> Project nova-tarball-bzr-delta build #181: FAILURE in 12 sec: http://jenkins.openstack.org/job/nova-tarball-bzr-delta/181/
15:58 <openstackjenkins> * Tarmac: Created new libvirt directory, moved libvirt_conn.py to libvirt/connection.py, moved libvirt templates, broke out firewall and network utilities.
15:58 <openstackjenkins> * Tarmac: The code for getting an opaque reference to an instance assumed that there was a reference to an instance obj available when raising an exception. I changed this from raising an InstanceNotFound exception to a NotFound, as this is more appropriate for the failure, and doesn't require an instance ID.
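
The class of bug dabo fixed is worth a concrete illustration: an except block raising an exception whose constructor needs a value the failing path never assigned, so the handler itself dies. A hypothetical sketch, not nova's actual code:

    # Hypothetical sketch of the bug class fixed above.
    class NotFound(Exception):
        pass

    class InstanceNotFound(NotFound):
        def __init__(self, instance_id):
            super(InstanceNotFound, self).__init__(
                "Instance %s could not be found" % instance_id)

    def get_opaque_ref(name, instances):
        try:
            instance_obj = instances[name]
        except KeyError:
            # Buggy version: raise InstanceNotFound(instance_obj.id)
            # instance_obj is unbound on this path, so the handler would
            # raise UnboundLocalError instead of the intended exception.
            # The fix raises the plain NotFound, which needs no instance ID:
            raise NotFound("no instance named %r" % name)
        return instance_obj
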
16:00 *** dprince has joined #openstack-dev
16:02 *** rnirmal_ has joined #openstack-dev
16:04 *** antonyy has joined #openstack-dev
16:04 <openstackjenkins> Project nova build #937: SUCCESS in 2 min 41 sec: http://jenkins.openstack.org/job/nova/937/
16:04 <openstackjenkins> Tarmac: Created new libvirt directory, moved libvirt_conn.py to libvirt/connection.py, moved libvirt templates, broke out firewall and network utilities.
16:04 *** rnirmal_ has quit IRC
16:06 *** rnirmal has quit IRC
16:06 *** rnirmal has joined #openstack-dev
16:33 *** adiantum has quit IRC
16:37 <tr3buchet> markwash: what's up?
16:38 <anotherj1sse> pvo (or other cloud servers guys) - anyone know why qemu/libvirt doesn't work on maverick on cloud servers? - log here: http://pastie.org/1969950 - I can give root access
16:39 <sirp__> if anyone is interested in dist-sched work, we'd love some feedback on https://code.launchpad.net/~sandy-walsh/nova/dist-sched-2a/+merge/61245 and https://code.launchpad.net/~rconradharris/nova/dist-sched-2b/+merge/61352
16:40 <pvo> anotherj1sse: soren might know. I thought he had played with it in the past. I hadn't worked with libvirt as much.
16:40 <pvo> could it be the kernel?
16:41 <anotherj1sse> pvo: that is what sleepsonthefloor was thinking ...
16:41 <pvo> what kernel are you running?
16:43 <anotherj1sse> pvo: Ubuntu 10.10 - uname -a reports: 2.6.35.4-rscloud #8 SMP Mon Sep 20 15:54:33 UTC 2010 x86_64 GNU/Linux
16:43 <anotherj1sse> so it is your fault :)
16:44 <pvo> ha. msg me your slice_id and I can poke around.
16:46 <dprince> anotherj1sse: Are you using PPA?
16:47 <dprince> anotherj1sse: The PPA packages currently require libvirt 0.8.8. I think Soren built a version of libvirt 0.8.8 specifically for the PPA.
16:47 <dprince> anotherj1sse: The Cloud Servers kernel doesn't support libvirt 0.8.8. So that is the cause of your error.
16:48 <dprince> anotherj1sse: Essentially my guess would be that your libvirt installation isn't working.
16:49 <dprince> anotherj1sse: What I do for SmokeStack (which currently runs on Cloud Servers until I can get it running on a Nova installation) is just use libvirt 0.8.3, which works great.
16:49 <dprince> anotherj1sse: Libvirt 0.8.3 won't support LXC but it should work fine otherwise.
16:49 <anotherj1sse> dprince: it should be - checking
16:49 *** adiantum has joined #openstack-dev
16:50 <anotherj1sse> ii  libvirt0  0.8.8-1ubuntu3~ppamaverick1  library for interfacing with different virtualization systems
16:50 <dprince> dprince: Yep. That version won't work. At least I couldn't get it to.
16:50 <anotherj1sse> dprince: thx
16:50 <dprince> anotherj1sse: I love it when I talk to myself. :)
16:51 <anotherj1sse> I see the source
16:51 <dprince> anotherj1sse: If you know a better solution I'm all ears.
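
A quick way to confirm which libvirt the bindings actually see, without needing a working connection (useful when qemu:///system itself won't open); a sketch assuming python-libvirt is importable:

    # Print the libvirt library version; getVersion() needs no connection.
    # The return value encodes major*1000000 + minor*1000 + release,
    # so 8008 means 0.8.8.
    import libvirt

    ver = libvirt.getVersion()
    major, rest = divmod(ver, 1000000)
    minor, release = divmod(rest, 1000)
    print("libvirt %d.%d.%d" % (major, minor, release))
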
16:56 *** zaitcev has joined #openstack-dev
16:57 <anotherj1sse> pvo: my account - anotherjesse - responds "too many requests" when I try to create a cloud server ... are there quotas I'm hitting?
16:57 *** jdurgin has joined #openstack-dev
16:57 <pvo> yep, there are.
16:57 <pvo> i can't adjust those, but I know who can.
16:58 <anotherj1sse> awesome - want an email or is this good enough?
16:58 <dprince> anotherj1sse: file a ticket in the Cloud Servers control panel and they'll bump them for you.
16:58 <pvo> if you can email me, I know who to send it to.
16:58 *** bcwaldon has joined #openstack-dev
16:59 <anotherj1sse> dprince: you hit it with smokestack too eh?
16:59 <dprince> anotherj1sse: My POST limits on /servers are bumped on one account.
17:00 <dprince> anotherj1sse: We have another account we use in Titan that I still need to get the limits increased on.
17:09 <jaypipes> markwash, bcwaldon: see my post re: pagination?
17:10 <jaypipes> sirp__: does my comment about making register_models() just do the db_sync make sense?
17:11 <sirp__> jaypipes: yeah, i think so, im gonna take a shot at auto-migrating
17:11 <jaypipes> sirp__: should just be like a couple lines of code to put in register_models..
17:11 <sirp__> jaypipes: although, i do like the idea of having setup as an explicit separate step...
17:12 <jaypipes> sirp__: I'm not too thrilled with that, tbh.
17:12 <jaypipes> sirp__: especially since it's worked automatically for a while now.
17:12 <sirp__> jaypipes, well, to be honest, it "worked" in a very broken way... left db's entirely inconsistent w/ no easy road to recovery
17:13 <jaypipes> sirp__: the easy way to recover was to simply update the migration table to the latest version...
17:13 <sirp__> jaypipes, we'll have to eyeball the schemas to figure out which version was auto-created, since the migrate_version wasn't populated
17:13 <sirp__> jaypipes, that works if they were running the latest version...
17:14 <sirp__> jaypipes, should be the case most of the time, but, is it a safe assumption in general? (not sure)
17:14 <jaypipes> sirp__: k. I still think this bug is a bit obscure and not something that would happen without the manual intervention from graham.
17:15 <sirp__> jaypipes: hmm, it wasn't hard to replicate; really wouldn't consider it too obscure. You load up glance; it generates tables. You go to upgrade and it s'plodes :)
17:16 <sirp__> jaypipes, seems like that's going to happen a lot unless we come up with a way of preventing the db from being inconsistent
17:16 <sirp__> jaypipes: either take the rails approach of always making db:migrate a prerequisite step; or auto-migrate
17:17 <jaypipes> sirp__: how was the "load up glance" done, though?
17:17 <jaypipes> sirp__: on install, glance should have db_sync run.
17:18 <jaypipes> sirp__: so I don't really see how doing an upgrade on a regular install would produce the bug.
17:19 <sirp__> jaypipes, good question. One way to trigger it is to grab source and just run glance-registry
17:19 <sirp__> jaypipes: agreed, on a normal install this probably wouldn't be triggered
17:20 <jaypipes> sirp__: ok. so why not the solution I proposed, which is register_models() does a check for versioning and runs migrate.sync if no versioning is found? that would solve the bug.
17:21 <sirp__> jaypipes, yeah, i'm okay with that as a solution. I personally would prefer making the migration a separate required step (makes things more explicit and all); but handling this automatically doesn't seem terrible either
17:22 <jaypipes> sirp__: I hear ya. I prefer the automated step vs. the manual one in this case, partly because that is in line with nova, too.
17:22 <jaypipes> sirp__: though I think the same "bug" exists in nova, too.
17:23 <sirp__> jaypipes, yeah, im a little confused as to why nova hasn't experienced something like this yet... was going to look into that shortly. Regardless, I'll go ahead and toss in auto-migration and re-propose
17:24 *** Guest57742 is now known as openpercept
17:24 *** openpercept has quit IRC
17:24 *** openpercept has joined #openstack-dev
17:24 *** rnirmal has quit IRC
17:25 <jaypipes> sirp__: cheers mate
17:25 <sirp__> jaypipes: good deal
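
The fallback jaypipes proposes - have register_models() sync the schema when the database isn't under version control - might look roughly like this with the sqlalchemy-migrate API of that era (module paths and the helper name are assumptions, not glance's actual code):

    # Sketch: auto-migrate on startup when the DB has no migrate_version
    # table, so a source checkout that "just ran glance-registry" ends up
    # version-controlled instead of inconsistent.
    from migrate.versioning import api as versioning_api
    from migrate.versioning import exceptions as versioning_exceptions

    def ensure_db_synced(sql_connection, repo_path):
        try:
            versioning_api.db_version(sql_connection, repo_path)
        except versioning_exceptions.DatabaseNotControlledError:
            # Fresh or pre-migrate DB: put it under version control and
            # upgrade to the latest schema before registering models.
            versioning_api.version_control(sql_connection, repo_path)
            versioning_api.upgrade(sql_connection, repo_path)
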
18:01 <sandywalsh> jaypipes, damn you for bringing up pagination ... I was hoping that problem would just go away if we ignored it :) (got the same problem with GET /servers/ )
18:02 <bcwaldon> soren: I would *really* appreciate feedback on this MP
18:02 <bcwaldon> soren: http://www.openstack.org/blog/2011/05/openstack-conference-and-design-summit-spring-2011-overview-video/
18:02 <jaypipes> sandywalsh: hehe
18:02 <bcwaldon> nope
18:03 <bcwaldon> soren: https://code.launchpad.net/~rackspace-titan/nova/osapi-serialization/+merge/61656
18:05 <markwash> jaypipes: why is marker the id of the *first* record returned in the original query? what if the first record in the original query has a very old updated_at time?
18:07 <markwash> jaypipes: for example if we are sorting ascending by updated_at
18:11 <jaypipes> markwash: sorry, I was incorrect in stating that. It should have been that marker is the timestamp of the initial query.
18:12 <bcwaldon> jaypipes: could you also use created_at?
18:14 <jaypipes> bcwaldon: I suppose.
18:15 <markwash> bcwaldon: but you really want the maximum of all created_at and all deleted_at times in the table
18:16 <markwash> bcwaldon: maybe even all updated_at times too
18:16 <markwash> bcwaldon: so you might as well use now()
18:19 <jaypipes> markwash: no, you are comparing created_at <= @time_of_initial_query
18:20 <jaypipes> markwash: or even created_at <= @time_of_initial_query AND deleted_at <= @time_of_initial_query
18:22 *** mgius has joined #openstack-dev
18:23 <markwash> jaypipes: agree, but i believe there is no effective difference in behavior for times between max(created_at across rows, deleted_at across rows) and now()
18:23 <markwash> jaypipes: just doing it based on the max is presumably an expensive lookup
18:24 <jaypipes> markwash: there's a big difference in efficiency :)
18:24 <markwash> jaypipes: agree :-)
18:24 <jaypipes> markwash: plus, the user may wait quite some time before going to page 2.
18:24 <markwash> jaypipes: I was attempting to make perhaps an academic point to bcwaldon
18:24 <jaypipes> markwash: and I'm proposing to ensure that the page 2 the user sees corresponds exactly to the page 2 of the initial query (shouldn't include new rows or be missing deleted stuff)
18:25 <jaypipes> markwash: you and your academia!
18:25 <markwash> jaypipes: I far prefer macadamia
18:25 <bcwaldon> seconded
18:25 <jaypipes> markwash: lol
18:25 * jaypipes partial to cashews.
18:25 <markwash> jaypipes: agree that the timestamp must be the same across separate queries, i was just talking about valid values of the timestamp when it's generated on the first query
18:25 * bcwaldon I welcome all nuts
18:27 <jaypipes> bcwaldon: you ARE a nut. :)
18:27 <bcwaldon> well...
18:27 <jaypipes> markwash: understood
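
The snapshot scheme jaypipes describes - fix a timestamp at the initial query and filter every subsequent page against it - could be sketched with SQLAlchemy roughly as follows; the model and column names are illustrative, not nova's or glance's actual schema:

    # Sketch of snapshot-style pagination: the timestamp is fixed on the
    # first page and passed back by the client, so page 2 sees exactly
    # the rows page 1's snapshot saw (no new rows, no vanished ones).
    import datetime

    from sqlalchemy import or_

    def paginate(session, model, limit=20, snapshot_time=None, marker_id=None):
        if snapshot_time is None:
            # Page 1: fix the snapshot once, reuse it for later pages.
            snapshot_time = datetime.datetime.utcnow()
        query = (session.query(model)
                 .filter(model.created_at <= snapshot_time)
                 .filter(or_(model.deleted_at == None,
                             model.deleted_at > snapshot_time))
                 .order_by(model.id))
        if marker_id is not None:
            # Resume just past the last row the client saw.
            query = query.filter(model.id > marker_id)
        return query.limit(limit).all(), snapshot_time

A row deleted after the snapshot still appears on later pages, which is exactly the "page 2 corresponds to the initial query" behavior argued for above.
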
18:28 *** rnirmal has joined #openstack-dev
18:32 *** blamar_ has quit IRC
18:34 *** blamar_ has joined #openstack-dev
18:37 *** blamar_ has quit IRC
18:39 *** blamar_ has joined #openstack-dev
18:43 *** blamar_ has quit IRC
18:44 *** blamar_ has joined #openstack-dev
18:49 *** dprince has quit IRC
19:18 <anotherj1sse> [B
19:21 <vishy> markwash: any comments on tasks?
19:29 *** adiantum has quit IRC
19:30 *** antonyy has quit IRC
19:30 *** antonyy_ has joined #openstack-dev
19:30 <markwash> vishy: inded
19:31 <markwash> vishy: indeed even
19:31 <markwash> vishy: looking at that code it is pretty awesome
19:31 <markwash> vishy: maybe kind of scary awesome?
19:31 <markwash> vishy: I'm a little worried about something like that running in production--mostly around the information in the task table
19:32 <markwash> vishy: do we have to migrate that data? is there maybe some other place we could store it?
19:32 <markwash> vishy: also, I am curious if this solves our "many writers" problem? currently there are many places other than the 'api' server that use the various api classes
19:32 <markwash> vishy: like compute manager makes references to network.api
19:33 <markwash> vishy: perhaps there are more such cases, which would mean we still have to configure a ton of nodes for writing
19:33 <markwash> vishy: as nifty as it all is, I keep going back to having nova-writer :-/
19:33 <markwash> vishy: and I'm wondering if I need to make some code to back that up as a real proposal
19:34 <vishy> markwash: many writers is fine.  Large number of writers is not
19:34 <vishy> markwash: yes, migrations seem like they could be scary
19:35 <markwash> vishy: yeah, I'm wondering where the line is and what side of it we're on with these general no-db-messaging approaches
19:35 <vishy> but i think we can get around that by enforcing clearing of the task queue before migrating
19:35 <markwash> like would all the compute nodes need write permission?
19:35 <vishy> markwash: the purpose of moving it this way is to take write away from compute nodes
19:36 <vishy> I don't think it is really an "api" concern specifically.  It is really business layer
19:36 <vishy> markwash: I'm trying to make the nodes into stupid task executors
19:37 <markwash> vishy: maybe I was mistaken... I see now that the compute manager has direct references to the network and volume managers, but it also has references to network_api
19:37 <markwash> vishy: and presumably the network_api would need write permissions
19:38 <vishy> ultimately everything should be going through the api
19:39 <markwash> vishy: going through which api?
19:39 <vishy> as long as a given "api" can own the task...it tells the compute/volume nodes to execute chunks of the tasks and keeps track of the task
19:39 <vishy> compute should only talk to network through network.api
19:39 <vishy> etc..
19:39 <vishy> i really hate calling these api's, it is really more of a supervisor
19:41 <markwash> so, if we take the volume create approach and apply it to the networking code
19:41 <vishy> so compute communicates through the api to the volume supervisor.  The volume supervisor owns the tasks and the database writing, then sends idempotent messages to the workers to achieve the task
19:42 <markwash> okay, I think I buy the idea of the supervisor, but is that in the example code anywhere?
19:42 <vishy> markwash: network is a little different at the moment because it doesn't have edge-workers
19:42 <markwash> vishy: ah yes good point--maybe it's not the best example then
19:43 <vishy> markwash: the code in volume.api is the supervisor, there isn't a separate object at the moment
19:43 <vishy> so essentially in the current code nova-api is acting as the supervisor
19:44 <vishy> (because it is the actual process that imports volume.api and runs the code)
19:44 <markwash> vishy: I kinda love the supervisor approach if we move it out from the volume api
19:45 <vishy> markwash: yes, I suppose I'm a little worried about having another class/worker
19:45 <vishy> that essentially just passes around the same requests
19:45 <markwash> vishy: I think we should probably look at the naming around api and manager as you were saying
19:45 <markwash> vishy: really, api is just the same interface as manager, but with the implication that it's operating through messaging
19:46 <markwash> vishy: so it's more like volume.Manager and volume.RemoteManager
19:46 <markwash> not that I love the name Manager :-)
19:46 <vishy> right
19:46 <vishy> I don't know that we need volume.api
19:47 <vishy> perhaps we just rename volume.api to volume.supervisor
19:47 <vishy> the supervisor has an api
19:47 <vishy> just like the manager has an api
19:48 <vishy> seems like it would be nice to modify the scheduler so it could run in-process as well
19:48 <markwash> vishy: that sounds like it could work
19:48 <vishy> although i'm not sure if that works with the zone stuff
19:49 <markwash> vishy: I kind of like how flexible that is in the context of moving from in-project libraries to separate services
19:49 <markwash> vishy: because we could separate out at either the manager or the supervisor layer, depending on where the very front end wants to get its data
19:51 <markwash> vishy: maybe I'm running with this in a direction you're not so fond of :-)
19:51 <vishy> not sure what you mean by separate out
19:51 <vishy> the scheduler?
19:52 <markwash> when we replace nova code with external services, we can do it either by replacing the manager and keeping the supervisor around to update the nova db
19:52 <markwash> or we can stop reading from the nova db and replace all of the supervisor and manager code for that
19:53 <markwash> but I'm probably missing something wrt zones
19:54 <markwash> vishy: I guess I still come back to nova-writer in a way because I really like CQRS
19:54 <markwash> vishy: if you're interested I think maybe you can get a good sense of it from http://www.udidahan.com/2009/12/09/clarified-cqrs/
19:55 <markwash> vishy: but I'm not suggesting that as a blocker on the supervisor approach necessarily
19:56 <vishy> ah interesting
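
A rough sketch of the supervisor shape vishy is describing - one component owns the task records and all DB writes, and hands workers idempotent instructions - with the caveat that every name here (VolumeSupervisor, the db and rpc_cast stand-ins) is hypothetical rather than nova's actual code:

    # Hypothetical supervisor sketch: only the supervisor writes task
    # state to the DB; workers receive idempotent messages and report
    # back. rpc_cast() and db stand in for nova's real RPC/DB layers.
    class VolumeSupervisor(object):
        def __init__(self, db, rpc_cast):
            self.db = db
            self.rpc_cast = rpc_cast

        def create_volume(self, context, size):
            # Owns the task record: only the supervisor touches the DB.
            task = self.db.task_create(context, {"action": "create_volume",
                                                 "size": size,
                                                 "state": "queued"})
            # Idempotent instruction to an edge worker; safe to re-send.
            self.rpc_cast("volume-workers",
                          {"method": "do_create",
                           "task_id": task["id"],
                           "size": size})
            return task

        def task_done(self, context, task_id):
            # Workers report completion; the supervisor records it.
            self.db.task_update(context, task_id, {"state": "done"})
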
20:03 <jaypipes> vishy: pls see https://bugs.launchpad.net/nova/+bug/788295.
20:03 <uvirtbot> Launchpad bug 788295 in nova "UnboundLocalError in exception handling" [Undecided,New]
20:05 <vishy> jaypipes: my guess is that has something to do with not using glance to store images
20:05 <vishy> he said it was an old install right?
20:06 <jaypipes> vishy: hmm, not sure... I've asked somik to join us here.
20:07 *** somik_ has joined #openstack-dev
20:08 <jaypipes> somik_: heya. vishy had a question for you.
20:08 <somik_> sure - go ahead
20:08 <jaypipes> dabo: you marked the bug duplicate... good to know. maybe you have some input for somik_.
20:09 <dabo> jaypipes: somik_: I don't know about the underlying bug in glance; my fix was only for the additional bug in the exception handler
20:10 <vishy> i don't know if there was an underlying bug
20:10 <somik_> dabo: I tried the exception handler fix but to no avail.
20:10 <vishy> perhaps the settings are just old
20:10 <somik_> so I am wondering if it's just a config issue
20:11 <somik_> vishy: what do you mean by the settings are just old? any specific config I should look at?
20:11 <dabo> somik_: you should have gotten the "StorageError: Timeout waiting for device sdb to be created" error, but not the "Error: local variable 'instance_obj' referenced before assignment" message
20:12 <jaypipes> dabo: see the pastebin link in the bug report...
20:12 <dabo> jaypipes: ok, that's what I would expect
20:13 <vishy> somik_: do you have anything in your initial install that you need?  I might suggest just wiping it and starting over
20:13 *** rnirmal has quit IRC
20:13 <somik_> vishy: this is a new install with trunk from 2 days ago, so i haven't been able to create any VMs since then.
20:14 *** rnirmal has joined #openstack-dev
20:15 <vishy> oh!
20:15 <vishy> somik_: i thought you had upgraded an old install
20:16 <vishy> did you follow the xen install guide?
20:16 <somik_> vishy: correct, with a few changes to nova.conf
20:16 <vishy> somik_: not sure how helpful I'm going to be here
20:17 <somik_> because i have two hypervisors running compute and a cloud controller running api, scheduler, glance and dashboard
20:17 <vishy> i haven't done any troubleshooting for xen issues
20:18 *** lorin1 has joined #openstack-dev
20:18 <somik_> i guess if you guys are running trunk and can spawn VMs without issues then that's a positive, maybe i'll check in with josh kearney since he wrote the xen guide.
20:19 <jk0> what's up?
20:19 <jk0> somik_: there was a fix we merged today that may fix what you've been seeing
20:20 <jk0> try pulling trunk and spawning again
20:20 <somik_> +jk0: i can try that, i was talking about this bug - https://bugs.launchpad.net/nova/+bug/788295 just fyi for others
20:20 <uvirtbot> somik_: Error: Could not parse data returned by Launchpad: timed out
20:22 <somik_> i'll try to pull trunk again then, maybe it's all fixed
20:22 <jk0> somik_: yeah, I believe that was taken care of
20:23 <somik_> +jk0: cool, thanks then, I'll be back if it wasn't ;)
20:39 <openstackjenkins> Project nova build #938: SUCCESS in 2 min 42 sec: http://jenkins.openstack.org/job/nova/938/
20:39 <openstackjenkins> * Tarmac: Several changes designed to bring the openstack api 1.1 closer to spec
20:39 <openstackjenkins> - add ram limits to the nova compute quotas
20:39 <openstackjenkins> - enable injected file limits and injected file size limits to be overridden in the quota database table
20:40 <openstackjenkins> - expose quota limits as absolute limits in the openstack api 1.1 limits resource
20:40 <openstackjenkins> - add support for controlling 'unlimited' quotas to nova-manage
20:40 <openstackjenkins> * Tarmac: During the API create call, the API would kick off a build and then loop in a greenthread waiting for the scheduler to pick a host for the instance.  After the API saw a host was picked, it would cast to the compute node's set_admin_password method.
20:40 <openstackjenkins> The API server really should not have to do this.  The password to set should be pushed along with the build request, instead.  The compute node can then set the password after it detects the instance has booted.  This removes a greenthread from the API server, a loop that constantly checks the DB for the host, and finally a cast to the compute node.
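
The before/after shape of that set_admin_password change, sketched with hypothetical helpers (rpc_cast and the db calls stand in for nova's real RPC and DB layers; names and signatures are illustrative, not nova's exact code):

    # Before: the API server polls the DB in a greenthread until the
    # scheduler has picked a host, then casts the password to that host.
    def create_then_set_password(db, rpc_cast, instance_id, password):
        host = None
        while host is None:  # the polling loop the change removes
            host = db.instance_get_host(instance_id)
        rpc_cast("compute.%s" % host,
                 {"method": "set_admin_password",
                  "instance_id": instance_id,
                  "password": password})

    # After: the password travels with the build request itself, and the
    # compute node applies it once it sees the instance boot.
    def create_with_password(rpc_cast, instance_id, password):
        rpc_cast("scheduler",
                 {"method": "run_instance",
                  "instance_id": instance_id,
                  "admin_password": password})
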
20:45 <ttx> soren: around?
20:48 <ttx> soren: see my email, and let's discuss that tomorrow if needed.
20:49 <soren> ttx: I am.
20:49 <soren> ttx: But don't tell anyone.
20:49 <ttx> soren: I won't, unless you tell someone I was here.
20:49 <ttx> soren: does my email make sense?
20:50 <soren> ttx: No :)
20:50 <ttx> soren: ok... I don't think I have enough energy left to explain what I mean, so let's do that tomorrow?
20:51 <soren> ttx: Ok. The question on my mind is: Why does it need to be different from what Nova is doing?
20:51 <soren> ttx: ..but you can answer that tomorrow :)
20:52 <ttx> soren: I'm not sure it would. And I'll explain that tomorrow.
20:52 <ttx> good night :)
20:52 <soren> G'night :)
20:59 <somik_> +jk0: I pulled the latest trunk and the result is pretty much the same, at least to my naive eyes - http://paste.openstack.org/show/1418/
20:59 <jk0> that's a different exception
20:59 <jk0> are you running the compute node inside xenserver?
20:59 <jk0> and running as root?
21:01 <somik_> +jk0: yup, it's running inside xenserver - i am running compute via sudo inside an ubuntu VM on the xenserver
21:03 <jk0> ah, what version?
21:03 <jk0> if Maverick, make sure this is in your flagfile: --xenapi_remap_vbd_dev=true
21:04 <bcwaldon> soren: since you're here, may I ask a favor of you?
21:04 <somik_> +jk0: it's maverick and i have that in the flagfile..
21:05 <bcwaldon> soren: I would LOVE to get some feedback on this MP: https://code.launchpad.net/~rackspace-titan/nova/osapi-serialization/+merge/61656
21:05 *** lorin1 has left #openstack-dev
21:10 <somik_> +jk0: it's lucid but i have the maverick flag, maybe that's the issue. lemme retry
21:10 <jk0> yes, that would do it
21:16 <soren> bcwaldon: I'm not really here. Well, I am, but I'm not working.
21:18 *** bcwaldon has quit IRC
21:18 <openstackjenkins> Project nova build #939: SUCCESS in 2 min 42 sec: http://jenkins.openstack.org/job/nova/939/
21:18 <openstackjenkins> Tarmac: Fixed the mistyped line referred to in bug 787023
21:18 <uvirtbot> Launchpad bug 787023 in nova "rate limits builder uses incorrect key" [Medium,In progress] https://launchpad.net/bugs/787023
21:21 *** RobertLaptop has quit IRC
21:26 <somik_> +jk0: that was it! thanks! the vm spawned fine now
21:27 <jk0> awesome
21:27 <jk0> no problem
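
For reference, the flag jk0 points at is version-specific: it belongs in the compute VM's flagfile only when that VM runs Maverick, and (as the exchange above shows) leaving it set on a Lucid VM breaks spawning. A minimal illustrative flagfile excerpt - only the last flag comes from this log, the rest is an assumption:

    # illustrative nova flagfile excerpt (not somik_'s actual config)
    --connection_type=xenapi
    # needed only when the compute VM runs Maverick; remove on Lucid:
    --xenapi_remap_vbd_dev=true
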
21:33 *** RobertLaptop has joined #openstack-dev
21:37 *** ameade has quit IRC
21:56 <vishy> considering proposing the following diff for merge: http://pastie.org/1973500
21:56 <vishy> it is good on so many levels
22:04 *** foxtrotgulf has quit IRC
22:10 *** mgius has quit IRC
22:40 *** jkoelker has quit IRC
23:03 *** somik_ has quit IRC
23:25 *** dragondm has quit IRC
23:32 *** rnirmal has quit IRC
23:33 *** Tv_ has quit IRC
23:37 *** johnpur has quit IRC
23:51 *** bcwaldon has joined #openstack-dev
23:57 *** antonyy_ has quit IRC
23:58 <cloudnod> hm, irccloud complaining that my usage exceeds beta allowance
23:59 <cloudnod> but now i'm hooked.  #smart

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!