*** jordandev has quit IRC | 00:08 | |
*** jbaker has quit IRC | 00:12 | |
*** jordandev has joined #openstack | 00:12 | |
*** rogue780 has quit IRC | 00:16 | |
*** joearnold has quit IRC | 00:19 | |
uvirtbot | New bug: #692803 in nova "instances are not resumed after node reboot" [Undecided,New] https://launchpad.net/bugs/692803 | 00:21 |
*** winston-d has joined #openstack | 00:24 | |
*** dfg_ has quit IRC | 00:25 | |
*** jordandev has quit IRC | 00:27 | |
uvirtbot | New bug: #692805 in nova "manually running euca-reboot-instances fails after node reboot" [Undecided,New] https://launchpad.net/bugs/692805 | 00:37 |
*** HouseAway is now known as AimanA | 00:41 | |
*** _skrusty has quit IRC | 00:44 | |
*** ar1 has joined #openstack | 00:53 | |
*** _skrusty has joined #openstack | 00:56 | |
*** irahgel1 has left #openstack | 00:59 | |
*** daleolds has joined #openstack | 01:04 | |
vishy | eday: feel like helping me debug eventlet + rpc.call issues? | 01:07 |
vishy | termie isn't around atm | 01:07 |
eday | sure | 01:08 |
vishy | eday: so nested calls are broken | 01:09 |
vishy | sometimes they just never return | 01:09 |
eday | where are they nested? | 01:10 |
vishy | and sometimes they raise an exception | 01:10 |
vishy | did you get my pm or did i paste too many lines? | 01:11 |
eday | I don't see how they could be nested | 01:11 |
*** sophiap_ has joined #openstack | 01:11 | |
*** sophiap has quit IRC | 01:11 | |
*** sophiap_ is now known as sophiap | 01:11 | |
vishy | meaning if you are in the middle of a call and you try to do another one | 01:12 |
vishy | as in a call to compute that calls network | 01:12 |
vishy | it goes boom if it is on the same host | 01:12 |
eday | is this when not going through rabbit? | 01:13 |
eday | might need to pool the connection objects for everything, not just temp consumers then | 01:14 |
vishy | http://pastie.org/1393827 | 01:16 |
vishy | this is the test that blows things up | 01:16 |
vishy | interesting | 01:17 |
vishy | it doesn't blow up when using real rabbit | 01:17 |
*** zaitcev has joined #openstack | 01:17 | |
vishy | eday: i get this RuntimeError: Second simultaneous read on fileno 8 detected. Unless you really know what you're doing, make sure that only one greenthread can read any particular socket. Consider using a pools.Pool. If you do know what you're doing and want to disable this error, call eventlet.debug.hub_multiple_reader_prevention(False) | 01:18 |
vishy | eday: when i try to listen on multiple queues with the same object | 01:18 |
*** kevnfx has joined #openstack | 01:19 | |
eday | try this in rpc.cast | 01:19 |
eday | - conn = Connection.instance() | 01:19 |
eday | + conn = Connection.instance(True) | 01:19 |
eday | err, .call | 01:20 |
*** gaveen has quit IRC | 01:20 | |
*** dendro-afk is now known as dendrobates | 01:21 | |
vishy | k | 01:22 |
*** Ryan_Lane has quit IRC | 01:24 | |
eday | the rpc.Connection class assumes only one instance/process, unless you pass True to it | 01:24 |
vishy | correction: it is broken with real rabbit too | 01:25 |
eday | which is why we pass True for temp queues, since it needs to create one for return first and one for sending | 01:25 |
vishy | but only half of the time | 01:25 |
vishy | and that change didn't help | 01:25 |
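[Editor's note: the `Connection.instance()` behavior eday describes above — one shared connection per process unless `True` is passed — can be sketched roughly like this. Names and structure are illustrative, not the actual nova.rpc code.]

```python
# Sketch of the connection-reuse pattern under discussion: instance()
# returns one process-wide shared connection unless new=True is passed,
# which sidesteps the singleton so two greenthreads never end up doing
# simultaneous reads on the same underlying socket.

class Connection:
    _instance = None

    def __init__(self):
        self.closed = False  # stand-in for a real AMQP channel/socket

    @classmethod
    def instance(cls, new=False):
        if new:
            return cls()  # caller gets a private connection
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance
```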
vishy | so i tried modifying instance to return a new object every time | 01:28 |
vishy | still get simultaneous reads | 01:29 |
vishy | in fetch | 01:29 |
eday | thats with rabbit? | 01:29 |
vishy | yeah | 01:29 |
eday | so, without rabbit, I think it locks up because the thread is blocking on a wait (what I'm seeing) before it can run the nested call | 01:30 |
vishy | yeah what i don't get is how is the callback hitting before wait_msg.__call__() is going | 01:31 |
vishy | this is where monkeypatching can get painful | 01:32 |
*** odyi has quit IRC | 01:33 | |
vishy | i'm blaming the consumer.wait(limit=1) | 01:34 |
eday | hmm, fakerabbit is a singleton too, I wonder if thats borking it | 01:34 |
*** dragondm has quit IRC | 01:34 | |
*** odyi has joined #openstack | 01:35 | |
*** reldan has quit IRC | 01:35 | |
vishy | let me try switching | 01:36 |
vishy | hmm that just made it fail completely | 01:37 |
*** daleolds has quit IRC | 01:38 | |
vishy | i'm blaming the greenthread.sleep(0) | 01:38 |
vishy | actually | 01:39 |
eday | hmm, still not sure what that is doing :) | 01:40 |
eday | I guess that is a cooperative yield | 01:41 |
*** roamin9 has joined #openstack | 01:41 | |
*** reldan has joined #openstack | 01:44 | |
vishy | I emailed termie, so hopefully he has an idea. | 01:46 |
vishy | unless you have any other brilliant things we could try | 01:46 |
eday | well, creating a new connection for each rpc call should fix rabbit | 01:47 |
vishy | it doesn't though | 01:48 |
vishy | still get duplicate reads on fetch | 01:48 |
vishy | oo just noticed there is one in msg_reply as well | 01:49 |
vishy | let me try there | 01:49 |
eday | I would just change it at the top | 01:50 |
eday | change the default in Connection.instance = True | 01:50 |
vishy | yeah i tried that | 01:52 |
vishy | still gives me duplicate reads | 01:52 |
eday | hm | 01:53 |
eday | I'll have to look further, removing the singleton fds should have done it | 01:53 |
eday | gottarun now though | 01:53 |
vishy | http://pastie.org/1393884 | 01:54 |
vishy | that is what I'm trying to do | 01:54 |
vishy | bleh | 01:58 |
vishy | ok might have figured something out | 01:59 |
*** _skrusty has quit IRC | 01:59 | |
*** Ryan_Lane has joined #openstack | 02:00 | |
nelson__ | is swift extensible via plugins, short of modifying the source? | 02:06 |
nelson__ | and, is there a nagios script? (I couldn't find one). | 02:06 |
*** _skrusty has joined #openstack | 02:12 | |
*** hadrian has quit IRC | 02:13 | |
*** opengeard has joined #openstack | 02:17 | |
*** reldan has quit IRC | 02:21 | |
notmyname | nelson__: still there? | 02:29 |
nelson__ | yep. | 02:29 |
nelson__ | are you? | 02:29 |
nelson__ | :) | 02:29 |
notmyname | nelson__: swift is extensible with wsgi middleware. familiar with it? | 02:29 |
nelson__ | yes! Thanks, that's the answer for that checkbox. | 02:29 |
nelson__ | and ... nagios? We can probably write that if needed. | 02:30 |
notmyname | since we (Rackspace) use swift as cloud files, we don't have all the parts of the product open-sourced. all the storage stuff is there, but the billing, auth, and cdn stuff is product specific and kept in house. most of that is really just wsgi middleware we deploy internally | 02:31 |
notmyname | we don't have any special hooks for nagios or other monitoring stuff (other than looking at the swift logs) | 02:32 |
nelson__ | sure, that makes sense. | 02:32 |
nelson__ | okay, I'll put in a "no". | 02:32 |
* notmyname wonders if the cloudkick acquisition could result in some openstack monitoring goodness | 02:32 | |
notmyname | nelson__: what's the nagios question, specifically. I can ask our ops guys | 02:33 |
*** jdurgin has quit IRC | 02:33 | |
*** jbaker has joined #openstack | 02:33 | |
nelson__ | it's an open source version of WhatsUp? | 02:34 |
nelson__ | (I think). We get real-time reports of service accessibility. | 02:34 |
notmyname | ya, but what's the question you are trying to answer? | 02:34 |
nelson__ | Nagios is just a framework. You need a per-service script which tests the service and then reports back to the framework. | 02:35 |
notmyname | ah, ok | 02:35 |
*** reldan has joined #openstack | 02:35 | |
nelson__ | I don't think it's going to be that hard to write one; but why write one if one already exists? | 02:35 |
* nelson__ hugs opensource. | 02:35 | |
*** dirakx has joined #openstack | 02:35 | |
notmyname | I'm not really familiar with nagios either, so I'd like to ask our ops guys who run the monitoring stuff | 02:35 |
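[Editor's note: the "per-service script which tests the service and then reports back" that nelson__ describes might look like the sketch below. The `/healthcheck` URL and the mapping of HTTP statuses to alert levels are assumptions about the deployment; only the Nagios exit-code convention (0=OK, 1=WARNING, 2=CRITICAL) is standard.]

```python
# Minimal sketch of a Nagios-style check for a swift proxy.
# Nagios plugins communicate via exit code: 0=OK, 1=WARNING, 2=CRITICAL.

def nagios_status(http_status):
    """Map an HTTP status from the proxy to an (exit_code, message) pair."""
    if http_status == 200:
        return 0, "OK: swift proxy healthy"
    if 500 <= http_status < 600:
        return 2, "CRITICAL: swift proxy returned %d" % http_status
    return 1, "WARNING: unexpected status %d" % http_status

# Wiring it up would look something like (assumed URL):
#   code = urllib.request.urlopen("http://proxy:8080/healthcheck").getcode()
#   exit_code, message = nagios_status(code)
#   print(message); sys.exit(exit_code)
```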
nelson__ | http://wikitech.wikimedia.org/view/Media_server/Distributed_File_Storage_choices <--- if anybody wants to do a once-over on that, I'd appreciate it. Obviously only bother with the OpenStack parts. :) | 02:45 |
notmyname | nelson__: of course, I take issue with a few things :-) | 02:47 |
nelson__ | Oh, feel free to correct me. I was mostly working off the documentation and whatever else I could find, so I almost certainly have some things wrong / incomplete. | 02:48 |
notmyname | let's start with the matrix | 02:49 |
nelson__ | Sure. | 02:49 |
notmyname | I'd say direct HTTP is supported with swift, unless you mean something other than what I'm thinking | 02:49 |
nelson__ | probably. for me, "direct http" means "can we expose this to the squids / varnishes"? | 02:50 |
nelson__ | if we need to have a web server in front, then the answer is "no". | 02:50 |
nelson__ | I expect that we might have to force the caches to include an authorization token. | 02:51 |
nelson__ | I also expect that's just configuration on their end. | 02:51 |
notmyname | all we have in front of swift is a load balancer | 02:51 |
notmyname | but swift talks HTTP direct to the client | 02:52 |
nelson__ | client software or web browser? | 02:52 |
notmyname | either, depends on your auth (which is another thing I'd like to go over in a bit) | 02:52 |
notmyname | but for now, think of browser as a subset of client software | 02:53 |
notmyname | so creating stuff is a PUT, fetching is a GET, etc | 02:53 |
nelson__ | We have caches in front of our media server, so no web browser touches it directly. The real question is whether our caches can access swift directly. It sounds like your answer is "definitely maybe". :) | 02:53 |
notmyname | we don't currently support form uploads (POST currently only supports updating metadata, not uploads) | 02:54 |
nelson__ | we'll have code to do that; always. | 02:54 |
notmyname | if you have the containers marked as public, then definitely yes | 02:54 |
notmyname | if you don't, it depends on your auth system | 02:55 |
nelson__ | okay, I'll change that then. | 02:55 |
nelson__ | we have some private wikis. We'll have to use auth for them. | 02:55 |
notmyname | that being said, because swift is (can be) used for large files, I don't know that I'd recommend using caching for the entire objects. it will work, but you have to be careful. what if someone tries to download a huge object? | 02:56 |
notmyname | ok, so moving on | 02:56 |
notmyname | under "load balanced for the same data" | 02:56 |
nelson__ | we don't have any really huge objects yet. we'll have to deal with streaming for them. | 02:56 |
notmyname | the client talks to the swift proxy server. the proxy server handles all of the talking on the internal network to the storage nodes. the proxies can (or should) be load balanced | 02:57 |
notmyname | that may be a little different than what you are asking, but IMO you don't have to worry about that detail. swift hides it from you | 02:58 |
notmyname | so, internally, the proxy server does request the object from <num_replicas> servers and returns the object from first storage node to respond, but you don't have to load balance any of that | 02:59 |
nelson__ | I think I understand that. | 02:59 |
notmyname | was that similar to what you were asking, or am I on the wrong track? | 03:00 |
nelson__ | yes, it's exactly what I was asking. | 03:00 |
notmyname | ok | 03:00 |
nelson__ | and yes, we're poking into the internals a bit. | 03:00 |
notmyname | ok, that's good. understand your stack :-) | 03:00 |
notmyname | I'll skip "authentication" and come back to that later | 03:01 |
nelson__ | okay. | 03:01 |
notmyname | what is "supports unpublished files..." | 03:01 |
notmyname | ...for new upload | 03:01 |
nelson__ | We accept file uploads before we've established the copyright permission. | 03:02 |
nelson__ | Need to make sure that they're not visible to the web. | 03:02 |
nelson__ | but I think we can upload to a user that isn't public and then transfer it afterwards. | 03:03 |
notmyname | ya that makes sense | 03:03 |
*** sirp1 has quit IRC | 03:03 | |
*** sirp1 has joined #openstack | 03:03 | |
notmyname | we support ACLs, and they are based on the swift account. it's also dependent on your auth system | 03:03 |
notmyname | so you can give one account access to read or write to a container in another account | 03:04 |
nelson__ | yep, and I think even if that doesn't work, we can have an area which is not visible to the web, and then rename the file into the correct place (copy/delete if necessary). | 03:04 |
notmyname | ya, that works too | 03:05 |
notmyname | ok. "max file size" | 03:05 |
notmyname | 5GB is right. and it's not entirely right | 03:05 |
notmyname | swift now (as of this morning) supports arbitrarily sized objects | 03:05 |
nelson__ | but.... ? | 03:06 |
notmyname | upload the chunks of the objects as normal objects (each with the 5GB limit--or whatever you have changed the constant to) | 03:06 |
notmyname | then create a zero-byte object with an X-Object-Manifest header. this is the "manifest object" that ties the others together | 03:07 |
nelson__ | pretty sure we're going to want to have our own chunking system. Otherwise it gets crazy with such huge files. | 03:07 |
notmyname | by GET'ing the manifest object, all of the parts are cat'ed out to the client as one object. | 03:07 |
nelson__ | oh! Interesting. | 03:07 |
notmyname | so swift supports unlimited file sizes, uploaded in up to 5GB chunks | 03:07 |
nelson__ | okay, I'll put that in. | 03:07 |
notmyname | and the 5GB limit is simply a constant in the code that can be changed to anything. | 03:08 |
notmyname | there are things to take in to consideration, of course | 03:08 |
notmyname | but it's 5GB by default | 03:08 |
notmyname | you will still have access to all the chunks. and side effects of this system allow you to actually append to an object or even insert in or maybe even sync chunks of the large object | 03:09 |
nelson__ | we expect to upload in chunks; if only to make sure that uploads actually run to completion. | 03:09 |
notmyname | you could have 10K 10 byte objects if you wanted | 03:09 |
notmyname | or 100K. or more | 03:09 |
notmyname | right. | 03:10 |
notmyname | the chunks also give you a pseudo pause/resume for uploads | 03:10 |
nelson__ | right; our thought as well. | 03:11 |
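[Editor's note: the large-object scheme notmyname describes — upload each chunk as a normal object, then PUT a zero-byte manifest object whose `X-Object-Manifest` header names the segment prefix — can be sketched as below. The container name, object name, and segment-naming scheme are illustrative.]

```python
# Plan a segmented upload: each segment is a normal object under a
# shared prefix; the manifest is a zero-byte object whose
# X-Object-Manifest header points at "<container>/<prefix>".

SEGMENT_LIMIT = 5 * 1024 ** 3  # swift's default per-object cap (a constant in the code)

def plan_upload(data, container="videos", name="talk.ogv",
                segment_size=SEGMENT_LIMIT):
    """Return (segment_puts, manifest_put) describing the PUTs to make."""
    segments = []
    for i in range(0, len(data), segment_size):
        seg_name = "%s/%08d" % (name, i // segment_size)
        segments.append((container, seg_name, data[i:i + segment_size]))
    manifest = (container, name, b"",
                {"X-Object-Manifest": "%s/%s/" % (container, name)})
    return segments, manifest
```

GETting the manifest object then streams all the segments back as one object, as described above.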
notmyname | "maximum number of files"--where did you get the one million number from? adrian otto's blog? | 03:11 |
notmyname | again, that's right, but not the complete story | 03:11 |
nelson__ | probably. | 03:12 |
notmyname | containers get less performant as they grow (esp. for PUTs). but object GETs are unaffected by container size | 03:12 |
notmyname | we recommend that you start sharding your content across containers as you get more objects | 03:13 |
notmyname | but there is no hard number | 03:13 |
notmyname | we've put billions of objects in a container before (in testing) | 03:13 |
notmyname | there is nothing in the code that will stop you. it's a pain threshold thing | 03:14 |
notmyname | if you put a bunch of objects, then only read them, you can have a huge number of objects with no problems | 03:14 |
notmyname | if you have an active container (lots of concurrent reads and writes), I'd limit that closer to 1-10 million | 03:15 |
notmyname | and, again, there is no limit. it depends on your use case and your users | 03:16 |
nelson__ | so it's more that writes are slower when you get so many entries. | 03:16 |
notmyname | ya | 03:17 |
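[Editor's note: "sharding your content across containers" as recommended above is usually done by hashing the object name into one of N fixed containers; the naming scheme below is an assumption, not something swift provides.]

```python
# Spread objects across N containers by hashing the object name, so no
# single container database takes all the PUT traffic.
import hashlib

def shard_container(object_name, base="media", num_shards=16):
    digest = hashlib.md5(object_name.encode("utf-8")).hexdigest()
    return "%s_%02d" % (base, int(digest, 16) % num_shards)
```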
notmyname | wreese: feel free to jump in any time :-) | 03:17 |
notmyname | in our testing it has a pretty linear degradation of performance | 03:18 |
nelson__ | We can probably live with that. We'll definitely be testing it! | 03:19 |
nelson__ | I've saved my edits; what else? | 03:19 |
notmyname | my answer for replication is similar to my answer for load balancing. rsync is what we use internally (with some optimizations on what we sync) to sync data, but swift hides that from you | 03:20 |
notmyname | the "time before data is replicated" is off | 03:20 |
notmyname | we won't return success unless we have at least 2 good writes (assuming 3 replicas) | 03:20 |
notmyname | but that doesn't have anything to do with replication. | 03:21 |
notmyname | replication runs as a daemon on the storage nodes. I believe it has a config variable that can be tuned to set the aggressiveness of the checks | 03:21 |
notmyname | by default, I think it's 300 seconds | 03:22 |
notmyname | so that ends up being your eventual consistency window. | 03:22 |
notmyname | and it's all quite tunable | 03:23 |
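[Editor's note: the write rule stated above — with 3 replicas, success requires at least 2 good writes, and the replication daemon later repairs stragglers — is a simple majority quorum:]

```python
# A write is reported as successful only when a majority of the
# replicas ack it; e.g. 2 of 3 by default, as described above.

def write_succeeded(acks, replicas=3):
    quorum = replicas // 2 + 1
    return acks >= quorum
```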
nelson__ | Okay, but when a file gets written, it gets written to two copies, right? | 03:23 |
nelson__ | and then if it gets fetched, only the new version is returned? | 03:24 |
notmyname | no, the fastest responder is returned :-) | 03:24 |
notmyname | if the one that didn't succeed comes back online and responds first, that is the content that is returned by the proxy | 03:25 |
notmyname | and replication will take care of getting the newer content on that node (eventually) | 03:25 |
notmyname | we've talked about changing that, but there are concerns about performance. and frankly, we need a good use case | 03:25 |
nelson__ | okay ... what if a file doesn't exist? | 03:27 |
nelson__ | as in ... new upload. | 03:27 |
notmyname | 404 | 03:28 |
notmyname | ok. I was a little off on how the proxy responds | 03:28 |
notmyname | it doesn't do it concurrently. but it does respond with the first storage node that gives a 2xx response | 03:29 |
nelson__ | ah, okay, that's good. | 03:29 |
notmyname | so if the first one 5xxs, the second 404s, and the third 200s, then you get the data from the third | 03:29 |
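[Editor's note: the corrected proxy behavior — storage nodes tried in order, not concurrently, returning the first 2xx — can be sketched as below; `get_object` is a stand-in for the real per-node request.]

```python
# Try each storage node in order; return the first 2xx response.
# Falls through to a 404 if no node can serve the object.

def fetch(nodes, get_object):
    for node in nodes:
        status, body = get_object(node)
        if 200 <= status < 300:
            return status, body
    return 404, None
```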
nelson__ | and ... we can delete the file before uploading the new one. that will deposit a tombstone everywhere, right? | 03:29 |
notmyname | yes | 03:30 |
notmyname | at least, everywhere it can. again, replication handles that | 03:30 |
notmyname | last write wins. and DELETEs write a tombstone file | 03:30 |
nelson__ | okay, good. that's what we'll be doing anyway. Older files are kept in a directory inaccessible to the web. | 03:30 |
nelson__ | yeah, it's okay if there's a little weirdness when a storage node has gone away. | 03:31 |
notmyname | ok. I think that's it for the matrix. now the auth part | 03:31 |
*** jbaker has quit IRC | 03:31 | |
notmyname | swift is BYOA: bring your own auth. we include a devauth for testing, but do not recommend it for prod use | 03:31 |
*** kashyapc has joined #openstack | 03:32 | |
notmyname | the devauth server works like the rackspace cloud auth server (token-based), but there is no need to make your own work like that | 03:32 |
nelson__ | Oh, okay, that's fine. | 03:32 |
gholt | If you run with no auth wsgi installed at all, everything would be full public by default. ;) | 03:32 |
nelson__ | as far as I know, we only have a few private wikis, and right now they're handled by a script that handles auth* | 03:33 |
notmyname | if you have your existing id system, write wsgi middleware that authenticates against that using whatever params in the request you want (like HTTP basic auth or some cookie) | 03:33 |
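[Editor's note: a minimal sketch of the "bring your own auth" WSGI middleware described above. The `X-Auth-Token` header and the in-memory token set are stand-ins for whatever credential and identity backend a real deployment would use.]

```python
# WSGI middleware that checks a credential on the inbound request
# before passing it through to the wrapped swift proxy app.

VALID_TOKENS = {"secret-token"}  # stand-in for a real identity system

class SimpleAuth:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        token = environ.get("HTTP_X_AUTH_TOKEN")
        if token not in VALID_TOKENS:
            start_response("401 Unauthorized", [("Content-Length", "0")])
            return [b""]
        return self.app(environ, start_response)
```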
notmyname | nelson__: so to back up, you're evaluating this for use with wikimedia. as in backing wikipedia? (*hopes*) | 03:33 |
gholt | But, even with devauth or swauth (a proposed alternative) there is the concept of "public containers" where you can access files with no credentials. | 03:33 |
nelson__ | yes; all of the media files; images of all stripes, MIDI, audio, and video files. | 03:34 |
notmyname | that's really, really cool to me | 03:35 |
nelson__ | Hehe, I'm excited about it too. Right now it's all coming from one machine. | 03:37 |
notmyname | nelson__: gholt wants me to also point out that swauth (the proposed replacement for the current devauth) will be production ready ;-) | 03:37 |
nelson__ | well, all the originals. The resized images come from a second machine. | 03:37 |
nelson__ | You can probably see how excited we are about moving to a DFS. | 03:38 |
notmyname | looks like you've done a lot of homework | 03:38 |
notmyname | just saw the "rack awareness/off site" row in the matrix | 03:39 |
notmyname | swift supports the concept of zones. a zone is something that is as isolated as you can make it. if that's server, ok. rack, better. data center, great | 03:39 |
notmyname | swift will store each copy of the object in a different zone | 03:40 |
nelson__ | Yes, we're bringing up another data center "soonest". Right now we're very afraid of hurricanes in Florida. :) | 03:40 |
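[Editor's note: the zone rule above — each replica of an object stored in a different zone — amounts to a selection constraint like the sketch below. This is illustrative, not swift's actual ring-builder logic.]

```python
# Pick one device per zone until we have enough replicas, so no two
# copies of an object share a zone (server / rack / data center).

def pick_devices(devices, zones, replicas=3):
    """zones maps device name -> zone name."""
    chosen, used_zones = [], set()
    for dev in devices:
        if zones[dev] not in used_zones:
            chosen.append(dev)
            used_zones.add(zones[dev])
        if len(chosen) == replicas:
            break
    return chosen
```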
*** reldan has quit IRC | 03:41 | |
notmyname | one thing to know if you do end up using swift (or find yourself thinking "i'd use it if it had this one other thing") is that we need use cases for new stuff. the large object support was directly made in response to a use case that NASA had. their users tend to have a little more data than the average rackspace user, so the different use cases are great to hear | 03:43 |
*** AimanA is now known as HouseAway | 03:43 | |
notmyname | and as an interesting piece of trivia, swift was written to replace an internal system that was modeled after mogilefs | 03:44 |
notmyname | (i see you have it as another column) | 03:44 |
nelson__ | Hehe, yes, they seem to be incestuous, with each one trying to out-do the other. :) | 03:45 |
vishy | eday: I got the tests working with real rabbit, turns out i was creating queues wrong and i had to change one instance() to take True. Still broken with fake_rabbit however | 03:45 |
notmyname | nelson__: on nested containers in swift: http://programmerthoughts.com/programming/nested-folders-in-cloud-files/ | 03:46 |
*** schisamo has quit IRC | 03:49 | |
nelson__ | yeah, we don't actually need a hierarchy. | 03:50 |
notmyname | ah, ok. saw that you mentioned it is all. it's there if you need it :-) | 03:50 |
nelson__ | we have multiple projects plus classifications, like thumbnails, and the filename which is unique across the entire project. | 03:51 |
*** damon__ has joined #openstack | 03:51 | |
nelson__ | so we might have long filenames (keys), but we don't need a hierarchy. | 03:51 |
*** damon__ has quit IRC | 03:51 | |
notmyname | if you were to name thumbnails something like "/project_container/2010/thumbnails/cool_img.jpg" you could get a listing of all thumbnails in 2010, for example | 03:52 |
notmyname | nelson__: any other questions? I'm hoping to stay up to see the eclipse, so I've got lots of time :-) | 03:53 |
nelson__ | Hehe, yeah, me too. :) | 03:53 |
nelson__ | although I can see it out the window from my bed, getting horizontal at that time of night is dangerous. :) | 03:53 |
notmyname | no kidding | 03:54 |
notmyname | I find myself thinking, "why is it so hard to stay up? I did this all the time in college." | 03:54 |
*** roamin9 has quit IRC | 03:54 | |
*** roamin9 has joined #openstack | 03:55 | |
notmyname | nelson__: I'd love to see your final comparison of storage systems. it would be pretty cool to read about how you made the decision and implemented it | 03:56 |
nelson__ | We're pretty open about everything we do ... being a nonprofit, and dedicated to encyclopaedic publishing. | 03:57 |
*** jakedahn has joined #openstack | 04:13 | |
*** kashyapc_ has joined #openstack | 04:35 | |
*** kashyapc has quit IRC | 04:38 | |
*** omidhdl has joined #openstack | 04:40 | |
*** miclor___ has joined #openstack | 04:41 | |
*** miclorb has quit IRC | 04:43 | |
*** omidhdl has quit IRC | 04:46 | |
*** omidhdl1 has joined #openstack | 04:46 | |
*** miclor___ has quit IRC | 05:10 | |
jeremyb | notmyname: i'm still halfway through scrollback, but is it possible for auth middleware to weigh in on object writes/reads or only container/acct wide decisions? | 05:15 |
jeremyb | i.e. with the current API is an "append only" / WORM container possible? so, some users can write new files but not delete and read anything, some can just read and then maybe there's a user that can do anything (i guess the actual acct owner) | 05:17 |
jeremyb | hrmmmm, so can you use empty objects with manifests as symlinks? can they cross from one container to another? | 05:20 |
jeremyb | what if you have perms to read the object with the manifest but not what it points to? does a read fail? | 05:21 |
jeremyb | nelson__: ^^^ | 05:21 |
nelson__ | ya got me. | 05:21 |
jeremyb | nelson__: i just thought you might be interested in the answers | 05:22 |
nelson__ | what answers? :) | 05:22 |
*** omidhdl has joined #openstack | 05:22 | |
jeremyb | nelson__: they haven't come in yet | 05:22 |
gholt | jeremyb: You can set read and write acls. No real way to limit deletes/overwrites because of the consistency window. | 05:22 |
*** omidhdl1 has quit IRC | 05:23 | |
jeremyb | gholt: don't follow | 05:23 |
jeremyb | i understand you can write to 2 different nodes and they'll fight over who wrote last | 05:23 |
gholt | If you try to limit overwrites, what if one node doesn't have the original object and accepts the write? Then you have to back track to get rid of the one new write. | 05:24 |
jeremyb | but if the object's days old and the nodes are all up to date... | 05:24 |
gholt | Yeah, for now we just erred on the side of what can truly be accomplished. To say we limit overwrites but not be able to truly enforce it would be bad. | 05:24 |
jeremyb | i'm saying if the node doesn't know then let them fight but if it does know then allow it to block | 05:25 |
gholt | I guess: It's a possibility if it's really needed, but would probably have to be a toggleable feature, hehe. | 05:25 |
gholt | A node could go down at any time, meaning older objects would also be affected by such a window | 05:26 |
jeremyb | sure. it can even be a wsgi api feature that has big scary warnings in the sphinx docs and doesn't get implemented in swauth | 05:26 |
gholt | Well, you do need the hooks in Swift itself, to pass the acl information back, but yeah, it could be done. Just not atm and not for the next Bexar release. | 05:27 |
gholt | Oh, and acls are currently only for containers, no object-level acls at this time. | 05:27 |
gholt | For manifests, yes they can go across containers, the acls for both must be met in such a case. | 05:28 |
jeremyb | is bexar planned for end of jan? | 05:29 |
gholt | I think so, lemme check... | 05:29 |
*** omidhdl has quit IRC | 05:30 | |
gholt | Feb 3 | 05:30 |
gholt | http://wiki.openstack.org/Release | 05:30 |
jeremyb | woot, thanks | 05:30 |
jeremyb | you don't know olpc's holt, do you? | 05:31 |
gholt | The basics of what you can control with your own auth wsgi middleware are the request as it's inbound, and then a final callback with any x-container-read or x-container-write information. | 05:31 |
jeremyb | which is stored with the container itself (on nodes) | 05:32 |
gholt | To add what you're talking about, you'd just add an x-container-delete and x-container-overwrite acl or somesuch. | 05:32 |
gholt | Yeah, in the metadata for the container. | 05:32 |
jeremyb | right | 05:32 |
gholt | I bet it'd be acceptable to all if that was added but off by default, with the warnings you mentioned that it's not failsafe. :) | 05:33 |
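[Editor's note: an auth-middleware ACL check along the lines gholt describes might look like the sketch below. The comma-separated `account:user` format and the `.r:*` public-read entry are assumptions loosely modeled on swift container ACLs; `x-container-delete` is the hypothetical extension discussed above, not a real header.]

```python
# Check whether a given account:user appears in a container ACL string
# (e.g. the value of x-container-read, or a hypothetical
# x-container-delete). ".r:*" stands for "anyone may".

def allowed(acl_header, account, user):
    entries = [e.strip() for e in (acl_header or "").split(",") if e.strip()]
    return "%s:%s" % (account, user) in entries or ".r:*" in entries
```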
jeremyb | if you can read an object in a container there's no way to block listing of the contents of that container? | 05:33 |
jeremyb | e.g. so you can read if you know the name but not if you don't | 05:34 |
gholt | Not currently. We didn't have many use cases to go on. And some of those use cases if infrequent could be served with their own container. | 05:34 |
*** sirp1 has quit IRC | 05:34 | |
jeremyb | right | 05:34 |
gholt | Keep the code as simple as possible, but no simpler. Hehe | 05:35 |
jeremyb | heh | 05:35 |
jeremyb | is there a "leader" for the various ring servers? like accounts or containers. e.g. will all reads/writes for a container or account listing (not objects themselves) try leader first and then others if leader's not available? | 05:35 |
jeremyb | or is it like with objects, first responder | 05:36 |
gholt | For the symlinks/manifests, it's important to note they only follow the one jump, btw. | 05:36 |
jeremyb | k | 05:36 |
jeremyb | also, last i checked there was little to no support/testing for replica counts != 3 | 05:37 |
gholt | For the ring, for a given item, the three (by default) nodes are considered in order. So yeah, GETs will hit the primary first. | 05:37 |
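[Editor's note: "the three (by default) nodes are considered in order" means the ring yields a deterministic, ordered node list per item, so GETs hit the primary first. The hashing below is purely illustrative, not swift's actual ring implementation.]

```python
# Deterministically map an item to an ordered list of replica nodes:
# same item always yields the same nodes in the same order, so the
# first entry acts as the primary.
import hashlib

def nodes_for(item, all_nodes, replicas=3):
    start = int(hashlib.md5(item.encode("utf-8")).hexdigest(), 16) % len(all_nodes)
    return [all_nodes[(start + i) % len(all_nodes)] for i in range(replicas)]
```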
jeremyb | any work being done with that or an outline of what should be tested | 05:37 |
gholt | True, there's been more lately, and some bugs found in that area. But I think they're all (I hope) cleaned up now. | 05:37 |
jeremyb | more what? | 05:37 |
gholt | The main thing was grepping for the number 3. hehe | 05:37 |
jeremyb | hah | 05:37 |
jeremyb | it's 11 in binary... | 05:38 |
gholt | We've had bugs reported that there was a spot in the replicator that assumed 3 nodes on handoff, for instance. But that's fixed in trunk now. | 05:38 |
jeremyb | but that would just result in the wrong number getting replicated or what? | 05:38 |
jeremyb | s/number getting replicated/eventual replica count/ | 05:39 |
gholt | Hmm. I don't remember exactly. Something like the handoff node would get confused as to whether it should delete its copy or not after getting it back to one of the primary nodes. | 05:39 |
jeremyb | i mean was there any remote risk of dataloss or returning the wrong data? | 05:40 |
*** achew22 has joined #openstack | 05:40 | |
gholt | Checking. Launchpad is slow. :) | 05:40 |
gholt | https://bugs.launchpad.net/swift/+bug/685730 | 05:40 |
uvirtbot | Launchpad bug 685730 in swift "object-replicator: replica deletion decision is wrong if replica_count != 3?" [Undecided,New] | 05:40 |
jeremyb | ok, i'll read that in the morning | 05:41 |
achew22 | When I start openstack using novascript's run I get a "nova.exception.NotFound: Class get_connection cannot be found" is there anything I did wrong in setting this up? | 05:41 |
gholt | If you had 2 replicas, it'd mean you'd never get rid of handoff copies. If you had 4, it'd confuse itself and delete the fourth copy constantly (I believe). | 05:41 |
jeremyb | ok, but you'd never have less than 2 and it'd never return one object when you asked for a different one? | 05:42 |
gholt | achew22: I'm not much for nova information sorry. :/ Hopefully somebody's on right now that is. | 05:42 |
jeremyb | or half an object or complete garbage or something | 05:42 |
achew22 | no worries, things are pre alpha and I'm amazed that I even got one instance running | 05:42 |
achew22 | I threw a little party :) | 05:43 |
gholt | Hehe | 05:43 |
jeremyb | any checksums on read or that's left to the periodic checksum service? (forget the name) | 05:43 |
gholt | jeremyb: No, no chance on that. Things are atomic. They're either there or they aren't. And yeah, in this case I think you're right about never being under 2. But it's all just 'thoughts', not tested to see what the bug did really. :) | 05:43 |
jeremyb | right | 05:44 |
jeremyb | well if someone could write up a little on what situations to test that would be nice :-) | 05:44 |
gholt | ETags on objects are the md5 sum of the contents. The auditor checks that on occasion. Though that sucker is a fight, because you don't want the load too high, but you don't want the checks too seldom. | 05:44 |
jeremyb | so, clients check md5 by default? (ewwwww, md5) what if a client finds an error? can it let someone know to get the replica fixed? | 05:45 |
gholt | Yeah, for the != 3 stuff we need to get a lab set up and some automated tests that check the backend copies, etc. | 05:45 |
*** omidhdl has joined #openstack | 05:46 | |
gholt | Clients don't have to, they can if they want. The auditor, if it finds a bad checksum will quarantine the offender into a separate dir and the replicator will put a fresh copy in place from another node | 05:46 |
jeremyb | but if the client does check and finds a problem can it get the auditor to prioritize that object? | 05:47 |
*** f4m8_ is now known as f4m8 | 05:47 | |
gholt | Hmm. not right now. Good idea though. I think it'd be better implemented that the object server does the checksum on GETs automatically and quarantines the offender right away. | 05:48 |
jeremyb | sure, but maybe not on every request if it's requested dozens of times a min | 05:48 |
jeremyb | i guess there's no way for it to know if a request was served from the OS disk cache or from real disk | 05:49 |
gholt | Yeah, maybe not. Though md5 isn't too bad on cpu and we've had headroom there. Also, wouldn't work for range requests at all, hehe. | 05:49 |
jeremyb | yeah, heh | 05:50 |
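The ETag check gholt and jeremyb are discussing is easy to sketch on the client side. This is a hedged illustration only (the object body is made up, and this is not Swift's auditor code): Swift stores the md5 hex digest of an object's contents as its ETag, so a client can recompute it after a GET.

```python
import hashlib

def verify_etag(body, etag):
    """Return True if the downloaded bytes match the ETag
    (md5 hex digest) that Swift returned with the object."""
    return hashlib.md5(body).hexdigest() == etag

# Example with a made-up object body:
body = b'some object contents'
etag = hashlib.md5(body).hexdigest()   # what the object server would store
print(verify_etag(body, etag))         # True: content intact
print(verify_etag(body + b'\x00', etag))  # False: corruption detected
```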
jeremyb | how do range reqs work with manifests? | 05:50 |
jeremyb | (i'm assuming the OS disk cache in memory is less vuln. to corruption than real spinning disk) | 05:50 |
gholt | Range requests on manifests was fun. :) | 05:50 |
gholt | Think of it how you might do it manually. A container listing has all the segments that comprise the whole file. The json formatted listing also has the length of each segment. | 05:51 |
gholt | So seeking is just a matter of jumping over segments until you get where you want, then a range request on that first served segment (probably) and then serving until you get to your end point. | 05:52 |
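The seek logic gholt describes (skip whole segments using the lengths from the json container listing, then range into the first segment you actually need) can be sketched with a small hypothetical helper; this is not Swift's code, just the arithmetic:

```python
def locate(segment_lengths, offset):
    """Map an absolute byte offset into a segmented object to
    (segment_index, offset_within_segment), given the per-segment
    lengths from the json container listing."""
    for i, length in enumerate(segment_lengths):
        if offset < length:
            return i, offset
        offset -= length
    raise ValueError('offset is past the end of the object')

# Three segments of 100, 200 and 50 bytes; byte 250 of the whole
# object lives 150 bytes into the second segment:
print(locate([100, 200, 50], 250))  # (1, 150)
```

Serving a range request is then: start at that segment with a sub-range, and stream following segments until the end of the requested range.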
jeremyb | does the manifest itself not have the lengths? | 05:52 |
gholt | No, the manifest simply has the info to do the container listing. | 05:52 |
gholt | You could actually append to a manifested large object that way if you wanted. | 05:53 |
*** omidhdl has quit IRC | 05:53 | |
gholt | But, if you do a HEAD on the manifest, you'll get the total length, unless it's more than 10k segments. | 05:53 |
jeremyb | haha, that's 48+ TB for one object | 05:54 |
jeremyb | does NASA do that? | 05:54 |
gholt | :) Well, yeah, but you can also use smaller, each-not-even-the-same-size segments. | 05:54 |
jeremyb | (i'm not seeing the modifying in the middle use case being common) | 05:54 |
jeremyb | right | 05:54 |
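jeremyb's 48+ TB figure checks out: 10,000 segments (the listing cap gholt mentions) at the 5G per-object limit (taking 5G as 5 GiB here) come to roughly 48.8 TiB.

```python
SEGMENT_CAP = 5 * 2**30   # 5 GiB per-object limit
MAX_SEGMENTS = 10000      # the "10k segments" listing cap

total_tib = MAX_SEGMENTS * SEGMENT_CAP / 2**40
print(round(total_tib, 2))  # 48.83
```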
jeremyb | so, what if you write 1000 smallish segments and then you want to compact them into one segment? can you do a copy or you have to download and upload? | 05:55 |
gholt | Nah, middle modifying would be weird. Maybe for a tileset game that let you pull the whole board with a manifest or something, hehe | 05:55 |
gholt | The COPY request will let you do that, limited to 5G of course. | 05:56 |
jeremyb | great | 05:56 |
jeremyb | i mean if you copy the manifest | 05:56 |
jeremyb | that won't just make a new manifest? | 05:56 |
gholt | Oh wait, I'm wrong on that. | 05:56 |
gholt | And I coded it, lol. | 05:56 |
jeremyb | heh | 05:56 |
gholt | COPY on the manifest just makes another manifest (zero bytes taken up). | 05:56 |
gholt | We discussed that, I remember now. Though what you're saying would be darn useful. Hmmm. | 05:57 |
jeremyb | e.g. for something that does frequent writes but doesn't really have a need for all the pieces in the end run. could do them incrementally and periodically compact | 05:58 |
* gholt is taking notes. :) | 05:58 | |
* jeremyb wonders if transparent compression is anywhere on the horizon | 05:58 | |
gholt | Changing the manifest COPY to compact makes sense. If you really want another manifest it's cheap to HEAD the original and PUT the new with the same X-Object-Manifest header. | 05:59 |
gholt | It's not cheap to pull down the whole object and reupload it to compact. | 05:59 |
jeremyb | right, agree on cheap | 05:59 |
jeremyb | not sure about api change though | 05:59 |
jeremyb | is there any api versioning? | 06:00 |
gholt | Well, the manifest code is for Bexar, so it's not set in stone yet anyway | 06:00 |
jeremyb | oh, right | 06:00 |
gholt | It just merged today, in fact, heheh | 06:00 |
gholt | The compression was talked about, but it costs cpu and saves disk. Cheaper for the customer, more expensive for the provider, if you're only charging for requests/space-used. | 06:01 |
jeremyb | but if you're running your own cluster... | 06:01 |
gholt | It'd have to be charged differently and it'd work, I'm sure. Which means it'd have to have a bit of extra logging to indicate a compression-request or something. | 06:01 |
gholt | Yeah, that too. :D | 06:02 |
gholt | But honestly, if at all possible, it's better to use the client's CPU. :D | 06:02 |
jeremyb | i was thinking clients could request to have the compressed form and then you save on the wire too and only have to compress once | 06:02 |
jeremyb | so, really cost to provider would be different for writes but no different for reads assuming a compression capable client | 06:03 |
jeremyb | right? | 06:04 |
gholt | Ah damn, it's getting late and I have to be at work early tomorrow. Great ideas! Keep them coming and bring them up again if you don't see them for next release cycle (Cactus, Feb 3rd start). | 06:05 |
* jeremyb was just reading http://wiki.openstack.org/CactusReleaseSchedule | 06:05 | |
gholt | True, it'd be different on storage used. I guess one could charge on the uncompressed size, lol. | 06:05 |
jeremyb | i have to be sleeping too | 06:06 |
jeremyb | TZ? | 06:06 |
gholt | Central for me. 12am | 06:06 |
jeremyb | aha, America/New_York here | 06:06 |
*** omidhdl has joined #openstack | 06:06 | |
jeremyb | nelson__: look up :) | 06:06 |
gholt | I hear there's some moon thing happening. Hehe | 06:06 |
jeremyb | yeah, i heard, will miss it | 06:06 |
gholt | Looks to be near 100% overcast here anyways | 06:07 |
gholt | Ah well, night! | 06:08 |
jeremyb | btw, (not necessarily now...) i'm wondering how if at all log aggregation is done. e.g. for billing. is it just concentrated on the proxy servers and you're responsible for any further consolidation? | 06:08 |
jeremyb | good night! | 06:08 |
*** DubLo7 has quit IRC | 06:08 | |
gholt | Ah, that's notmyname's realm. Ping him at some point and he'll give you his run down. :D | 06:09 |
jeremyb | aha | 06:09 |
jeremyb | gholt: btw, you should take a look at tahoe lafs at some point. i have some more ideas (that i've mentioned here before actually) that come from that direction | 06:12 |
* jeremyb sleeps | 06:12 | |
*** DubLo7 has joined #openstack | 06:19 | |
achew22 | Sorry to repeat. When I start openstack using novascript's run I get a "nova.exception.NotFound: Class get_connection cannot be found" is there anything I did wrong in setting this up? | 06:25 |
*** jakedahn has quit IRC | 06:25 | |
*** ramkrsna has joined #openstack | 06:29 | |
*** Ryan_Lane has quit IRC | 06:30 | |
*** kashyapc_ has quit IRC | 06:31 | |
*** aimon has quit IRC | 06:39 | |
*** aimon has joined #openstack | 06:39 | |
*** miclorb_ has joined #openstack | 06:52 | |
*** guigui1 has joined #openstack | 06:59 | |
*** kevnfx has quit IRC | 07:01 | |
notmyname | jeremyb: ask me tomorrow about the stats. there is a fairly complete stats system that can be used to feed a billing system included in swift | 07:06 |
*** achew22 has quit IRC | 07:07 | |
*** achew22 has joined #openstack | 07:07 | |
*** achew22 has left #openstack | 07:15 | |
xtoddx | _cerberus_: if you get a chance can you take a look at lp:~anso/nova/paste to see that no openstack api stuff is broken or forgotten in that change? (other than the subdomain stuff, which I'll need to work up a mapper for, similar to Paste.urlmap) | 07:24 |
*** miclorb_ has quit IRC | 07:37 | |
*** Ryan_Lane has joined #openstack | 07:59 | |
*** rcc has joined #openstack | 08:02 | |
*** brd_from_italy has joined #openstack | 08:07 | |
*** Ryan_Lane has quit IRC | 08:10 | |
*** zaitcev has quit IRC | 08:21 | |
*** Cybodog has quit IRC | 08:27 | |
*** doude has joined #openstack | 08:29 | |
*** miclorb has joined #openstack | 08:30 | |
*** larstobi has joined #openstack | 08:32 | |
*** kashyapc has joined #openstack | 08:43 | |
*** allsystemsarego has joined #openstack | 08:44 | |
*** miclorb has quit IRC | 08:48 | |
*** miclorb has joined #openstack | 09:18 | |
*** irahgel1 has joined #openstack | 09:19 | |
*** Abd4llA has joined #openstack | 09:19 | |
*** Cybodog has joined #openstack | 09:37 | |
*** roamin9 has quit IRC | 09:44 | |
*** ar1 has quit IRC | 09:45 | |
*** omidhdl1 has joined #openstack | 09:59 | |
*** omidhdl has quit IRC | 10:00 | |
*** aimon has quit IRC | 10:03 | |
*** HugoKuo has joined #openstack | 10:04 | |
*** irahgel1 has left #openstack | 10:06 | |
*** miclorb has quit IRC | 10:12 | |
*** jordandev has joined #openstack | 10:15 | |
*** HugoKuo_ has joined #openstack | 10:23 | |
*** vish1 has joined #openstack | 10:26 | |
*** zns has joined #openstack | 10:27 | |
*** [ack]_ has joined #openstack | 10:27 | |
*** zns has quit IRC | 10:27 | |
*** cw_ has joined #openstack | 10:28 | |
*** mattt_ has joined #openstack | 10:28 | |
*** termie_ has joined #openstack | 10:28 | |
*** kashyapc_ has joined #openstack | 10:31 | |
*** HugoKuo has quit IRC | 10:32 | |
*** kashyapc has quit IRC | 10:32 | |
*** dirakx has quit IRC | 10:32 | |
*** [ack] has quit IRC | 10:32 | |
*** cw has quit IRC | 10:32 | |
*** termie has quit IRC | 10:32 | |
*** pquerna has quit IRC | 10:32 | |
*** clayg has quit IRC | 10:32 | |
*** mattt has quit IRC | 10:32 | |
*** vishy has quit IRC | 10:32 | |
*** asksol has quit IRC | 10:32 | |
*** clayg_ has joined #openstack | 10:32 | |
*** asksol has joined #openstack | 10:33 | |
*** pquerna has joined #openstack | 10:33 | |
*** clayg_ is now known as clayg | 10:34 | |
*** omidhdl1 has quit IRC | 11:04 | |
*** Abd4llA has quit IRC | 11:11 | |
*** Abd4llA has joined #openstack | 11:17 | |
*** ibarrera has joined #openstack | 11:38 | |
*** miclorb has joined #openstack | 11:40 | |
*** miclorb has quit IRC | 11:43 | |
*** miclorb has joined #openstack | 11:44 | |
*** miclorb has quit IRC | 11:46 | |
*** smaresca has quit IRC | 11:48 | |
*** omidhdl has joined #openstack | 11:51 | |
sandywalsh | o/ | 11:53 |
ttx | sandywalsh: o/ | 11:53 |
*** reldan has joined #openstack | 11:53 | |
*** bigd_ has joined #openstack | 11:58 | |
bigd_ | hey | 11:59 |
bigd_ | someone alive? | 11:59 |
sandywalsh | somewhat :) | 11:59 |
bigd_ | ^^ | 11:59 |
uvirtbot | bigd_: Error: "^" is not a valid command. | 11:59 |
bigd_ | ;-) | 11:59 |
bigd_ | im pretty new to all that cloud/cluster stuff, so may i ask some (hopefully not too dumb) questions? | 12:00 |
*** smaresca has joined #openstack | 12:01 | |
sandywalsh | go for it ... if I can't help I'm sure someone can. | 12:01 |
*** jordandev has quit IRC | 12:02 | |
bigd_ | how does openstack distribute the work? one or more VMs per node or can i combine the resources of all nodes to power one "big" VM? | 12:03 |
*** smaresca has quit IRC | 12:08 | |
sandywalsh | bigd_, AFAIK you can partition groups of hosts within zones. There are schedulers allocated per zone. Additionally network and compute services reside in each zone. | 12:09 |
sandywalsh | bigd_, so you can decide how you want to partition. Machine, geography, business line, etc. | 12:10 |
bigd_ | what do you mean with "compute services"? | 12:11 |
sandywalsh | bigd_, the compute service is the business logic for controlling the openstack nova modules. | 12:14 |
sandywalsh | bigd_, compute divvies up the work and puts the tasks in rabbitmq queues. Then the various services (network, etc.) pick them up and do the work. | 12:16 |
bigd_ | im not sure if i got it right ... by defining a zone with 10 nodes i am able to put one VM (lets say Win2008) in this zone and this VM would use all resources? | 12:17 |
bigd_ | all resources of the current zone | 12:18 |
*** feather has quit IRC | 12:18 | |
sandywalsh | bigd_, well, you need to decide which hypervisor you're going to use: kvm, xenserver, etc. That hypervisor may run many instances of the guest os (linux, windows, etc). Hopefully I'm understanding your question? | 12:19 |
*** smaresca has joined #openstack | 12:21 | |
*** WonTu has joined #openstack | 12:22 | |
*** irahgel has left #openstack | 12:22 | |
*** WonTu has left #openstack | 12:22 | |
bigd_ | we are getting closer ;-) ... sure there is an underlying engine like KVM etc... my point is if one instance would be running on a certain noe of the zone | 12:22 |
bigd_ | *node | 12:22 |
*** reldan has quit IRC | 12:23 | |
sandywalsh | bigd_, yes. From what I understand, the scheduler decides where an instance is run. | 12:24 |
sandywalsh | bigd_, there are migration operations for moving instances/snapshots from one host to another | 12:25 |
bigd_ | hmm ... for our goal thats what we dont want as each node would be a desktop pc ... we want to combine their power | 12:27 |
patri0t | bigd_: http://wiki.openstack.org/Overview | 12:28 |
patri0t | bigd_: http://nova.openstack.org/service.architecture.html | 12:28 |
patri0t | bigd_: http://www.box.net/shared/static/ussls7gp2j.png | 12:28 |
patri0t | bigd_: these may help in general | 12:29 |
*** ctennis has quit IRC | 12:29 | |
bigd_ | thanks | 12:33 |
bigd_ | as openstack cant do what we need, you have maybe a suggestion what could be worth a look? | 12:35 |
patri0t | can you restate your goal? | 12:38 |
bigd_ | or to be more precise: do you know any solution to combine many hosts/PCs into one "big" one? Or lets say at least that an operating system would see it as one system/host | 12:41 |
patri0t | It more depends on what exactly you want to do | 12:43 |
patri0t | the difference between cloud computing/grid computing may address the same issue | 12:43 |
patri0t | if you want to do one task using several machines grid computing should be a good approach | 12:45 |
patri0t | otherwise if you have several tasks and several workers you may go for cloud computing | 12:46 |
bigd_ | patri0t ... the point is that we have got requests to find a solution to help doing renderjobs (FX, 3d, etc) AND common computing tasks (simulations or calculations) | 12:46 |
*** ctennis has joined #openstack | 12:46 | |
*** ctennis has joined #openstack | 12:46 | |
fabiand | bigd_: I suppose that it will also depend on the specific application what kind of cluster/grid/cloud will help you | 12:48 |
patri0t | bigd_: again it depends how you do those tasks, should they be done in parallel? or regularly you do one task first then start the next one (probably from the input of the previous task) | 12:48 |
bigd_ | thats my problem ... that varies with every task | 12:49 |
bigd_ | and thats why i would like a "virtual base" | 12:50 |
bigd_ | so i dont have to change the whole setup | 12:50 |
bigd_ | for example for rendering openstack would fit perfectly | 12:50 |
bigd_ | as you need a lot of instances | 12:51 |
bigd_ | and i have a lot of (small) nodes/hosts | 12:51 |
fabiand | bigd_: sure, you can also use the instances as mpi-nodes for e.g. simulations | 12:51 |
*** hazmat has joined #openstack | 12:52 | |
*** schisamo has joined #openstack | 12:52 | |
bigd_ | doesnt that eat up too much power as it would be another logical layer on top? | 12:53 |
fabiand | there will be some performance loss because they are just instances, but at least you have got an infrastructure to deploy many nodes .. | 12:53 |
fabiand | and the performance loss depends on the instance configuration (hypervisor, ..) | 12:55 |
bigd_ | i think give it a try, thanks | 13:00 |
*** DubLo7 has quit IRC | 13:03 | |
*** reldan has joined #openstack | 13:05 | |
bigd_ | btw, can openstack handle changes of the available nodes/hosts at runtime in its current version? lets say 3 of 10 nodes get turned off or lose network connection without any warning | 13:11 |
*** ramkrsna has quit IRC | 13:16 | |
*** westmaas has joined #openstack | 13:19 | |
*** nelson__ has quit IRC | 13:21 | |
*** nelson__ has joined #openstack | 13:22 | |
*** doude has quit IRC | 13:24 | |
*** doude has joined #openstack | 13:25 | |
*** hadrian has joined #openstack | 13:35 | |
*** Podilarius has joined #openstack | 13:36 | |
*** krish has joined #openstack | 13:44 | |
*** aliguori has joined #openstack | 13:47 | |
*** Abd4llA has quit IRC | 13:48 | |
*** krish has left #openstack | 13:48 | |
olivier_ | Hi all | 13:55 |
olivier_ | My swift storage nodes indicate in their log files "object-replicator: dev/sdb1 is not mounted" | 13:56 |
olivier_ | But my disk is mounted... How do I fix that? | 13:56 |
*** kevnfx has joined #openstack | 13:57 | |
uvirtbot | New bug: #692994 in nova "nova-compute will not recover from loss of its xapi session" [Undecided,New] https://launchpad.net/bugs/692994 | 14:01 |
*** DubLo7 has joined #openstack | 14:03 | |
*** alekibango has quit IRC | 14:04 | |
*** Daviey has quit IRC | 14:08 | |
*** Daviey has joined #openstack | 14:17 | |
*** alekibango has joined #openstack | 14:17 | |
*** nati has joined #openstack | 14:25 | |
*** omidhdl has quit IRC | 14:26 | |
*** ppetraki has joined #openstack | 14:36 | |
*** irahgel has joined #openstack | 14:37 | |
*** nati has quit IRC | 14:37 | |
notmyname | olivier_: are you running a stand-alone system or in the SAIO? | 14:42 |
notmyname | more specifically, are you mounting real devices or loopback devices? | 14:43 |
olivier_ | I'm mounting real device | 14:46 |
olivier_ | What do you mean by SAIO ? | 14:46 |
*** larstobi has quit IRC | 14:46 | |
*** f4m8 is now known as f4m8_ | 14:47 | |
olivier_ | I've got a lab with VM (1 proxy and 3 storage) | 14:47 |
olivier_ | and for each storage node, the mount point is a real device (/dev/sdb1) | 14:49 |
notmyname | SAIO == swift all in one. the VM system we use for dev and is good for a "get your feet wet" test | 14:55 |
notmyname | if you are running it in a VM, you should probably add "mount_check = false" to the [DEFAULT] section of /etc/swift/object-server.conf | 14:56 |
olivier_ | But swift doesn't know that I'm using a VM? | 14:57 |
*** jdarcy has joined #openstack | 14:58 | |
notmyname | olivier_: what's happening is that "os.path.ismount(dev_path)" is returning false for your storage node | 15:06 |
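The check notmyname points at is just `os.path.ismount`, which is False for anything that isn't itself a mount point. A minimal reproduction (the device path below is hypothetical; `mount_check = false` in the `[DEFAULT]` section of object-server.conf is what disables the check, per the advice above):

```python
import os
import tempfile

def device_usable(dev_path):
    """Roughly what swift's mount_check does before touching a device:
    the path must be a real mount point, not just an existing directory."""
    return os.path.ismount(dev_path)

print(device_usable('/'))            # True: root is always a mount point
with tempfile.TemporaryDirectory() as d:
    print(device_usable(d))          # False: plain directory, not a mount
```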
*** gondoi has joined #openstack | 15:09 | |
*** reldan has quit IRC | 15:12 | |
olivier_ | ok, I've added "mount_check = false" in all 3 configuration files (account-server.conf, object-server.conf, container-server.conf) on each of my 3 storage nodes | 15:13 |
olivier_ | And I don't have this error message anymore. Thanks | 15:13 |
olivier_ | But this didn't resolve my problem for creating a user with swift-auth-add-user (Update failed: 503 Service Unavailable) | 15:15 |
notmyname | you are running the proxy and the 3 storage nodes in one VM? | 15:16 |
fabiand | (does someone know if this channel is logged somewhere?) | 15:17 |
notmyname | are these the instructions you used? http://swift.openstack.org/development_saio.html | 15:17 |
notmyname | fabiand: http://eavesdrop.openstack.org/irclogs/ | 15:17 |
olivier_ | notmyname: I've got 4 VMs | 15:18 |
fabiand | notmyname: thank you. | 15:19 |
olivier_ | And I've used: http://swift.openstack.org/howto_installmultinode.html | 15:19 |
notmyname | olivier_: ok, so you have 4 VMs that you put swift on (1 proxy + 3 storage nodes)? I think I'm following now | 15:20 |
olivier_ | yes | 15:21 |
*** dendrobates is now known as dendro-afk | 15:23 | |
*** dendro-afk is now known as dendrobates | 15:24 | |
notmyname | olivier_: you are running the auth server, right? | 15:24 |
*** jkakar_ has joined #openstack | 15:25 | |
olivier_ | yes | 15:25 |
olivier_ | here my log and configuration files: http://pastebin.com/qaGY7ZLD | 15:25 |
*** jkakar has quit IRC | 15:26 | |
*** kevnfx has quit IRC | 15:26 | |
*** bigd_ has quit IRC | 15:29 | |
notmyname | olivier_: honestly, I don't know. nothing jumps out at me, but I'm not an expert in that part of the code. unfortunately, the people who are are either on vacation or in an all-day meeting | 15:30 |
olivier_ | ok, thanks for your time | 15:32 |
_0x44 | jaypipes: You around? | 15:33 |
piken | morning everyone | 15:33 |
jaypipes | _0x44: yup | 15:33 |
_0x44 | jaypipes: I'm looking through your merge req, and I noticed you're using iteritems() a lot. Are we not targeting compatibility with py3k? | 15:34 |
_0x44 | (This isn't a complaint or a nitpick about the patch, just curiosity) | 15:35 |
jaypipes | _0x44: :( I'm not a py3k expert... could you advise on what to change there? | 15:36 |
*** irahgel has quit IRC | 15:36 | |
_0x44 | jaypipes: iteritems went away, so all that's left is items() | 15:36 |
jaypipes | ah... | 15:36 |
_0x44 | jaypipes: I dunno why they chose to get rid of it | 15:36 |
jaypipes | _0x44: I didn't know that. So, I can just use items() where I am now using iteritems()? | 15:36 |
_0x44 | jaypipes: Yup :) | 15:37 |
jaypipes | _0x44: good to know! I will change things as I edit files. Thanks for the heads up! | 15:37 |
_0x44 | You're welcome :) | 15:37 |
*** johnpur has joined #openstack | 15:37 | |
*** ChanServ sets mode: +v johnpur | 15:37 | |
* jaypipes checks off his "learn one thing a day" task... | 15:37 | |
jaypipes | _0x44: you know, old dog, new tricks, and all that ;) | 15:38 |
_0x44 | That's a really good task, I probably should have one like that. Omniscience makes that so difficult though. ;) | 15:38 |
jaypipes | _0x44: indeed, it would :P | 15:38 |
_0x44 | One thing to note is that iteritems returns an iterator and items returns a list of tuples | 15:38 |
*** hggdh has quit IRC | 15:38 | |
jaypipes | _0x44: but effectively, the same usage... | 15:39 |
_0x44 | Yes, same interface just one is "nicer" for some value of "nicer" | 15:39 |
jaypipes | _0x44: hehe | 15:39 |
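The exchange above in one small example: `items()` exists in both Python 2 and 3 (a list of tuples in 2, a view in 3), while `iteritems()` is Python 2 only, so `items()` is the portable spelling.

```python
d = {'a': 1, 'b': 2}

# Portable across Python 2 and 3:
for key, value in d.items():
    print(key, value)

total = sum(value for _key, value in d.items())
print(total)  # 3

# Python 2 only -- on Python 3 this raises AttributeError:
# d.iteritems()
```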
jaypipes | _0x44: how's Italy, btw? having a good time? | 15:39 |
_0x44 | jaypipes: It's great! A bit chilly, though. :) My girlfriend came in on Saturday, and we're going to hit Florence for Christmas | 15:40 |
jaypipes | _0x44: I don't want to hear about chilly :) hasn't gotten above freezing here in 10 days or so... | 15:41 |
_0x44 | Pavia is hovering around freezing, but is really humid | 15:41 |
jaypipes | _0x44: when I say to my dogs "let's go for a walk" and they don't even get out of their beds, you know it's cold. | 15:41 |
_0x44 | Give me 15F and dry over 32F and snow/sleeting :) | 15:42 |
_0x44 | Oh man, you should get your dogs some boots and parkas, I've seen them all over the place here. | 15:42 |
*** dfg_ has joined #openstack | 15:42 | |
jaypipes | hehe | 15:43 |
*** hggdh has joined #openstack | 15:43 | |
*** reldan has joined #openstack | 15:44 | |
dendrobates | jaypipes: testing your i18n branch... | 15:44 |
*** calavera has quit IRC | 15:45 | |
*** jkakar_ is now known as jkakar | 15:45 | |
dendrobates | oops, you already pushed it. | 15:46 |
jaypipes | dendrobates: ya, it's already been reviewed... about 15 times. | 15:47 |
dendrobates | I know, what ended up being the problem with the tests? | 15:47 |
jaypipes | dendrobates: not sure...still waiting to see if it bombs again :( | 15:48 |
* ttx crosses fingers | 15:49 | |
*** MarkAtwood has quit IRC | 15:49 | |
ttx | jaypipes: about the glance api, looks like I should be tracking the subspecs rather than the parent one, would you agree ? | 15:49 |
jaypipes | ttx: yep, and they are all up 2 date. | 15:50 |
ttx | ok, will untarget api for bexar, even if that sounds weird. | 15:51 |
jaypipes | ttx: why? it's done... | 15:52 |
jaypipes | ttx: the final piece is in code review right now.. | 15:52 |
ttx | jaypipes: I keep "unified-api", but remove "api" from the list, since it's just a master spec | 15:52 |
ttx | jaypipes: what are you using "beta available" for ? | 15:53 |
jaypipes | ttx: using that for "it's in trunk, but before release" | 15:54 |
ttx | jaypipes: I use it for "testable but not proposed for trunk yet" | 15:54 |
jaypipes | ttx: ah, sorry. feel free to correct me :) | 15:54 |
*** irahgel has joined #openstack | 15:54 | |
ttx | jaypipes: I use "Implemented" for "Merged in trunk" | 15:55 |
ttx | jaypipes: ok, will do | 15:55 |
jaypipes | ttx: then feel free to mark implemented :) I'll remember that for future :) | 15:55 |
* ttx likes to see green. | 15:55 | |
*** MarkAtwood has joined #openstack | 15:55 | |
ttx | jaypipes: otherwise they don't show up as completed... so they still appear in the deps graph. | 15:56 |
jaypipes | ttx: understood. | 15:56 |
*** hazmat has quit IRC | 15:59 | |
*** MarkAtwood has quit IRC | 16:02 | |
dendrobates | jaypipes: the i18n patch is still hanging on test_create_instance_associates_security_groups for me. | 16:03 |
*** jbaker has joined #openstack | 16:04 | |
*** kevnfx has joined #openstack | 16:04 | |
jaypipes | dendrobates: gah. :( do we know what is actually hanging on? | 16:04 |
jaypipes | dendrobates: all tests pass for me locally, so I'm unsure how to fix. :( | 16:04 |
dendrobates | I am looking at the traceback. will paste it | 16:04 |
*** MarkAtwood has joined #openstack | 16:07 | |
*** jero is now known as jero_market | 16:07 | |
dendrobates | nothing useful afaics http://paste.openstack.org/show/325/ | 16:07 |
dendrobates | I'm trying it again | 16:08 |
*** jero_market is now known as jero | 16:09 | |
*** infernix has quit IRC | 16:09 | |
*** sophiap_ has joined #openstack | 16:10 | |
*** sophiap has quit IRC | 16:10 | |
*** sophiap_ is now known as sophiap | 16:10 | |
dendrobates | jaypipes: are you using the virt_env to test? | 16:11 |
*** dirakx has joined #openstack | 16:12 | |
tr3buchet | anyone else having trouble getting the nova-compute to run with latest trunk? | 16:17 |
*** dragondm has joined #openstack | 16:17 | |
tr3buchet | keep getting this: http://pastie.org/1393586 | 16:17 |
*** guigui1 has quit IRC | 16:17 | |
dabo | tr3buchet: running fine for me. Do you have a get_connection() method in nova/virt/connection.py? | 16:19 |
tr3buchet | yes, line 34 | 16:19 |
tr3buchet | dabo ^ | 16:19 |
tr3buchet | and --connection_type=xenapi as a flag | 16:20 |
rbergeron | dendrobates: are you still planning on coming to fudcon? :) | 16:21 |
jaypipes | dendrobates: yes | 16:21 |
dendrobates | rbergeron: I cannot come due to a conflict. I am trying to get someone to go in my place | 16:22 |
rbergeron | awe | 16:22 |
* rbergeron makes a sadface | 16:22 | |
dabo | tr3buchet: try adding these debugging lines: http://pastie.org/1395359 | 16:24 |
dabo | tr3buchet: when I run that, I get this in my compute session: http://pastie.org/1395361 | 16:25 |
tr3buchet | yeah me too | 16:29 |
rbergeron | dendrobates: well lmk if/when you find someone - we'd love to have additional openstack folks present | 16:29 |
tr3buchet | and in api i get a lot of import failed | 16:29 |
tr3buchet | for nova.image.s3.S3ImageService and nova.network.manager.FlatManager, | 16:30 |
*** seshu has joined #openstack | 16:30 | |
dabo | tr3buchet: yeah, it's not a "failure" so much as it is not a value that can be imported using __import__() directly. | 16:30 |
*** jkakar has quit IRC | 16:32 | |
dabo | the thing is, your paste showed that you are getting a "Class get_connection cannot be found" error. If that were the case, you'd never see the "CLS <function get_connection at 0x30088c0>" line in the debug output. | 16:32 |
tr3buchet | http://pastie.org/1395388 | 16:32 |
dendrobates | I know some of you must have opinions on my proposal for adding new core devs... | 16:32 |
tr3buchet | output from compute | 16:32 |
dabo | tr3buchet: did you add the vertical pipes to the debug output? | 16:33 |
tr3buchet | yes, i usually do that to ensure the problem isn't some trailing garbage in strings | 16:34 |
dabo | tr3buchet: and you didn't get the "CLS" line like I pasted. | 16:34 |
tr3buchet | i put it in the function... | 16:34 |
*** kashyapc_ has quit IRC | 16:34 | |
tr3buchet | oh | 16:35 |
tr3buchet | well that's because i'm getting the error on the import_class line probably | 16:35 |
tr3buchet | yep, look at my paste, it never gets to the CLS line | 16:35 |
dabo | tr3buchet: that's why the "CLS" line is there - to verify that the import_class() call succeeded. | 16:35 |
tr3buchet | it did not succeed | 16:37 |
tr3buchet | when it calls import_class() on nova.virt.connection.get_connection it fails | 16:39 |
dabo | tr3buchet: I don't know what to tell you. I just grabbed a fresh copy of trunk from lp, and it runs fine. | 16:40 |
tr3buchet | that's exactly what i just did :( | 16:40 |
dabo | tr3buchet: don't know why this would give you your error, but did you do all the usual stuff? source novarc, run as root, etc? | 16:41 |
tr3buchet | yes, i did the same thing i did when it ran before i pulled trunk | 16:41 |
tr3buchet | the only change was pulling trunk | 16:41 |
dabo | tr3buchet: sorry, I'm out of ideas | 16:42 |
tr3buchet | rm -rf nova && bzr branch lp:nova/trunk nova | 16:42 |
* tr3buchet is waiting for bzr | 16:42 | |
tr3buchet | failed | 16:43 |
jk0 | you don't need to run it as root btw | 16:43 |
tr3buchet | same error | 16:43 |
*** ibarrera has quit IRC | 16:43 | |
tr3buchet | i run it with sudo | 16:43 |
jk0 | it doesn't need root privs to run | 16:44 |
tr3buchet | without sudo, same error | 16:44 |
jk0 | not saying that is the cause, just in general | 16:44 |
tr3buchet | ah | 16:44 |
jaypipes | dendrobates: any thoughts on why the i18n branch is failing? :( | 16:45 |
dabo | jk0: really? When I first started running nova, everyone told me to run as root | 16:45 |
jk0 | I've never had to do it | 16:45 |
*** WonTu has joined #openstack | 16:45 | |
*** WonTu has left #openstack | 16:45 | |
comstud | dabo, i think it'll need to run as root to talk to xenstore for the RS agent | 16:46 |
dabo | comstud: why? are the xenstore read/write methods only available to root? | 16:47 |
dabo | jk0: heh, just tried running as me, and it looks like everything is working. | 16:47 |
dendrobates | jaypipes: Does where it is hanging give you any clues? I need to squeeze some more verbose debugging out of the tests. | 16:48 |
*** jkakar has joined #openstack | 16:49 | |
comstud | dabo- they are in the guest.. i'm not sure about the host side, now | 16:50 |
jk0 | the problem you guys are seeing is cheetah was added as a dep | 16:50 |
jaypipes | dendrobates: no, unfortunately it doesn't... | 16:50 |
jk0 | install python-cheetah and things will work fine | 16:50 |
dabo | try: pip install Cheetah==2.4.2.1 | 16:51 |
tr3buchet | i used apt-get | 16:52 |
tr3buchet | at any rate that solved it, thanks guys | 16:52 |
tr3buchet | no idea how you came up with that | 16:52 |
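What likely happened: nova's import helper wraps the import in a broad `except`, so an ImportError raised *inside* nova.virt.connection (here, the Cheetah dependency newly added to trunk) gets reported as the misleading "Class get_connection cannot be found". A rough sketch of the pattern, simplified and not guaranteed to match nova's exact code:

```python
import sys

class NotFound(Exception):
    pass

def import_class(import_str):
    """Import a class/callable given a dotted path like
    'nova.virt.connection.get_connection'."""
    mod_str, _sep, obj_str = import_str.rpartition('.')
    try:
        # If mod_str itself imports a missing dependency (e.g. Cheetah),
        # the ImportError from deep inside lands in this except block...
        __import__(mod_str)
        return getattr(sys.modules[mod_str], obj_str)
    except (ImportError, ValueError, AttributeError):
        # ...and is reported as if the attribute itself were missing.
        raise NotFound('Class %s cannot be found' % obj_str)

print(import_class('os.path.join') is __import__('os').path.join)  # True
try:
    import_class('no_such_module.thing')
except NotFound as e:
    print(e)  # Class thing cannot be found
```

That is why installing python-cheetah made the "missing class" error disappear even though get_connection was there all along.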
*** kashyapc_ has joined #openstack | 16:52 | |
*** kevnfx has quit IRC | 16:53 | |
jk0 | use packages if you can | 16:53 |
jk0 | everything we need is in apt | 16:53 |
*** kevnfx has joined #openstack | 16:53 | |
jk0 | tr3buchet: I ran the unit tests | 16:53 |
tr3buchet | ah, good idea | 16:53 |
dragondm | fyi: it's a good idea, when pulling from trunk, to do: sudo pip install -r tools/pip-requires | 16:55 |
dabo | jk0: sorry, I'm not used to using packages for language dependencies. pip works cross-platform, cross-distro, etc. | 16:55 |
jk0 | my thoughts are to stick with packages since that's what will be used in the releases | 16:56 |
jk0 | at least if they install thru apt | 16:56 |
* ttx will be back in a couple hours | 16:57 | |
dendrobates | jaypipes: trying pdb to see if I can get more info | 16:57 |
dabo | jk0: understood. It's best to do that for compatibility's sake now; long-term I don't think we should depend on a single distro for all OpenStack work. | 16:57 |
jk0 | afaik know, we only officially support one distro (10.10) | 16:58 |
ttx | suspense on the i18n branch is killing me :) | 16:58 |
jk0 | -know | 16:58 |
dabo | jk0: correct. You do see my point that 2-3 years from now, we most likely won't be only supporting ubu 10.10. When that happens, we won't be able to rely on apt packages for consistency. | 16:59 |
jk0 | I understand, but my point is that right now, we're only supporting one distro, so we need to make sure we're developing against those packages and not something that might be in pip | 17:00 |
dragondm | true, and we need to make sure our deps in pip-requires are accurate | 17:00 |
dragondm | silly question: how is our hudson build handling deps? | 17:01 |
dragondm | does it use the pip-requires? | 17:02 |
spectorclan | OpenStack Design Summit - Forming a Program Committee ; more info at http://www.openstack.org/blog/2010/12/openstack-design-summit-program-committee/ | 17:03 |
*** jkakar has quit IRC | 17:08 | |
*** Ryan_Lane has joined #openstack | 17:11 | |
*** kevnfx has quit IRC | 17:12 | |
jk0 | jaypipes: would you have a sec to take another look at https://code.launchpad.net/~jk0/nova/diagnostics-per-instance/+merge/44251 please? | 17:13 |
*** doude has quit IRC | 17:21 | |
*** jkakar has joined #openstack | 17:25 | |
*** pquerna has quit IRC | 17:35 | |
*** pquerna has joined #openstack | 17:35 | |
*** jkakar has quit IRC | 17:43 | |
jaypipes | jk0: yup, doing so now. | 17:44 |
jaypipes | jk0: approved. | 17:45 |
jk0 | thanks jau | 17:45 |
jk0 | *jay | 17:45 |
*** kevnfx has joined #openstack | 17:45 | |
jaypipes | jk0: no probs :) | 17:45 |
*** Ryan_Lane has quit IRC | 17:47 | |
*** kevnfx_ has joined #openstack | 17:48 | |
*** kevnfx has quit IRC | 17:48 | |
*** kevnfx_ is now known as kevnfx | 17:48 | |
jk0 | tr3buchet: how did you and sandywalsh test your pause/unpause API? | 17:50 |
jk0 | did you guys write up any docs on that? | 17:50
tr3buchet | no | 17:50 |
tr3buchet | we tested using the cloudservers api | 17:51
jk0 | ah ok, thanks | 17:51 |
tr3buchet | he added pause and unpause | 17:51 |
tr3buchet | if you pull from his branch | 17:51 |
openstackhudson | Project nova-tarmac build #45,496: ABORTED in 2 hr 5 min: http://hudson.openstack.org/job/nova-tarmac/45496/ | 17:53 |
jaypipes | dendrobates: can you pm me the server creds to log into the hudson box that runs the tarmac job? I need to modify nova-test.sh to output more verbose stuff... | 17:54 |
*** joearnold has joined #openstack | 17:55 | |
*** jkakar has joined #openstack | 17:57 | |
dendrobates | jaypipes: I don't have credentials on that box. soren and mtaylor do. | 17:58 |
jaypipes | dendrobates: k. | 17:58 |
*** westmaas has quit IRC | 17:58 | |
*** jdurgin has joined #openstack | 17:59 | |
openstackhudson | Project nova build #313: SUCCESS in 1 min 14 sec: http://hudson.openstack.org/job/nova/313/ | 17:59 |
*** ccustine has joined #openstack | 18:00 | |
dendrobates | jaypipes: if you send me the changes I'll run them locally and paste the results | 18:11 |
jaypipes | dendrobates: I don't know what test_nova.sh looks like :( | 18:12 |
dendrobates | jaypipes: right, but since I seem to be able to reproduce it, you could try editing run_tests.py to get what you want | 18:13 |
jaypipes | dendrobates: how can you reproduce it? | 18:13 |
*** kevnfx_ has joined #openstack | 18:14 | |
*** kevnfx has quit IRC | 18:14 | |
*** kevnfx_ is now known as kevnfx | 18:14 | |
dendrobates | I get the same result, hanging at 100% of a cpu just running the test locally | 18:14 |
dendrobates | all other branches work fine | 18:14 |
dendrobates | only yours hangs | 18:15 |
jaypipes | dendrobates: ah, when not running in a VM? | 18:15 |
dendrobates | yes | 18:15 |
jaypipes | dendrobates: s/VM/virtenv | 18:15 |
dendrobates | yes | 18:15 |
jaypipes | dendrobates: gotcha. lemme see what I can uncover. | 18:15 |
*** daleolds has joined #openstack | 18:19 | |
*** joearnol_ has joined #openstack | 18:20 | |
*** joearnold has quit IRC | 18:23 | |
eday | vish1: where do things stand with the nested rpc stuff? all fixed, or is the fakerabbit one still hanging? | 18:24 |
vish1 | achew22: it is because of a recent dependency on cheetah; apt-get install python-cheetah will get you going. I've updated the github novascript to include it | 18:24
vish1 | eday: fakerabbit is still hanging. Termie suggested that we probably need to start a greenthread for each call (i was going to experiment with that in fakerabbit this morning) | 18:25 |
*** vish1 is now known as vishy | 18:25 | |
*** hggdh has quit IRC | 18:25 | |
*** hggdh has joined #openstack | 18:26 | |
eday | vishy: ahh, ok | 18:27 |
eday | let me know if I can help | 18:27 |
vishy | tr3buchet: same issue ^^ python cheetah dependency | 18:28 |
*** kevnfx has quit IRC | 18:28 | |
vishy | hah I should have finished scrollback, looks like jk0 beat me to it | 18:29 |
*** kashyapc_ has quit IRC | 18:29 | |
jk0 | :) | 18:29 |
mtaylor | jaypipes: I have credentials everywhere | 18:31 |
jaypipes | mtaylor: hey :) | 18:32 |
jaypipes | mtaylor: trying to figure out what the heck is going on when merging my i18n-strings branch... | 18:32 |
mtaylor | jaypipes: it hates bunnies | 18:32 |
jaypipes | mtaylor: in virt_env, works flawlessly, outside of virt_env, hangs with cpu 100%... | 18:32 |
mtaylor | jaypipes: well, fwiw, the sum total of test_nova.sh is: | 18:33 |
mtaylor | pep8 --repeat --show-pep8 --show-source bin/* nova && python run_tests.py && nos | 18:33 |
mtaylor | etests -w nova/tests/api && python setup.py sdist | 18:33 |
mtaylor | except all on one line | 18:33 |
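For readability, here is mtaylor's paste rejoined onto one line — the entirety of test_nova.sh as run by the tarmac job:

```shell
pep8 --repeat --show-pep8 --show-source bin/* nova && python run_tests.py && nosetests -w nova/tests/api && python setup.py sdist
```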
jaypipes | mtaylor: k, I'm trying to diagnose on my local machine...I'll let you know if I get stuck any further... | 18:34
mtaylor | jaypipes: ok. | 18:34 |
*** kevnfx has joined #openstack | 18:34 | |
*** kevnfx_ has joined #openstack | 18:35 | |
*** kevnfx has quit IRC | 18:35 | |
*** kevnfx_ is now known as kevnfx | 18:35 | |
*** nelson__ has quit IRC | 18:50 | |
*** kevnfx_ has joined #openstack | 18:56 | |
*** kevnfx has quit IRC | 18:56 | |
*** kevnfx_ is now known as kevnfx | 18:56 | |
*** joearnol_ has quit IRC | 18:57 | |
soren | mtaylor: Did you see my question last night about permissions on the hudson box? | 18:58 |
mtaylor | soren: nope, sorry. missed it. what's up? | 18:58 |
soren | 22:10 <+soren> -rwxr-x--- 1 hudson adm 26338 Sep 15 17:08 /usr/lib/python2.6/os.py | 18:59 |
soren | 22:10 <+soren> -rwxr-x--- 1 hudson adm 26303 Dec 7 17:20 /usr/lib/python2.6/os.pyc | 18:59 |
soren | For instance. | 18:59 |
soren | There's a bunch like it. | 18:59 |
*** kevnfx has quit IRC | 19:05 | |
soren | mtaylor: Any idea what the heck that is all about? | 19:09
mtaylor | uh. no | 19:10 |
mtaylor | that's... very weird | 19:10 |
soren | mtaylor: They all have the same timestamps. | 19:12 |
soren | Waay in the past. There's no way they've been that way since Sep 15. | 19:13
soren | I've most certainly run python stuff as myself on there since then. | 19:13 |
soren | ...and that's not possible now. | 19:13 |
soren | Er... Well, I guess a few things work. | 19:13 |
soren | bzr doesn't at all, for instance (due to not being able to read os.py). | 19:13
mtaylor | soren: what the hell | 19:16 |
soren | What the hell indeed. | 19:16 |
soren | Dec 7 may be accurate. | 19:16 |
soren | Dec 7 17:31:20 openstack-hudson sudo: mordred : TTY=pts/0 ; PWD=/home/mordred ; USER=root ; COMMAND=/bin/su - | 19:18 |
* soren glances at mtaylor | 19:18 | |
mtaylor | why would I have done a chown/chmod on /usr/lib though... | 19:19 |
soren | Evil? | 19:19 |
soren | Malice? | 19:19 |
mtaylor | possibly | 19:19 |
soren | I don't know. | 19:19 |
soren | Bunch of half interesting things in dpkg.log from Dec 7 at exactly that time :) | 19:19
*** sophiap has quit IRC | 19:20 | |
soren | mtaylor: Hey, that's the day you upgraded to Maverick, wasn't it? | 19:21 |
*** WonTu has joined #openstack | 19:22 | |
vishy | eday: not making a whole lot of progress here, I might just wait for termie to hack on it. My knowledge of eventlet/greenthreads is clearly not complete enough. | 19:22 |
*** WonTu has left #openstack | 19:22 | |
eday | vishy: if you want to push a branch I can look | 19:23 |
mtaylor | soren: oh - yes it was! | 19:23 |
vishy | eday: lp:~/vishvananda/nova/move-ip-allocation - try python run_tests.py RpcTestCase | 19:24 |
soren | mtaylor: It seems to be contained to just those files. | 19:25 |
mtaylor | soren: just os.py and os.pyc ? | 19:25 |
vishy | and of course there is an extra slash between ~ and vishvananda | 19:25 |
soren | mtaylor: No, err.. | 19:25 |
soren | mtaylor: Just stuff from python2.6-minimal. | 19:25 |
henrichrubin | hi, does anyone have any comments on this? https://blueprints.launchpad.net/nova/+spec/frontend-heterogenous-architecture-support | 19:25 |
mtaylor | soren: that's really f-ing messed up | 19:25 |
eday | vishy: looking | 19:26 |
vishy | henrichrubin: i don't see why you need to add architecture to instance types | 19:26 |
*** larstobi has joined #openstack | 19:26 | |
vishy | it should be based on the image | 19:26 |
vishy | and perhaps stored in the instance table | 19:27 |
henrichrubin | if we have several different architectures, then we have to check if the instance matches the physical host | 19:27 |
soren | mtaylor: Oh, we can't use the timestamps. | 19:27 |
vishy | right but this is not an instance type requirement | 19:27 |
henrichrubin | and the image matches both as well | 19:27 |
vishy | this is an image type requirement | 19:28 |
vishy | instance type is how much ram/cpu/disk space the instance should get | 19:28 |
soren | mtaylor: Ha hah! | 19:28 |
vishy | that should be orthogonal to the type of processor it is running | 19:28 |
mtaylor | soren: found it? | 19:28 |
soren | mtaylor: ctime for those files is 17:28 on Dec 7. | 19:28 |
soren | mtaylor: Looking at auth.log, there's some su'ing around that time. | 19:29 |
henrichrubin | understand. but how can the scheduler determine which node to run on? | 19:29 |
vishy | henrichrubin: it looks at the architecture of the image and schedules to a matching node | 19:30 |
henrichrubin | i think a HOSTS table is needed that contains information about a physical host | 19:30 |
eday | vishy: how was that last failing for you? I get exceptions.AttributeError: 'WaitMessage' object has no attribute 'result' | 19:30 |
vishy | henrichrubin: absolutely, I would initially add it to the service table | 19:30 |
vishy | eday: yeah that is it although occasionally it just hangs | 19:30
henrichrubin | or host_arch could be specified in nova.conf | 19:30 |
eday | vishy: ok, just making sure | 19:31 |
vishy | henrichrubin: yes you could do it with a flag as well. That is how i'm specifying network_host in one of my branches, but I think eventually we will need more data about a host as you are saying. | 19:31 |
soren | mtaylor: Getting closer. | 19:32 |
soren | mtaylor: The ctime for all the fucked files are Dec 7 17:28:37-17:28:38. | 19:33 |
soren | mtaylor: From dpkg.log: | 19:33 |
soren | Er.. | 19:33 |
* vishy hands soren a magnifying glass | 19:33 | |
vishy | go, sherlock! | 19:33 |
soren | 2010-12-07 17:28:33 status half-configured hudson 1.388 | 19:34 |
soren | 2010-12-07 17:28:38 status installed hudson 1.388 | 19:34 |
ttx | if sherlock could unfuck the i18n merge, I'd buy him a new pipe. | 19:34 |
soren | Busted! | 19:34 |
* soren goes to stare at hudson. | 19:34 | |
jaypipes | ttx: hell, I'd buy him two. | 19:35 |
henrichrubin | vishy: thanks. i'll have to investigate how the scheduler knows the image's architecture, before it selects a node. any idea? | 19:35 |
* ttx stares at third-party packaging with usual arrogance of distribution developers | 19:35 | |
soren | hudson's postinst does indeed chown and chmod a bunch of things. | 19:35 |
* ttx sighs | 19:35 | |
* mtaylor sighs at the hudson packaging | 19:36 | |
soren | It seems to be good about it, though. | 19:36 |
soren | find /var/lib/hudson -path "*jobs" -prune -o -exec chown hudson:adm {} + || true | 19:36 |
*** opengeard has quit IRC | 19:36 | |
soren | That looks benign to me. | 19:36 |
*** Abd4llA has joined #openstack | 19:36 | |
mtaylor | soren: symlink traversal perhaps? | 19:37 |
soren | Does running tests create symlinks? | 19:37 |
soren | Maybe. | 19:37 |
soren | I'm not sure. | 19:37 |
mtaylor | perhaps if something is creating a venv | 19:37 |
soren | I hate those things. | 19:38 |
vishy | henrichrubin: yeah it is a little tough because we don't have images in the datamodel. I would add a field to the instances table, and modify compute.api to set the field it when it retrieves data about the image in create_image. | 19:38 |
mtaylor | that might symlink to the /usr/lib python lib? | 19:38 |
soren | It's possible. | 19:38 |
soren | That would certainly explain it. | 19:38 |
soren | Bah. It seems there's no compromise involved. | 19:39 |
soren | Any objection to my just reinstalling python2.6-minimal? | 19:39 |
vishy | soren: add -h to chown? | 19:41 |
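vishy's `-h` suggestion matters because chown and chmod dereference symlinks by default, so a blanket permission sweep over /var/lib/hudson can rewrite files far outside it. A toy reproduction of that traversal (using `os.chmod`, which follows symlinks the same way; paths are illustrative and no root is needed):

```python
import os
import stat
import tempfile

tmp = tempfile.mkdtemp()
outside = os.path.join(tmp, "os.py")    # stands in for /usr/lib/python2.6/os.py
managed = os.path.join(tmp, "managed")  # stands in for /var/lib/hudson
os.mkdir(managed)
open(outside, "w").close()
os.chmod(outside, 0o644)
os.symlink(outside, os.path.join(managed, "link"))

# a postinst-style permission sweep over the "managed" tree...
for root, dirs, files in os.walk(managed):
    for name in dirs + files:
        os.chmod(os.path.join(root, name), 0o750)  # dereferences the symlink

# ...has silently changed the file the link points at, outside the tree:
print(oct(stat.S_IMODE(os.stat(outside).st_mode)))  # 0o750
```

GNU chown's `-h`/`--no-dereference` makes it act on the link itself instead, which is why it would have protected the python files here.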
ttx | vishy: not sure soren wants to work on Hudson packaging | 19:41 |
soren | I'm sure I don't. | 19:42 |
soren | mtaylor: 19:39 <+soren> Any objection to my just reinstalling python2.6-minimal? | 19:42 |
mtaylor | soren: not at all | 19:43 |
soren | Rock. | 19:43 |
mtaylor | packaging java for debian still royally sucks | 19:43 |
soren | Python works again. | 19:43 |
soren | Yay. | 19:43 |
mtaylor | yay! | 19:43 |
soren | I'd love to blame this on Java, but I can't really. | 19:43 |
henrichrubin | vishy: thanks. i'll take a stab at it. only difficulty i see is the scheduler. | 19:43 |
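The arch-aware scheduling vishy sketches above could look roughly like the following. This is illustrative only — the dicts and the `schedule()` helper are made up, not nova's actual data model: the idea is that the image's architecture gets recorded on the instance at create time, and the scheduler filters candidate hosts down to those reporting a matching arch.

```python
def schedule(image, hosts):
    # keep only hosts whose reported architecture matches the image's
    candidates = [h for h in hosts if h["arch"] == image["arch"]]
    if not candidates:
        raise RuntimeError("no host matches arch %r" % image["arch"])
    # naive placement among the matches: least-loaded host wins
    return min(candidates, key=lambda h: h["instances"])

hosts = [
    {"name": "node1", "arch": "x86_64", "instances": 3},
    {"name": "node2", "arch": "arm",    "instances": 0},
    {"name": "node3", "arch": "x86_64", "instances": 1},
]
print(schedule({"arch": "x86_64"}, hosts)["name"])  # node3
```

The host-side arch could initially come from a flag (like network_host) or from the service table, as discussed above.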
mtaylor | well, it's marginally related - it's so hard to package properly | 19:44
soren | gah | 19:44 |
mtaylor | so it means we're left with things like hudson which are packaged "good enough" but not really | 19:44 |
*** henrichrubin has quit IRC | 19:45 | |
*** henrichrubin has joined #openstack | 19:45 | |
soren | jaypipes: bzr on the hudson box should be functional now. | 19:45 |
*** westmaas has joined #openstack | 19:50 | |
*** henrichrubin has quit IRC | 19:55 | |
*** henrichrubin has joined #openstack | 19:55 | |
*** hazmat has joined #openstack | 19:56 | |
soren | jaypipes: It looks very much rpc related to me. | 19:57 |
*** sophiap has joined #openstack | 20:00 | |
ttx | Team meeting in one hour in #openstack-meeting ! | 20:00 |
vishy | eday: out for lunch. msg me if you eureka | 20:04 |
eday | vishy: I think I see the problem, working on a fix | 20:06 |
eday | backend is assuming only 1 consumer at a time | 20:06 |
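A hypothetical sketch (not the actual fakerabbit code) of the failure mode and the shape of eday's fix: if one consumer loop serves every topic, a handler that issues a nested rpc.call() blocks the very loop it is waiting on. Giving each consumer its own worker — here plain threads stand in for the greenthread-per-consumer termie suggested — lets the nested call complete.

```python
import queue
import threading

class FakeBus:
    """Toy in-memory message bus with one worker per topic."""

    def __init__(self):
        self.topics = {}

    def register(self, topic, handler):
        # dedicated worker thread per topic, so no handler can starve
        # another topic's consumer loop
        q = queue.Queue()
        self.topics[topic] = q

        def worker():
            while True:
                msg, reply = q.get()
                reply.put(handler(msg))

        threading.Thread(target=worker, daemon=True).start()

    def call(self, topic, msg):
        # synchronous call: enqueue the message, block on the reply
        reply = queue.Queue()
        self.topics[topic].put((msg, reply))
        return reply.get()

bus = FakeBus()
bus.register("network", lambda n: n * 2)
# "compute" makes a nested call to "network" on the same host --
# exactly the pattern that hung when one loop served every consumer
bus.register("compute", lambda n: bus.call("network", n) + 1)
print(bus.call("compute", 20))  # 41
```

With a single shared consumer loop, the nested call would enqueue a message that nothing can drain until the outer handler returns — which it never does.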
*** miclorb_ has joined #openstack | 20:16 | |
rbergeron | /win 48 | 20:18 |
eday | wow, 48? | 20:24 |
* eday thought 20 was a lot | 20:24 | |
*** reldan has quit IRC | 20:29 | |
*** HouseAway is now known as AimanA | 20:40 | |
soren | eday: Really? This is /win 61. #rabbitmq is /win 175. | 20:44 |
eday | soren: heh, I like to keep a tidy set of channels I guess | 20:46 |
*** dragondm has quit IRC | 20:48 | |
*** iammartian has left #openstack | 20:49 | |
*** dragondm has joined #openstack | 20:50 | |
*** zaitcev has joined #openstack | 20:51 | |
*** rcc has quit IRC | 20:52 | |
eday | vishy: fixed it :) | 20:54
dendrobates | openstack meeting in 4 min in #openstack-meeting | 20:56 |
dendrobates | ttx lost internet, so I'll be filling in. | 20:57 |
vishy | eday: nice! plz to have the fix? | 20:57 |
* jeremyb wonders why there's just a big combined channel for both nova and swift | 20:58 | |
eday | pushing now, one sec | 20:58 |
*** Ryan_Lane has joined #openstack | 20:58 | |
soren | jeremyb: If it gets overwhelming, we can split them. For now, there's really little point. | 20:58 |
jeremyb | soren: i have sometimes been overwhelmed | 20:59 |
soren | jeremyb: We also don't split developer talk from user help. | 20:59 |
jeremyb | (i come from the swift interest first, maybe nova in 3-6 months time perspective) | 20:59 |
jeremyb | right, i don't think it's a user vs. dev issue i'm seeing | 20:59 |
* vishy likes soren when he's on vacation | 20:59 | |
jeremyb | was mostly just wondering if it had ever been discussed before and what the rationale was | 21:00 |
jeremyb | Ryan_Lane: you missed the meeting announce... starting any sec in #openstack-meeting (FYI) | 21:00 |
soren | jeremyb: No no, I'm not suggesting that. I'm just pointing another sort of split that we also don't do. | 21:00 |
jeremyb | soren: right | 21:01 |
vishy | I'm pretty good at blocking out the swift stuff | 21:01 |
soren | jeremyb: It's a volume question, really. Until there's enough volume to sustain two channels, we'd rather try to grow synergy or something. | 21:01 |
vishy | If i see a bunch of comments by notmyname, I scroll through them quickly :p | 21:01 |
eday | vishy: lp:~eday/nova/fakerabbit-fix | 21:01 |
jeremyb | yeah, volume is exactly why i brought it up | 21:01 |
notmyname | vishy: where's the love? | 21:02 |
vishy | eday: thx do you want to propose for merge separately? or should i just merge it and propose with mine? | 21:02 |
soren | vishy: "bzr lp-open lp:~eday/nova/fakerabbit-fix" <--- not sure if you know that trick | 21:02 |
eday | vishy: merge into yours and check it out | 21:02 |
soren | mtaylor: <kick> 21:02 < dendrobates> ACTION: mtaylor to bring down the openstack-discuss group | 21:02 |
vishy | soren: no didn't know that | 21:02 |
soren | vishy: There's also "bzr merge --preview 21:02 < dendrobates> ACTION: mtaylor to bring down the openstack-discuss group | 21:03 |
soren | Gah. I suck at cut-and-paste. | 21:03 |
soren | vishy: There's also "bzr merge --preview lp:~eday/nova/fakerabbit-fix" from another branch. | 21:03 |
vishy | soren: weird the lp open is trying to make me login | 21:05 |
vishy | with links | 21:05 |
*** jbaker has quit IRC | 21:06 | |
soren | vishy: Ah. | 21:06 |
Ryan_Lane | I'm in SLC airport :) | 21:06 |
vishy | eday: you're a star. It works with RpcTestCase and CloudTestCase | 21:07 |
vishy | eday: ship it! | 21:07 |
notmyname | jeremyb: I think this channel has been about 70/30 split for nova/swift conversations, so the nova guys tune us out and the swift guys show up when someone mentions swift. | 21:07 |
Ryan_Lane | I also don't seem to be able to join | 21:07 |
jeremyb | Ryan_Lane: you're in now | 21:07 |
eday | vishy: hehe, glad it works for you too | 21:08 |
soren | jaypipes: That branch from eday may fix your stuff, too. | 21:08 |
*** Ryan_Lane_ has joined #openstack | 21:09 | |
*** Ryan_Lane has quit IRC | 21:12 | |
*** reldan has joined #openstack | 21:12 | |
*** kevnfx has joined #openstack | 21:19 | |
*** Ryan_Lane_ has quit IRC | 21:24 | |
vishy | soren: are the upstart scripts working in the ppa? | 21:25 |
*** johnpur has quit IRC | 21:25 | |
soren | Mostly. | 21:25 |
soren | vishy: There's some weirdness with the objectstore one that I can't completely work out. | 21:26 |
vishy | soren: ah, good ol' objectstore | 21:26 |
vishy | soren: someone needs to write an extension/proxy to glance to allow for uploading bundles and registering images | 21:27 |
vishy | soren: so we can kill it | 21:27 |
soren | Isn't that what Glance does. | 21:27 |
soren | ? | 21:27 |
vishy | soren: i don't think it replicates the S3 api or knows how to decrypt manifests/bundles | 21:28 |
soren | Oh. | 21:28 |
soren | :( | 21:28 |
vishy | eday: hmm proposing based on yours didn't work so well :) | 21:29 |
alekibango | someone, when will this work? https://blueprints.launchpad.net/nova/+spec/instance-migration | 21:29
*** Abd4llA has quit IRC | 21:29 | |
*** masumotok_ has joined #openstack | 21:29 | |
vishy | alekibango: don't think it is targeted yet | 21:31
*** jdarcy has quit IRC | 21:31 | |
alekibango | imho thats one of really important things, along with restart after reboot :) | 21:32 |
eday | vishy: hmm? I would just merge it into yours and propose yours | 21:32 |
vishy | eday: yeah that is what i did. I didn't realize that you had based it off my branch | 21:32 |
vishy | :) | 21:32 |
*** DubLo7 has quit IRC | 21:33 | |
alekibango | vishy: i would try to help with the reboot issue, as it bites me now, if you would give me some tips | 21:33 |
eday | vishy: that should still work, it should only merge in my commit | 21:33
jk0 | anyone up for a review? :) https://code.launchpad.net/~jk0/nova/diagnostics-per-instance/+merge/44394 | 21:34 |
vishy | eday: yeah but it doesn't work to put in your branch as a prereq cuz it shows a 2 line diff. :) | 21:35
*** guynaor has left #openstack | 21:35 | |
eday | ahh | 21:35 |
*** reldan has quit IRC | 21:36 | |
*** kevnfx has left #openstack | 21:37 | |
*** reldan has joined #openstack | 21:38 | |
*** hggdh has quit IRC | 21:39 | |
alekibango | it would be cool to have a functional test telling us if after a merge nova would still work | 21:40
*** gondoi_ has joined #openstack | 21:40 | |
alekibango | automated on changes on trunk and merge requests | 21:41 |
*** gondoi_ has quit IRC | 21:41 | |
eday | alekibango: we talked about that at the summit, and there is a blueprint, but no one has time to work on it yet | 21:41 |
alekibango | i think that should be high priority for nova... | 21:41 |
eday | alekibango: me too, as do others | 21:42 |
*** hggdh has joined #openstack | 21:42 | |
alekibango | i am willing to help this happen, but i dont feel confident yet to lead on this | 21:43 |
alekibango | so if you will need help, ask me | 21:43 |
alekibango | eday: maybe we should together organize some sprint for this | 21:44 |
alekibango | or at least little talk/etherpad session for start | 21:44 |
eday | alekibango: jaypipes was leading this at the summit, may want to talk with him too | 21:44
alekibango | jaypipes: automated functional tests! talk to me if you will need help | 21:45 |
*** irahgel has quit IRC | 21:45 | |
*** kevnfx has joined #openstack | 21:46 | |
alekibango | eday: now i can install nova clusters automatically in ~300-400 seconds (using fai) | 21:47
alekibango | on real hw | 21:47 |
alekibango | that might be way to test it somehow | 21:47 |
vishy | alekibango: lp:~vishvananda/nova/fix-reboot | 21:48 |
alekibango | :) | 21:48 |
vishy | (untested but i think i got everything) | 21:48 |
alekibango | will test | 21:48 |
vishy | thx | 21:48 |
soren | alekibango: We don't really need real hardware, though. | 21:48 |
soren | alekibango: That's why I added the UML backend. | 21:48 |
soren | alekibango: So that we could actually test everything in the cloud | 21:49 |
alekibango | well, you do if you also want to test xen and kvm | 21:49 |
alekibango | its sweet :) | 21:49 |
soren | And VMWare and blah and foo and bar.. | 21:49 |
soren | Sure, hardware is needed for that. | 21:49 |
alekibango | if someone will donate few servers, i could help by installing/configuring fai there | 21:50 |
soren | ..but the vast majority of Nova doesn't care about the differences between UML and KVM. | 21:50 |
alekibango | can install debian or ubuntu servers with nova -- and test even more platforms | 21:50 |
alekibango | even fedora etc | 21:50 |
alekibango | soren: i agree - but i also noted how hard it is for xen people to get reviews | 21:51
soren | alekibango: That's really an artifact of poor unit tests (which they are addressing). | 21:51 |
soren | In an ideal world, to review some new code, I would only have to review the unit tests. | 21:52 |
alekibango | well, unit tests can test everything | 21:52 |
alekibango | cant* | 21:52 |
alekibango | :) | 21:52 |
soren | If I believe they're correct and they pass, everything should be fine. | 21:52 |
eday | alekibango: see smoketests for some other integration tests, I don't think they work anymore though | 21:53 |
soren | eday: They do. | 21:53 |
eday | oh, cool | 21:53 |
soren | eday: They were fixed reasonably recently. | 21:53 |
soren | eday: r456 | 21:53 |
eday | wasnt sure if that had hit trunk | 21:53 |
alekibango | well still nothing beats functional tests :) | 21:54 |
alekibango | and automated ones are great move forward | 21:54 |
soren | Definitely! | 21:54 |
alekibango | thats why i am talking about them... i feel the need - even reading those tests would help people to use nova correctly :) | 21:55
alekibango | so they will help documenting | 21:55 |
soren | Yeah. | 21:56 |
alekibango | i see strong synergy possibilities with automated functional tests... :) | 21:56 |
soren | vishy: Ok, the next packages will be teh awesome. | 21:57 |
soren | vishy: nova-objectstore now plays nicely. | 21:57 |
vishy | soren: hawt | 21:58 |
* soren climbs back under his rock | 21:58 |
*** nelson__ has joined #openstack | 21:58 | |
*** irahgel has joined #openstack | 21:59 | |
*** jbaker has joined #openstack | 22:00 | |
*** ctennis has quit IRC | 22:03 | |
*** westmaas has quit IRC | 22:05 | |
nelson__ | a question I can't find an answer to on openstack.com, or on the wiki: who is selling commercial support for openstack? | 22:07 |
zykes- | nelson__: you can buy Midokura stack | 22:08
zykes- | which is supported - based on openstack | 22:08 |
vishy | nelson__: various companies are doing support | 22:08 |
zykes- | or you can wait until canonical starts to support it | 22:08 |
nelson__ | Okay, so why are they so shy? | 22:08 |
zykes- | or vishy :p | 22:08 |
alekibango | vishy: will test tomorrow, my body needs some sleep | 22:08 |
vishy | anso labs (the company i work for) is one. | 22:08 |
vishy | alekibango: ok let me know if it works and i'll propose for review | 22:09 |
zykes- | vishy: ain't you working for rspace ? | 22:09 |
nelson__ | why isn't there a page in the wiki entitled "commercial support"? | 22:09 |
alekibango | vishy: will do | 22:09 |
vishy | zykes-: no, | 22:09 |
zykes- | ok | 22:09 |
vishy | zykes-: I work for anso, which is the company that does the development on NASA's cloud | 22:10 |
jeremyb | to clarify, i think nelson__ is primarily interested in swift atm | 22:10 |
jeremyb | not support for ryan's compute nodes, right? | 22:10 |
nelson__ | right. | 22:10 |
vishy | nelson__: ah, we are primarily nova experts, but we will be offering wider support as well | 22:10 |
vishy | nelson__: the reason there isn't more info about commercial support is openstack is still very young, so most companies are still organizing their support | 22:11 |
annegentle | nelson__: yep, we're young but definitely looking at ways to provide support/services at a pro-level. what would you have in mind? | 22:12 |
annegentle | nelson__: for the wiki page? Just an explanation of the current state? | 22:12 |
vishy | nelson__: it sounds like it would be something that would be great to add to the wiki :) | 22:12 |
vishy | nelson__: if you are at liberty to disclose, which company do you work for? | 22:13 |
nelson__ | annegentle: see http://qmail.org/top.html#paidsup | 22:13 |
nelson__ | vishy: contracting. | 22:13 |
annegentle | nelson__: ah qmail, she says fondly. Such polite error messages they have. | 22:14 |
nelson__ | hehe. | 22:14 |
nelson__ | yeah, before people got used to them, they thought a person was apologizing for failure to deliver. :) | 22:15 |
nelson__ | annegentle: that's the kinda of page I was looking for for openstack swift. | 22:15 |
alekibango | nelson__: see how helpful those people are? you call that no support? take your time to try calling microsoft once, heh... | 22:15
*** Ryan_Lane has joined #openstack | 22:16 | |
alekibango | nelson__: if you will wait few months, there will be lots of companies supporting openstack, thats sure... its still so young | 22:18 |
nelson__ | "still so young"?? Who would want to adopt brand new code?? | 22:19 |
* nelson__ runs away screaming. | 22:19 | |
Xenith | I'm certainly pushing for my company to support openstack. Would be nice to add that to our contracting services. | 22:19 |
* nelson__ teaches alekibango how not to market his software. :) | 22:20 | |
alekibango | its not my software yet ... but potentially will be, as my merge request is waiting for review | 22:20
alekibango | :D | 22:21 |
nelson__ | :D | 22:21 |
annegentle | nelson__: while it's not fair to compare swift and nova ( just like you shouldn't compare children!), nova's experiencing more code enhancements right now | 22:21 |
dabo | nelson__: swift has been used in production for Rackspace for some time. It's the OpenStack project that's "so young" | 22:21 |
alekibango | nelson__: there are some big companies who love openstack... http://www.openstack.org/community/ | 22:21 |
Ryan_Lane | I'm a fan so far :) | 22:22 |
alekibango | dabo: right, i still think of nova when i say openstack | 22:22 |
nelson__ | dabo: yes, I know. just picking on alekibango. otherwise I really WOULD be running away screaming. | 22:22 |
Ryan_Lane | nelson__: see even WMF loves it :D | 22:22 |
annegentle | nelson__: hee hee | 22:22 |
alekibango | nelson__: :) | 22:22 |
Ryan_Lane | nelson__: ;) | 22:22 |
Ryan_Lane | of course I'm using nova | 22:22 |
Ryan_Lane | (nelson__ is also working with WMF :D ) | 22:23 |
alekibango | nelson__: but fact is, nova is missing a few important features to be useful in production. but i believe bexar release will be the bomb | 22:23
alekibango | if we will not fail | 22:23 |
Ryan_Lane | alekibango: will it have persistent images? | 22:23 |
alekibango | Ryan_Lane: i for one hope for sheepdog integration | 22:23 |
alekibango | or similar system | 22:24 |
jk0 | would anyone in core mind reviewing https://code.launchpad.net/~jk0/nova/diagnostics-per-instance/+merge/44394 ? | 22:24 |
Ryan_Lane | some storage architecture direction would be nice :) | 22:24 |
*** ctennis has joined #openstack | 22:25 | |
Ryan_Lane | I also like to see vm migration | 22:25 |
alekibango | live migration? | 22:25 |
alekibango | or migration | 22:25 |
Ryan_Lane | I'd* | 22:25 |
*** gondoi has quit IRC | 22:25 | |
Ryan_Lane | both | 22:25 |
alekibango | yes | 22:25 |
Ryan_Lane | if the storage architecture is shared, storage migration isn't as big of a deal | 22:25 |
alekibango | then we could implement live rescheduler :) | 22:25 |
zykes- | isn't there a ceph integration thing going on ? | 22:25 |
alekibango | yes it is | 22:25 |
zykes- | ;p | 22:26 |
alekibango | but ceph is still highly experimental code | 22:26 |
zykes- | for bexar ? | 22:26 |
zykes- | ah | 22:26 |
alekibango | not sure if for bexar | 22:26 |
alekibango | but someone is testing it | 22:26
alekibango | browse irc logs to find who | 22:26 |
alekibango | (keyword rbd or ceph) | 22:26 |
*** _skrusty has quit IRC | 22:27 | |
Ryan_Lane | sheepdog would be really nice :) | 22:29 |
Ryan_Lane | looks like it supports snapshots and thin provisioning | 22:29 |
Ryan_Lane | and cloning \o/ | 22:29 |
alekibango | yes yes | 22:29 |
alekibango | and its fast, reliable | 22:29 |
alekibango | and decouples cpu from harddrive | 22:30 |
alekibango | so you can have diskless nodes | 22:30 |
alekibango | for nova-compute | 22:30 |
alekibango | thats my goal for next few months | 22:30 |
Ryan_Lane | awesome | 22:31 |
Ryan_Lane | awesome it has conversion support for existing images too | 22:32 |
alekibango | yes i love it and even shaun the sheep loves this dog | 22:36 |
*** _skrusty has joined #openstack | 22:41 | |
alekibango | btw the man talking about testing ceph/rbd was KyleM1... | 22:41 |
zykes- | ok: ) | 22:43 |
zykes- | is sheepdogg support coming for bexar ? | 22:43 |
jeremyb | sheepdogg? | 22:43 |
jeremyb | oh | 22:43 |
zaitcev | I can almost guarantee that sheepdog support does not appear in Bexar. It was talked about but not for Bexar. | 22:44
*** kevnfx has quit IRC | 22:45 | |
zaitcev | Poke jcsmith about it. He tried to find a sensible way to attach it... I did not get details but they were thinking about emulating one of the NBD derivatives. | 22:45
zaitcev | The main problem is that sheepdog is only available in qemu. | 22:46 |
zaitcev | If your hypervisor uses qemu to emulate its devices, you're in luck. If not, no sheepdog for you. | 22:46 |
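In concrete terms, sheepdog volumes of that era were reachable only through qemu's block layer via a `sheepdog:` disk URI — something roughly like the following (the vdi name `test` is illustrative), which is why only qemu-based hypervisors such as KVM could use them:

```shell
# create a 10 GB sheepdog virtual disk, then attach it to a KVM guest
qemu-img create sheepdog:test 10G
kvm -drive file=sheepdog:test,if=virtio ...
```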
*** Ryan_Lane has quit IRC | 22:48 | |
zykes- | does kvm use qemu ? | 22:50 |
tr3buchet | can someone help me out with a merge? https://code.launchpad.net/~tr3buchet/nova/os_api_images/+merge/44181 | 22:50 |
*** kevnfx has joined #openstack | 22:51 | |
*** allsystemsarego has quit IRC | 22:52 | |
*** dendrobates is now known as dendro-afk | 22:55 | |
*** dirakx has quit IRC | 22:57 | |
*** bolapara has joined #openstack | 22:58 | |
openstackhudson | Project nova build #314: SUCCESS in 1 min 15 sec: http://hudson.openstack.org/job/nova/314/ | 22:59 |
*** termie_ is now known as termie | 23:00 | |
*** termie has joined #openstack | 23:00 | |
*** dendro-afk is now known as dendrobates | 23:01 | |
jk0 | tr3buchet: those spaces aren't gone yet :) | 23:04 |
*** seshu has left #openstack | 23:05 | |
mtaylor | soren: on it | 23:06 |
*** joearnold has joined #openstack | 23:07 | |
*** ppetraki has quit IRC | 23:08 | |
*** aliguori has quit IRC | 23:09 | |
uvirtbot | New bug: #693211 in swift "variable name collision in access processor" [High,In progress] https://launchpad.net/bugs/693211 | 23:21 |
*** joearnol_ has joined #openstack | 23:27 | |
*** joearnold has quit IRC | 23:28 | |
vishy | notmyname: no lack of love here, I'm just glad that you have the swift questions handled. I have my hands full with nova atm. | 23:29 |
*** MarkAtwood has quit IRC | 23:32 | |
*** MarkAtwood has joined #openstack | 23:32 | |
*** kevnfx has quit IRC | 23:36 | |
*** schisamo has quit IRC | 23:42 | |
*** jbaker has quit IRC | 23:53 | |
*** sirp1 has joined #openstack | 23:58 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!