Thursday, 2011-08-04

*** rods has quit IRC00:00
*** mnaser has quit IRC00:01
*** stewart has quit IRC00:03
*** dragondm has quit IRC00:04
*** Lynelle has quit IRC00:05
*** littleidea has joined #openstack00:08
*** stewart has joined #openstack00:12
*** mszilagyi has quit IRC00:12
*** FallenPegasus has quit IRC00:19
*** FallenPegasus has joined #openstack00:19
*** FallenPegasus is now known as MarkAtwood00:19
*** rchavik has joined #openstack00:19
*** rchavik has joined #openstack00:19
notmynameRickB17: the best example is to use the swift tool that comes with the swift code00:26
notmynameRickB17: I've got some other, not so full-featured, examples in my github account (eg https://github.com/notmyname/python_scripts/tree/master/cf_speed)00:27
RickB17thanks, i'll check it out00:29
*** huslage has quit IRC00:32
*** ton_katsu has joined #openstack00:34
*** worstadmin has joined #openstack00:37
*** netmarkjp has joined #openstack00:37
*** po has quit IRC00:38
*** littleidea has quit IRC00:38
*** cole has quit IRC00:42
*** netmarkjp has left #openstack00:44
*** stewart has quit IRC00:45
*** huslage has joined #openstack00:50
*** ncode has joined #openstack00:53
*** ncode has joined #openstack00:53
*** ncode has quit IRC00:53
*** worstadmin has quit IRC00:55
*** worstadmin has joined #openstack00:56
*** ncode has joined #openstack00:56
*** ncode has joined #openstack00:56
*** jdurgin has quit IRC00:58
*** maplebed is now known as maplebed|afk01:02
winston-dnotmyname : hi, may i ask you a question about swift?01:03
notmynamewinston-d: sure. I'll try to help01:03
winston-dnotmyname : thanks.  i am wondering what would happen if i doubled the number of partitions while the zones/devices remain the same.  Will that affect the availability of the service (there is existing data on Swift)?01:05
notmynamewinston-d: you're wandering into uncharted territory :-)01:08
*** mgius has quit IRC01:08
notmynameso as a general rule, changing the partition power of an existing cluster doesn't lose data, it just makes it unavailable until the cluster settles down. replication pretty much has to move everything somewhere else. more data == longer time01:09
notmynameit should be bad enough to strongly reconsider doing it01:09
notmynamehowever01:10
*** mgius has joined #openstack01:10
*** littleidea has joined #openstack01:11
notmynameerr..scratch that "however". I was considering if there might be a corner case. I don't think there is (in the 30 seconds I've spent thinking of it)01:12
notmynamewinston-d: so consider your cluster unavailable until all of the data can be moved all around01:13
winston-dnotmyname : actually someone did that here in their setup, and found that after doing that, the swift service became very unstable.  The intention was to test the behavior when expanding swift with more storage nodes & devices.  They then suspected that Swift doesn't scale well.  I strongly doubted that.  And I challenged their way of doing the test.01:14
notmynameif you keep the same partition count (ring partition power), adding zones and nodes is very easy. that's why we say choose a good partition power up front when you are building your cluster01:16
winston-dnotmyname : so can I say that the rule of thumb for extending/scaling swift is to make sure the partition size remain the same as original one?01:17
notmynamein this person's tests, did the swift cluster eventually settle back down?01:17
notmynamewinston-d: yes. the partition power should never change for the life of the whole cluster (as a general rule--ie don't do it unless you know exactly what you're doing)01:18
*** ccc11 has joined #openstack01:19
winston-dnotmyname : they did it last Saturday and said the service has not fully recovered.  I will check if it's fully settled down now.01:19
notmynameit should depend on the amount of data and the size of the internal network between the storage nodes01:20
winston-dnotmyname : what's the exact definition of 'good partition power'?  Let's say, I have only 5 storage nodes and each with 24 disks for now.  And I intend to extend it late this year or early next year to total 7 nodes with 24 disks each.  what's the 'good partition number' for this case?01:21
XenithSo I understand that RAID isn't really recommended with swift. But has anyone tried to build a swift deployment on top of FreeBSD and ZFS?01:21
notmynamegood partition power == smallest power of 2, N, such that 2**N > (number of storage volumes * 100) when your cluster is at maximum size01:23
notmynamewinston-d: so figure out how big your cluster can be (budget, DC space, etc) and go from there01:23
notmynamewinston-d: so with your numbers there, a power of 15 would be minimum (2**14 is 16384 which is < 7*24*100)01:25
notmynamebasically the target is roughly 100 partitions on each storage volume. that gives each partition about 1% of the space01:26
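As a quick sanity check of the sizing rule above, the arithmetic can be written down in a few lines of Python. This is only a sketch of the calculation notmyname describes (a target of roughly 100 partitions per disk at the cluster's maximum planned size, with the replica count not factored in); the node and disk counts are just the numbers from this conversation.

    import math

    def min_partition_power(nodes, disks_per_node, parts_per_disk=100):
        """Smallest ring partition power giving roughly parts_per_disk
        partitions per disk at the cluster's maximum planned size."""
        target = nodes * disks_per_node * parts_per_disk
        return int(math.ceil(math.log(target, 2)))

    print(min_partition_power(7, 24))    # -> 15  (2**14 = 16384 < 7*24*100)
    print(min_partition_power(50, 24))   # -> 17  (2**16 < 50*24*100 < 2**17)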
winston-dnotmyname : so you mean even if i have only 5 nodes for now, but in future, if it will grow to 50 nodes, then i should set partition power to 17? (2^16 < 50*24*100 < 2^17)01:26
notmynamewinston-d: exactly01:27
notmynamewinston-d: the disadvantage of setting the power too high is that ring management operations take longer and more space will be used by the fs for directory inodes01:28
*** kidrock has joined #openstack01:28
winston-dnotmyname : i'm confused.  if the partition number is set according to future size, then before i actually reach that size, each disk will have way more partitions than 100.  right?01:29
notmynameyes01:29
notmynameXenith: no one has done that and reported on it, that I'm aware of. I suspect that RAIDZ will have the same issues as RAID 5 or 6 in that small, random read/write loads (which swift does almost exclusively) will be bad for performance01:30
winston-dnotmyname : what if some day i find 50 nodes is too small and it needs to be extended to 100?  the partition power will have to be altered eventually.01:30
notmynamewinston-d: and there is the confounding factor of differing volumes having differing weights in the ring :-)01:30
notmynamewinston-d: build a 2nd cluster and migrate the data. or have _long_ periods of downtime while the cluster rebalances all the data01:31
winston-dnotmyname : hmm.  i have more questions now. :)01:33
notmynamethat's why I'm here :-)01:34
winston-dnotmyname : in the previous case, what if one can keep the size of each partition the same?  for 50 nodes w/ 24 disks each, the partition power is set to 17, and for 100 nodes, partition power set to 18.  Will swift still suffer service downtime while it rebalances data?01:37
notmynamewinston-d: the default partition power (ie what's in the docs) is 18. there probably isn't a problem with using that, even if you don't plan on getting that big. yes, it will be slightly less efficient, but probably not a problem01:37
*** osier has joined #openstack01:38
notmyname"keep the size of partition remains the same"?? I don't follow. I think you may have accidentally a word01:39
winston-dnotmyname : 17 for 50 nodes with 24 disks each will roughly keep one partition to 1% of disk size. and 18 for 100 nodes will do the same.  i am asking because i thought the change in partition size was the root cause of the downtime we saw here.01:41
notmynameah01:41
notmynamethe downtime has nothing to do with the percent of disk it uses01:41
notmynamethe partition power is used like this (python pseudocode):01:42
notmynamenode_list = md5_hash(obj_url)[:ring.partition_power]01:42
notmynameerr01:42
notmynamepartition = md5_hash(obj_url)[:ring.partition_power]01:43
notmynameand then use the partition to find the 3 nodes it goes on01:43
*** miclorb_ has quit IRC01:43
notmynameso if the partition power changes, everything is remapped in the cluster01:43
notmynameand the proxy will be looking in the new location but replication hadn't moved the data to the new location yet01:44
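For reference, a minimal sketch of the lookup being described. Swift's real ring code takes the top bits of an MD5 of the object path (and mixes in a configured hash suffix) rather than slicing a hex string, but the effect is the same: if the partition power changes, the partition number, and therefore the devices the proxy looks at, changes for nearly every object. The path and powers below are purely illustrative.

    import struct
    from hashlib import md5

    def partition_for(path, part_power):
        # Take the top part_power bits of the MD5 of the object path.
        digest = md5(path.encode('utf-8')).digest()
        return struct.unpack_from('>I', digest)[0] >> (32 - part_power)

    path = '/account/container/object'      # illustrative object path
    print(partition_for(path, 17))          # partition under the old power
    print(partition_for(path, 18))          # usually a different partition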
*** jakedahn has joined #openstack01:44
winston-dnotmyname : i see.01:44
winston-dnotmyname : now questions for 2nd cluster scaling.01:45
winston-dnotmyname : my understanding is 'cluster' here means a swift setup, right? if i have multiple swift clusters, then there should be some sort of cluster-level proxy to determine which cluster the data resides in?01:47
notmynameyes. by cluster I mean one logical, autonomous swift install01:48
notmynameany multi-cluster stuff you have (for HA or whatever) should be implemented on top of it. the auth system would be a great place, for example01:48
notmyname(well, HA is a little more complicated since you probably want synchronization in that case)01:49
winston-dnotmyname : yes, things will be way more complicated.01:49
notmynamedistributed systems always are :-)01:49
winston-dnotmyname : how can i migrate data between clusters? is there existing code in swift to do this?01:50
notmynamewinston-d: there is the container sync feature which could be used as part of a migration. other than that, it's GETs and PUTs with a fat pipe between them01:51
notmynamewinston-d: for multi-cluster, if you (wikimedia) had one cluster for A-M and another for N-Z, the auth system could return both endpoints and then your code could choose the right cluster based on the request01:53
notmyname(^ silly non-HA example)01:53
winston-dnotmyname : I also have some concern that a large partition power would affect performance, in this case 18 for 5 nodes x 24 disks?01:54
*** nati has joined #openstack01:55
notmynamering management operations would be affected (building, rebalancing, etc) but not standard, live ring operations. ring management is done off-line01:55
winston-dnotmyname : that means once the ring is settled down, everything should work as fast as a smaller partition power, say 16?01:57
notmynameyes, I think so01:57
winston-dnotmyname : that's good.  we may design a test case to verify that. :)01:58
*** kaz_ has quit IRC01:59
*** Cyns has joined #openstack02:00
*** miclorb_ has joined #openstack02:01
winston-dnotmyname: by the way, do you have some insights about how many operations per second can one storage node support (1GbE to proxy, 20 or 30 something disks)?  and how big is the cloud file cluster in rackspace? 50/100 nodes?02:01
*** huslage has quit IRC02:01
*** clauden has quit IRC02:03
notmynamewinston-d: I can't (not allowed to) say anything about Rackspace's size other than "billions of objects, petabytes of data". ops/sec depends a lot on your deployment (hardware, SSL, etc)02:03
*** kaz has joined #openstack02:03
catarrhineI would like to do a lab of swift and compute, what's a decent amount of nodes to be able to test all the features adequately? 8?02:04
winston-dnotmyname : i understand.  :)  thanks all the same.02:04
notmynamewinston-d: sorry I can't share more02:04
winston-dnotmyname : it's ok. totally understand.02:05
notmynamecatarrhine: 8 for both or 8 each?02:05
catarrhinewell I'm thinking auth node, proxy node, 3 nodes for storage and 3 for compute?02:06
*** Dboy has joined #openstack02:06
catarrhineI'm not sure how the two projects work together02:06
notmynamenot quite like that, unfortunately02:06
HugoKuo_morning XD02:07
DboyHave a nice day everyone02:07
DboyI want to ask something about the Glance and Swift02:07
notmynamecatarrhine: it all depends on what you are trying to show. for a multi-server swift setup, I'd say one proxy/auth and 4 storage servers02:08
catarrhineok02:08
*** cole has joined #openstack02:08
Dboyis there any tutorial about installing and configuring Glance and Swift?02:08
DboyI have 1 cloud controller where I installed the Glance API02:10
notmynamecatarrhine: but again, it depends on what you want to show. for example, you could run multiple storage servers (to simulate more than you really have) on one box. this is how we do dev work (all services running on one VM)02:10
Dboyand I have 1 host where I installed Swift Proxy02:11
DboyHow can I connect them to store my data02:11
*** mrrk has quit IRC02:12
catarrhinewhat about for Nova notmyname?02:12
notmynameno idea02:12
* notmyname passes the "swift answer guy" torch to someone else for the evening :-)02:13
*** GeoDud has quit IRC02:16
*** mattray has joined #openstack02:17
DboyHi everyone02:17
DboyI want to ask something about the Glance and Swift02:17
Dboyis there any tutorial about installing and configuring Glance and Swift?02:17
DboyI have 1 cloud controller where I installed the Glance API02:17
Dboyand I have 1 host where I installed Swift Proxy02:18
DboyHow can I connect them to store my data02:18
DboyTo install and configure the swift proxy, I used this link http://swift.openstack.org/development_saio.html#optional-setting-up-rsyslog-for-individual-logging , and the result of testing was successful02:20
*** johnmark has left #openstack02:21
DboyIt is configured on a single host, and now I want to connect the swift server host to the glance host02:21
Dboylike this link http://glance.openstack.org/architecture.html02:22
DboySo how can I do that? any suggestions?02:23
Dboythank you in advance02:23
*** primeministerp|h has left #openstack02:31
creihtwinston-d: There isn't a good solution for changing the ring size of a real cluster, so I wouldn't even consider that a possibility (other than building a new cluster and possibly migrating)02:33
creihtIt is better to overshoot by a power or so on the ring size02:33
winston-dcreiht : hi~~ that's good to know. :)02:34
creihtIt has been a while since I have done swift stuff, but I'm not even certain that replication would fix a ring size change02:34
creihtsince replication works on partitions, not the objects in the partition02:34
notmyname(I'm happy to be corrected by creiht)02:35
creihtwhen expanding a cluster, when you add new nodes, the ring builder insures that a.) only one replica of a partition gets moved, and that it moves as few partitions as possible02:35
creihtensures02:36
creihtor something... it is late :)02:36
notmynameya, I remember that now. all the existing stuff would have to be rehashed. replication doesn't do that02:36
creihtright02:36
*** littleidea has quit IRC02:36
creihtand the way the mapping works, it is likely that a large majority of the objects would have to be moved as they likely map to different partitions02:36
creihtso short story, don't change the partition power :)02:37
*** lborda has quit IRC02:37
winston-dcreiht : so that means altering the partition number of an existing swift setup means killing data, because it won't eventually settle down?02:37
creihtAt one point I had a good chart that showed the max size of clusters for the different partition powers02:37
creihtwinston-d: correct02:38
winston-dcreiht : can you share the chart?02:38
creihtunfortunately I can't remember where it is02:38
creihtbut I could re-create it pretty easily02:38
creihthrm... might be a good thing for a google spreadsheet02:39
*** mattray has quit IRC02:39
creihtthe nice thing about this situation, is that certain aspects of the cluster actually get more efficient, as it gets bigger02:39
*** littleidea has joined #openstack02:40
creihtwinston-d: I'll try to whip something together for the ring stuff02:40
winston-dcreiht : thanks.02:40
creihtAs to expected performance, I was doing some benchmarking a while back in the lab with 1 proxy and 5 storage nodes02:41
creihtof course this was a while ago, so a lot might have changed02:41
winston-dcreiht : 'get more efficient', can you explain more? which part of swift works more efficiently as the cluster gets bigger (closer to the upper limit of the partition power, i guess?)?02:41
creihtgoing for transactions, I was able to get around 2K puts/s02:42
*** mattray has joined #openstack02:42
creihtfor very small objects02:42
*** mattray has quit IRC02:42
winston-dcreiht : and 2k ops/s, that's for 1 proxy with only 1GbE? for how many disks?02:43
creihtyeah02:43
*** andyandy_ has quit IRC02:43
creiht24 disks per storage node02:43
creihtand that stays pretty consistent until the objects are large enough to saturate the network02:43
creihtI think GETs for those same object were around 10k GET/s02:44
creihtthat scales pretty linearly as you add proxys/storage nodes02:44
creihtof course it may vary depending on hardware/network/etc.02:45
winston-dcreiht : i see. 2k for PUTs, 10k for GETs on the same objects. we'll try to match that in our setup.02:45
creihtthat was also with a lot of tuning02:45
winston-dcreiht : is this data of swift-bench or some other tools?02:45
creihtyeah02:45
winston-dcreiht : tuning for # of threads for different services and also TCP stuff?02:46
creihtok looking back at old notes, GETs were around 8k/s02:46
creihtwinston-d: yeah, I tried to capture a lot of that type of stuff here: http://swift.openstack.org/deployment_guide.html02:47
* winston-d writes down swift perf numbers.02:47
winston-dcreiht : btw, my colleague confirmed that he did hit the CloudFiles java binding thread-safety issue.  he'll contact Lowell directly.02:48
creihttowards the end of that doc has some system tuning stuff that we do02:48
creihtwinston-d: cool, and thanks02:48
winston-dcreiht : and he was using CloudFiles java bindings with Swift.  not with CloudFiles. :)02:49
creiht:)02:49
creihtwill be the same issue either way :)02:50
winston-dright02:51
winston-dcreiht : could you be more specific on the 'more efficient' part for 'the nice thing about this situation, is that certain aspects of the cluster actually get more efficient, as it gets bigger'?02:52
creihtoh yeah, sorry02:52
creihtfor example spreading reads/writes across the cluster02:52
creihtalso replication gets more efficient02:52
winston-dcreiht : our partner would like to use swift for production, so they are very concerned about performance, as well as reliability and scalability.02:53
creihtunderstood02:53
creihtwinston-d: Joe Arnold from cloud scaling also has given some great presentations on their experiences so far02:55
creihtand blog posts02:55
winston-dcreiht : so, a large partition power for a relatively small swift setup will impact the performance somehow.02:55
creihthttp://cloudscaling.com/blog/author/joe02:55
winston-dcreiht : their project with KT? let me dig up some useful information. :) thanks02:56
creihtthat and internap, and others02:56
creihtFortunately they can share a lot more than we can02:56
creihtHis presentation from the last dev summit is also very good02:56
creihthe lays out a pretty good plan for a small 1PB cluster02:57
creihtiirc02:57
winston-dcreiht : :) yes02:57
winston-dcreiht : thanks for your help, again.02:59
*** lborda has joined #openstack03:00
winston-dcreiht : i actually sent an inquiry to Dr. Hwang from KT for their swift benchmark, but no response (yet).03:03
*** mrrk has joined #openstack03:05
*** mrjazzcat has quit IRC03:06
*** kashyap has joined #openstack03:08
*** mnaser has joined #openstack03:12
kidrockHi all!03:12
kidrockI deployed the newest dashboard, which I checked out from GitHub.03:12
kidrockI log into the dashboard, then select the 'instances' tab, and the following error occurs:03:12
kidrockerror: Unable to get instance list: This server could not verify that you are authorized to access the document you requested. Either you supplied the wrong credentials (e.g., bad password), or your browser does not understand how to supply the credentials required.03:13
*** mnaser has quit IRC03:13
kidrockdoes anyone use this version of the dashboard, and how can i configure keystone to authenticate against openstack?03:13
creihtwinston-d: https://spreadsheets.google.com/spreadsheet/ccc?key=0At4SyQdtaLTjdGp6R0ZiNnhYQV9WUDJFcllHODJJZFE&hl=en_US03:14
creihtshould give you a rough estimate03:14
creihtthough it is late, while it looks right at the moment, I will have my coworkers take a look at it tomorrow :)03:15
creihtwinston-d: you can change the numbers at the bottom (like drive size) and everything else should auto update03:16
creihtwell.. hrm, not sure if you can change it, does it let you make a local copy of it?03:17
creihtI haven't done a lot in google docs yet03:17
winston-dcreiht : thanks! it seems I can't change it.03:18
creihthrm03:18
*** Cyns has quit IRC03:19
creihtwinston-d: as to disk failure, if you read the recent backblaze article about their storage pod, we have noticed very similar types of disk failure rates03:24
winston-dcreiht : great.03:24
*** miclorb_ has quit IRC03:24
* winston-d googling 03:25
creihthttp://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/03:25
winston-dcreiht : you're so considerate. :)03:25
*** DrHouseMD is now known as HouseAway03:26
creihthrm... I seem to have messed up the spreadsheet :/03:27
winston-d5% disk failure per year. hmm.03:31
winston-di've been seeing more than that with drives of certain brand(s).03:31
*** Cyns has joined #openstack03:31
*** stewart has joined #openstack03:31
creihtyeah, it will vary a bit by brand03:32
creihtIt is also a good idea to do some burn in of the drives before using, as most drives, if they are going to fail, they will fail pretty early03:32
winston-dexactly! yesterday, one drive failed right after i installed it, on a RAID 0.03:33
*** rchavik has quit IRC03:34
*** mrrk has quit IRC03:37
*** shentonfreude has joined #openstack03:37
*** osier has quit IRC03:45
*** jiboumans has quit IRC03:47
*** jiboumans has joined #openstack03:47
*** obino1 has quit IRC03:54
*** HowardRoark has quit IRC03:55
creihtwinston-d: updated the spreadsheet btw, I thought the numbers looked a little low :)03:58
creihtI forgot that there are 3 (replicas) * the number of partitions to spread among the drives03:58
*** MarkAtwood has left #openstack03:59
creihtwinston-d: I should also note that it is really a soft limit04:00
creihtyou can go fewer than say 100 parts per drive, it just means that there will be a bit more variance between drives04:00
creihtwe chose 100, because that would mean that there would be a +/- 1% difference on the number of partitions on each drive04:00
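The chart itself isn't reproduced in the log, but the arithmetic creiht describes is easy to recreate: with three replicas and a soft target of about 100 partitions per drive, each partition power implies an approximate maximum drive count. A rough sketch of that table (the 2 TB drive size is an arbitrary assumption, used only for the capacity column):

    REPLICAS = 3
    PARTS_PER_DRIVE = 100     # soft target discussed above
    DRIVE_SIZE_TB = 2         # assumption, just to show raw capacity

    print('power  ~max drives  ~raw capacity (TB)')
    for power in range(14, 23):
        max_drives = REPLICAS * 2 ** power // PARTS_PER_DRIVE
        print('%5d  %11d  %18d' % (power, max_drives, max_drives * DRIVE_SIZE_TB))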
*** mfer has quit IRC04:02
*** ejat has joined #openstack04:05
*** ejat has joined #openstack04:05
*** Dboy has left #openstack04:06
*** Dboy has joined #openstack04:07
Dboyhi everyone04:08
Dboyplease explain this error to me04:08
Dboyroot@CNServerCC:/var/log/glance# glance add name="TestSwift-1" distro="Test1" is_public=True  < /root/ttylinux-uec-i686-12.1_2.6.35-22_1.tar.gz Failed to add image. Got error: 400 Bad Request  The server could not comply with the request since it is either malformed or otherwise incorrect.   Error uploading image: No module named swift.common Note: Your image metadata may still be in the registry, but the image's status will likely04:08
Dboyhttp://paste.openstack.org/show/2036/04:09
*** mrrk has joined #openstack04:12
*** RickB17 has quit IRC04:15
*** cole has quit IRC04:17
*** obino has joined #openstack04:17
*** j05h has joined #openstack04:18
*** miclorb_ has joined #openstack04:25
*** undertree has quit IRC04:29
*** jtanner has quit IRC04:29
*** TimR has quit IRC04:30
*** TimR has joined #openstack04:31
*** Dboy has left #openstack04:33
*** deepest has joined #openstack04:34
deepestHi all you guys04:35
deepestI highly appreciate your comment for my error04:35
deepestplease check it for me and correct me04:35
deepest http://paste.openstack.org/show/2036/04:35
deepestI got that error when I tried to add a new image for testing Glance04:36
*** jakedahn has quit IRC04:37
*** shentonfreude has quit IRC04:39
winston-dcreiht : still there?04:43
deepestyep04:43
deepestsorry04:43
*** wariola has joined #openstack04:43
*** miclorb_ has quit IRC04:43
*** shentonfreude has joined #openstack04:43
*** miclorb_ has joined #openstack04:53
*** _mage_ has joined #openstack04:58
*** everett has joined #openstack04:58
everetthi all. regarding vnc proxy... where did http://github.com/openstack/noVNC.git go?04:59
*** ejat has quit IRC05:00
*** osier has joined #openstack05:01
*** ejat has joined #openstack05:01
*** jakedahn has joined #openstack05:01
*** Cyns has quit IRC05:03
*** littleidea has quit IRC05:08
everettah. i see it's at https://github.com/openstack/noVNC.git05:18
everettthe docs at http://nova.openstack.org/runnova/vncconsole.html need to be updated.05:18
*** everett has left #openstack05:19
*** f4m8_ is now known as f4m805:20
*** hingo has joined #openstack05:28
*** msinhore has quit IRC05:33
*** stewart has quit IRC05:38
*** reed has quit IRC05:41
*** cp16net has quit IRC05:52
*** cp16net has joined #openstack05:52
*** wariola has quit IRC06:01
*** kashyap has quit IRC06:01
*** nati has quit IRC06:03
*** Alowishus has quit IRC06:07
*** sdadh011 has quit IRC06:09
*** Alowishus has joined #openstack06:10
*** sdadh01 has joined #openstack06:12
*** cp16net has quit IRC06:16
*** ejat has quit IRC06:16
*** stewart has joined #openstack06:23
*** javiF has joined #openstack06:23
*** kashyap has joined #openstack06:25
*** kidrock has quit IRC06:28
*** kidrock has joined #openstack06:28
*** RMS-ict has quit IRC06:34
*** Tribaal has joined #openstack06:34
*** mgoldmann has joined #openstack06:36
*** deepest has quit IRC06:40
*** YorikSar has joined #openstack06:41
*** kidrock_ has joined #openstack06:41
*** ejat has joined #openstack06:41
*** ejat has joined #openstack06:41
*** kidrock has quit IRC06:42
*** jedi4ever has joined #openstack06:45
*** Pjack has joined #openstack06:45
*** HugoKuo__ has joined #openstack06:45
*** miclorb_ has quit IRC06:47
*** Pjack has quit IRC06:48
*** HugoKuo_ has quit IRC06:48
*** kidrock_ has quit IRC06:50
YorikSarjeblair: Hi06:50
YorikSarjeblair: I've wrote on maillist about Gerrit problem.06:50
*** ejat has quit IRC06:56
*** rchavik has joined #openstack06:56
*** ejat has joined #openstack07:01
*** ejat has joined #openstack07:01
*** tudamp has joined #openstack07:01
*** reidrac has joined #openstack07:12
*** cole has joined #openstack07:14
*** rchavik has quit IRC07:14
*** guigui has joined #openstack07:15
*** cole has quit IRC07:16
*** ejat has quit IRC07:18
*** kidrock has joined #openstack07:24
*** mnour has quit IRC07:25
*** ejat has joined #openstack07:26
*** ejat has joined #openstack07:26
*** stewart has quit IRC07:28
*** mrrk has quit IRC07:34
*** deepest has joined #openstack07:34
*** ejat has quit IRC07:37
*** nickon has joined #openstack07:50
*** rchavik has joined #openstack07:57
*** CloudAche84 has joined #openstack07:59
*** willaerk has joined #openstack08:02
*** ccc11 has quit IRC08:03
*** jeffjapan has quit IRC08:04
*** 45PAAEYQV has joined #openstack08:04
*** ccc11 has joined #openstack08:04
*** AhmedSoliman has joined #openstack08:06
*** guigui has quit IRC08:07
*** 45PAAEYQV has quit IRC08:09
*** guigui has joined #openstack08:11
*** irahgel has joined #openstack08:15
*** darraghb has joined #openstack08:17
*** MarcMorata has joined #openstack08:29
*** sandywalsh has quit IRC08:34
*** dirkx has joined #openstack08:36
*** daysmen has joined #openstack08:49
*** deepest has quit IRC08:54
*** ejat has joined #openstack08:54
*** ejat has joined #openstack08:54
*** TimR has quit IRC08:56
*** TimR has joined #openstack08:57
*** npmapn has joined #openstack09:01
TimRHi - anybody out there running multiple glance-api servers behind a load balancer ?09:03
CloudAche84no bute I'd like to know how to do that too!09:05
CloudAche84but09:05
CloudAche84we are planning to09:05
CloudAche84I can't imagine there being too much of an issue with it. Does glance use sessions?09:07
*** mnour has joined #openstack09:08
*** mnour has quit IRC09:12
*** mnour has joined #openstack09:12
*** TeTeT has joined #openstack09:15
*** argonius has joined #openstack09:15
argoniushi09:15
CloudAche84hi09:15
argoniusi was trying to install openstack nova on centos 5 using the instructions on http://wiki.openstack.org/NovaInstall/CentOSNotes09:16
TeTeThello, I have a Ubuntu 11.10 Oneiric system where I tried to test openstack / nova with lxc. I have plenty of shutdown instances, but I cannot terminate them completely - compute log contains info 'Found instance ... in DB but no VM. State=5, so setting state to shutoff'09:16
argoniuson step "cp /opt/nova/etc/nova-api.conf /etc/nova"  i ran into the problem that there is no nova-api.conf ??09:16
TeTeTnow I cannot launch new instances any longer, instanceLimitExceeded - any way to remove the dead instances completely w/o tearing down the cloud completely?09:16
CloudAche84TeTeT: I expect you need to update some records in the DB09:17
argoniusthere is only a file called api-paste.ini but this is not the same as nova-api.conf isn't it?09:17
TeTeTCloudAche84: ouch, any pointers / guides on how to do that?09:19
*** ejat has quit IRC09:19
CloudAche84are you familiar with MySQL at all?09:19
CloudAche84actually hold on09:20
TeTeTCloudAche84: rusty at it, but I guess I can cope with it09:20
CloudAche84argonius: bear with me I might be able to help you in a minute09:20
argoniusCloudAche84: ah fine, i will wait for you ;)09:20
CloudAche84TeTeT: try this script first: http://pastebin.com/0mr64Atu09:22
CloudAche84should terminate all running instances cleanly09:22
CloudAche84if not then we may need to hack the DB09:22
TimRCloudach84:  I have set up a dual API config one node running API-server, Registry and MySQL; 2nd node running just the API-Server. Have not config'd the LoadBalancer yet09:22
TeTeTCloudAche84: already tried 'euca-terminate-instances $( euca-describe-instances | grep INST| awk '{ print $2 }' | xargs )09:23
TeTeTCloudAche84: multiple times, they just never get out of shutdown :(09:23
CloudAche84ah ok09:23
CloudAche84in that case lets try something else09:24
TeTeTCloudAche84: I can't see a mysql instance running , ps aux | grep -i mysql, is it possible that nova @ ubuntu uses a different db backend?09:25
*** zz_bonzay|away is now known as bonzay09:25
CloudAche84doubt it09:26
CloudAche84unless it uses sqlite or something09:26
CloudAche84but I wouldnt have thought so09:26
CloudAche84what does your --sql_connection= say in /etc/nova/nova.conf?09:27
TeTeTCloudAche84: not specified09:27
*** mnour has quit IRC09:27
*** mnour has joined #openstack09:27
TeTeTCloudAche84: content of nova.conf is http://pastebin.ubuntu.com/65853909:28
CloudAche84argonius: on my Ubuntu system nova-api.conf is the init script and therefore lives in /etc/init/nova-api.conf09:28
CloudAche84oh..09:28
CloudAche84argonius: all nova service config lives in /etc/nova/nova.conf09:29
CloudAche84TeTeT: there seems to be a lot of stuff missing there..09:29
CloudAche84how did you install?09:30
TeTeTCloudAche84: it's the most basic setup I've got from zul's blogpost on getting nova / lxc started09:30
CloudAche84which os?09:30
TeTeTCloudAche84: I started on ubuntu 11.04, then updated to 11.1009:31
TeTeTCloudAche84: just digging up the article from zul that presents details09:31
TeTeTCloudAche84: and thanks for your time and help! Much appreciated09:32
CloudAche84no prob.09:32
CloudAche84Happy to share my (limited!) knowledge09:32
TeTeTCloudAche84: http://zulcss.wordpress.com/2011/03/31/running-nova-openstack-on-amazon-ec2/09:32
TeTeTsudo apt-get install python-nova nova-common nova-api nova-compute nova-network nova-objectstore nova-scheduler nova-volume python-nova.adminclient unzip euca2ools09:33
CloudAche84this article: http://zulcss.wordpress.com/2011/03/31/running-nova-openstack-on-amazon-ec2/?09:33
CloudAche84ah09:33
*** mnour has quit IRC09:33
CloudAche84lol09:33
TeTeThe he09:33
*** mnour has joined #openstack09:34
CloudAche84what does this produce: nova-manage conf li|grep sql09:36
*** mnour has quit IRC09:36
TeTeTCloudAche84: using sqlite, --db_backend=sqlalchemy09:37
argoniusre09:37
argoniussry customer calling ....09:37
argonius;)09:37
*** dirkx has quit IRC09:38
argoniusCloudAche84: as i said, i use centos and installing from branch (bzr)09:38
argoniusCloudAche84: i have to copy nova-api.conf to /etc/nova/09:38
argoniusbut in the branch, there is no nova-api.conf file :(09:39
*** bonzay has left #openstack09:39
argoniusCloudAche84: maybe a mistake in the actual branch of nova?09:39
CloudAche84possible09:39
argoniusCloudAche84: can you send me your nova-api.conf09:39
matttmorning morning09:39
argonius'cause i do not really know, what should be in this file09:39
matttnova-api.conf?09:39
argoniusit's the first time of trying openstack09:39
TeTeTCloudAche84: found the db in /var/lib/nova/nova.sqlite - I'm not sure how to see the tables in the db though09:40
argoniusmattt: yes09:40
matttargonius: newer versions use a single conf file, nova.conf09:40
argoniusmattt: ahh and where can i find it09:40
matttargonius: how did you install nova?  if you use a package manager, then /etc/nova09:40
argoniusmattt: i was using http://wiki.openstack.org/NovaInstall/CentOSNotes09:40
matttargonius: which version of nova did you use?09:41
argoniusmattt: i am not really sure     how to find this out?09:41
matttargonius: rpm -qa | grep nova maybe?09:42
mattt(not sure how the packages are named)09:42
argoniusnothing09:42
argoniusi take it via bzr09:42
argoniusnot sure about this but maybe this will get an actual branch09:42
argonius?09:42
matttargonius: yeah, you're right ... you're not pulling from a package manager09:42
CloudAche84TeTeT: apt-get install sqlite09:42
*** deepest has joined #openstack09:42
argoniusnova --version does not really work09:42
CloudAche84then sqlite <dbname>09:43
TeTeTCloudAche84: already past that, I found the instance table, shall I just delete all of them?09:43
matttargonius: sorry, don't have _any_ experience w/ nova on centos -- i will tell you i used the ppas on ubuntu (http://wiki.openstack.org/PPAs) and everything just works out of the box09:43
CloudAche84but I think you should consider setting it up to use a proper DB09:43
argoniushmm09:43
TeTeTCloudAche84: yeah, once this showcase vm works ;)09:43
argoniusmattt: but i do not want to use ubuntu .... :(09:43
matttargonius: i may try it when i get some time, but at work at the minute :(09:43
argoniusmattt: i am at work 2 :p09:43
argoniusbut i just take the time09:44
argonius;)09:44
mattthaha09:44
*** deepest has left #openstack09:44
CloudAche84no09:44
argoniuscan i just build the nova.conf file by my self?09:44
*** deepest has joined #openstack09:44
matttargonius: let me see if i can get you my default conf09:45
deepestHi you guys09:45
CloudAche84TeteT:paste me the first record09:45
matttwhat paste servers do you guys use?09:45
argoniusmattt: this would be nicccceee09:45
argonius:)09:45
CloudAche84I used pastebin09:45
argoniusyeah me 209:45
CloudAche84but it seems there is a pastebin.ubuntu.com09:45
matttargonius: http://pastebin.com/KWe7wPBc09:46
argoniusmattt: thx a lot09:46
deepestI want to ask you about my error of swift09:46
matttnp, that's stock on the ubuntu packages tho, so i'm not entirely sure if it'll work on centos :(09:46
matttbrb, got some stuff to do09:46
TeTeTCloudAche84: oops, too late, already delete from instances issued, sorry. Instances seem gone, but I guess not cleanly09:46
deepestcan anyone help me?09:46
argoniusmattt: this is your nova.conf??09:46
argoniuswhich references self to nova.conf??09:46
CloudAche84ouch09:46
CloudAche84restart the services and see if a new instances table is created09:47
TeTeTCloudAche84: thanks for your help, the table is there, I only deleted the rows09:47
CloudAche84nice hatchet job tho :P09:47
CloudAche84ah ok09:47
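For anyone hitting the same situation, a gentler alternative to a raw "delete from instances" is to soft-delete the stuck rows the way nova itself does. This is only a sketch: it assumes the instances table has the deleted / deleted_at / state_description columns that nova's sqlalchemy models used around this time, and the WHERE clause should be adjusted to match the rows that are actually stuck.

    import sqlite3
    from datetime import datetime

    # Path mentioned above; the column names are an assumption about the schema.
    conn = sqlite3.connect('/var/lib/nova/nova.sqlite')
    conn.execute(
        "UPDATE instances SET deleted = 1, deleted_at = ? "
        "WHERE state_description = 'shutdown' AND deleted = 0",
        (datetime.utcnow().isoformat(' '),))
    conn.commit()
    conn.close()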
argoniusdamn09:47
argoniusnext failure09:47
argoniusi think for first tests i should use ubuntu and .deb packages09:48
argonius:(09:48
matttargonius: yeah man, that's the default conf!09:48
matttthen i add to it09:48
CloudAche84It seems Ubuntu is best supported at the moment argonius09:48
mattti'd have to agree09:48
TeTeTCloudAche84: I started a new instance and terminated it right away, seems to work now09:49
matttonce you get familiar w/ openstack, you can start hacking with it on centos09:49
CloudAche84yep09:49
argoniusCloudAche84: that's realy crazy09:49
argoniusCloudAche84: cause this software comes from rackspace09:49
CloudAche84not really considering the amount of Canonical people on it :)09:49
argoniuswhich really often uses rhel09:49
argoniusi am confused09:50
argonius:D09:50
argoniusmaybe someone has 2-3 minutes to answer some additional questions about the basics of openstack?09:51
CloudAche84will try09:51
*** RMS-ict has joined #openstack09:51
argoniusi was reading the introduction but i am not really sure about the coherence between the differents softwaresssss09:51
argoniusi was just seeing a video about openstack09:52
argoniusin wich i can just click and click09:52
argoniusto deploy many virtual instances09:52
CloudAche84yep09:52
argoniusso09:52
argoniusin different groups09:52
argoniusthe webinterface can be running on my server?09:52
argoniusor is this just running on installer.openstack.org?09:52
argoniusthis was not really clear in the vid09:53
argonius;)09:53
CloudAche84web interface runs on your server09:53
argoniusok09:53
argoniusand the webinterface is openstack nova?09:53
CloudAche84it isnt part of the main nova packages yet though09:53
CloudAche84no09:53
argoniusoh ok09:53
argoniusnova is just the software for deploying the virtual instances, isn't it?09:53
argoniuslike a xml parser or rpc daemon09:54
CloudAche84openstack nova is nova-compute, nova-api, nova-network, nova-scheduler09:54
CloudAche84nova is the compute platform which distributes VMs around hypervisors09:54
argoniusok09:55
CloudAche84Glance is an image server which organises and retrieves images from low cost file storage (such as S3)09:55
argoniusi read something about these nova-* apps09:55
CloudAche84Swift is a scale-out storage solution like S3 designed to be resilient on commodity hardware09:55
argoniusah ok09:56
CloudAche84Openstack-Dashboard is a first go at a UI for Openstack Nova09:56
CloudAche84which uses the API presented by Nova & Glance09:56
CloudAche84does that help?09:56
argoniusis it possible to firstly use just one server for everything?09:56
argoniusyes a bit ;)09:56
CloudAche84yes09:57
argoniusso local storage is supported?09:57
CloudAche84although it may make sense to run swift on a different server09:57
argoniusyes of course09:57
CloudAche84but you dont HAVE to use Swift (I'm not)09:57
argoniusbut for beginning i just have one server09:57
argoniusah ok09:57
*** miclorb_ has joined #openstack09:57
*** ccc111 has joined #openstack09:57
deepestwhat is the relationship between Glance and Swift?09:57
argoniusso the main parts for me are09:57
CloudAche84you can tell Glance to just use a folder on the server to store images09:58
argoniusthe nova-suite09:58
argoniusglance09:58
argoniusand openstack dashboard?09:58
CloudAche84yes09:58
CloudAche84although you dont need to use Glance either09:58
argoniusah ok09:58
argoniusand hypervisor?09:58
CloudAche84as you can set --image_service= to --image_service=nova.image.local.LocalImageService in nova.conf09:59
argoniuskvm is supported afaik, isn't it09:59
CloudAche84yep09:59
argoniushmm09:59
CloudAche84KVM is enabled as part of the default setup09:59
CloudAche84but there are a number of "supported" hypervisors09:59
argoniushmm but with cpu virtualization support09:59
*** miclorb_ has quit IRC09:59
CloudAche84inc Xen, ESX, Lxc etc10:00
argoniuskvm is the best at the moment i think10:00
argoniusisn't it?10:00
CloudAche84Horses for courses I think but most development seems to be done against KVM10:00
argoniusdo u have some experience10:00
argoniushmmhh10:00
*** ccc11 has quit IRC10:01
argoniusok so i will just try to setup ubuntu with nova suite and openstack-dashboard ;)10:01
CloudAche84although Citrix are developing a commercial Openstack solution which will use Xen10:01
CloudAche84so I think Xen may be the stronger candidate in the future10:01
argoniushmm maybe10:01
argoniuswe will c ;)10:01
CloudAche84yep10:01
argoniusare you often here?10:01
CloudAche84try to be10:01
argoniusfine fine10:01
argoniusi will try it to in the future10:02
CloudAche84the more people in here the better I think. Will help drive the development quicker10:02
argoniusi think openstack will be a great project10:02
CloudAche84me too10:02
argoniusCloudAche84: where r you from, if i may ask10:02
CloudAche84I hope so anyway10:02
CloudAche84because we are deploying a lot of it :)10:02
argoniushrhr10:03
deepestHi CloudAche84, are U still there?10:04
deepestcan I ask U a question10:04
CloudAche84yep10:04
deepestI have some problem about Swift10:04
deepestI want U check for me about the error10:04
CloudAche84I havent deployed swift yet ( we arent planning to use it)10:05
deepestI dont know where it is10:05
CloudAche84where what is?10:05
deepestthe error10:05
argoniusso ok, i will now try to deploy it on ubuntu, thanks a lot for your help CloudAche8410:05
CloudAche84no prob10:05
*** argonius is now known as argonius_afk10:05
argonius_afk<--- afk10:05
deepesthere is my glance-api.conf10:05
deepesthttp://paste.openstack.org/show/2039/10:05
*** RMS-ict has quit IRC10:05
*** miclorb_ has joined #openstack10:06
*** RMS-ict has joined #openstack10:06
deepestand here http://paste.openstack.org/show/2038/, this is an error what I got10:06
deepestHow can I solve this error?10:07
CloudAche84like I say I havent deployed swift but to me your store_key looks wrong10:08
*** miclorb_ has quit IRC10:09
deepestI think so, but I dont know how to find the correct store_key10:09
deepestbecause the tutorial didn't say anything about the store_key http://swift.openstack.org/development_saio.html#optional-setting-up-rsyslog-for-individual-logging10:10
deepestdo U have any suggestion?10:10
*** cian has joined #openstack10:11
CloudAche84not really, apart from checking this out. Maybe it will help generate a proper key? http://docs.openstack.org/cactus/openstack-object-storage/admin/content/verify-swift-installation.html10:16
deepestOk10:18
deepestthanks U10:18
deepestI will check it10:18
*** RMS-ict has quit IRC10:19
*** RMS-ict has joined #openstack10:20
*** vernhart has quit IRC10:24
*** RMS-ict has quit IRC10:24
*** vernhart has joined #openstack10:24
*** miclorb_ has joined #openstack10:27
*** miclorb_ has quit IRC10:28
*** TeTeT has quit IRC10:31
*** deepest has quit IRC10:38
*** ccc111 has quit IRC10:45
*** TimR has quit IRC10:51
*** TimR has joined #openstack10:51
*** guigui has quit IRC10:55
*** zigo has joined #openstack10:58
*** ton_katsu has quit IRC10:59
*** kidrock has quit IRC10:59
*** guigui has joined #openstack11:01
*** Ryan_Lane has joined #openstack11:05
*** ctennis has quit IRC11:16
*** TimR has quit IRC11:17
*** TimR has joined #openstack11:18
*** markvoelker has joined #openstack11:18
*** shang has quit IRC11:19
*** truijllo has joined #openstack11:22
*** dirkx has joined #openstack11:22
truijllohi guys!11:23
CloudAche84hi11:25
truijlloI'm working on the dashboard, using trunk versions (for nova and dashboard). When launching an instance, the user_id field in the db contains a dump of the user object (similar thing in project_id)11:27
truijlloin nova-api.log I see something like11:27
truijllo2011-08-04 13:21:26,696 nova.compute.api: Casting to scheduler for Project('1234', '1234', 'joeuser', '1234', ['admin', 'joeadmin', 'joetest', 'joeuser', 'muriel', 'truijllo'])/User('joeuser', 'joeuser', '13988189-dc63-4088-9f1b-eb14af725679', '4e4c9ea6-31d5-4f39-b33e-7b114cc8f0da', False)'s instance 24 (single-shot) 2011-08-04 13:21:26,697 nova.rpc: Making asynchronous cast on scheduler... 2011-08-04 13:21:26,697 nova.rpc: Crea11:27
truijllo( ops )11:27
truijlloso...I obtain an error11:27
truijlloBUT .... If I try to run an instance using eucatools everything works and in the user_id field there's the old-fashoned username11:29
*** ctennis has joined #openstack11:30
YorikSartruijllo: What is that error?11:30
*** PeteDaGuru has joined #openstack11:30
truijllo( the readable log is here http://pastie.org/2319268 )11:30
YorikSarWhat version of nova do you use?11:32
truijlloin the dashboard I obtain this message:  Unable to launch instance: 400 Bad Request The server could not comply with the request since it is either malformed or otherwise incorrect.11:33
truijllonova is in the current trunk version11:33
*** shang has joined #openstack11:35
*** littleidea has joined #openstack11:36
YorikSarDo you get any signs of this request in the compute log?11:37
*** littleidea has quit IRC11:37
truijllonope11:39
*** viraptor has joined #openstack11:40
truijllonothing in compute.log or schedule.log11:40
*** vcaron has joined #openstack11:41
YorikSarIt looks like I understand where this problem came from.11:42
*** littleidea has joined #openstack11:42
YorikSarYou should try the version before commit #134811:43
truijlloof nova ?11:43
YorikSarYes11:43
truijlloso... 1348 is the first release with this problem, am I right?11:44
YorikSarIt's my guess. There was a lot of work related to users and projects in that commit.11:45
truijllothank you so much, I'll try 1347 asap, I've looked for blueprints or something about it but nothing appears for this issue11:46
YorikSarAnd it looks like the source of your problem is sending User object instead of user id in cast context11:46
truijlloyes, you're right11:47
*** mfer has joined #openstack11:49
*** CloudAche84 has quit IRC11:55
*** _mage_ has quit IRC11:56
*** lborda has quit IRC11:56
*** dirkx has quit IRC11:58
*** ncode has quit IRC12:00
*** ccc11 has joined #openstack12:04
*** jtanner has joined #openstack12:04
*** yosh has quit IRC12:06
*** yosh has joined #openstack12:08
*** CloudAche84 has joined #openstack12:14
*** RMS-ict has joined #openstack12:15
*** AhmedSoliman has quit IRC12:16
*** Kiall has quit IRC12:16
*** Kiall has joined #openstack12:17
*** AhmedSoliman has joined #openstack12:17
*** KAM has joined #openstack12:20
*** Ryan_Lane has quit IRC12:23
*** yosh has quit IRC12:26
*** blamar has quit IRC12:27
*** blamar has joined #openstack12:27
*** dprince has joined #openstack12:28
*** ncode has joined #openstack12:31
*** yosh has joined #openstack12:35
*** javiF has quit IRC12:35
*** tomh__ has quit IRC12:42
*** marrusl has joined #openstack12:43
*** Tribaal is now known as Tribaalatchu12:48
*** Ryan_Lane has joined #openstack12:49
*** msivanes has joined #openstack12:51
*** truijllo has quit IRC12:53
*** osier has quit IRC12:53
*** alfred_mancx has joined #openstack12:54
*** tomh_ has joined #openstack12:56
*** freeflying has quit IRC12:58
*** freeflying has joined #openstack12:58
*** mnaser has joined #openstack12:59
*** mnaser has quit IRC13:02
*** mnaser has joined #openstack13:02
*** alfred_mancx has quit IRC13:03
*** sandywalsh has joined #openstack13:05
*** Ryan_Lane has quit IRC13:05
*** RMS-ict has quit IRC13:09
*** mnaser has quit IRC13:10
*** RMS-ict has joined #openstack13:11
*** truijllo has joined #openstack13:11
*** hadrian has joined #openstack13:12
*** BK_man has joined #openstack13:12
*** jtanner has quit IRC13:13
*** MarcMorata has quit IRC13:13
*** MarcMorata has joined #openstack13:14
*** vcaron has quit IRC13:15
*** Shentonfreude has joined #openstack13:15
*** imsplitbit has joined #openstack13:17
*** adambergstein_ has joined #openstack13:19
*** adambergstein has quit IRC13:19
*** adambergstein_ is now known as adambergstein13:19
*** HowardRoark has joined #openstack13:24
*** cereal_bars has joined #openstack13:25
*** aaron_husng has joined #openstack13:26
*** dolphm has joined #openstack13:26
*** TimR has quit IRC13:27
*** TimR has joined #openstack13:28
*** jkoelker has joined #openstack13:29
adambergsteinhi folks, i have an issue. euca-describe-instances has shown my instance as 'building' for several hours, my nova-compute log shows this 'Found instance 'instance-0000000d' in DB but no VM. State=9, so assuming spawn is in progress.'13:31
adambergsteindoes anyone know what that means?13:31
*** pimpministerp has joined #openstack13:33
*** bcwaldon has joined #openstack13:34
*** primeministerp has quit IRC13:34
*** pimpministerp is now known as primeminsterp13:34
*** kashyap has quit IRC13:35
*** undertree has joined #openstack13:35
*** jatsrt has joined #openstack13:37
*** bsza has joined #openstack13:37
*** worstadmin has quit IRC13:39
*** mitchless has quit IRC13:40
*** reidrac has quit IRC13:40
*** dolphm has quit IRC13:40
jatsrtanyone around this morning13:41
jatsrtor evening or afternoon depending on your part of the world13:42
*** lborda has joined #openstack13:43
*** vishnu has joined #openstack13:44
*** jj0hns0n has joined #openstack13:44
ttxchmouel: any progress on the London meetup thing ? At this point, that would make a very late announcement (cc: Daviey)13:45
*** netmarkjp has joined #openstack13:47
*** lborda_ has joined #openstack13:47
*** jtanner has joined #openstack13:47
jatsrtso I asked the other day, but had to run, so I'll ask again13:48
jatsrtany reason with the latest code that my instances no longer get a hostname13:48
jatsrtwas there a change to the way they are generated/assigned?13:48
*** lborda has quit IRC13:48
*** kbringard has joined #openstack13:51
*** worstadmin has joined #openstack13:51
truijlloYorikSar: yeah! it works! revision 1347 worked like a charm13:52
*** vcaron has joined #openstack13:52
*** vcaron has left #openstack13:53
*** huslage has joined #openstack13:53
*** KAM has quit IRC13:54
*** aaron_husng has quit IRC13:54
*** f4m8 is now known as f4m8_13:55
*** javiF has joined #openstack13:57
YorikSartruijllo: You should try a 1348 revision and if it does not work file a bug.13:58
*** dolphm has joined #openstack13:58
*** littleidea has quit IRC13:59
YorikSaradambergstein: you should check the log before those 'Found instance' messages appeared.14:00
*** mdomsch has joined #openstack14:01
*** LiamMac has joined #openstack14:01
*** ejat has joined #openstack14:04
*** nmistry has joined #openstack14:04
*** whitt has joined #openstack14:05
*** dendro-afk is now known as dendrobates14:06
jatsrthmmm, going through the code and not finding anything that looks like the hostname functionality would have changed14:07
*** AhmedSoliman has quit IRC14:09
*** cloudvirt has joined #openstack14:10
tudamphi all, can anyone help me with the nova scheduler?14:11
*** cmdrsizzlorr has joined #openstack14:11
*** cloudvirt has quit IRC14:11
*** dolphm has quit IRC14:12
*** dolphm has joined #openstack14:13
*** cloudvirt has joined #openstack14:14
adambergsteinYorikSar: which log? nova-compute?14:14
jatsrtOK: with the hostname problem14:15
jatsrtinstead of the instance_id it is now sending "server_x"14:15
jatsrtwhich appears to come from display name?14:15
jatsrt_ is invalid to the "hostname" command14:16
*** reidrac has joined #openstack14:16
jatsrtso no hostname is being set by coud-init14:16
*** nmistry has quit IRC14:16
TREllisjatsrt: btw, I'm also seeing this behaviour14:17
TREllisyeah, meta-data shows server_x instead of i-xxxxxxxx14:17
jatsrtso this is "hostname" in the db14:17
jatsrtfor the instance14:17
jatsrtjust not too sure where that is coming from yet14:18
TREllisis there a bug open?14:18
jatsrtnot too sure yet14:18
*** ldlework has joined #openstack14:19
*** worstadmin has quit IRC14:20
jatsrthard bug to search for but doesn't look like it14:20
jatsrtsearching for server_, hostname, too generic14:20
TREllisok, want me to create one? or will you?14:21
TREllisI planned to do that yesterday, but got sidetracked14:21
YorikSaradambergstein: Yes. I guess, compute node failed to finish provisioning and came to silent error14:21
adambergsteinYorikSar: http://paste.openstack.org/show/2040/14:22
adambergsteinthat was nova-compute14:23
*** reed has joined #openstack14:23
adambergsteinYorikSar: ill check the other logs14:23
adambergsteinunder the nova dir14:23
YorikSaradambergstein: Yeah, I thought so. I don't know why this happens, but sometimes those lock files are just left around during a nova reboot.14:25
YorikSaradambergstein: Delete all of them and this should do it.14:25
adambergsteinYorikSar: how do i do that?14:25
jatsrtTREllis: still diggin14:25
jatsrtthis change:14:25
jatsrthttp://bazaar.launchpad.net/~hudson-openstack/nova/trunk/revision/1188.4.114:25
*** kidrock has joined #openstack14:25
adambergsteingo into my lock directory and clear the files?14:25
adambergsteinmight there be a permissions issue?14:25
jatsrtis what made it work differently, but it seems somewhere that it changed more14:26
YorikSaradambergstein: Yeah, just clear it. Or if it's empty anyway, there might be a permission issue14:26
jatsrtotherwise you need to have a proper display name set for every instance started14:26
adambergsteinYorikSar: Thanks, i'll give this a shot!14:26
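The cleanup YorikSar suggests amounts to removing nova's stale lock files. A minimal sketch, assuming the lock directory is whatever nova's --lock_path flag points at (commonly something like /var/lock/nova) and that the lock files carry a nova- prefix; both are assumptions to adapt to the local setup.

    import glob
    import os

    lock_dir = '/var/lock/nova'   # assumption: the directory --lock_path points at

    for path in glob.glob(os.path.join(lock_dir, 'nova-*')):
        os.remove(path)           # clear stale locks left by an unclean shutdown
        print('removed %s' % path)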
jatsrtstill seems like a bug that the default behavior is to set it to server_%d14:27
jatsrtwhich would always be invalid14:27
*** jj0hns0n has quit IRC14:29
adambergsteinYorikSar: should i clear out my instances and/or images and rebuild them?14:29
*** HowardRoark has quit IRC14:29
jatsrtTREllis: looking at it more, that whole checkin was just broken, it takes display name and replaces " " with "_" which is invalid even if it worked right14:31
jatsrtwondering why it was thought that we needed to not use the instance_id?14:31
*** kidrock has quit IRC14:32
adambergsteinYorikSar: Thank you!!!14:32
adambergsteinthat worked14:33
*** aliguori has quit IRC14:33
adambergsteini have a new issue now but i will research it14:33
vishnuI just installed cactus on RHEL6. When I start my instances they go from launching to shutdown. where should I look for errors?14:33
jatsrt"Made hostname independent from ec2 id. Add generation of hostnames based on display name."14:33
*** jfluhmann has joined #openstack14:33
jatsrtSo, the problem with this logic is that if you are using basic euca tools to run instances, there is no way to set a display name, so this would never work14:34
jatsrtTREllis: would you like to enter the bug14:34
*** dgags has joined #openstack14:35
*** mgoldmann has quit IRC14:36
chmouelttx: I didn't get any answers for the budget side either, so I have to say there is not going to be one organised14:36
*** ejat has quit IRC14:37
ttxchmouel: right, let's drop the idea, then14:37
*** jj0hns0n has joined #openstack14:40
*** dirkx has joined #openstack14:40
*** MarcMorata has quit IRC14:40
jatsrtTREllis: Sorry, I entered the bug https://bugs.launchpad.net/nova/+bug/820962  my first one :-)14:41
adambergsteinYorikSar: I don't know if you have time to look at this, but here is my recent issue: http://paste.openstack.org/show/2041/ and i found this bug report: https://bugs.launchpad.net/nova/+bug/807764. it looks like there is a patch, how would i go about installing that?14:41
*** TimR has quit IRC14:42
*** mattray has joined #openstack14:42
TREllisjatsrt: cool thanks14:42
jatsrtnp14:42
*** imsplitbit has quit IRC14:42
*** mattray has quit IRC14:46
*** rnirmal has joined #openstack14:47
*** dendrobates is now known as dendro-afk14:47
*** cp16net has joined #openstack14:49
*** mattray has joined #openstack14:53
*** javiF has quit IRC14:54
*** cp16net has quit IRC14:54
*** cp16net has joined #openstack14:54
*** tudamp has left #openstack14:55
*** cp16net_ has joined #openstack14:56
*** willaerk has quit IRC14:57
*** cp16net has quit IRC14:59
*** cp16net_ is now known as cp16net14:59
cmdrsizzlorrhi, I'm having some trouble with a "waiting for metadata service" entry in console output. Using the 11.04 uec image and FlatNetworking. Would anyone have any suggestions as to what to check?14:59
cmdrsizzlorrAccording to some pages, I've also added suitable iptables prerouting entries.15:00
*** deshantm_laptop has joined #openstack15:01
*** aliguori has joined #openstack15:03
*** jj0hns0n_ has joined #openstack15:03
*** amccabe has joined #openstack15:04
*** jj0hns0n has quit IRC15:05
*** jj0hns0n_ is now known as jj0hns0n15:05
*** dragondm has joined #openstack15:06
*** reidrac has quit IRC15:08
*** llang629 has joined #openstack15:09
*** llang629 has left #openstack15:09
*** ccc11 has quit IRC15:10
*** Ryan_Lane has joined #openstack15:12
*** truijllo has quit IRC15:14
*** RMS-ict has quit IRC15:16
*** RMS-ict has joined #openstack15:16
jatsrtanyone using python-novaclient15:18
jatsrtand know why everything I do is a 404 or a 50015:18
adambergsteinhas anyone seen this issue before? http://paste.openstack.org/show/2042/15:19
*** HowardRoark has joined #openstack15:20
*** mdomsch has quit IRC15:21
*** msivanes1 has joined #openstack15:21
*** herve06 has joined #openstack15:22
*** cmdrsizzlorr has quit IRC15:22
*** jj0hns0n_ has joined #openstack15:22
*** msivanes has quit IRC15:23
*** Ryan_Lane has quit IRC15:23
*** obino has quit IRC15:24
*** Tribaalatchu has quit IRC15:25
*** jj0hns0n has quit IRC15:26
*** jj0hns0n_ is now known as jj0hns0n15:26
*** alandman has joined #openstack15:26
jatsrtadambergstein: yes15:27
jatsrta few things I have noticed with that15:28
*** nickon has quit IRC15:28
jatsrtdo you have virtualization enabled for your processors?15:28
jatsrtnot just capable, but enabled in the bios if needs be15:28
kbringardthat error seems to mask other issues a lot of the time as well15:28
jatsrtare you sharing the instances directory for live migration15:29
*** aliguori has quit IRC15:29
jatsrtif so, are all your machines the same proc type15:29
jatsrtkbringard: yep, a very generic something went wrong error15:29
creihtwinston-d: I'm in now, if you are still up15:30
jatsrtsince this one came from "create_new_domain" seems like it would be virtualization being enabled or not15:30
*** dendro-afk is now known as dendrobates15:30
*** dolphm has quit IRC15:33
*** guigui has quit IRC15:37
*** msinhore has joined #openstack15:37
*** dolphm has joined #openstack15:38
*** CatKiller has joined #openstack15:39
*** Cyns has joined #openstack15:43
*** aliguori has joined #openstack15:43
*** MarcMorata has joined #openstack15:45
*** jj0hns0n has quit IRC15:46
*** dolphm has quit IRC15:51
*** obino has joined #openstack15:52
*** imsplitbit has joined #openstack15:56
*** maplebed|afk has quit IRC15:58
*** javiF has joined #openstack15:59
*** stewart has joined #openstack16:00
*** negronjl has quit IRC16:00
*** negronjl has joined #openstack16:00
*** jtanner has quit IRC16:02
*** mrrk has joined #openstack16:03
*** joar has quit IRC16:08
*** jtanner has joined #openstack16:12
*** herve06 has quit IRC16:12
*** dirkx has quit IRC16:14
*** maplebed has joined #openstack16:16
*** mrrk has quit IRC16:19
*** javiF has quit IRC16:20
*** galstrom has joined #openstack16:21
*** huslage has quit IRC16:22
*** cole has joined #openstack16:25
jatsrtOK, so it seems almost everything I try to do with the nova command line is getting 501 errors, anyone else experience this?16:27
*** Nagaraju has joined #openstack16:27
Nagarajuhi16:27
jatsrtis it a possible version mismatch, though both are current head16:27
kbringardby nova command line, do you mean nova-manage?16:28
kbringardthe openstack tools?16:28
kbringardthe euca2ools?16:28
Nagarajuhow about nova scheduler  framework16:28
Nagarajucan we write our own nova scheduler by inheriting or using scheduler framework16:28
jatsrtno I mean "nova" from "python-novaclient"16:28
jatsrtnova list16:28
jatsrtok16:29
kbringardah, OK16:29
jatsrtnova boot16:29
jatsrtnot ok16:29
kbringardnova list16:29
kbringard'x-server-management-url'16:29
kbringardthat's what mine is doing16:29
jatsrtI had that too, it was pointing to 8773 and not 877416:29
viraptorjatsrt: logs of nova-api should tell you what's failing (provided you actually get to the right server)16:29
WormMankbringard: I've seen that when using the wrong URL16:29
kbringardah, I don't use the openstack api as much as the ec216:30
jatsrtviraptor: getting generic 501, not implemented, no errors16:30
*** primeminsterp has quit IRC16:30
kbringarddo I need to recreate my rc file?16:30
jatsrtis it just the case that it isn't actually working yet16:30
* kbringard thinks16:30
jatsrtkbringard: no it's an environment variable16:30
jatsrtcheck for your NOVA_ exports16:31
jatsrtshould be in your novarc16:31
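The exports in question typically look something like the following (values are placeholders; the one that matters for python-novaclient is NOVA_URL, which should point at the OpenStack API on 8774, not the EC2 API on 8773):

    export NOVA_USERNAME=myuser
    export NOVA_API_KEY=secret-key-from-the-project-zip
    export NOVA_PROJECT_ID=myproject
    export NOVA_URL=http://172.16.0.1:8774/v1.1/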
*** lborda_ has quit IRC16:31
kbringardright, I meant, I wonder if I'm getting the wrong url because the endpoint changed and I hadn't regenned new rc files16:31
*** lborda has joined #openstack16:31
kbringardbut I regrabbed the zip file and the URL is the same, and I'm getting the same thing now16:32
kbringardno error in the api log though, weird16:32
*** undertree has left #openstack16:32
*** marrusl has quit IRC16:32
WormManI usually just try random endpoints until it works :)16:32
kbringardhaha, yea, I was going to use its internal IP and see if that fixes it16:32
WormManor run lsof on the control node to see where it's listening16:32
*** JStoker has quit IRC16:33
*** Nagaraju has quit IRC16:33
kbringardyea, I checked netstat and it's on 0.0.0.0:877416:33
kbringardnova-api   1864     nova    5u  IPv4 62461843      0t0  TCP *:8774 (LISTEN)16:33
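For reference, the checks that produce output like the above, run on the node that should be serving the API:

    sudo netstat -lntp | grep 8774            # is anything listening on the OSAPI port?
    sudo lsof -nP -iTCP:8774 -sTCP:LISTEN     # which process owns that listener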
jatsrtmaybe that is my problem, should there be something "more" in the nova.conf16:34
*** JStoker has joined #openstack16:34
*** JStoker has quit IRC16:34
*** JStoker has joined #openstack16:34
*** marrusl has joined #openstack16:34
WormMankbringard: try connecting with a web client and see what you get?(curl/wget)16:34
kbringardyea, I was just about to do that16:35
WormManwhen I hit my NOVA_URL with curl I get an unauthorized message16:35
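Hitting the endpoint directly is a quick sanity check: an unauthorized response means the API is answering, while a connection error or a proxy/redirect page points at a URL or middleware problem (the address below is a placeholder):

    curl -i http://172.16.0.1:8774/v1.1/
    curl -i "$NOVA_URL"                       # or test the exact value from your novarc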
jatsrtthese are the defaults:16:35
jatsrtDEFINE_string('osapi_host', '$my_ip', 'ip of api server')16:35
jatsrtDEFINE_string('osapi_scheme', 'http', 'prefix for openstack')16:35
jatsrtDEFINE_integer('osapi_port', 8774, 'OpenStack API port')16:35
jatsrtcould try to explicitly set them16:35
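Setting those explicitly in /etc/nova/nova.conf would look something like this (the address is a placeholder for the API node's reachable IP):

    --osapi_host=172.16.0.1
    --osapi_scheme=http
    --osapi_port=8774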
kbringardoh, snap, mine looks to be a keystone thing16:35
*** obino has quit IRC16:35
kbringardThe resource must be accessed through a proxy located at http://127.0.0.1:5001;16:36
kbringardbut I don't have the keystone daemon up atm16:36
kbringardthat would certainly be the issue :-)16:36
*** irahgel has left #openstack16:37
*** netmarkjp has left #openstack16:37
*** netmarkjp has joined #openstack16:37
*** jfluhmann_ has joined #openstack16:37
*** jc_smith has quit IRC16:38
*** mwhooker has quit IRC16:38
*** jfluhmann has quit IRC16:38
*** CloudAche84 has quit IRC16:39
*** obino has joined #openstack16:39
*** murkk has quit IRC16:40
*** RMS-ict has quit IRC16:40
*** Ephur has joined #openstack16:41
*** heckj has joined #openstack16:41
jatsrtI think I'm going to chalk up the behavior of nova client to "not done yet"16:42
*** vishnu has quit IRC16:44
kbringardseems reasonable to me16:46
*** jdurgin has joined #openstack16:47
*** dendrobates is now known as dendro-afk16:50
*** pguth66 has joined #openstack16:50
*** clauden has joined #openstack16:54
*** primeministerp has joined #openstack16:57
*** dirkx has joined #openstack16:57
*** cole has quit IRC16:57
*** primeministerp has quit IRC16:58
*** primeministerp has joined #openstack16:59
*** mrrk has joined #openstack16:59
*** mwhooker has joined #openstack17:04
*** jc_smith has joined #openstack17:07
*** kashyap has joined #openstack17:07
*** dendro-afk is now known as dendrobates17:09
*** jedi4ever has quit IRC17:10
*** rchavik has quit IRC17:15
*** dirkx has quit IRC17:15
*** imsplitbit has quit IRC17:17
*** imsplitbit has joined #openstack17:18
*** RickB17 has joined #openstack17:23
WormManooo, this looks fun, transparent_hugepages... too bad my Ubuntu kernel doesn't have it enabled.17:25
RickB17does storageblaze run SWIFT? or do they have their own platform?17:25
*** adjohn has joined #openstack17:26
*** jtanner has quit IRC17:32
*** kashyap_ has joined #openstack17:39
*** kashyap has quit IRC17:40
*** dirkx has joined #openstack17:44
*** littleidea has joined #openstack17:49
*** daysmen has quit IRC17:49
*** zigo has quit IRC17:50
*** mnaser has joined #openstack17:59
*** adjohn has quit IRC17:59
*** adjohn has joined #openstack17:59
*** dprince has quit IRC18:00
*** tomh_ has quit IRC18:00
*** cloudvirt has quit IRC18:01
*** alandman has quit IRC18:02
creihtRickB17: you mean backblaze?  If so, they have their own software18:03
*** dirkx has quit IRC18:05
*** aliguori has quit IRC18:05
*** MarcMorata has quit IRC18:07
*** dirkx has joined #openstack18:07
*** hggdh has quit IRC18:10
*** joearnold has joined #openstack18:11
*** jtanner has joined #openstack18:12
*** adiantum has joined #openstack18:12
*** hggdh has joined #openstack18:13
*** stewart has quit IRC18:13
*** huslage has joined #openstack18:15
adambergsteinjatsrt: kbringard: sorry i was away from the computer for a little while18:15
*** nickethier has quit IRC18:15
adambergsteini am fairly certain the machine is set up for virtualization18:15
*** dirkx has quit IRC18:16
adambergsteini ran some of the checks before i installed KVM18:16
*** huslage has quit IRC18:17
*** iRTermite has quit IRC18:17
*** huslage has joined #openstack18:17
*** Cyns has quit IRC18:17
*** huslage has quit IRC18:19
*** iRTermite has joined #openstack18:20
adambergsteinkbringard: i have made it a few more steps since yesterday, but still not there18:20
*** ejat has joined #openstack18:20
*** ejat has joined #openstack18:20
*** dolphm has joined #openstack18:21
adambergsteini tried flatdhcp but it made my nova-network go bad, so i reverted back to flat network18:21
*** huslage has joined #openstack18:21
*** aliguori has joined #openstack18:22
kbringardhmmm, I know the least about flat, so I'm probably not going to be much help18:23
*** GeoDud has joined #openstack18:26
adambergsteinkbringard: did you see the issue i posted earlier? i don't think it was related to flat18:27
kbringardoh, sorry, I didn't18:27
*** huslage has quit IRC18:27
kbringardoh wait18:27
kbringardthe qemu thing18:27
adambergsteinhttp://paste.openstack.org/show/2042/18:27
adambergsteinyes18:27
*** huslage has joined #openstack18:28
adambergsteinkbringard: i didn't find a whole lot of info with that error18:29
*** huslage has quit IRC18:29
kbringardI've seen that error for a lot of things… let me think for a moment18:29
adambergsteinok18:29
*** huslage has joined #openstack18:29
*** cole has joined #openstack18:30
jatsrtadambergstein: do you have /dev/kvm ?18:34
adambergsteinlet me check18:35
adambergsteinyes18:35
*** maplebed has quit IRC18:35
jatsrtok, single machine or multiple machines18:36
RickB17creiht: yup, thanks.18:36
jatsrtalso, assuming nothing is running, clear out /var/lib/nova/instances18:36
adambergsteinsingle machine18:36
adambergsteinok..18:36
adambergsteinthere are three directories in there18:37
adambergstein_base  instance-00000010  instance-0000001118:37
*** huslage has quit IRC18:37
adambergsteinclear them all out?18:37
*** huslage has joined #openstack18:37
jatsrtare you actually running any instances18:37
adambergsteini have attempted to18:38
adambergsteini believe all are 'shutdown' right now because of the issue18:38
jatsrtif not then yes clear it all out, that will recreate them, watch all your logs for errors when you run again18:38
adambergsteinhttp://paste.openstack.org/show/2056/18:38
jatsrtterminate them all18:39
jatsrtthen clear out the dirs18:39
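Spelled out, that cleanup looks roughly like this (instance IDs are illustrative; take the real ones from euca-describe-instances):

    euca-describe-instances                              # note the i-... IDs still registered
    euca-terminate-instances i-00000010 i-00000011
    sudo rm -rf /var/lib/nova/instances/instance-*       # per-instance directories
    sudo rm -rf /var/lib/nova/instances/_base            # cached base images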
adambergsteinok18:39
*** maplebed_ has joined #openstack18:39
adambergsteinthe only thing left now is '_base'18:40
*** FallenPegasus has joined #openstack18:40
*** stewart has joined #openstack18:40
adambergsteindo you want that removed too?18:40
*** darraghb has quit IRC18:40
*** FallenPegasus is now known as MarkAtwood18:40
jatsrtyeah, you can clear that too18:41
*** Ephur has quit IRC18:41
jatsrtnot that I expect this to fix anything, just make sure we are starting clean :-)18:41
adambergsteinok18:41
adambergsteincool deal18:41
adambergsteini am a total noob, so forgive me18:41
*** huslage has quit IRC18:41
jatsrtnot a problem18:42
*** ejat has quit IRC18:42
adambergsteinok its all clear18:42
*** iRTermite has quit IRC18:42
*** npmapn has quit IRC18:42
*** npmapn has joined #openstack18:43
*** iRTermite has joined #openstack18:45
adambergsteinjatsrt: should i try pushing another instance?18:46
jatsrtwhat are you using for your image, stock ubuntu cloud image?18:46
adambergsteinor should we fix something first?18:46
adambergsteinlet me check18:47
jatsrteuca-describe-images18:47
*** jedi4ever has joined #openstack18:47
adambergsteinEasyCrawler2/lucid-server-uec-amd64.img.manifest.xml18:47
adambergsteinyea18:47
jatsrtok, and you have 64 bit machine?18:47
adambergsteini have run that a whole bunch :) got it down now18:47
adambergsteinyes18:47
jatsrtok, run it and watch all of your logs18:47
jatsrtsee if you get the same exception18:47
adambergsteinok18:47
adambergsteineuca-run-instance?18:48
adambergsteinright?18:48
jatsrtyep18:48
jatsrt-t m1.tiny -z nova -k yourkey ami-000000??18:48
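Filled in, the test run plus log watching might look like the following (key name and ami ID are placeholders; take the ami ID from euca-describe-images):

    euca-run-instances -t m1.tiny -z nova -k mykey ami-00000003
    # in another terminal, follow the logs for the traceback
    sudo tail -f /var/log/nova/nova-compute.log /var/log/nova/nova-network.log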
adambergsteinyea18:48
adambergsteincheck nova-compute log?18:49
jatsrtyes18:49
adambergsteinsame error18:49
kbringardyou may be missing some dependency somewhere… did you install from packages on Ubuntu?18:49
adambergsteinhttp://paste.openstack.org/show/2057/18:49
adambergsteini did, yes18:50
*** argv0 has joined #openstack18:50
adambergsteini also set up KVM from packages18:50
*** adjohn has quit IRC18:51
jatsrtso, I believe when I saw this it was the instances not getting made right, check the _base folder and the instance folder18:51
jatsrtsee what is in them18:51
jatsrtare you using glance or just objectstore18:51
kbringardjatsrt: yes18:52
jatsrtalso is the instances directory local, nfs, etc?18:52
kbringardthat ^^18:52
kbringardthe last time I saw this it was because the cached image on the compute node was 0 bytes18:52
jatsrtdid you see any errors in the glance log18:52
jatsrtkbringard: right18:52
kbringardgood catch dude18:53
jatsrtwe just cleared out the cache and it did it again, so something is not copying right or it is 0 bytes in glance18:53
jatsrtcan check that in /var/lib/glance18:53
kbringardsorry, I have much else going on so I'm only sort of paying attention :-/18:53
adambergsteini am using glance18:53
jatsrtmight not have registered properly18:53
*** HouseAway is now known as DrHouseMD18:53
jatsrtcan you paste the euca-describe-images for me18:53
adambergsteinyes18:53
jatsrtalso ls -lah of /var/lib/glance18:53
*** galstrom is now known as jshepher18:54
adambergsteinI'm getting it right now18:54
adambergsteinhttp://paste.openstack.org/show/2058/18:54
*** jshepher has joined #openstack18:54
jatsrtsorry ls the images directory18:55
adambergsteinok18:55
*** jshepher has left #openstack18:55
adambergsteinhttp://paste.openstack.org/show/2059/18:55
jatsrtok, your images are 0 bytes18:56
jatsrtthat's the problem18:56
jatsrtyour uec-publish-tarball did not work right18:56
adambergsteinhmmmm18:56
adambergsteinok18:56
adambergsteinwant me to clear them and try new ones?18:56
adambergsteini don't get any errors when i run that18:56
*** bcwaldon has quit IRC18:57
jatsrtI believe I had that issue at some point, not sure why though18:57
jatsrtyou can just try it again into a new bucket18:57
adambergsteini will do a euca-deregister18:57
adambergsteinclear out some18:57
jatsrtcan you paste your nova.conf too18:58
adambergsteinyep18:58
adambergsteinhttp://paste.openstack.org/show/2060/18:58
*** npmapn has quit IRC18:59
adambergsteinimages reregistered18:59
adambergsteincan you step me through this from scratch to verify i am doing this right?18:59
adambergsteini was following these instructions previously: http://wiki.openstack.org/RunningNova18:59
*** npmapn has joined #openstack19:00
*** johnmark has joined #openstack19:00
*** jakedahn has quit IRC19:01
jatsrtso what point are you at?19:01
*** jakedahn has joined #openstack19:01
adambergsteini have all images deregistered19:01
jatsrtdid you rerun uec-publish-tarball?19:01
adambergsteinno19:01
*** med_out is now known as med19:01
adambergsteinwant me to do that?19:01
jatsrtone sec19:02
adambergsteinshould i create a new bucket?19:02
adambergsteinok19:02
jatsrtso if you look at /var/lib/nova/buckets/<bucket name>19:02
jatsrtdo you see a bunch of files of non 0 size19:02
*** sugar_punkin has joined #openstack19:03
adambergsteinlet me look19:03
*** alfred_mancx has joined #openstack19:03
*** sugar_punkin has quit IRC19:04
adambergsteinjatsrt: http://paste.openstack.org/show/2062/19:04
*** sugar_punkin has joined #openstack19:04
jatsrtlooks ok there, you could just try to reregister the image19:04
adambergsteinok, uec-publish-tarball?19:04
jatsrteuca-register EasyCrawler2/lucid-server-uec-amd64-vmlinuz-virtual.manifest.xml19:05
jatsrtfirst to register the kernel19:05
jatsrtsee what that generates in the glance/images dir19:05
*** rnirmal has quit IRC19:05
adambergsteinUnknownError: An unknown error has occurred. Please try your request again.19:06
jatsrthmmm19:06
*** imsplitbit has quit IRC19:06
adambergsteinill check the log19:06
jatsrthmm19:07
*** cp16net_ has joined #openstack19:07
jatsrtmake sure you source your novarc19:07
adambergsteinok19:07
jatsrtand you could just use uec-publish-tarball to a new bucket19:07
jatsrttrying to save you some space though :-)19:07
kbringardyou can try my glance-uploader script19:07
kbringardhttps://github.com/kevinbringard/OpenStack-tools19:08
kbringardthere is a bash script which relies on the glance-upload stuff that comes with glance19:08
*** jakedahn_ has joined #openstack19:08
adambergsteinoh, cool19:08
kbringardand there is a ruby script, that uses ogle19:08
kbringardwhich is a gem that interfaces directly with the glance API19:08
adambergsteini created a new bucket19:08
jatsrtok euca-describe-images quickly19:09
jatsrtif you can19:09
*** brd_from_italy has joined #openstack19:09
*** sugar_punkin has quit IRC19:09
adambergsteinkbringard: ill keep working with this for now19:09
adambergsteinbut i will try it later19:09
kbringardokie19:09
kbringardno worries19:09
kbringardjust trying to help :-)19:09
*** imsplitbit has joined #openstack19:09
adambergsteini appreciate it, you guys rock19:09
jatsrtok thought19:10
adambergsteinjatsrt: i have not ran uec-publish yet19:10
adambergsteinshould i try that again?19:10
jatsrtyour bucket is all owned by root?19:10
adambergsteinlet me look19:10
adambergsteini ran sudo when making the bucket19:10
adambergsteinso i think so19:10
jatsrtyes run uec-publish-tarball <blah>.tar.gz newbucket19:10
adambergsteinok19:10
jatsrtdon't pre make the bucket19:10
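For completeness, the publish step being suggested, with the tarball and bucket names as used above (the arch argument assumes a 64-bit image):

    uec-publish-tarball lucid-server-uec-amd64.tar.gz newbucket x86_64
    ls -lah /var/lib/nova/buckets/newbucket/     # the uploaded parts should all be non-zero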
*** cp16net has quit IRC19:10
adambergsteinsudo or not?19:10
jatsrtnope19:11
adambergsteinok19:11
adambergsteinits going...19:11
*** jakedahn has quit IRC19:11
*** jakedahn_ is now known as jakedahn19:11
jatsrtdo an ls on the bucket directory when it's done19:11
*** cp16net_ has quit IRC19:11
adambergsteinok19:11
jatsrtwondering if it was just nova not being able to read the file contents19:12
jatsrtshouldn't be though19:12
jatsrtalso as soon as the publish finishes19:12
adambergsteinhttp://paste.openstack.org/show/2063/19:12
*** MarcMorata has joined #openstack19:12
jatsrtrun euca-describe-images19:12
adambergsteinhttp://paste.openstack.org/show/2064/19:13
jatsrtI'm surprised they are all owned by root19:13
jatsrtok ls the glance images dir19:13
jatsrtlook for 12 and 1319:13
jatsrtshould be > 019:13
jatsrtsorry 11 and 1219:13
adambergsteinhttp://paste.openstack.org/show/2065/19:14
adambergsteinnot greater than 019:14
jatsrtstill 019:14
adambergstein:(19:14
*** cp16net has joined #openstack19:14
adambergsteinyes19:14
jatsrtmight just be a permissions issue, not too sure why your buckets are owned by root19:14
jatsrtthough it shouldn't make a difference19:15
jatsrtnothing in the glance log?19:15
adambergsteinis there any specific configuration needed for glance?19:15
adambergsteinis it in /var/log/glance?19:15
jatsrtso what happens is glance reads the bucket and assembles the image in the /var/lib/glance/images19:15
adambergsteinapi or registry?19:15
jatsrtthat is what gets copied to the host for KVM to manipulate19:15
adambergsteinoh ok19:15
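Given that flow, the two places worth comparing after a publish are the uploaded parts and the assembled images (paths per the packaged defaults; <bucket> is whatever bucket name was used):

    ls -lah /var/lib/nova/buckets/<bucket>/      # the raw parts objectstore holds
    ls -lah /var/lib/glance/images/              # the assembled images glance serves; should not be 0 bytes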
jatsrtnot too sure19:15
adambergsteinill check both19:16
jatsrtshould look something like:19:16
jatsrthttp://paste.openstack.org/show/2067/19:16
*** bcwaldon has joined #openstack19:17
adambergsteinhmmm19:17
adambergsteinweird.19:17
jatsrtyou aren't out of FS space right?19:17
adambergsteinglance api log19:17
adambergsteinhttp://paste.openstack.org/show/2068/19:17
adambergsteinno i have tons19:17
adambergsteinprob over 200 gigs19:17
jatsrtso, maybe just to be clean....19:19
jatsrtapt-get purge glance19:19
jatsrtrm -rf /var/lib/glance19:19
jatsrtrm -rf /var/log/glance19:19
jatsrtapt-get install glance19:19
adambergsteinok, done19:20
jatsrtdid you change any /etc/glance/ files19:20
adambergsteinno19:20
jatsrt hmmm, should work19:20
adambergsteinok, want me to try again?19:21
adambergsteinsince i redid glance?19:21
jatsrtyeah, register the tar ball again19:21
jatsrtsee what you get19:21
*** adiantum has quit IRC19:21
jatsrttail /var/log/nova/nova-objectstore.log too19:21
adambergsteinok19:21
adambergsteinits running19:21
adambergsteinhttp://paste.openstack.org/show/2070/19:24
adambergsteinwant me to check glance images?19:24
jatsrtyep19:24
*** adjohn has joined #openstack19:24
adambergsteinstill 019:24
adambergstein :919:24
adambergstein:(19:24
jatsrtwell crap19:24
*** mablebed__ has joined #openstack19:24
adambergsteini concur19:24
jatsrtno other errors anywhere?19:25
adambergsteinno other configuration needed for glance/compute19:25
jatsrtsudo restart nova-objectstore19:25
jatsrt?19:25
jatsrtNo I'm reaching19:25
adambergsteincompute19:26
adambergsteinhttp://paste.openstack.org/show/2072/19:26
*** Ephur has joined #openstack19:26
jatsrtare all of your nova services running19:26
jatsrtyou can manually restart them, or just reboot19:27
jatsrtlooks temporary though19:27
adambergsteinhttp://paste.openstack.org/show/2074/19:27
adambergsteinwant me to uec-publish again?19:27
jatsrtnot sure it will help, but yes19:27
jatsrtnot too sure what would keep glance from getting the data from the bucket19:28
adambergsteinfailed to upload because it was there19:28
adambergsteinlet me euca-rereg19:28
*** maplebed_ has quit IRC19:28
adambergsteinand ill try again19:28
jatsrtI think I had this and it was a permissions issue, but I had an error19:28
adambergsteinwant me to check objectstore log?19:28
*** argonius_afk has quit IRC19:28
jatsrtlook for any clue you can19:29
*** mablebed__ is now known as maplebed19:29
*** daysmen has joined #openstack19:30
adambergsteinI'm trying uec-publish again19:30
adambergsteinand ill dig around the logs19:30
adambergsteinimages dir still 019:30
jatsrtgot me, I'm stumped19:30
adambergsteinnothing in object store19:31
*** dendrobates is now known as dendro-afk19:31
*** nphase_ has joined #openstack19:31
*** nphase_ has joined #openstack19:31
adambergsteinnetwork log looks clean19:31
*** nphase has quit IRC19:32
*** nphase_ is now known as nphase19:32
adambergsteinhmm i dont know19:32
adambergsteinjatsrt: i can blow this away and start over, but id like to make sure i build it right19:32
adambergsteinwould you be willing to help me through it?19:33
jatsrtwell, my day is winding up soon, but yes and there are others here19:35
jatsrtanother option is to try to clear pieces out19:35
jatsrtopenstack is pretty good with not leaving garbage behind19:36
jatsrtso maybe purge out nova-objectstore19:36
jatsrtrm the dirs19:36
*** Ephur has quit IRC19:36
jatsrtthen reinstall and try again19:36
*** Ephur has joined #openstack19:39
adambergsteinok19:39
adambergsteini will try that19:40
adambergsteinthanks again :)19:40
*** mitchless has joined #openstack19:42
*** Ephur has quit IRC19:43
*** alfred_mancx has quit IRC19:44
*** daysmen has quit IRC19:45
*** hggdh has quit IRC19:49
*** hggdh_ has joined #openstack19:49
*** dendro-afk is now known as dendrobates19:51
*** hggdh_ is now known as hggdh19:52
*** daysmen has joined #openstack19:53
*** imsplitbit has quit IRC19:55
*** bcwaldon has quit IRC19:55
*** brd_from_italy has quit IRC19:56
*** mgius has quit IRC19:56
*** bcwaldon has joined #openstack19:56
*** imsplitbit has joined #openstack19:57
*** daysmen has quit IRC19:57
*** pguth66 has quit IRC19:57
*** rnirmal has joined #openstack19:58
*** cp16net_ has joined #openstack20:00
*** bsza has quit IRC20:01
*** HowardRoark has quit IRC20:01
*** cp16net has quit IRC20:04
*** cp16net_ is now known as cp16net20:04
*** HowardRoark has joined #openstack20:05
*** ejat has joined #openstack20:15
*** jatsrt has left #openstack20:15
*** cereal_bars has quit IRC20:18
*** stewart has quit IRC20:18
*** imsplitbit has quit IRC20:19
*** imsplitbit has joined #openstack20:20
*** nphase has quit IRC20:20
*** nphase has joined #openstack20:21
*** nphase has joined #openstack20:21
*** adjohn has quit IRC20:23
*** jedi4ever has quit IRC20:27
*** mdomsch has joined #openstack20:27
*** stewart has joined #openstack20:30
*** ctennis has quit IRC20:32
*** mgoldmann has joined #openstack20:39
*** HowardRoark has quit IRC20:43
*** npmapn has quit IRC20:44
*** HowardRoark has joined #openstack20:46
*** shaon has joined #openstack20:47
*** dgags has quit IRC20:47
shaonI am trying to register an image, but it gets stuck every time when untarring the test.img.manifest.xml20:49
*** clauden has quit IRC20:50
*** GeoDud has quit IRC20:51
*** GeoDud has joined #openstack20:54
*** ejat has quit IRC20:56
*** cp16net has quit IRC21:04
*** adjohn has joined #openstack21:05
*** ctennis has joined #openstack21:08
*** dolphm has quit IRC21:08
*** mgoldmann has quit IRC21:09
*** rnirmal has quit IRC21:09
*** deshantm_laptop has quit IRC21:09
*** dendrobates is now known as dendro-afk21:10
*** daysmen has joined #openstack21:13
*** ejat has joined #openstack21:14
*** marrusl has quit IRC21:16
*** javiF has joined #openstack21:16
*** adjohn has quit IRC21:17
*** bcwaldon has quit IRC21:18
*** adjohn has joined #openstack21:24
*** liemmn has joined #openstack21:24
*** KnuckleSangwich has quit IRC21:24
*** stewart has quit IRC21:26
*** stewart has joined #openstack21:27
*** _adjohn has joined #openstack21:29
*** adjohn has quit IRC21:29
*** _adjohn is now known as adjohn21:29
*** daysmen has quit IRC21:30
*** daysmen has joined #openstack21:32
*** MarcMorata has quit IRC21:33
*** imsplitbit has quit IRC21:34
*** sloop has joined #openstack21:45
*** jj0hns0n has joined #openstack21:47
*** jtanner has quit IRC21:48
*** jj0hns0n_ has joined #openstack21:49
*** jj0hns0n has quit IRC21:49
*** jj0hns0n_ is now known as jj0hns0n21:49
*** pothos_ has joined #openstack21:50
*** pothos has quit IRC21:52
*** pothos_ is now known as pothos21:53
argv0what's the best single-machine install script for ubuntu 11?21:53
argv0(for nova)21:54
*** msinhore has quit IRC21:59
*** msinhore1 has joined #openstack21:59
*** mrrk has quit IRC22:03
*** jfluhmann__ has joined #openstack22:05
*** LiamMac has quit IRC22:05
*** markvoelker has quit IRC22:07
*** daysmen has quit IRC22:07
*** jfluhmann_ has quit IRC22:07
*** adjohn has quit IRC22:12
*** mdomsch has quit IRC22:14
*** ldlework has quit IRC22:14
*** adjohn has joined #openstack22:17
*** kbringard has quit IRC22:20
*** jj0hns0n has quit IRC22:21
*** msivanes1 has quit IRC22:22
*** ncode has quit IRC22:23
heckjargv0: check out the cloudbuilders or nebula's "auto.sh" script -22:23
heckjargv0: https://github.com/4P/deployscripts or https://github.com/cloudbuilders/deploy.sh22:23
argv0thanks!22:24
*** HowardRoark has quit IRC22:26
*** MarkAtwood has quit IRC22:29
*** cole has quit IRC22:30
*** shaon has quit IRC22:31
*** Shentonfreude has quit IRC22:35
*** miclorb_ has joined #openstack22:36
*** mitchless has quit IRC22:38
*** jkoelker has quit IRC22:38
*** mattray has quit IRC22:41
*** aliguori has quit IRC22:44
*** netmarkjp has left #openstack22:51
*** amccabe has quit IRC23:03
*** FallenPegasus has joined #openstack23:07
*** FallenPegasus is now known as MarkAtwood23:08
*** mfer has quit IRC23:09
*** ckmason has joined #openstack23:09
*** mattray has joined #openstack23:10
*** mattray has quit IRC23:10
*** HowardRoark has joined #openstack23:10
*** javiF has quit IRC23:12
*** huslage has joined #openstack23:13
*** MarkAtwood has quit IRC23:13
*** FallenPegasus has joined #openstack23:13
*** ckmason has quit IRC23:17
*** joearnold has quit IRC23:19
*** nphase has quit IRC23:21
*** littleidea has quit IRC23:21
*** nphase has joined #openstack23:21
*** nphase has joined #openstack23:21
*** RickB17 has quit IRC23:22
*** FallenPegasus has quit IRC23:23
*** CatKiller has quit IRC23:26
*** CatKiller has joined #openstack23:26
*** ncode has joined #openstack23:28
*** RickB17 has joined #openstack23:33
*** RobertLaptop has quit IRC23:35
*** RobertLaptop has joined #openstack23:35
*** marrusl has joined #openstack23:36
*** erichagedorn has joined #openstack23:39
*** FallenPegasus has joined #openstack23:40
*** jeffjapan has joined #openstack23:44
*** martin has quit IRC23:45
*** ejat has quit IRC23:47
*** martin has joined #openstack23:51
*** mfer has joined #openstack23:52
*** mrrk has joined #openstack23:55
*** adjohn has quit IRC23:59
