Friday, 2012-05-11

*** lloydde has quit IRC00:00
*** rbasak has quit IRC00:06
*** pixelbeat has quit IRC00:07
*** epim has quit IRC00:13
*** mcclurmc_ has quit IRC00:14
*** torgomatic has quit IRC00:15
*** bsza has joined #openstack-dev00:16
*** jasonz has left #openstack-dev00:17
*** danwent has quit IRC00:18
*** utlemming has joined #openstack-dev00:18
*** danwent has joined #openstack-dev00:22
*** andresambrois has quit IRC00:26
*** spiffxp has quit IRC00:26
*** mgius has quit IRC00:26
*** torgomatic has joined #openstack-dev00:28
*** mnewby_ has joined #openstack-dev00:40
*** mnewby has quit IRC00:43
*** issackelly has joined #openstack-dev00:45
issackellyIt seems weird the way this is presented http://dl.dropbox.com/u/70585674/Screen%20Shot%202012-05-10%20at%204.34.13%20PM.png  It says five projects, highlights three, and then lists two as "new projects"00:46
*** edygarcia has quit IRC00:49
*** maplebed has quit IRC00:52
*** vikram1 has joined #openstack-dev00:52
*** rods has quit IRC00:53
*** ohnoimdead has quit IRC00:53
*** danwent has quit IRC00:53
*** danwent has joined #openstack-dev00:54
*** jemartin has joined #openstack-dev00:58
*** danwent has quit IRC01:00
*** asdfasdf has quit IRC01:00
*** danwent has joined #openstack-dev01:02
*** koolhead17 has quit IRC01:03
*** dprince has joined #openstack-dev01:04
*** danwent_ has joined #openstack-dev01:04
*** danwent has quit IRC01:08
*** danwent_ has quit IRC01:09
*** jdurgin has quit IRC01:18
*** jemartin has quit IRC01:21
*** jgriffith has quit IRC01:22
*** utlemming has quit IRC01:22
*** shang has quit IRC01:23
*** bsza has quit IRC01:27
*** adjohn has joined #openstack-dev01:29
*** littleidea has quit IRC01:30
*** torgomatic has quit IRC01:33
*** shang has joined #openstack-dev01:36
*** danwent has joined #openstack-dev01:36
*** johnpostlethwait has quit IRC01:39
*** Mandell has quit IRC01:40
*** jgriffith has joined #openstack-dev01:41
*** hub-cap has joined #openstack-dev01:53
*** novas0x2a|lapto1 has quit IRC01:55
*** roge has quit IRC01:56
*** hub_cap has quit IRC01:57
*** littleidea has joined #openstack-dev01:58
*** littleidea has left #openstack-dev02:00
*** rgoodwin has quit IRC02:00
*** Ryan_Lane has quit IRC02:09
*** vikram1 has left #openstack-dev02:12
*** jgriffith has quit IRC02:13
*** jgriffith has joined #openstack-dev02:28
*** dtroyer_zzz is now known as dtroyer02:28
*** tryggvil_ has quit IRC02:37
*** issackelly has quit IRC02:39
*** mnewby has joined #openstack-dev02:39
*** dprince has quit IRC02:43
*** johnpostlethwait has joined #openstack-dev02:55
*** harlowja has quit IRC02:58
*** jgriffith has quit IRC03:01
*** lloydde has joined #openstack-dev03:06
*** markvoelker has joined #openstack-dev03:08
*** markvoelker has left #openstack-dev03:08
*** dhellmann has joined #openstack-dev03:09
*** benner has quit IRC03:10
*** adjohn has quit IRC03:13
*** lloydde has quit IRC03:18
*** lloydde has joined #openstack-dev03:19
*** lloydde_ has joined #openstack-dev03:22
*** lloydde has quit IRC03:23
*** markmcclain has joined #openstack-dev03:29
*** benner has joined #openstack-dev03:29
*** mjfork has quit IRC03:30
*** utlemming has joined #openstack-dev03:40
*** Mandell has joined #openstack-dev03:42
*** utlemming has quit IRC03:46
*** littleidea has joined #openstack-dev03:57
*** troytoman-away is now known as troytoman04:03
*** sandywalsh has quit IRC04:04
*** johnpostlethwait has quit IRC04:06
*** koolhead17 has joined #openstack-dev04:10
*** ywu has quit IRC04:10
*** markmcclain has quit IRC04:14
*** anderstj has joined #openstack-dev04:20
*** markmcclain has joined #openstack-dev04:24
*** adjohn has joined #openstack-dev04:28
*** johnpostlethwait has joined #openstack-dev04:29
*** jgriffith has joined #openstack-dev04:33
*** Ryan_Lane has joined #openstack-dev04:47
*** mnewby has quit IRC05:02
*** mnewby has joined #openstack-dev05:03
*** sdague_ has joined #openstack-dev05:05
*** sdague has quit IRC05:08
*** openstack has joined #openstack-dev05:11
*** ahale has joined #openstack-dev05:12
*** mcclurmc_ has joined #openstack-dev05:12
*** seats_ has joined #openstack-dev05:12
*** jgriffith has quit IRC05:12
*** seats has quit IRC05:12
*** seats_ is now known as seats05:12
*** glenc has joined #openstack-dev05:13
*** dtroyer is now known as dtroyer_zzz05:17
*** bcwaldon has joined #openstack-dev05:18
*** johnpostlethwait has quit IRC05:21
*** jaypipes has joined #openstack-dev05:22
*** sacharya has quit IRC05:23
*** vincentricci has left #openstack-dev05:29
*** davidha has quit IRC05:30
*** davidha_who_took has joined #openstack-dev05:30
*** troytoman is now known as troytoman-away05:30
*** mrunge has quit IRC05:32
*** mrunge has joined #openstack-dev05:32
*** davidha_who_took is now known as davidha05:33
*** littleidea has quit IRC05:42
*** hattwick has quit IRC06:01
*** asalkeld has quit IRC06:06
eglynn__Mikal: mornin'/evenin' ;)06:20
eglynn__Mikal: have you any views on the usefulness of https://review.openstack.org/7302 for your image replication tool?06:20
*** pmezard has joined #openstack-dev06:22
mikaleglynn__: heya06:23
mikalI haven't looked yet to be honest, I've been off sick most of this week06:24
mikalI shall look now06:24
*** anderstj has quit IRC06:24
mikalWow, that's a lot of review comments06:24
mikalSo... awsome doesn't do integer ids at all, right?06:25
*** lloydde_ has quit IRC06:25
mikalI wonder if we should just drop this id thing entirely and hand uuids back to ec2 clients06:25
mikalI wonder what would break06:25
*** lloydde has joined #openstack-dev06:28
eglynn__Mikal: well, the illusion that the clients are using EC2 would break I guess ...06:29
mikalYeah, I wonder what assumptions tools make06:29
mikalI suspect Amazon handles this stuff by making you upload images once per region?06:29
eglynn__Mikal: yeah, I don't think the ami-* IDs are region-portable06:30
mikaleglynn__: your code looks reasonable to me...06:30
eglynn__cool06:31
mikalI think the other reviewers have fair points, but this isn't a simple problem.06:32
mikalBasically the ec2 api makes replication really hard to do properly06:32
*** lloydde has quit IRC06:32
mikalBut it was clear from the design summit session that people really want replication06:32
mikalI sure do.06:32
mikalI wonder if the awsome people have experienced any tool problems with just using uuids for image ids. I should ask.06:33
*** davidkranz has quit IRC06:34
eglynn__mikal: IIRC AWSOME just relies on the ID mapping maintained internally by nova, but it would be good to ask those folks what their thoughts are ...06:35
mikalI thought I saw it return UUIDs in the euca-describe-images output06:35
mikalI might be confused though06:35
*** davidkranz has joined #openstack-dev06:41
*** zaitcev has quit IRC06:46
*** mrunge has quit IRC06:46
*** mrunge has joined #openstack-dev06:49
mikaleglynn__: you still around?06:52
*** markvoelker has joined #openstack-dev06:53
*** shang has quit IRC06:54
eglynn__mikal: I'm just about to leave ... the ami-* style ID would be used in say the euca-run-instances tool to identify the image to boot off06:57
eglynn__mikal: not sure this usage could shift to the native glance UUID without breaking lots of scripts etc.06:57
*** Shrews has quit IRC06:58
*** Shrews has joined #openstack-dev06:58
*** mrunge has quit IRC06:58
*** mrunge has joined #openstack-dev06:58
eglynn__mikal: I gotta scoot off now, we can continue the conversation over email or later on IRC07:00
mikalOkie07:01
mikalI just confirmed, awsome just uses the uuid07:01
mikalNot sure what it breaks though07:01
*** heyho has quit IRC07:03
*** eglynn__ has quit IRC07:05
*** Vek has joined #openstack-dev07:08
*** Vek has quit IRC07:08
*** Vek has joined #openstack-dev07:08
*** reidrac has joined #openstack-dev07:11
*** mcclurmc_ has quit IRC07:13
*** mszilagyi has quit IRC07:15
*** mnewby has quit IRC07:17
*** Mandell has quit IRC07:20
*** mnot has joined #openstack-dev07:28
*** adjohn has quit IRC07:33
*** pixelbeat has joined #openstack-dev07:33
*** danwent has quit IRC07:38
*** Stackops-Jorge has joined #openstack-dev07:41
*** clayg has quit IRC07:42
*** clayg_ is now known as clayg07:42
*** eglynn__ has joined #openstack-dev07:42
*** clayg_ has joined #openstack-dev07:43
*** apevec has joined #openstack-dev07:46
*** eglynn__ has quit IRC07:47
*** apevec has quit IRC07:48
*** adjohn has joined #openstack-dev07:48
*** eglynn__ has joined #openstack-dev07:50
*** dachary has joined #openstack-dev07:55
*** hattwick has joined #openstack-dev07:58
*** mancdaz has joined #openstack-dev08:00
*** adjohn has quit IRC08:01
*** eglynn__ has quit IRC08:05
*** vincentricci has joined #openstack-dev08:15
*** adjohn has joined #openstack-dev08:24
*** eglynn__ has joined #openstack-dev08:30
*** apevec has joined #openstack-dev08:39
*** Ryan_Lane has quit IRC08:40
*** gael has joined #openstack-dev08:47
*** danpb has joined #openstack-dev08:51
eglynn__mikal: you still there? (sorry, I drop my kids to school)08:52
eglynn__mikal: about that glance UUID versus ami-* style IDs discussion earlier08:52
mikaleglynn__: np08:54
*** derekh has joined #openstack-dev08:55
eglynn__mikal: AFAIK DescribeInstances just returns the ami-* style ID and the image name, but not the glance UUID08:55
*** darraghb has joined #openstack-dev08:55
mikalI was using describe images08:55
* mikal takes a look at describe instances08:55
eglynn__mikal: sorry I meant DescribeImages08:55
mikalYeah, ok08:56
mikalThat's true for the ec2 api built into OS08:56
mikalHowever, awsome just returns the UUID08:56
mikalI've asked them if they have reports of this causing tool breakages or not08:56
eglynn__mikal: similarly I thought the OS impl of RunInstances allows either the ami-* style ID or image name, but not the glance UUID08:57
mikalThere has to be an amazon spec for what's allowed in that field08:57
mikalI should go find it08:57
eglynn__mikal: so I'm wondering how AWSOME just passes thru' the glance UUID08:58
mikalOh, I see your point08:58
*** anniec72 has joined #openstack-dev08:58
*** markmc has joined #openstack-dev08:58
* eglynn__ needs to take a closer look at the AWSOME code ...08:59
mikalOh, the code seems to return image['id'] as the id08:59
* mikal goes to find image's birthplace08:59
*** vincentricci has quit IRC09:00
mikalAhhh, it calls client.get_images() and then just uses the id for each image09:01
mikalI am being ordered to come to dinner, but will be back in a bit09:02
mikalI suspect that the id being returned by the OS API is a uuid?09:03
eglynn__mikal: cool, no rush, laters ...09:04
eglynn__mikal: and yep, the OS API returns the UUID, whereas the ami-* IDs are only relevant to EC2-stylee usage ...09:05
hugokuo2Hi all09:09
hugokuo2[Swift] Does anyone know where the hashes.pkl comes from?09:09
hugokuo2After an object has been rsynced to a remote node by the replicator09:10
hugokuo2I found that a hashes.pkl file has been created09:11
*** davidkranz has quit IRC09:11
hugokuo2I want to know the purpose of that pkl file.09:11
hugokuo2I tried to open it with pickle. The first value is the last 3 characters of the object hash09:12
hugokuo2but what's the meaning of the second value?09:12
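For anyone reading the log later: as far as I understand Swift's replicator, hashes.pkl in each partition directory is a pickled dict mapping each suffix directory (the last 3 hex characters of an object's hash path, which matches the first value hugokuo2 saw) to an MD5 digest computed over that suffix's contents; the replicator compares these digests to decide which suffixes need rsyncing. A self-contained sketch of writing and reading such a file (all values here are made up, and the digest recipe is a simplification of what Swift actually hashes):

```python
import hashlib
import os
import pickle
import tempfile

# Fake a partition's hashes.pkl: suffix -> md5 over that suffix's filenames.
# (Simplified; real Swift hashes the contents of each suffix directory.)
suffix = "a3f"  # last 3 hex chars of an object's hash path (made up)
filenames = ["1336722000.12345.data", "1336722100.54321.meta"]

digest = hashlib.md5()
for name in sorted(filenames):
    digest.update(name.encode())
hashes = {suffix: digest.hexdigest()}

path = os.path.join(tempfile.mkdtemp(), "hashes.pkl")
with open(path, "wb") as f:
    pickle.dump(hashes, f)

# Inspecting it the way hugokuo2 was trying to:
with open(path, "rb") as f:
    loaded = pickle.load(f)
for suf, md5hex in loaded.items():
    print(suf, md5hex)  # second value = per-suffix md5 digest
```

So the second value should be the per-suffix checksum the replicator uses to spot out-of-sync suffixes without walking every file.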
*** davidkranz has joined #openstack-dev09:26
*** armax has joined #openstack-dev09:27
*** eneabio has joined #openstack-dev09:35
eneabioI have a problem with scheduler, in particular least_cost.py...09:35
mikaleglynn__: yeah, so I think awsome is just using the glance uuid09:39
eglynn__mikal: yep, from the looks of the code for RunInstances, it needs a UUID to be passed, as it just passes thru' whatever it gets09:42
eglynn__mikal: so if it happened to receive an ami-* style ID (which would be normal usage for EC2 clients), the server boot would fail09:43
eglynn__mikal: (as nearly the first thing nova does is to check if the passed imageRef param is UUID-like and if not rejects it)09:44
eglynn__mikal: so seems to me AWSOME is broken in that respect09:44
eglynn__mikal: (well at least it breaks the normal EC2 client expectation about the type of ID to pass...)09:45
zykes-What kind of networking switches is optimal to use for Nova / Swift ?09:45
eglynn__mikal: backing up a bit ...09:46
mikaleglynn__: yeah, in a perfect world we would discover that uuids magically meet the spec somehow, and then we'd just forget about this integer mapping entirely09:47
eglynn__mikal: just chatting with markmc about this, his take was the key issue is just that the current style of ami-* ID generation is misleading coz it might give the wrong impression that IDs are region-portable09:47
eglynn__mikal: (as they kinda look and smell non-random, ami-00000001, ami-00000002 ... etc)09:47
mikalYeah, ok09:47
mikalAs in we expect users to have some sort of script which looks up the ami id for that region before launching an instance?09:48
mikalInstead of assuming it is the same in all regions?09:48
markmcright, since that's the way EC2 works09:48
eglynn__mikal: more that we make it absolutely clear that there should be no expectation of region-portability09:48
eglynn__mikal: there isn't currently, but maybe folks might assume there is coz the IDs look similar09:49
mikalThat would make replication easier, the replicator would just need to ensure an ec2 id is allocated, but not that its necessarily the same as that of the master region.09:49
eglynn__mikal: I was coming at this from the direction of establishing that portability in "most" cases (modulo collisions)09:49
mikalIt would be _nice_ to have portable ids, it just seems not possible09:50
eglynn__mikal: yep, perfect portability is not possible without a central authority (122 bits into 32 bits doesn't go unfortunately :( ...)09:51
*** armax has quit IRC09:51
mikalAgreed09:51
eglynn__mikal: if we could rewind back to when glance changed from integer image IDs to UUIDs (pre-Essex I think)09:52
eglynn__mikal: we could ensure for the Amazon image formats (ami, ari, aki) a reduced UUID space was allocated from09:53
eglynn__mikal: (i.e. instead of the full 122 bit randomness in the generated UUIDs, we could just have 32 bits of randomness for those image types)09:53
mikalI do note that the ec2 api spec just says "string" for that field09:54
mikalhttp://docs.amazonwebservices.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeImages.html09:54
eglynn__mikal: de facto String == 'a[mkr]i-[a-f0-9]{8}'09:54
mikalYeah, it comes down to what people hardcoded into their tools...09:55
eglynn__=> 32 bits of randomness09:55
mikalBut a uuid does meet the requirements of the spec as written09:55
eglynn__mikal: but we could allocate UUIDs from a "reduced" domain (just for the amzn images)09:56
eglynn__mikal: we being the glance registry in this case09:56
mikalOr we could just give up and assume people already have tools to handle this because of how amazon does it.09:56
eglynn__mikal: allocating the IDs from a reduced space would enable a perfect mapping going forward09:56
mikalI don't have a good feel either way09:56
mikaleglynn__: true09:57
eglynn__mikal: "going forward" == only applying to fresh images09:57
mikalYep09:57
eglynn__mikal: yeah, I'll run the reduced UUID space idea past bcwaldon and vish later on (or on gerrit)09:58
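The reduced-UUID-space idea above, sketched in code (purely illustrative; these helper names are mine and nothing like this existed in glance): confine the randomness of a generated image UUID to its low 32 bits, so an a[mkr]i-[a-f0-9]{8} style ID can be derived from, and inverted back to, the UUID without any mapping table.

```python
import random
import uuid

def reduced_uuid(rand32: int) -> uuid.UUID:
    """Hypothetical allocator: a version-4 UUID whose randomness is
    confined to the low 32 bits (version/variant bits are still set)."""
    return uuid.UUID(int=rand32 & 0xFFFFFFFF, version=4)

def to_ec2_id(u: uuid.UUID, prefix: str = "ami") -> str:
    """Derive the de facto a[mkr]i-[a-f0-9]{8} form from the low 32 bits."""
    return "%s-%08x" % (prefix, u.int & 0xFFFFFFFF)

image_id = reduced_uuid(random.getrandbits(32))
print(image_id, "<->", to_ec2_id(image_id))
```

This only makes the UUID-to-ami-* mapping invertible within one deployment; region-portable IDs would still need the same 32 bits allocated everywhere, i.e. the central authority eglynn rules out.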
mikalOkie09:58
mikalThanks for taking the time to think about this by the way09:58
mikalI really should cleanup my prototype code and put a review up09:58
eglynn__mikal: np ... otherwise I'll need to look at extending the native OS API to accept ami-* IDs as ImageRefs09:59
eglynn__mikal: yep, t'would be good to have a look at that replication code09:59
mikalIts nothing special10:01
mikalI would have done it this week except for being off sick10:01
mikalI'll try and make some time over the weekend to get it done10:01
*** tryggvil has joined #openstack-dev10:08
*** asalkeld has joined #openstack-dev10:12
*** tryggvil has quit IRC10:14
*** tryggvil has joined #openstack-dev10:16
*** berendt has joined #openstack-dev10:39
*** adjohn has quit IRC10:50
*** rkukura has joined #openstack-dev10:54
*** mjfork has joined #openstack-dev11:00
*** bsza has joined #openstack-dev11:01
*** dachary has quit IRC11:09
zykes-anyone from hp here ?11:09
*** markvoelker1 has joined #openstack-dev11:20
*** hub_cap has joined #openstack-dev11:23
*** hub_cap has quit IRC11:24
*** sandywalsh has joined #openstack-dev11:26
mikalI am confused about where the database for run_tests lives. It is angry about the schema version...11:41
*** sandywalsh has quit IRC11:43
*** GheRivero has quit IRC11:50
*** berendt has quit IRC11:55
*** tryggvil has quit IRC11:59
*** mrunge has quit IRC12:03
*** lts has joined #openstack-dev12:06
*** salgado has joined #openstack-dev12:07
*** salgado has joined #openstack-dev12:07
*** dolphm has joined #openstack-dev12:15
*** dachary has joined #openstack-dev12:17
*** alaski has joined #openstack-dev12:19
*** hub_cap has joined #openstack-dev12:38
*** tryggvil_ has joined #openstack-dev12:43
*** dprince has joined #openstack-dev12:45
*** edygarcia has joined #openstack-dev12:46
*** rohitk has joined #openstack-dev12:50
*** sandywalsh has joined #openstack-dev12:55
*** statik has joined #openstack-dev12:57
*** jaypipes has quit IRC12:59
*** sdague_ is now known as sdague13:05
*** sandywalsh_ has joined #openstack-dev13:07
*** sandywalsh has quit IRC13:08
*** roge has joined #openstack-dev13:09
*** jaypipes has joined #openstack-dev13:13
*** eglynn__ has quit IRC13:21
*** eglynn__ has joined #openstack-dev13:21
*** mattray has joined #openstack-dev13:22
*** markmc has quit IRC13:22
*** markmc has joined #openstack-dev13:27
*** eglynn__ has quit IRC13:31
*** dhellmann_ has joined #openstack-dev13:31
*** eglynn has joined #openstack-dev13:31
*** dhellmann has quit IRC13:33
*** dhellmann_ is now known as dhellmann13:33
*** dhellmann has quit IRC13:34
*** AlanClark has joined #openstack-dev13:36
*** markmc has quit IRC13:38
*** derekh has quit IRC13:39
*** dtroyer_zzz is now known as dtroyer13:43
notmynamemtaylor: ping13:46
notmynamemtaylor: I only seem to get emails from jenkins when I was involved in the review. is it possible to always get merge emails from jenkins?13:47
*** markmcclain has quit IRC13:52
eneabioquestion: how can I do an rpc.call in least_cost.py to get _get_additional_capabilities from hosts?13:58
clarkbnotmyname: under your account settings there are per project email options13:58
clarkbi think one of the options should do what you want but i dont have them in front of me13:59
notmynameclarkb: thanks. I see new changes, all comments, and submitted changes13:59
davidkranzjaypipes: I posted some small changes in response to your comments. I have to leave at 3 so if you want anything else let me know a bit before then. I will push it before I leave.14:00
notmynameclarkb: I'd imagine what I'm looking for is in jenkins, not gerrit14:00
davidkranzjaypipes: https://review.openstack.org/#/c/6991/14:00
*** armax has joined #openstack-dev14:04
*** sacharya has joined #openstack-dev14:04
*** armax has quit IRC14:04
*** armax has joined #openstack-dev14:05
*** eglynn has quit IRC14:06
*** belliott_ is now known as belliott14:06
*** eglynn has joined #openstack-dev14:06
*** kbringard has joined #openstack-dev14:09
*** Gordonz has joined #openstack-dev14:13
*** mancdaz has quit IRC14:14
*** mancdaz has joined #openstack-dev14:14
dolphmis it possible to retrigger smokestack jobs without submitting a new patchset?14:16
dolphm(concerning https://review.openstack.org/#/c/6216/ )14:16
*** alaski has quit IRC14:16
*** alaski has joined #openstack-dev14:16
*** armax has quit IRC14:19
*** armax has joined #openstack-dev14:19
russellbdolphm: it is if you have an account, i'll try to do it for you14:21
*** alaski has quit IRC14:21
russellbdolphm: done14:22
*** markmcclain has joined #openstack-dev14:24
*** hub_cap has quit IRC14:24
dprincedolphm: Is that something you would find useful?14:29
dprincedolphm: retriggering SmokeStack?14:29
dprincedolphm: I'm happy to give you an account to the webUI if you'd like to re-trigger things...14:30
dprincerussellb: If you do re-trigger to run the tests it won't repost results for the same patchset since results for that Git hash have already been reported. The reporting is all based on the Git hashes.14:31
dprincerussellb: You can however just re-run the tests again and view them in the SmokeStack UI, etc.14:31
dprincerussellb: you know this ;)14:31
russellbah, ok, thanks for that info14:31
dprincemaybe not the Git hash bits...14:32
russellbyep, it's super convenient for smoke testing as i work on a patch set ...14:32
russellbthe re-run and view via UI part yes :)14:32
*** armax has left #openstack-dev14:35
*** ayoung has joined #openstack-dev14:41
*** sacharya has quit IRC14:42
*** alaski has joined #openstack-dev14:43
*** markmc has joined #openstack-dev14:44
mtaylornotmyname: hrm. no, actually the email should be coming from gerrit ... it's the one who does the merge14:44
mtaylornotmyname: but I'm not sure if the event you're talking about has an email setting associated with it (although it should)14:44
mtaylornotmyname: let us look in to it14:44
notmynamemtaylor: ok, cool. thanks14:44
jeblairnotmyname: is 'submitted changes' checked?14:46
jeblairnotmyname: 'submitted' is gerrit-speak for merged14:46
notmynamejeblair: no. ok. I was thinking that submitted was any new patch set for an existing change (to pair with "new changes")14:47
notmynamejeblair: ok, I've set that. I'll see what happens. thanks14:48
mtaylorjeblair: ah, good point. I will potentially close the bug I just filed14:49
*** dtroyer is now known as dtroyer_zzz14:49
*** milner has quit IRC14:50
*** dachary has quit IRC14:52
dolphmrussellb: thanks14:56
dolphmdprince: well, this is the first potential false-negative i've seen from smokestack, so maybe maybe not :)14:57
*** vincentricci has joined #openstack-dev15:00
*** hub_cap has joined #openstack-dev15:00
hub_capvishy: Hey man, ive got a question about adding a feature to novaclient, who should i ping?15:01
nvezos-simple-tenant-usage doesn't give any bandwidth information, how does one go about getting that information?15:02
hub_capvishy: Basically, i want to be able to bypass the auth & service catalog lookup, and provide my own token and compute or volume url. the glanceclient already does this and its very convenient to be able to override this to talk to a local non service catalog'd nova install15:02
*** dtroyer_zzz is now known as dtroyer15:03
*** rnirmal has joined #openstack-dev15:06
*** anderstj has joined #openstack-dev15:08
*** reidrac has quit IRC15:08
*** anderstj has quit IRC15:08
*** sacharya has joined #openstack-dev15:12
*** dhellmann has joined #openstack-dev15:13
jaypipesdavidkranz: rock on. getting to the review now.15:14
jaypipesdavidkranz: bought a new driver yesterday and the links are calling me this afternoon ;)15:15
davidkranzjaypipes: Excellent. I haven't played since high school but getting back is probably in my future...15:15
jaypipesdavidkranz: oh, don't get me wrong. I still totally suck :) But at least now I enjoy the game a bit15:16
davidkranzjaypipes: :)15:16
devanandahub_cap: g'morning. got a few min to chat about kernels?15:18
hub_caphit me devananda15:18
hub_caplets move it to #openstack-infra15:18
*** littleidea has joined #openstack-dev15:19
jaypipesdavidkranz: real quick ... go to https://review.openstack.org/#/c/6991/2/stress/state.py15:19
davidkranzjaypipes: I'm there.15:19
jaypipesdavidkranz: is there a reason that server_id isn't passed in the constructor for ServerAssociatedState along with resource_id?15:20
davidkranzjaypipes: Because it is unknown when the object is created. The object will get mapped and remapped to servers in its lifetime.15:21
*** maoy has joined #openstack-dev15:21
maoyjerdfelt: hi are you around?15:21
jaypipesdavidkranz: gotcha, k, that's what I thought. just wanted to make sure that was the case.15:21
*** Mandell has joined #openstack-dev15:24
nvezDo I have to subscribe using AMQP to get bandwidth usage information?15:25
maoyjerdfelt: I'd like to chat with you about the tracking of RPC tasks work.15:26
hub_capjaypipes: can i get the actual image back from python-glanceclient via some command?15:34
*** eneabio has quit IRC15:36
jaypipeshub_cap: soon, yes.15:36
jaypipeshub_cap: bcwaldon is working on that I believe15:36
hub_capkk so not at present but soon :D15:36
jaypipeshub_cap: yuppers.15:37
hub_capheart15:38
bcwaldonhub_cap: I would encourage you to implement it!15:38
hub_capim sure u would! can i use netcat?15:38
bcwaldonhub_cap: can you describe how that would work?15:39
hub_capi have no clue i was joking so u would say hell no15:39
bcwaldonthats what I thought15:39
hub_capand i could be off that hook15:39
hub_caphehe15:39
bcwaldonI'll look at it today15:39
hub_capive got to fix something in novaclient15:39
hub_capits not a big deal for me15:40
bcwaldonit should be super easy15:40
hub_capdont rush to it for my sake15:40
bcwaldon20 minutes15:40
bcwaldonyou'll have it15:40
hub_capi dont really need it i was just wondering trying to diagnose an issue15:40
hub_capwow udaman bcwaldon15:40
nvezhub_cap: I see you've done some work on billing-related openstack stuff?  Do you know if I can request a "compute.instance.exists" or do I just have to sit and wait for it?  If I wait for it, how can I do my testing and force nova to send that event?15:42
hub_capnvez: i have not actually done work myself on the notifications or billing. i have a team mate cp16net that has. I believe (cp16net can correct me) that u subscribe to a q15:43
hub_capcp16net: fill in the blanks15:43
*** jdurgin has joined #openstack-dev15:43
cp16netnvez: yeah so in order to get the exists events you have to cron a script called instance-usage-audit15:44
cp16netnvez: i think this defaults to the last month15:44
nvezI see, so if I run instance-usage-audit -- itll calculate and send an instance.exists event?15:44
cp16netnvez: if you set the flag in your nova.conf instance_usage_audit_period you can set it to hour/day/month/year15:45
cp16netbut this is always the previous period15:45
cp16netcorrect15:45
hub_capya nvez what cp16net said :P15:46
nvezcp16net: This might sound really silly, but I'm still figuring out AMQP, if the daemon waiting for instance.exists messages is down, will I miss the information?15:46
hub_capnvez: the messages go to the Q15:46
hub_capand wait for you to pluck them off15:46
nvezah i see15:46
hub_capso they will reside there until your daemon/process/magic broomstick pulls them off15:46
cp16netthats if you are using the rabbit_notifier15:46
hub_captru15:47
nveznon-nova related question but when i pull them from the queue do they automatically get removed from the queue, or do I specifically say "remove", obviously id want to "remove" after I successfully added all the data15:47
*** danwent has joined #openstack-dev15:47
cp16netnvez: i think that depends on the client i believe but i think once you fetch a message it comes off the queue15:50
cp16neti think its because the queue is declared as a topic queue.15:50
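For the ack question above: in AMQP the broker removes a message only when the consumer acknowledges it (unless the consumer opts into auto-ack, in which case delivery is removal), and an unacked message is requeued if the consumer disconnects. A toy in-memory model of the explicit-ack behaviour — this is not a real AMQP client, just the semantics nvez is asking about; with a real client library you would use its basic-ack equivalent:

```python
import itertools
from collections import deque

class ToyQueue:
    """Toy model of AMQP manual-ack semantics; not a real client."""

    def __init__(self):
        self._ready = deque()       # messages waiting for delivery
        self._unacked = {}          # delivery_tag -> delivered-but-unacked
        self._tags = itertools.count(1)

    def publish(self, msg):
        self._ready.append(msg)

    def get(self):
        """Deliver one message; it stays 'unacked' until ack()."""
        msg = self._ready.popleft()
        tag = next(self._tags)
        self._unacked[tag] = msg
        return tag, msg

    def ack(self, tag):
        """Only acknowledging really removes the message."""
        del self._unacked[tag]

    def requeue_unacked(self):
        """What a broker does if a consumer dies without acking."""
        for tag in sorted(self._unacked):
            self._ready.append(self._unacked.pop(tag))

q = ToyQueue()
q.publish({"event_type": "compute.instance.exists"})
tag, msg = q.get()   # fetched, but not gone for good yet
q.requeue_unacked()  # consumer crashed before ack: message comes back
tag, msg = q.get()
q.ack(tag)           # processed successfully: now it is removed
print("ready:", len(q._ready), "unacked:", len(q._unacked))
```

So whether fetching alone removes the message really does depend on how the consumer was set up, as cp16net says.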
*** dachary has joined #openstack-dev15:51
*** dachary has joined #openstack-dev15:51
*** dachary has quit IRC15:52
*** dachary1 has joined #openstack-dev15:52
nvezcp16net: ah i see, trying to call it however it keeps trying to run it for a month, where can I change the audit period if you know?15:53
*** dachary has joined #openstack-dev15:54
nvezsource code has this: cfg.StrOpt('instance_usage_audit_period', .. and it's set to month by default, but running instance-usage-audit --instance-usage-audit-period day doesnt work, hmm15:55
*** dachary1 has quit IRC15:57
nvezokay, so its not a cli option, its a nova cfg option, my bad15:57
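Summarising the thread: the audit period is a nova config option (the cfg.StrOpt nvez found), not a CLI flag, and the exists events only appear once the audit script runs. A sketch of the setup implied above — the option name and values come from this conversation, while the cron schedule and script path are illustrative and the exact nova.conf syntax depends on your release:

```
# nova.conf -- period read by the instance-usage-audit script
instance_usage_audit_period=day

# crontab -- emit compute.instance.exists for the previous period
5 0 * * * /usr/local/bin/instance-usage-audit
```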
*** nati has joined #openstack-dev15:57
*** markmc has quit IRC15:58
*** jdurgin has quit IRC15:58
*** dachary has quit IRC15:58
*** dhellmann has quit IRC15:59
*** epim has joined #openstack-dev16:00
*** markmcclain has quit IRC16:00
*** dachary has joined #openstack-dev16:02
*** alaski has quit IRC16:02
*** alaski has joined #openstack-dev16:03
*** Mandell has quit IRC16:04
*** mnewby has joined #openstack-dev16:05
*** andresambrois has joined #openstack-dev16:07
*** maplebed has joined #openstack-dev16:08
*** issackelly has joined #openstack-dev16:12
cp16netnvez: yes its picked up from the nova.conf options16:12
nvezsweet, alright, ill start working a bit on this hopefully ill have some sort of result by today!16:13
cp16netnice16:13
nvezoh small question cp16net -- does the instance exists give info for servers that were terminated in that period?16:13
nvezaka server running for 3 hours only and got terminated after16:13
cp16netno its only what exists16:13
*** mnot has quit IRC16:14
nvezso basically if someone starts a server everyday at 1am and shuts it down at 11pm and i do daily audit reports, i cant see the bandwidth usage for it?16:14
cp16netwhen you terminate/resize an instance a new exists event is created at that time with the audit-period-beginning and audit-period-ending with the times16:14
nvezah gotcha16:15
*** adam_g has joined #openstack-dev16:15
*** eglynn has quit IRC16:16
nvezcp16net: so resize/terminate/create sends an exists as well, hmm, i would only have to use the terminate to get bandwidth usage however, because if i use resize i think ill have duplicate data16:16
*** torgomatic has joined #openstack-dev16:16
cp16netoh nevermind it looks like terminate instance doesnt send an exists16:16
cp16netonly resize and rebuild instance16:17
bcwaldonhub_cap: well I've had it done for a while, it just took forever to get a working dev env up16:17
hub_caphehe, that makes sense then for 20m16:17
cp16netnvez: i've been trying to figure out when all these events occur as i am working on adding volume notifications that are similar to the instance16:18
nvezcp16net: hmm, okay, well that leaves me with a possibility that i cant have precise bw monitoring16:18
nvezim curious if the delete sends a bandwidth used .. maybe..16:19
hub_capbcwaldon: quick Q for you, v1/images/image_id is returning a empty reply.. am i doing something drastically wrong?16:19
bcwaldonhub_cap: look in your headers16:19
bcwaldonhub_cap: and do you have data up there?16:19
hub_capimages/image GET will return the image, rigth?16:19
bcwaldonhub_cap: yes16:19
hub_capkk, let me verify16:19
bcwaldonhub_cap: assuming you've put it up16:20
hub_capits possible i didnt put the image properly up16:20
hub_capya16:20
hub_capi have a 63 meg gzip raw/bare image in /var/lib/glance/images/id16:20
bcwaldonhub_cap: and headers dont complain?16:20
bcwaldonhub_cap: does it have a size?16:20
hub_capsec let me list16:21
hub_capbcwaldon: http://paste.openstack.org/show/16973/16:21
cp16netnvez: after looking at the code it looks like the exists event should generate an event for each instance that was deleted in the time period and give the bandwidth16:21
bcwaldonhub_cap: yep, something wrong16:21
*** issackelly has quit IRC16:21
hub_capdo i have to provide the tenant header in the images/id call?16:21
*** issackelly has joined #openstack-dev16:22
bcwaldonhub_cap: no, just the auth token16:22
hub_capya thats what im putting. did u see something wrong from the listing i displayed?16:22
*** Stackops-Jorge has quit IRC16:22
bcwaldonhub_cap: nope16:22
hub_capdurn, time to grok the logs :D16:22
nvezcp16net: yep, you're right --- in notify_usage_exists: admin_context = nova.context.get_admin_context(read_deleted='yes')16:22
*** thingee has quit IRC16:23
cp16netyes :)16:23
hub_capi do see an error bcwaldon http://paste.openstack.org/show/16974/16:23
nvezand it does return a deleted_at as well in the usage16:23
hub_capps im not using devstack or anything to set up glance, so its VERY possible i borked something config related16:24
nvezwoo, sounds good, time to write up a small client (in the language everyone hates, java :D) :p16:24
bcwaldonhub_cap: hmm, thats pretty odd16:24
bcwaldonhub_cap: can I see your glance-api.conf and glance-api-paste.ini?16:24
hub_capbcwaldon: im good at screwing things up16:24
hub_capsure sec bcwaldon16:24
hub_capbcwaldon: u want registry confs too?16:26
bcwaldonhub_cap: not yet16:26
hub_capim curling to 9292 not 9191, is that my problem?16:26
hub_capok16:26
bcwaldon9292 is correct16:26
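A minimal sketch of the request being debugged above: a Glance v1 image GET goes to the API port 9292 (9191 is the registry) and needs only the X-Auth-Token header, no tenant header. Host, image id, and token below are placeholders, not values from the log.

```python
def build_image_request(host, image_id, auth_token):
    """Return (url, headers) for GET /v1/images/<id> against glance-api."""
    url = 'http://%s:9292/v1/images/%s' % (host, image_id)
    # only the auth token is required, not the tenant header
    headers = {'X-Auth-Token': auth_token}
    return url, headers

url, headers = build_image_request('localhost', 'deadbeef', 'mytoken')
```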
*** spiffxp has joined #openstack-dev16:29
*** salgado is now known as salgado-lunch16:29
*** jdg has joined #openstack-dev16:31
*** jemartin has joined #openstack-dev16:33
*** rods has joined #openstack-dev16:35
*** danwent has quit IRC16:36
*** tryggvil_ has quit IRC16:37
*** tryggvil has joined #openstack-dev16:38
*** dprince has left #openstack-dev16:43
*** dprince has quit IRC16:43
*** eglynn has joined #openstack-dev16:44
jdgrnirmal: ping16:46
*** bsza1 has joined #openstack-dev16:46
*** bsza has quit IRC16:47
*** jdg is now known as jgriff16:49
rnirmaljgriff:16:50
jgriffrnirmal: Just checking to see if you got the notification email?16:50
rnirmallet me check16:50
jgriffrnirmal: Or if things are still hosed16:50
rnirmaljgriff: nope... I even reset everything yesterday16:51
*** Mandell_ has joined #openstack-dev16:51
*** nati has quit IRC16:51
jgriffrnirmal: dang it... :(16:51
jgriffrnirmal: just curious, have you checked your settings in gerrit?16:52
rnirmalyeah I did check... like I said reset everything.16:53
jgriffbummer...  that exhausts my helpfulness16:54
rnirmalI'll figure it out someday.. :)16:54
jgriffLOL16:54
rnirmaljgriff: I'll take a look at the latest patch16:54
jgriffrnirmal: thanks :)16:55
rnirmalis there any way to look at all the diffs on a single tab..instead of the multiple tabs16:55
jgriffrnirmal: you mean all files in one?  No I don't believe so... the way to do that would be something like pull the review and do a git diff16:55
jgriffrnirmal: but I could be wrong16:56
rnirmalyeah that's what I did the last time around16:56
rnirmaldidn't find anything in the ui16:56
jgriffrnirmal: there are only three files that changed16:56
rnirmalI'll look thru them16:57
jgriffrnirmal: openstack/volume/volumes.py, db/api.py and cinder/exceptions.py16:57
rnirmaljgriff: thanks16:58
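The "pull the review and do a git diff" workaround jgriff describes amounts to two commands; the sketch below just builds them. The remote URL and change ref are hypothetical examples of what gerrit's download line shows for a patch set.

```python
def review_diff_commands(remote, change_ref):
    """Commands to view a whole gerrit patch set as one local diff,
    instead of clicking through one tab per file."""
    return [
        ['git', 'fetch', remote, change_ref],          # grab the patch set
        ['git', 'diff', 'FETCH_HEAD^', 'FETCH_HEAD'],  # all files at once
    ]

cmds = review_diff_commands('https://review.openstack.org/openstack/cinder',
                            'refs/changes/00/1234/1')
```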
*** anderstj has joined #openstack-dev16:58
*** koolhead17 has quit IRC16:58
*** mszilagyi has joined #openstack-dev16:58
jaypipeshub_cap: speak of the devil. https://review.openstack.org/#/c/7352/17:02
*** eglynn has quit IRC17:04
*** apevec has quit IRC17:06
*** harlowja has joined #openstack-dev17:06
*** vladimir3p has joined #openstack-dev17:07
harlowjabcwaldon: in regard to that duplicate registration of that opt, i guess moving the create_stores call out of this image api would solve it, although it is odd that the constructor would be called twice... thats really the only way that can happen17:08
*** vladimir3p has quit IRC17:08
*** koolhead17 has joined #openstack-dev17:11
bcwaldonharlowja: or duplicate calls to create_stores can just be ignored17:12
bcwaldonharlowja: not sure what the *best* way is here17:12
harlowjaya, or that, i think i see what the issue is17:12
harlowjaImageDataController in v2/ calls create_stores17:12
bcwaldonharlowja: since we are going to import glance.store separately in the v2 and v1 api17:12
harlowjaController in v1/ calls the same17:12
harlowjaya, hmm17:12
harlowjabcwaldon: should the create stores happen at a higher level, not in the apis?17:15
*** bhuvan_ has joined #openstack-dev17:15
*** markmcclain has joined #openstack-dev17:16
bcwaldonharlowja: there is no other component, so I don't think so17:16
bcwaldonyou could force it to happen in paste then pass it in, but that would be a massive hack17:17
*** dhellmann has joined #openstack-dev17:18
harlowjahmmm, there is a method like setup_logging, in glance/common/config, maybe there should be an equiv called setup_stores, that way it happens even before wsgi/paste17:20
harlowjaexcept for this in the v2, stuff,         self.store_api = store_api or glance.store17:21
harlowja        self.store_api.create_stores(conf), where it seems like it changes what store_api its constructing it from17:21
bcwaldonharlowja: ok, so we could just call a common setup method in bin/glance-api17:24
bcwaldonharlowja: then never call create_stores down below17:24
bcwaldonthat makes more sense17:24
*** mnewby has quit IRC17:24
harlowjaright, except for the v2 image_data class17:24
bcwaldonharlowja: you can remove the create_stores call from that17:24
harlowjakk17:25
harlowjalet me see what will happen with this17:25
bcwaldonkk17:25
bcwaldonharlowja: I'm going afk for a little bit17:25
harlowjak17:25
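One way to realize the fix bcwaldon and harlowja settle on above (set up stores once at startup, ignore later calls) is an idempotent guard. This is an illustrative sketch, not Glance's actual glance.store module; the names are stand-ins.

```python
_stores_created = False
registered_opts = []  # stand-in for the config option registry


def create_stores(conf):
    """Set up store backends once; later calls (e.g. from both the v1
    and v2 API controllers) become no-ops instead of re-registering
    the same options."""
    global _stores_created
    if _stores_created:
        return False  # duplicate call: silently ignore
    registered_opts.append('known_stores')  # register opts exactly once
    _stores_created = True
    return True
```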
*** asisin has joined #openstack-dev17:25
*** mnewby has joined #openstack-dev17:26
*** mnewby has joined #openstack-dev17:26
*** salgado-lunch is now known as salgado17:26
*** darraghb has quit IRC17:30
*** armax has joined #openstack-dev17:30
*** armax has left #openstack-dev17:30
*** danpb has quit IRC17:36
*** davidha has quit IRC17:37
*** Ryan_Lane has joined #openstack-dev17:42
*** mcclurmc_ has joined #openstack-dev17:45
*** vincentricci has quit IRC17:46
*** johnpostlethwait has joined #openstack-dev17:56
*** edygarcia has quit IRC17:58
*** edygarcia has joined #openstack-dev17:59
*** edygarcia_ has joined #openstack-dev18:00
*** epim_ has joined #openstack-dev18:00
*** eglynn has joined #openstack-dev18:01
*** epim__ has joined #openstack-dev18:01
*** edygarcia has quit IRC18:03
*** edygarcia_ is now known as edygarcia18:03
*** epim has quit IRC18:04
*** epim__ is now known as epim18:04
*** hub_cap has quit IRC18:05
*** epim_ has quit IRC18:05
*** jemartin has quit IRC18:05
*** tryggvil has quit IRC18:13
*** markmcclain has quit IRC18:18
*** dhellmann has quit IRC18:18
*** markmcclain has joined #openstack-dev18:18
*** dhellmann has joined #openstack-dev18:18
*** lloydde has joined #openstack-dev18:22
*** dachary1 has joined #openstack-dev18:23
*** dachary has quit IRC18:24
*** AlanClark has quit IRC18:26
*** mattray has quit IRC18:27
*** Shrews has quit IRC18:31
*** edygarcia has quit IRC18:31
*** mrunge has joined #openstack-dev18:34
*** bhuvan_ has quit IRC18:35
*** lloydde has quit IRC18:38
cp16netpvo: i have a question on how to get a service_type into the notifications that are published. One thing i thought about was getting the service from the context or auth information and passing that along, or maybe using a flag. The issue is that other services will utilize a nova installation and need to identify the service that is requesting to create a new resource and associate that resource with the service that called18:40
cp16netreddwarf in my case.18:40
*** belliott has quit IRC18:42
jeblairjaypipes: gate-tempest-devstack-vm is enabled again18:43
jeblairit should start voting on changes after run #60218:43
*** koolhead17 has quit IRC18:43
openstackgerritVerification of a change to openstack/nova failed: Create an internal key pair API.  https://review.openstack.org/734218:47
nvezcp16net: i'm not having good luck right now.. I am binding my client to the "notifications" queue under the "nova" exchange and starting a queueingconsumer (java), everything gets created but im not getting any messages..18:49
nvezI'm running instance-usage-audit and I don't see much, any idea how I can check and see if the msg is being sent?18:50
nvezThere is data being sent cause it does say "found 2 instances"18:50
*** kbringard has quit IRC18:51
*** kbringard has joined #openstack-dev18:52
cp16netnvez: are you using the notification_driver='rabbit_notifier'?18:54
cp16netin your nova.conf18:54
nvezcp16net: not set, using devstack, let me see what the default is18:55
nvezcp16net: nova.notifier.no_op_notifier is the default, woops.18:55
cp16netnvez: so that notify method does nothing18:56
cp16net:)18:57
nvezTHERE WE GO!18:57
nveztwo messages for 2 instances, man, thank you so much cp16net -- im new to amqp so yeah18:58
cp16netnice18:58
nvezguess i had everything working, just didnt think of the notification driver, thanks again18:58
cp16netnp18:58
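The behaviour nvez hit can be condensed into a tiny sketch: with the default no-op driver every notify call is dropped, while the rabbit driver publishes to the "nova" exchange. The classes below are stand-ins for nova's notifier modules, not the real code.

```python
class NoOpNotifier(object):
    """Stand-in for the default nova.notifier.no_op_notifier."""
    def notify(self, message):
        pass  # drops everything, so the notifications queue stays empty


class RecordingNotifier(object):
    """Stand-in for rabbit_notifier publishing to the nova exchange."""
    def __init__(self):
        self.sent = []

    def notify(self, message):
        self.sent.append(message)


def run_exists_audit(notifier, instance_uuids):
    # emit one compute.instance.exists event per instance found
    for uuid in instance_uuids:
        notifier.notify({'event_type': 'compute.instance.exists',
                         'payload': {'instance_id': uuid}})
```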
cp16netvishy: would you have any input on the question i asked pvo? ^^18:59
*** markmcclain has quit IRC18:59
*** markmcclain has joined #openstack-dev18:59
*** adjohn has joined #openstack-dev19:00
nvezcp16net: just to confirm as well that i did get 2 events, one for a deleted server and for an active one19:00
cp16netawesome19:01
nvezbandwidth key is empty tho, so hmm19:01
cp16netnvez: there is also a log_notifier that writes the events to the log file19:01
*** zaitcev has joined #openstack-dev19:02
nvezcp16net: ill check out all the other notifiers and also check out why the bandwidth has nothing inside of it, it should have a network assigned to it, maybe if 0 bw used then it wont do anything19:02
cp16netnvez: yeah the logic on how it gets the bandwidth is in the nova/compute/utils.py19:04
*** gyee has joined #openstack-dev19:04
cp16netnvez: looks like its just getting it from the db and i dont know what would write that to the db in the first place unless its some other type of agent19:05
nvezhmm, im gonna check it out and figure it out19:06
*** lloydde has joined #openstack-dev19:07
zykes-what do people use to deploy openstack in places?19:09
*** jakedahn is now known as jakedahn_zz19:10
cp16netnvez: looks like only the xen driver supports bandwidth19:13
*** lorin1 has joined #openstack-dev19:15
nvezcp16net: xen or xenserver?19:17
cp16neti found it in the xenapi.py  i dont really know the difference19:18
cp16neti've been using kvm and openvz19:18
nvezi think xenapi is xenserver/xcp19:18
*** lifeless has quit IRC19:18
*** fc__ has quit IRC19:23
*** andrewbogott_ has quit IRC19:24
*** andrewbogott_ has joined #openstack-dev19:25
nvezyou are correct cp16net -- looks like only xenapi has get_all_bw_usage implemented (which is called to get the bw usage)19:25
nvezand that is actually the xenserver/xcp version19:25
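A sketch of why the bandwidth key came back empty: at this point only the xenapi driver overrides get_all_bw_usage, so for other drivers the audit has nothing to report. The class names and fallback helper below are illustrative stand-ins, not nova's actual driver classes.

```python
class ComputeDriver(object):
    """Base driver: bandwidth accounting not implemented."""
    def get_all_bw_usage(self, instances, start_time):
        raise NotImplementedError()


class XenAPIDriver(ComputeDriver):
    """Stand-in for the xenapi (XenServer/XCP) driver, which reads
    per-VIF counters from the hypervisor."""
    def get_all_bw_usage(self, instances, start_time):
        return [{'uuid': i, 'bw_in': 0, 'bw_out': 0} for i in instances]


def bw_usage_or_empty(driver, instances, start_time):
    try:
        return driver.get_all_bw_usage(instances, start_time)
    except NotImplementedError:
        return []  # the notification's bandwidth section stays empty
```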
*** mattray has joined #openstack-dev19:29
nvezis it acceptable to say that as of right now xenserver is the best implemented openstack driver?19:30
russellbthe libvirt (kvm) driver is used heavily, as well.19:31
epimAnyone aware of any blueprints to make openstack try to fping an IP and make sure it's alive before allocating it?19:32
epimmake sure it's not in use, I should say19:32
zykes-epim: isn'19:35
zykes-isn't that what the db is for ?19:35
epimWell, if my vlan is used for more than just VMs, or if someone swipes an IP from a dedicated vlan, nova will try to blindly allocate that IP19:36
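A minimal sketch of the pre-allocation liveness check epim is asking about, with the ping (e.g. an fping wrapper) injected so the policy is testable. Nothing here is an existing nova blueprint or API; the function is hypothetical.

```python
def pick_free_ip(candidates, is_alive):
    """Return the first candidate IP that does not answer a ping,
    i.e. skip addresses already in use on the shared vlan."""
    for ip in candidates:
        if not is_alive(ip):
            return ip
    return None  # everything answered: nothing safe to allocate
```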
*** hub_cap has joined #openstack-dev19:37
hub_capjaypipes: ya bcwaldon showed me that :D19:37
*** mcclurmc_ has quit IRC19:41
lorin1Are the "[no]foo" style config options still valid? Or do we just do "foo=false" now?19:42
*** lloydde has quit IRC19:43
*** lloydde has joined #openstack-dev19:44
*** andrewbogott_ has quit IRC19:44
*** andrewbogott_ has joined #openstack-dev19:44
nvezSo uh, why pick XenServer or XCP, I have a hard time deciding and I'm not sure which one would be better/worse, there doesnt seem to be an openstack comparison19:44
nvezI understand they're supposed to be "the same thing" but anyone had to make that choice and found a reason to pick one over the other for an OpenStack deployment?19:45
epimnvez: in theory XCP is just Xen with fancy management tools. Most, if not all of which are redundant to openstack19:47
epimactually, sorry, I was thinking Xenserver. XCP is mostly just xen-in-a-box.19:47
nvezepim: I'm aware of that, my concern is XenServer vs XCP (the open source version of Citrix XenServer).. they seem to share a similar featureset19:47
*** lloydde has quit IRC19:48
*** jgriff has quit IRC19:55
*** dolphm has quit IRC19:58
*** davidha has joined #openstack-dev19:59
*** jakedahn_zz is now known as jakedahn19:59
*** dachary1 is now known as dachary20:04
*** anderstj has quit IRC20:08
*** anderstj_ has joined #openstack-dev20:08
nvezcp16net: sweet code review ;) hopefully it makes it!20:15
cp16netyeah that should add create/delete/exists events for volumes20:16
*** rnirmal has quit IRC20:18
*** markvoelker1 has quit IRC20:18
*** anniec72 has quit IRC20:20
*** maoy has quit IRC20:20
*** lifeless has joined #openstack-dev20:22
*** littleidea has quit IRC20:22
*** littleidea has joined #openstack-dev20:23
*** jgriff has joined #openstack-dev20:25
*** anniec72 has joined #openstack-dev20:26
*** alaski has quit IRC20:30
*** vincentricci has joined #openstack-dev20:30
*** novas0x2a|laptop has joined #openstack-dev20:32
*** anniec72 has quit IRC20:33
*** jemartin has joined #openstack-dev20:38
*** Gordonz has quit IRC20:40
*** johnpostlethwa-1 has joined #openstack-dev20:41
*** johnpostlethwait has quit IRC20:41
*** johnpostlethwa-1 has quit IRC20:43
*** johnpostlethwait has joined #openstack-dev20:43
*** ayoung_ has joined #openstack-dev20:47
bcwaldonjeblair: around?20:51
jeblairbcwaldon: t20:51
bcwaldonjeblair: why do we automagically set bugs to fixed by a committer rather than author?20:52
bcwaldonjeblair: its frustrating when I fix up someones patch, become the committer, then it merges20:52
bcwaldonjeblair: I didnt really fix that bug20:52
jeblairbcwaldon: because we didn't write the script right? :)20:52
bcwaldonjeblair: we didnt?20:53
jeblairbcwaldon: sorry, my way of agreeing with you.  :)20:53
bcwaldonjeblair: ok, which script is it?20:53
jeblairhttps://github.com/openstack/openstack-ci-puppet/blob/master/modules/gerrit/files/scripts/update_bug.py20:54
*** dhellmann has quit IRC20:55
jeblairbcwaldon: it may be that only the uploader, not the author of the commit, is a hook argument; it may need a bit more code to go grab that from the git log20:56
*** markmcclain has quit IRC20:57
bcwaldonjeblair: ok, I might look into fixing that little guy20:57
jeblairbcwaldon: awesome!20:57
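The extra code jeblair mentions could look like the helper below: read the commit's author and committer out of a fixed git log format and prefer the author when assigning the bug. The format string and function names are hypothetical, not the actual update_bug.py.

```python
def parse_author_committer(log_line):
    # expects the output of: git log -1 --format='%an%x09%cn' <sha>
    author, committer = log_line.rstrip('\n').split('\t')
    return author, committer


def bug_assignee(log_line):
    """Credit the original author, not whoever uploaded the final
    patch set (the committer)."""
    author, committer = parse_author_committer(log_line)
    return author or committer
```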
*** mcclurmc_ has joined #openstack-dev21:03
hub_cappvo: u around?21:04
*** lts has quit IRC21:13
*** mcclurmc_ has quit IRC21:16
*** ayoung_ is now known as ayoung_afk21:16
*** lloydde has joined #openstack-dev21:21
*** dachary has quit IRC21:25
*** lloydde has quit IRC21:25
*** dachary has joined #openstack-dev21:25
*** lloydde has joined #openstack-dev21:26
*** hub_cap has quit IRC21:29
*** lorin1 has quit IRC21:31
*** jakedahn is now known as jakedahn_zz21:31
*** mcclurmc_ has joined #openstack-dev21:38
*** anderstj has joined #openstack-dev21:51
*** anderstj_ has quit IRC21:51
*** hattwick has quit IRC21:57
*** hattwick has joined #openstack-dev22:00
*** kbringard has quit IRC22:01
*** mrunge has quit IRC22:08
*** andrewsben is now known as andrewsben_zz22:10
mikalvishy: you around?22:11
*** sacharya has quit IRC22:14
*** koolhead17 has joined #openstack-dev22:14
*** gyee has quit IRC22:15
*** pixelbeat has quit IRC22:17
*** dachary has quit IRC22:29
*** dachary has joined #openstack-dev22:29
*** bsza1 has quit IRC22:32
*** jakedahn_zz is now known as jakedahn22:35
*** spiffxp has quit IRC22:38
*** mattray has quit IRC22:46
notmynamemtaylor: what's the best way to see the differences in two patch sets of a large merge if there have been commits between the patch sets?22:46
harlowjabcwaldon: whenever u get a chance, fixed the glance stuff....22:49
bcwaldonharlowja: kk, sometime today for sure22:49
zykes-notmyname: what alternatives are there for servers with 12+ drives ?22:50
notmynamezykes-: how do you mean? are you looking for hardware recommendations?22:52
*** spiffxp has joined #openstack-dev22:52
zykes-yes22:52
zykes-like enterprise ish22:53
*** anniec72 has joined #openstack-dev22:54
zykes-I think that the C* node listed in the reference arch is not so dense as it should be ?22:54
notmynamezykes-: I don't have any recommendations for brands. but for example, there is http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100008044+600029570+600010490&QksAutoSuggestion=&ShowDeactivatedMark=False&Configurator=&IsNodeId=1&Subcategory=412&description=&hisInDesc=&Ntk=&CFG=&SpeTabStoreType=&AdvancedSearch=1&srchInDesc=22:54
notmynamechange the number of drives you are looking for to find different stuff22:55
notmynamezykes-: one possible config is a smaller head unit and a JBOD or two.22:55
zykes-Ah, it sucks that Dell / HP isn't delivering stuff for it that's more dense22:55
notmynamewow. 72 drives in one 4U http://www.newegg.com/Product/Product.aspx?Item=N82E1681115221222:56
zykes-Can't HP / Dell make such a box22:57
notmynameno idea. you'd have to ask them :-)22:57
zykes-grrrr22:57
zykes-the crappy thing with SM is that big customers here don't want to rely on anything that isn't HP, IBM / Dell22:58
notmynameya, I feel your pain on that22:58
*** dtroyer is now known as dtroyer_zzz22:58
jgriffzykes: I believe they have versions on par with what you're looking at but typically recommend going to an array or jbod as notmyname suggested22:58
notmynameyou may try looking at the opencompute reference platforms. HP is big into that (not sure about dell), and they have dense storage stuff22:58
zykes-notmyname: but no public products as of yet I get ?22:59
notmynamehttp://opencompute.org/projects/open-vault-storage/22:59
zykes-jgriff: versions on part ?22:59
notmynameI saw some things last week at the opencompute summit that was help at the rackspace office23:00
jgriffzykes: I don't work at HP any more and not a sales guy.. you'd have to ask them :)23:00
zykes-jgriff: you worked at HPCS ?23:00
jgriffzykes: Nope, storage group... unified storage23:01
notmynamezykes-: you can probably get the cheapest $/GB storage with a head unit and JBODs. on the other hand, I've heard of large deployments doing all 12 drive configs in a 1U. it comes down a lot to what your use case is and what kind of deals you can get from vendors23:01
notmynamezykes-: but I've definitely seen problems convincing traditional storage and infrastructure people that consumer drives are ok (instead of enterprise drives)23:02
zykes-notmyname: yeah especially enterprise customers23:03
zykes-:p23:03
* jgriff big grin23:03
notmynamereliable software allows for unreliable (ie cheap) hardware23:03
zykes-jgriff: I wonder what HP uses for their swift stuff23:04
zykes-would like to get ahold of that :p23:04
*** kbringard has joined #openstack-dev23:04
jgriffzykes: I think notmyname pretty much indicated what they're doing23:04
*** jemartin has quit IRC23:04
*** kbringard has quit IRC23:05
notmynamezykes-: if you ever see johnpur in IRC, ask him. I don't know if he can share, but he would know (or know who to talk to)23:05
notmynamejgriff: I have no idea what HP is using23:05
zykes-is he ever online ?23:05
zykes-I feel that noone from HP is ever here :p23:05
*** dolphm has joined #openstack-dev23:05
notmynamezykes-: he's on during the ppb meetings normally23:06
jgriffnotmyname: Understand, I'm suggesting that your strategy of scaling smaller (ie 12 drive units) or using jbods is "likely" accurate23:06
jgriffnotmyname: I'm not claiming to have that information either...23:06
notmynamejgriff: oh. perhaps. I'm not convinced that's the best way to go for general use case storage (ie public cloud provider). public cloud is generally more interested in denser storage than that23:07
jgriffnotmyname: Yeah, it's interesting the different use cases... all comes down to what they're targeting23:08
notmynamebut really, it's up to each provider to evaluate their use case (that's my default non-answer answer)23:08
jgriffnotmyname: :)23:08
notmynamezykes-: you may ask the wikipedia guys. they have a relatively small cluster, but they've been pretty open about all of the details23:08
mtaylornotmyname: hrm...23:08
harlowjasomeone from y! might be able to tell u also23:09
harlowjai can ask...23:09
harlowjawe like da storage, lol23:09
harlowjaepim: might have more details...23:09
zykes-harlowja: could you check ?23:09
*** dhellmann has joined #openstack-dev23:09
* epim perks up23:09
harlowjazykes-: ask epim whatever u want :-p23:10
zykes-are you allowed to share ? ;p23:10
harlowjaepim: how much hardware info can we share ;)23:10
epimperhaps, what am I sharing? I'm backscroll deficient :D23:10
zykes-Swift specifications23:10
harlowjaah, nm that, haha23:10
epimmrmm, a bit I suppose. I was thinking about releasing the results of my hypervisor performance tests to the world. But I don't want to deal with the hate mail23:10
harlowjawe aren't using swift ... (duplicate techs...)23:10
epimYeah, what Josh said23:11
zykes-what's yahoo using openstack for +23:11
epimStuff.23:11
epimheehehe23:11
harlowjalol23:11
epimjk23:11
notmynamemaplebed: are you able to share wikipedia's swift hardware specs?23:11
maplebedyeah.23:11
epimWe operate some of the largest and most power efficient datacenters in the world, Openstack will be another great way for us to leverage our awesome computing stack and become even more efficient23:12
maplebedhmm.  I was hoping they'd be here: http://wikitech.wikimedia.org/view/Swift/Dev_Notes#Hardware23:12
mtaylornotmyname: that's a good question23:12
epimmy god I swear that wasn't a practiced PR piece.23:12
harlowjalol23:13
notmynameepim: too late. you said "leverage" ;-)23:13
*** koolhead17 has quit IRC23:13
epimUgh, I know.. too many meetings :(23:13
harlowjasynergy23:13
harlowjaha23:13
epimat least I didn't say.. yes that.23:13
maplebednotmyname: one sec; I'll find the specifics.23:13
notmynamemaplebed: zykes- is looking for info23:13
zykes-harlowja: you based in the US ?23:13
harlowjasan jose, ca23:13
zykes-Ok23:14
notmynamemtaylor: I always manage to give you the best ones :-)23:14
zykes-I wonder when European companies will catch up with US ones :p23:14
maplebedcool.  I'm not sure our specs are the best...  our disks are pegged at 100% utilization pretty often.23:14
maplebedI wouldn't be surprised if, after doing some more analysis, we change stuff around.23:14
notmynamemtaylor: it may not be possible since the commit is amended and rebased. that info may simply be lost in the process23:14
*** dtroyer_zzz is now known as dtroyer23:14
notmynamezykes-: need more meetings and "synergies" in eastern europe?23:15
zykes-northern europe really23:16
harlowja** do u need to leverage more synergistic meetings in northern europe?23:16
*** lloydde has quit IRC23:16
*** ywu has joined #openstack-dev23:16
notmynamezykes-: not sure where I got eastern from....23:17
*** dtroyer is now known as dtroyer_zzz23:17
*** pmezard has quit IRC23:18
zykes-I don't think there's been like any even in Scandinavia yet23:18
*** dolphm has quit IRC23:18
maplebednotmyname, zykes-: storage: dell poweredge c2100 with 2 xeon E5645 2.4GHz chips, 48G ram and 12 2TB 7.2kRPM SAS disks.23:18
*** rbasak has joined #openstack-dev23:18
zykes-do you need that much ram for a storage node ?!23:19
notmynamedo you need SAS drives?23:19
zykes-refernce architecture says like 12 gb23:19
zykes-on a 24 tb node23:20
maplebedour boxes don't use all of the ram, so we could probably cut back a bit.  many of the disks peg 100% utilization (as reported by iostat).23:20
zykes-well, well, I need to go :)23:20
zykes-nn all23:20
maplebedpending data from some more stats around async pendings, I'm considering putting in 2 SSDs per storage node for containers, but I need evidence for that first.23:20
notmynameme too :-)23:20
zykes-Would be cool if there came some events soon though, to say Copenhagen or Stockholm ;p23:21
notmynamemaplebed: yes, that is a wise choice. RAX dramatically improved performance doing something like that23:21
maplebedzykes-: stats on our utilization: http://ganglia.wikimedia.org/latest/?c=Swift%20pmtpa&m=load_one&r=day&s=by%20name&hc=4&mc=223:21
*** anderstj has quit IRC23:40
nvezzykes-: bit late but I've spoken with Dell DCS and you'll likely rarely see a really high density storage server for now, but the C6100's are freaking sexy for how much power they give in it23:44
*** roge has quit IRC23:45
*** lloydde has joined #openstack-dev23:46
*** jdurgin has joined #openstack-dev23:47
*** epim has quit IRC23:53
*** Ryan_Lane has quit IRC23:54
*** Ryan_Lane has joined #openstack-dev23:54
*** novas0x2a|laptop has quit IRC23:56
*** jgriff has quit IRC23:58

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!