Thursday, 2012-03-15

00:03 *** johnpostlethwait has quit IRC
00:04 *** relateable has joined #openstack-dev
00:04 <ayoung> anotherjesse, yes, I think about Pam. But don't tell my wife
00:07 *** roge has quit IRC
00:12 <ohnoimdead> jenkins is a little backed up: https://jenkins.openstack.org/job/gate-integration-tests-devstack-vm/
00:12 <ohnoimdead> oops, wrong channel
00:12 <adam_g> vishy: yea, turns out to be a separate bug. volumes get properly cleaned up when an instance is terminated, but an instance shutting itself down (sudo poweroff) causes the volume to get stuck
00:13 <vishy> adam_g: interesting. I wonder why that would matter
00:13 *** statik has quit IRC
00:27 *** donaldngo_hp has joined #openstack-dev
00:31 *** statik has joined #openstack-dev
00:34 *** dprince has joined #openstack-dev
00:36 <adam_g> vishy: an attempt to terminate the instance after it has shut itself down ends in 'Invalid: trying to destroy already destroyed instance', which apparently does not clean up existing volume attachments
00:36 <vishy> adam_g: well that sounds fixable
00:36 <vishy> adam_g: are you going to try and patch it?
00:37 <adam_g> vishy: yeah, ill take a look
00:37 <jeblair> anotherjesse: i don't believe gerrit trigger supports doing that out of the box. we could probably add it. long term fix: jclouds
00:38 *** rkukura has quit IRC
00:38 <vishy> jeblair: we need our own custom openstack install to run the vms against
00:39 *** rkukura has joined #openstack-dev
00:39 <vishy> jeblair: bonus: recent kernel + kvm in kvm
00:39 <bcwaldon> jeblair: it was my idea!
00:39 *** mnewby has quit IRC
00:39 <bcwaldon> but vishy has obviously thought about it more...
00:39 <jeblair> we totally want to do that
00:39 <vishy> bcwaldon: paying you back for yesterday
00:39 <vishy> :p
00:39 <bcwaldon> vishy: I don't even remember what I stole from you yesterday
00:41 <jeblair> the next update to the devstack gate scripts will support multiple providers. we hope to add hp to that, and if we can rustle up an openstack cluster, we'll totally use it.
00:41 <bcwaldon> vishy: https://review.openstack.org/#change,5377
00:41 <bcwaldon> -4654 lines
00:44 <jeblair> vishy, anotherjesse: devstack gate has nodes now.
00:44 <vishy> bcwaldon: i was kind of hoping to keep the db code in
00:44 <vishy> so that they can ship their code separately if need be
00:44 <bcwaldon> vishy: I considered that
00:45 <bcwaldon> but doesn't that qualify as 'their' code?
00:45 <bcwaldon> that's part of vsa to me
00:45 *** davlaps has joined #openstack-dev
00:46 <vishy> sure, but there is no easy way for them to ship that part separately
00:47 *** andrewsmedina has joined #openstack-dev
00:47 <bcwaldon> ok, then what about the code in the volume manager?
00:48 <bcwaldon> I guess that is just notifications, not a big deal
00:48 <vishy> bcwaldon: ah i forgot they had a little code in the manager
00:50 <bcwaldon> as far as I'm concerned, the only feasible way to support this code outside of the codebase at this point is to maintain a fork
00:51 <vishy> bcwaldon: you may be correct
00:51 <vishy> i think that is what they are doing
00:52 <bcwaldon> vishy: then that raises the question: why not migrate the tables out?
00:52 <bcwaldon> vishy: it doesn't make sense to create them in new installs
00:53 *** devananda has joined #openstack-dev
00:53 <vishy> bcwaldon: i just hate the precedent of ripping out code with no warning
00:54 <vishy> bcwaldon: doesn't seem very community friendly
00:54 <bcwaldon> pushing up code with no docs then letting it rot for months is a LOT worse
00:54 <bcwaldon> its not like we're erasing it from history
00:54 <bcwaldon> we're removing it from our minds going forward
00:54 *** adjohn has quit IRC
00:55 *** rkukura has quit IRC
00:56 <vishy> bcwaldon: i suppose
00:56  * vishy is sad
00:56 <bcwaldon> as am I
00:57 <vishy> i think we need a ml announcement at least
00:57 <bcwaldon> ok
00:57 <vishy> like we did for hyper-v
00:57 <bcwaldon> would you like to -2 my prop and send an email?
00:57 <vishy> sure
01:00 *** donaldngo_hp has quit IRC
01:01 *** pixelbeat has quit IRC
01:02 *** dalang has quit IRC
01:03 <jog0> vishy, bcwaldon: glad to see there is both a ML discussion and something being done about it. Shipping rotted code isn't so nice.
01:06 <vishy> bcwaldon: might as well do a db migration too if we are going to talk on the ml about it
01:07 <bcwaldon> vishy: ok
01:07 <vishy> i just realized getting it to work after we split volumes is going to be very difficult anyway
01:07 <vishy> so that is another reason for torching it
01:07 <bcwaldon> huzzah!
01:07 <jog0> huzzah!
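
(A sketch of the kind of table-dropping migration vishy and bcwaldon agree on above, in the sqlalchemy-migrate style nova used at the time; the table name is illustrative of the VSA code, not the verbatim schema.)

    from sqlalchemy import MetaData, Table

    def upgrade(migrate_engine):
        # Drop the driver-specific table so new installs stop creating it.
        meta = MetaData(bind=migrate_engine)
        Table('virtual_storage_arrays', meta, autoload=True).drop()

    def downgrade(migrate_engine):
        # Recreating the dropped table would go here; omitted in this sketch.
        raise NotImplementedError('VSA tables are not recreated')
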
01:08 <vishy> jog0: I'm planning on regenerating the sample as the last commit before rc
01:08 <jog0> sounds good to me
01:09 <jog0> vishy: when is rc?
01:10 *** rkukura has joined #openstack-dev
01:10 *** rkukura has left #openstack-dev
01:11 *** rkukura has joined #openstack-dev
01:13 *** rkukura has left #openstack-dev
01:13 *** bencherian has joined #openstack-dev
01:14 *** jdurgin has quit IRC
01:14 <vishy> as soon as bugs are fixed
01:15 *** andrewsmedina has quit IRC
01:16 <jog0> vishy: good answer
01:18 *** vincentricci has quit IRC
01:25 *** PotHix has quit IRC
01:25 *** danwent has joined #openstack-dev
01:26 *** shevek_ has quit IRC
01:30 *** deshantm has quit IRC
01:37 <comstud> strange
01:37 <comstud> i haven't seen waldon's email yet
01:37 <comstud> assuming he's sent it
01:38 <comstud> ok, guess he hasn't sent one yet
01:38 <comstud> not in the archive
01:39 *** spiffxp has quit IRC
01:39 *** dolphm_ has quit IRC
01:52 <vishy> comstud: to zadara?
01:53 <vishy> comstud: hmm i sent from wrong email
01:54 *** bencherian has quit IRC
01:54 <vishy> comstud: just sent it again from the right email
01:54 *** novas0x2a|laptop has quit IRC
01:56 <comstud> cools
01:56 <comstud> it's in the archive now
01:56 <comstud> thanks
01:56 *** jakedahn is now known as jakedahn_zz
01:57 *** Ryan_Lane has quit IRC
01:57 *** bencherian has joined #openstack-dev
01:58 *** roge has joined #openstack-dev
02:02 *** ayoung has quit IRC
02:08 *** rkukura has joined #openstack-dev
02:14 *** maplebed has quit IRC
02:16 *** misheska has quit IRC
02:17 *** hitesh__ has joined #openstack-dev
02:19 *** Mandell has quit IRC
02:22 *** danwent has quit IRC
02:22 *** markmcclain has quit IRC
02:23 *** andrewsmedina has joined #openstack-dev
02:27 *** jakedahn_zz is now known as jakedahn
02:29 *** hitesh__ has quit IRC
02:39 *** jakedahn is now known as jakedahn_zz
02:44 *** reed has quit IRC
02:53 *** devananda has quit IRC
02:54 *** dprince has quit IRC
02:54 *** hitesh__ has joined #openstack-dev
03:00 *** bencherian has quit IRC
03:19 *** deshantm has joined #openstack-dev
03:27 *** hitesh__ has quit IRC
03:32 *** relateable has quit IRC
03:34 *** jakedahn_zz is now known as jakedahn
03:51 *** dubsquared has quit IRC
04:09 *** andrewsmedina has quit IRC
04:15 *** roge has quit IRC
04:20 <redbo> can someone tell me what I need to do to get my gerrit/launchpad account un-screwed up?
04:23 *** shang has quit IRC
04:23 <anotherjesse> redbo: mtaylor / jeblair is who I usually ask for that sort of help
04:23 <notmyname> and LinuxJedi
04:24 *** martine has joined #openstack-dev
04:28 <redbo> mtaylor, jeblair, LinuxJedi: what do I need to do to get my gerrit account working?
04:31 *** Mandell has joined #openstack-dev
04:34 *** rohita has quit IRC
04:34 <redbo> I'm a bit sketchy on the details, but mtaylor suspected that because I merged accounts in launchpad once, my shit's all fucked up.
04:36 *** littleidea has quit IRC
04:40 *** dtroyer is now known as dtroyer_zzz
04:40 *** shang has joined #openstack-dev
04:41 *** pmyers has quit IRC
04:44 *** pmyers has joined #openstack-dev
04:48 *** littleidea has joined #openstack-dev
04:57 <mtaylor> redbo: ah, sorry - I thought we had you - I think you fell off the todo list
04:57 <mtaylor> gimme just a sec
05:05 *** martine has quit IRC
05:15 *** zaitcev has quit IRC
05:22 *** davlaps has quit IRC
05:33 *** hattwick has quit IRC
05:42 *** deshantm has quit IRC
05:45 *** adjohn has joined #openstack-dev
06:01 *** Ryan_Lane has joined #openstack-dev
06:03 *** phorgh has quit IRC
06:11 *** danwent has joined #openstack-dev
06:36 *** bepernoot has joined #openstack-dev
06:37 *** deshantm has joined #openstack-dev
06:49 *** bepernoot has quit IRC
06:50 *** deshantm has quit IRC
07:04 *** phorgh has joined #openstack-dev
07:16 *** mtaylor_ has joined #openstack-dev
07:18 *** gyee has quit IRC
07:19 *** jakedahn is now known as jakedahn_zz
07:19 *** Mandell has quit IRC
07:21 *** zigo has joined #openstack-dev
07:21 *** ghe_ has quit IRC
07:21 *** mtaylor has quit IRC
07:26 *** Mandell has joined #openstack-dev
07:28 *** ghe_ has joined #openstack-dev
07:48 *** rohita has joined #openstack-dev
07:53 *** danwent has quit IRC
07:54 *** shevek_ has joined #openstack-dev
07:54 *** bepernoot has joined #openstack-dev
07:54 *** danwent has joined #openstack-dev
07:59 *** hattwick has joined #openstack-dev
08:00 *** danwent has quit IRC
08:02 <rmk> hmm -- how is keystone auth enabled for nova-api with essex?
08:02 *** Ryan_Lane has quit IRC
08:03 *** littleidea has quit IRC
08:04 <rmk> I don't quite understand the paste config
08:15 *** Mandell has quit IRC
08:16 *** reidrac has joined #openstack-dev
08:20 <soren> ttx: We don't have any sort of freeze policy for packaging code, do we?
08:20 <ttx> soren: no
08:20 <ttx> soren: note that the PPA packaging could use a refresh from the precise packaging
08:20 <soren> ttx: I'm doing that right now.
08:20 <soren> ttx: ...but I'm also working on a bunch of other changes.
08:21 <ttx> soren: we somehow need to solve this duplication of work. I know we tried before :)
08:21 <soren> ttx: Since Ubuntu actually uses a different set of branches for their packaging, I don't actually have to worry too much about Ubuntu's FF.
08:21 <ttx> yep
08:21 <soren> ttx: I hope it'll all sort itself out in the q-timeframe.
08:22  * mtaylor_ isn't going to hold his breath
08:22 *** mtaylor_ is now known as mtaylor
08:22 *** mtaylor has joined #openstack-dev
08:22 *** ChanServ sets mode: +v mtaylor
08:24 <zykes-> soren: around for a non-openstack question?
08:41 <soren> zykes-: I'll see if I can come up with a non-openstack answer.
08:48 <LarsErikP> so, does anyone have a "stable" essex stack running. With VLANmanager, multihost? Just wanna know whether it's possible or not.. Running in 11.10
08:51 <LarsErikP> soory, wring chan
08:51 *** maploin has joined #openstack-dev
08:52 <LarsErikP> wrong
08:52 *** maploin has quit IRC
08:52 *** maploin has joined #openstack-dev
08:53 <zykes-> soren: nvm, i found that the thing i needed for upstart was "expect fork"
09:11 *** My1 has joined #openstack-dev
09:11 *** My1 has left #openstack-dev
09:26 *** ipl31 has quit IRC
09:26 *** adam_g has quit IRC
09:27 *** hyakuhei has quit IRC
09:27 *** redbo has quit IRC
09:27 *** med_ has quit IRC
09:28 *** bencherian has joined #openstack-dev
09:33 *** pixelbeat has joined #openstack-dev
09:39 *** dneary has joined #openstack-dev
09:43 *** ipl31 has joined #openstack-dev
09:44 *** redbo has joined #openstack-dev
09:44 *** ChanServ sets mode: +v redbo
09:44 *** adam_g has joined #openstack-dev
09:44 *** hyakuhei has joined #openstack-dev
10:01 *** adjohn has quit IRC
10:02 *** oneiroi has joined #openstack-dev
10:09 *** bencherian has quit IRC
10:12 *** hitesh_ has joined #openstack-dev
10:17 *** tryggvil_ has joined #openstack-dev
10:48 *** PotHix has joined #openstack-dev
10:50 *** dachary has joined #openstack-dev
10:58 *** rkukura has quit IRC
11:07 *** gpernot has joined #openstack-dev
11:16 *** dneary has quit IRC
11:37 *** hitesh_ has quit IRC
11:49 *** maploin has quit IRC
12:02 *** davlaps has joined #openstack-dev
12:06 *** rods has joined #openstack-dev
12:10 <eglynn> anyone know if the "Archive the artifacts" check-box was deliberately un-ticked in the Jenkins config?
12:11 <eglynn> e.g. in the Post-build Actions under https://jenkins.openstack.org/job/gate-glance-python26/configure
12:11 <eglynn> or https://jenkins.openstack.org/job/gate-glance-python27/configure
12:11  * eglynn seems to remember the Build Artifacts being preserved previously
12:11 *** hugokuo has joined #openstack-dev
12:13 <hugokuo> https://answers.launchpad.net/glance/+question/190763 help please, there's an issue ticket about glance failing to upload larger images to swift
12:16 <eglynn> hugokuo: can you reproduce against E4?
12:17 <hugokuo> ok
12:17 <hugokuo> give it a try
12:18 <eglynn> hugokuo: thanks! essex-4 included at least one fix for large image upload
12:20 <eglynn> jeblair: there?
12:22 *** dachary has quit IRC
12:25 *** berendt has joined #openstack-dev
12:27 *** Shrews has joined #openstack-dev
12:27 *** bsza has joined #openstack-dev
12:31 *** lts has joined #openstack-dev
12:38 <eglynn> actually, ignore my questions above about archiving log files from Jenkins builds, wouldn't help in my case as the glance logs are dumped under random temp dirs
12:40 *** martine has joined #openstack-dev
12:42 *** dneary has joined #openstack-dev
12:51 *** medberry has joined #openstack-dev
12:51 *** medberry has quit IRC
12:51 *** medberry has joined #openstack-dev
12:52 *** medberry is now known as med_
12:53 *** danwent has joined #openstack-dev
12:54 *** pixelbeat has quit IRC
13:02 *** martine has quit IRC
13:04 *** andrewsmedina has joined #openstack-dev
13:05 *** phorgh1 has joined #openstack-dev
13:05 *** phorgh has quit IRC
13:13 *** danwent has quit IRC
13:15 *** dprince has joined #openstack-dev
13:17 *** ayoung has joined #openstack-dev
13:24 *** pixelbeat has joined #openstack-dev
13:24 *** littleidea has joined #openstack-dev
13:25 *** littleidea has quit IRC
13:29 *** danwent has joined #openstack-dev
13:32 *** rkukura has joined #openstack-dev
13:34 *** maploin has joined #openstack-dev
13:34 *** maploin has quit IRC
13:34 *** maploin has joined #openstack-dev
13:35 *** dtroyer_zzz is now known as dtroyer
13:40 *** derekh has joined #openstack-dev
13:42 *** dtroyer is now known as dtroyer_zzz
13:43 *** tryggvil_ has quit IRC
13:44 *** kbringard has joined #openstack-dev
13:50 *** danwent has quit IRC
13:52 *** dachary has joined #openstack-dev
14:00 *** roge has joined #openstack-dev
14:08 *** dtroyer_zzz is now known as dtroyer
14:17 *** maploin has quit IRC
14:18 *** glenc_ has joined #openstack-dev
14:19 *** glenc has quit IRC
14:20 *** glenc_ is now known as glenc
14:22 *** danwent has joined #openstack-dev
14:27 *** zzed has joined #openstack-dev
14:34 *** sandywalsh has joined #openstack-dev
14:37 *** reed has joined #openstack-dev
14:43 *** flaviamissi has joined #openstack-dev
14:51 <eglynn> jaypipes: hey
15:00 *** dprince has quit IRC
15:00 *** danwent has quit IRC
15:00 *** dolphm has joined #openstack-dev
15:01 *** byeager has joined #openstack-dev
15:04 *** phorgh1 has quit IRC
15:04 *** dprince has joined #openstack-dev
15:06 *** phorgh has joined #openstack-dev
15:09 <notmyname> chmouel: you saw the comments on your bin/swift + client.py patch?
15:10 <chmouel> yeah I saw dfg's comment, working on addressing them
15:10 <notmyname> cool
15:13 *** phorgh has quit IRC
15:13 *** littleidea has joined #openstack-dev
15:16 *** zaitcev has joined #openstack-dev
15:17 *** phorgh has joined #openstack-dev
15:18 *** danwent has joined #openstack-dev
15:18 *** reed has quit IRC
15:19 *** AlanClark has joined #openstack-dev
15:28 <jaypipes> eglynn: pong
15:28 <eglynn> jaypipes: just wanted to check that I'm not missing something obvious on https://bugs.launchpad.net/glance/+bug/955527
15:28 <uvirtbot> Launchpad bug 955527 in glance "copy_from test case logic is invalid" [High,In progress]
15:29 <jaypipes> eglynn: sorry, slow start this morning...
15:29 <eglynn> jaypipes: np ;)
15:29 <eglynn> jaypipes: I don't follow your logic on the delete of the original image being applied directly to the backend store
15:30 <eglynn> jaypipes: i.e. DELETE /v1/images/original_image_id shouldn't impact on GET /v1/images/copy_image_id
15:30 <eglynn> jaypipes: unless there's a bizarre ID collision
15:30 <jaypipes> eglynn: one sec, reading...
15:30 *** hhoover has joined #openstack-dev
15:33 <eglynn> jaypipes: the "fix" proposed in https://review.openstack.org/5396 is just an extra test assertion to eliminate an ID collision between copy and original being the root of the issue
15:34 *** hhoover has left #openstack-dev
15:35 <jaypipes> eglynn: I think there's got to be something more to it...
15:36 <jaypipes> eglynn: there's just a feeling I have that this can't be as common a thing (id collision) as we've seen...
15:36 <eglynn> jaypipes: yep, agree ... an ID collision would be bizarre, just wanted to eliminate that from the mix
15:37 <jaypipes> eglynn: yeah, and I generally agree with you on the bug comments... just can't quite put my finger on it..
15:37 <eglynn> jaypipes: also my comments about gremlins in swift-land were unfounded I think, as the same kind of failure intermittently hits a test copying s3->file as opposed to s3->swift
15:37 <eglynn> I'll continue to poke at it ...
15:38 <jaypipes> eglynn: I have a thought.
15:39 <eglynn> jaypipes: shoot
15:39 <jaypipes> eglynn: so.. perhaps the 404 is not referring to the object in S3/swift, but instead is referring to the container or bucket.
15:40 <eglynn> jaypipes: but that wouldn't apply in the s3->file case, right?
15:40 <jaypipes> eglynn: and it is the container/bucket that is not being managed in the same way in the setup_swift/setup_s3 (and teardown) decorators that it is in, say, functional.TestSwift.setUp()?
15:40 <jaypipes> eglynn: yeah, it would, since the original is the one that is returning the 404 and the original is in S3 (never in file)
15:43 <eglynn> jaypipes: it's the second GET /v1/images/copy_image_id that fails with 404, not the original_image_id
15:44 <jaypipes> eglynn: it's been both :)
15:44 <jaypipes> eglynn: lemme grab a link to show ya....
15:44 <jaypipes> eglynn: https://jenkins.openstack.org/job/gate-glance-python26/98/console
15:44 *** roge has quit IRC
15:45 <eglynn> jaypipes: the most recent failure sequence ... GET /v1/images/copy_image_id->200, DELETE /v1/images/original_image_id->200, GET /v1/images/copy_image_id->404
15:45 <jaypipes> eglynn: oh wait, wrong link... one sec
15:46 <jaypipes> eglynn: damn it, you're right... as usual.
15:47 <jaypipes> I think I've looked over this code so many times my eyes are going bonkers
15:47 <eglynn> jaypipes: I hear ya ;)
15:48 <eglynn> jaypipes: k, I'll dig on ... I've a feeling there's something obvious lurking ... #5396 will at least eliminate one bizarre and highly unlikely cause
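
(The "fix" eglynn refers to in review 5396 amounts to one extra assertion; a minimal sketch, where the helper names are assumed from the conversation rather than taken from the glance test suite:)

    # Rule out an ID collision between copy and original as the root
    # cause of the spurious 404 before exercising DELETE/GET.
    original_image_id = self._upload_image(copy_from=s3_url)  # hypothetical helper
    copy_image_id = self._wait_for_copy(original_image_id)    # hypothetical helper
    self.assertNotEqual(copy_image_id, original_image_id)
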
15:49 <jaypipes> eglynn: need to narrow it down to a pattern..
15:49 <jaypipes> eglynn: is the pattern that the copy_from is always in S3?
15:50 <eglynn> jaypipes: so far I've seen s3->{file|swift}, so yeah the source is always S3
15:50 <jaypipes> eglynn: k... that should be a clue at least.
15:51 <eglynn> jaypipes: yep, I'll dig thru' the S3 deletion logic
15:52 <jaypipes> eglynn: k, thx mate
15:53 *** utlemming has quit IRC
15:56 *** jdurgin has joined #openstack-dev
15:56 *** utlemming has joined #openstack-dev
16:01 *** LinuxJedi has quit IRC
16:01 *** sdake has quit IRC
16:02 *** davidkranz has quit IRC
16:02 *** tryggvil_ has joined #openstack-dev
16:03 *** rohita has quit IRC
16:03 *** phorgh has quit IRC
16:04 *** phorgh has joined #openstack-dev
16:04 *** LinuxJedi has joined #openstack-dev
16:05 *** bencherian has joined #openstack-dev
16:05 <andrewbogott> anotherjesse: Is nova auth (w/out keystone) already broken in essex, or is it just going to be broken in folsom?
16:05 *** dachary has quit IRC
16:05 <andrewbogott> (I ask because I'm looking at upgrading a Diablo install to Essex, and we don't use keystone currently.)
16:06 *** martine has joined #openstack-dev
16:06 <andrewbogott> which, for that matter, is there a 'how to upgrade to essex' guide someplace?
16:06 *** maplebed has joined #openstack-dev
16:06 *** markmcclain has joined #openstack-dev
16:07 *** rohita has joined #openstack-dev
16:08 *** maploin has joined #openstack-dev
16:08 *** maploin has quit IRC
16:08 *** maploin has joined #openstack-dev
16:09 <jaypipes> jeblair: looks like Jenkins is having some issues... the devstack-vm gate is failing. see here: https://jenkins.openstack.org/job/gate-integration-tests-devstack-vm/2593/console
16:10 <jaypipes> jeblair: failing for all projects it seems, not just for glance runs
16:10 <jeblair> jaypipes: looking into it
16:10 <jaypipes> jeblair: cheers
16:10 *** markmcclain has quit IRC
16:11 <LinuxJedi> jaypipes: we think there may be a resolver problem on the nodes
16:12 *** reed has joined #openstack-dev
16:13 <jeblair> anotherjesse: ping
16:16 <jaypipes> LinuxJedi: k, I trust you're on the case :)
16:17 *** Mandell has joined #openstack-dev
16:21 *** bencherian has quit IRC
16:22 *** phorgh1 has joined #openstack-dev
16:24 *** phorgh has quit IRC
16:24 *** reidrac has quit IRC
16:25 *** mnewby has joined #openstack-dev
16:26 *** martine has quit IRC
16:28 *** LinuxJedi has quit IRC
16:28 *** sdake has joined #openstack-dev
16:29 <eglynn> bcwaldon: quick question?
16:29 *** utlemming has quit IRC
16:30 *** markmcclain has joined #openstack-dev
16:30 *** LinuxJedi has joined #openstack-dev
16:30 *** utlemming has joined #openstack-dev
16:30 <bcwaldon> whatsup
16:30 <eglynn> bcwaldon: just wondering if this NotAuthorized mapping to NotFound in the registry controller is deliberate or a copy'n'paste oversight?
16:30 <eglynn> bcwaldon: https://github.com/openstack/glance/commit/7c4d9350#L3R273
16:31 <jeblair> jaypipes: I believe there was a series of nodes that were created with wrong permissions on /etc/resolv.conf
16:31 <eglynn> bcwaldon: (as opposed to NotAuthorized->webob.exc.HTTPForbidden, so that the client gets a 403)
16:31 <bcwaldon> eglynn: looking...
16:31 <eglynn> bcwaldon: reason I ask is that I'm trying to eliminate all unexpected sources of 404s wrt bug #955527
16:31 <uvirtbot> Launchpad bug 955527 in glance "copy_from test case logic is invalid" [High,In progress] https://launchpad.net/bugs/955527
16:31 <jeblair> jaypipes: as in, we got them from rackspace cloud that way (the perms were right on the template node)
16:31 <jaypipes> jeblair: interesting
16:31 <jeblair> jaypipes: and I think we're past that batch now, new ones are coming up correctly
16:31 *** bepernoot has quit IRC
16:32 <jeblair> jaypipes: yeah, i would not have expected something like that to happen.
16:32 <jaypipes> jeblair: ok. I'll keep an eye on it.
16:32 <bcwaldon> eglynn: that code has changed since the commit you reference, but yes, it is intended
16:32 <bcwaldon> eglynn: we don't want to allow users to discover images by getting a 403 back
16:32 <bcwaldon> eglynn: so we present unauthorized images as if they don't exist
16:32 <eglynn> bcwaldon: k, makes sense
16:33  * eglynn was grasping at straws in any case ...
16:33 <bcwaldon> eglynn: look at the latest code, we realized we had to do it slightly differently for images that are public that the user doesn't own
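
(A minimal sketch of the pattern bcwaldon describes -- mapping NotAuthorized to a 404 rather than a 403 so private image IDs can't be confirmed by probing. The controller and exception names are illustrative, not glance's exact code:)

    import webob.exc

    def show(self, req, image_id):
        try:
            image = self.registry.get_image(req.context, image_id)
        except exception.NotAuthorized:
            # Answer exactly as if the image does not exist: a 403 here
            # would leak the fact that a private image ID is valid.
            raise webob.exc.HTTPNotFound()
        return image
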
16:34 *** bengrue has quit IRC
16:36 <jeblair> i retriggered the failed devstack jobs
16:37 <jeblair> anotherjesse, sleepsonzzz: it looks like devstack clones quantum if horizon is enabled; is that used and should we start gating quantum on devstack?
16:43 *** phorgh1 has quit IRC
16:43 *** phorgh has joined #openstack-dev
16:45 *** bencherian has joined #openstack-dev
16:47 *** mszilagyi has joined #openstack-dev
16:52 *** jdg has joined #openstack-dev
16:54 *** med_ has quit IRC
16:55 <sleepsonzzz> jeblair - it should only clone quantum-client
16:55 <sleepsonzzz> (when horizon is enabled)
16:56 <jeblair> sleepsonzzz: sorry, that's what i meant
16:57 <jeblair> sleepsonzzz: so since we gate novaclient and keystoneclient, perhaps we should also gate quantumclient?
16:57 *** vricci has joined #openstack-dev
16:57 *** vricci is now known as vincentricci
17:00 *** dolphm has quit IRC
17:03 *** spiffxp has joined #openstack-dev
17:05 *** medberry has joined #openstack-dev
17:05 <jeblair> sleepsonzzz, anotherjesse: https://review.openstack.org/5401
17:06 *** medberry is now known as Guest35857
17:07 <jeblair> danwent: python-quantumclient is used by horizon in the devstack integration test gate. how do you feel about gating python-quantumclient on devstack? https://review.openstack.org/5401
17:09 <maploin> what's the relationship between the cloudbuilders/noVNC project and the kanaka/noVNC project (which looks like an upstream?)
17:09 <danwent> jeblair: gating both quantum and python-quantumclient on devstack is a big goal for Folsom. An exercise script for quantum in devstack is in review currently. Are you talking about doing this for Essex or Folsom?
17:10 <danwent> jeblair: btw, you can talk to devin, but quantum support in horizon isn't in great shape, and probably won't be supported for essex.
17:10 <danwent> so gating for essex may not be necessary. in Folsom, both horizon and nova will use python-quantumclient
17:11 <jeblair> danwent: ASAP, since it's already being pulled in by horizon. i very much doubt it's being _used_ per se since quantum isn't being used. but since it's still theoretically possible to break horizon by, say, having an import level error in python-quantumclient, it would be good to gate on it. we like to gate all the dependencies that we control.
17:11 <jeblair> (unless we can configure horizon in devstack to not use python-quantumclient at all)
17:13 <danwent> jeblair: I believe you can, but you'd have to check with the horizon folks.
17:13 *** stack_ has joined #openstack-dev
17:14 <danwent> jeblair: but if not, I think it would make sense. We're looking to freeze for the RC today for both quantum and python-quantumclient, so i'm a bit nervous about making CI changes.
17:14 *** oneiroi has quit IRC
17:15 <danwent> jeblair: I don't know if the switch to enable/disable quantum client in horizon disables it at the import level, or just in terms of what is shown in the GUI
17:16 *** Ravikumar_hp has joined #openstack-dev
17:16 *** stack_ has quit IRC
17:18 <jeblair> sleepsonzzz, anotherjesse: ^ do you have any thoughts on this? in devstack, python-quantumclient is always required for horizon. will horizon fail to start without it?
17:19 *** dneary has quit IRC
17:19 *** davidkranz_ has joined #openstack-dev
17:20 *** hitesh__ has joined #openstack-dev
17:20 <davidkranz_> jaypipes: I got a lot of this stuff in compute and network logs http://paste.openstack.org/show/9731/
17:20 <jaypipes> davidkranz_: lemme see if that stuff is in my devstack log output...
17:21 <sleepsonzzz> jeblair - it used to be the case that horizon would crash if it was not installed. But I cannot repro that now. I'll check with folks to see if it is possible to remove the dep.
17:21 *** bepernoot has joined #openstack-dev
17:21 *** bencherian has quit IRC
17:22 *** Ryan_Lane has joined #openstack-dev
17:22 <jeblair> danwent: (in case sleepsonzzz says we need to keep it): the change i'm proposing shouldn't really alter anything. right now python-quantumclient is being installed in the devstack gate test from github. either it's used by horizon, and an ungated change to quantumclient will break the gate test (so gating quantumclient will prevent that), or it's not actually used by horizon, in which case, it still won't be used even if we gate on it
17:23 <adam_g> anyone around with knowledge of the xenapi compute driver?
17:23 <comstud> adam_g: yeah
17:24 *** bencherian has joined #openstack-dev
17:24 <jaypipes> davidkranz_: lol, now the tests are running fine :)
17:24 <danwent> jeblair: ok, go for it. i'm in favor of gating in general, and this gets us going in the right direction
17:24 <jaypipes> davidkranz_: I'll keep running tempest and see if it happens.
17:26 <jeblair> danwent: cool, thanks. looking forward to gating quantum too. :)
17:26 <danwent> jeblair: much appreciated, thanks
17:26 *** shevek_ has quit IRC
17:28 <adam_g> comstud: im looking at bug #954692. in nova.compute.manager._shutdown_instance() an exception is raised if current_power_state == power_state.SHUTOFF. that condition has been there forever. the later call to driver.destroy() appears to work just fine for the libvirt driver in the case that the instance is already shutoff. that is, it cleans up networking/volumes in its driver specific way. is the same true of xenapi? im wondering if that exception
17:28 <uvirtbot> Launchpad bug 954692 in nova "cannot detach volume from terminated instance" [Medium,In progress] https://launchpad.net/bugs/954692
17:31 *** bepernoot has quit IRC
17:33 <comstud> adam_g: the last part of your msg was cut off.. "im wondering if that exception" is the last part
17:33 <comstud> adam_g: Gimme a few to look at what you're talking about
17:35 *** AlanClark has quit IRC
17:35 *** novas0x2a|laptop has joined #openstack-dev
17:35 <adam_g> comstud: ..wondering if raising an exception there is obsolete, if other compute drivers can call destroy() on an already POWEROFF'd instance
17:36 *** Ravikumar_hp has quit IRC
17:37 <comstud> adam_g: ok, sec
17:37 *** bencherian has quit IRC
17:38 *** belmoreira has joined #openstack-dev
17:39 <comstud> adam_g: something doesn't feel quite right with the code. i think it's assuming that if the instance is SHUTOFF, it's been destroyed from Xen. But it's certainly possible that is not the case.
17:40 <comstud> adam_g: Does seem like we should always try to destroy the instance from Xen
17:40 <adam_g> comstud: the only way ive been able to hit that code is to 'sudo poweroff' from inside an instance. in libvirt's case that kills the kvm/libvirt domain. not sure what happens in the xenserver case
17:41 <comstud> yep
17:41 <comstud> that's the case i was thinking of
17:41 <comstud> in the xen case...
17:41 <adam_g> comstud: like i said, the condition doesn't seem to make sense for libvirt. it certainly seems like something that should be raised in the driver
17:41 <comstud> you can 'poweroff' and it won't go away
17:41 <comstud> you should be able to power it back on just fine
17:41 <comstud> however
17:42 <comstud> it appears if you do a 'sudo poweroff'...
17:42 <comstud> and then try to delete the instance while it's in that state
17:42 <comstud> we'll leave the instance around in Xen
17:44 <adam_g> hmm
17:45 <adam_g> comstud: you mean, that happens currently?
17:46 <vishy> comstud, bcwaldon, pvo, johan_-_, tr3buchet, _cerberus_, jk0, Vek, blamar, any other core members i forgot: We have two reviews outstanding for rc-1: https://review.openstack.org/#change,5338 and the removal of VSA code (being discussed on the ml)
17:46 <comstud> i mean that.. that's how I read the code in master
17:46 <comstud> adam_g: ^
17:46 <vishy> please let me know if there are any other blockers that you are aware of
17:46 <comstud> adam_g: and i think it's a bug.. it does appear that the whole 'if state == SHUTOFF' should go away
17:46 <vishy> (and review and comment on the thread pls)
17:46 <jk0> vishy: I was waiting for the ML to wrap up before approving the VSA MP
17:46 <comstud> +1 for VSA going away
17:47 <tr3buchet> vishy: on it
17:47 <comstud> i thought i'd reviewed that s3 ssl one already
17:47 *** Guest35857 is now known as med12345
17:48 <comstud> i know i looked at it before
17:48 <comstud> anyway, approved
17:48 <adam_g> comstud: thats what i figured. ill get something into gerrit shortly, thanks.
17:48 <tr3buchet> damn i was slowest
17:48 <tr3buchet> that thing is sup3r approved now
17:48  * jk0 came in 2nd
17:48 <comstud> adam_g: the one thing I wonder is if... there should be a 'try/except' around the 'self.driver.destroy' call then
17:48 *** med12345 is now known as med___
17:49 <comstud> adam_g: so it'll do the volume cleanup even if self.driver.destroy fails
17:49 *** hitesh__ has quit IRC
17:50 <adam_g> comstud: well, the volume cleanup can happen from the compute manager, but actually cleaning up the sessions/connections needs to happen in the compute driver AFAICS
17:50 *** jakedahn_zz is now known as jakedahn
17:50 <vishy> jk0: yeah i was just wondering if anyone had comments
17:50 <vishy> for the ml
17:51 <adam_g> comstud: but, if different hypervisors handle the case of SHUTOFF differently, i think it should be up to the driver to raise that exception if needed and, ya, catch it around self.driver.destroy
17:51 *** dalang has joined #openstack-dev
17:51 *** adjohn has joined #openstack-dev
17:51 <comstud> adam_g: Correct. I just meant that in the compute manager, we call self.driver.destroy which could potentially fail. But I wonder if we should try/except it so we can continue running the code in compute manager that starts at line 689
17:51 *** gyee has joined #openstack-dev
17:52 <eglynn> jaypipes/jeblair: how about temporarily re-enabling the GLANCE_TEST_S3/SWIFT_CONF env vars on jenkins slaves after https://review.openstack.org/5396 lands?
17:52 <eglynn> (disabled yesterday I believe)
17:52 <comstud> adam_g: Yeah, ok. Agree.. although nova should really have the same experience with libvirt and xenapi. if there's a behavior difference, that'll need to be addressed at some point
17:53 <eglynn> so that I can re-trigger a bunch of builds, hopefully repro the S3 failure, and eliminate an ID collision as the cause
17:56 *** bengrue has joined #openstack-dev
18:01 *** maploin has quit IRC
18:02 *** gyee has quit IRC
18:02 *** Glace_ has joined #openstack-dev
18:03 *** gyee has joined #openstack-dev
18:13 *** dolphm has joined #openstack-dev
18:13 *** derekh has quit IRC
18:15 <hugokuo> eglynn, ping. the result of Glance E4 is working
18:15 <eglynn> hugokuo: cool
18:16 <hugokuo> eglynn, but it seems we must run with E3 ..... ~"~
18:16 <hugokuo> I've got to find out the problem of E3 and fix it ....
18:16 *** dolphm_ has joined #openstack-dev
18:17 <hugokuo> an additional question: must I provide disk & container format in the glance add command?
18:17 <hugokuo> in E4
18:17 <blamar> termie / vishy / bcwaldon: are there any docs on what roles should be? or are roles just free-form strings? looks like Glance is expecting "Admin" for an administrator and Nova is expecting "Admin" or "admin"... just curious which way to standardize
18:17 <bcwaldon> blamar: I actually needed to bring this up, too
18:18 <bcwaldon> blamar: since we are creating default rules with certain roles
18:18 <blamar> and some auth systems return things like identity:admin or image:admin or compute:admin
18:18 *** dolphm has quit IRC
18:19 <bcwaldon> blamar: yeah...once vishy and sleepsonzzz stop talking IRL maybe they'll have some thoughts
18:19 <blamar> bcwaldon: k
18:19 *** deva has joined #openstack-dev
18:19 <eglynn> hugokuo: yes, we made the disk and container formats compulsory in E4 (enforced in the API layer, as opposed to the glance add cmd)
18:20 *** deva is now known as devananda
18:21 <eglynn> hugokuo: are you intending to apply a custom patch to E3? surely better to run with E4 ...
18:21 <vishy> blamar: i prefer lowercase
18:21 <vishy> blamar: maybe we should make the policy check .lower() roles before checking them
18:22 <hugokuo> eglynn, my leader asked me to do that though.
18:22 <bcwaldon> blamar: we do lower() in the common policy module
18:22 <hugokuo> I do love the latest code, but they seem worried about the integration risk between different versions of OpenStack projects
18:22 <vishy> blamar: in fact we ^^
18:23 <bcwaldon> blamar: but looks like glance expects 'Admin' for the context.is_admin check
18:23 <bcwaldon> blamar: which is probably incorrect
18:24 <blamar> vishy / bcwaldon: what about certain auth services that might want to have administrators for just glance or just nova, ideas about that? say role = "compute:admin"? just not support that?
18:24 <vishy> blamar: we definitely should do that ultimately
18:24 <vishy> blamar: I think we just tried to keep it simple initially
18:24 <bcwaldon> blamar: you'll need to modify the policy json file to get that
18:25 <vishy> blamar: roles are just strings
18:25 <blamar> vishy: yup, gotcha
18:25 <blamar> http://docs.openstack.org/api/openstack-identity-service/2.0/content/POST_authenticate_v2.0_tokens_Service_API_Client_Operations.html
18:25 <vishy> blamar: so you could give the glance admin user just an 'admin' role for the glance tenant
18:25 <bcwaldon> blamar: well, roles are referenced from everywhere by their name (a string), but you refer to them by id in keystone itself
18:25 <blamar> vishy: ^^ looks like example roles there
18:25 <vishy> blamar: and not give it for the nova tenant
18:26 <blamar> should we update docs to have example roles be "Admin"?
18:26 <bcwaldon> blamar: if anything, 'admin'
18:26 <blamar> oh right
18:26 <bcwaldon> blamar: but I wouldn't be against it if it makes this simpler to comprehend
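
(The case-insensitivity bcwaldon mentions boils down to something like the following; a toy sketch, not the actual common policy module:)

    def role_matches(required, user_roles):
        """Compare role names case-insensitively, so 'Admin' == 'admin'."""
        return required.lower() in [r.lower() for r in user_roles]

    assert role_matches('admin', ['Admin', 'Member'])
    assert not role_matches('admin', ['compute:admin'])  # scoped roles differ
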
18:26 <davidkranz_> jaypipes: Just got lots of those errors on devstack/precise...
18:26 <jaypipes> davidkranz_: ok, so tempest ran cleanly through one time. next time I got that same error during the floating IP test. nothing bad in nova-network log but this was in nova-compute: http://paste.openstack.org/show/9744
18:27 <jaypipes> davidkranz_: lol... jinx.
18:28 <davidkranz_> This must be some horrible new racy type thing. We need to file a bug.
18:28 <vishy> jaypipes, davidkranz_: looks like perhaps the domain has been suspended and resumed between?
18:28 <vishy> or somehow the uuid has been regenerated
18:28 <vishy> while in the middle of the spawn
18:28 <vishy> ?
18:29 <davidkranz_> jaypipes: I also got the first failure right after "Positive test: Should return the list of floating IPs ... ok". Perhaps it is not random.
18:30 <jaypipes> vishy: yeah, it's a weird one... we're trying to track it down. think it may be due to some side effects from a previous test because running the floating IP test alone works fine..
18:30 *** dolphm_ has quit IRC
18:31 *** n0ano has quit IRC
18:31 *** dolphm has joined #openstack-dev
18:31 *** johnpostlethwait has joined #openstack-dev
18:31 <jaypipes> vishy: first we need to verify that it isn't the test case at fault...
18:32 <vishy> jaypipes: well you managed to achieve a traceback i have never seen, so that is exciting
18:32 *** med___ is now known as med__
18:32 *** med__ is now known as med_
18:33 *** med_ has joined #openstack-dev
18:33 <davidkranz_> jaypipes: what os and hypervisor are you using?
18:33 <jaypipes> vishy: glad we are exciting today :)
18:33 *** gyee has quit IRC
18:33 <jaypipes> davidkranz_: libvirt/KVM on Xubuntu 11.10
18:34 <davidkranz_> OK, so precise is not the problem.
18:34 <jaypipes> davidkranz_: nope
18:39 *** jakedahn is now known as jakedahn_zz
18:41 *** bepernoot has joined #openstack-dev
18:51 *** Glace_ has quit IRC
18:52 *** jdg has quit IRC
18:52 *** Glace_ has joined #openstack-dev
18:53 *** bepernoot has quit IRC
18:55 *** gpernot has quit IRC
18:57 *** Glacee has joined #openstack-dev
18:59 *** Glace_ has quit IRC
19:05 <davidkranz_> jaypipes, vishy: I just got it to sort of reproduce and submitted https://bugs.launchpad.net/nova/+bug/956313
19:05 <uvirtbot> Launchpad bug 956313 in nova "Nova loses ability to build instances" [Undecided,New]
19:07 <jaypipes> davidkranz_: yeah, the weird thing, though, is that I don't see that lock/semaphore error in my compute log... :(
19:08 <davidkranz_> jaypipes: That is actually in the network log.
19:08 *** bsza has quit IRC
19:09 <jaypipes> davidkranz_: yes, but the timeout is in the compute log, no?
19:09 <davidkranz_> Yes.
19:11 *** devananda has quit IRC
19:12 *** devananda has joined #openstack-dev
19:12 <jaypipes> davidkranz_: I'm suspecting that cleanup of instances from prior test runs is wreaking havoc.
19:13 <davidkranz_> jaypipes: Perhaps, but this being a black-box test it is not really tempest's problem. I don't think it makes sense to spend more time trying to beat up on essex until this is fixed, do you?
19:14 *** flaviamissi has quit IRC
19:14 *** pixelbeat has quit IRC
19:14 <davidkranz_> jaypipes: I don't think I am enough of a nova expert to help debug it productively.
19:15 <jaypipes> davidkranz_: it's ok, you're doing fine reporting the bugs so far :)
19:16 <davidkranz_> jaypipes: OK, I will tell Adam G and those guys about the current state.
19:17 <jaypipes> davidkranz_: yup, sounds good. If you notice any patterns anywhere else, just log another bug :)
19:17 *** PotHix has quit IRC
19:22 *** jakedahn_zz is now known as jakedahn
19:22 <eglynn> jeblair: could I trouble you to temporarily re-enable those glance s3/swift env vars on the jenkins slaves?
19:22 *** lts has quit IRC
19:25 <vishy> davidkranz_, jaypipes: do you rerun all services in every test?
19:26 <notmyname> Daviey: ping (about swift releases)
19:27 <jaypipes> vishy: do you mean do we restart devstack/environment between each test?
19:28 <jaypipes> eglynn, jeblair: before you do that... eglynn, I can email you some conf files so you can test locally...
19:28 <vishy> jaypipes: yes that is what i mean
19:28 <eglynn> jaypipes: cool, thanks
19:28 <vishy> jaypipes: I'm trying to figure out how the lockfile error might occur
19:28 <jaypipes> vishy: yes, I do killall screen, stack.sh, update tempest.conf with the new image UUID and then re-run tempest
19:29 <vishy> jaypipes: and I'm wondering if there is an issue where it starts receiving requests before lockfile cleanup has finished
19:29 *** sdake has quit IRC
19:29 <jaypipes> vishy: and that error (the one here: http://paste.openstack.org/show/9744/) is reproducible in about 60% of test runs.
19:29 <davidkranz_> vishy: I first saw these errors in a real cluster and only was using devstack today.
19:30 <vishy> davidkranz_: the lockfile issue is very strange
19:30 <jeblair> eglynn, jaypipes: i see jaypipes is sending you the files for local testing, so i'll hold off for now. let me know when you're ready to re-enable.
19:30 <eglynn> jeblair: great, thanks
19:32 <jaypipes> eglynn: test account confs in your inbox.
19:32 <eglynn> jaypipes: thanking you ...
19:33 *** pixelbeat has joined #openstack-dev
19:33 <rmk> I'm at a loss here. Trying to use the RC1 packages on 12.04 with keystone -- dash is throwing errors like this: Error: Error fetching security_groups: Malformed request url (HTTP 400)
19:34 *** rohita has quit IRC
19:34 <jaypipes> rmk: can you connect to keystone via curl?
19:34 *** jakedahn is now known as jakedahn_zz
19:35 <rmk> yep
19:35 <rmk> I'll try some of the more specific api calls to it now
19:36 *** jakedahn_zz is now known as jakedahn
19:36 *** sdake has joined #openstack-dev
19:37 *** lts has joined #openstack-dev
19:37 *** martine has joined #openstack-dev
19:37 <rmk> keystone --token ADMIN --endpoint http://127.0.0.1:35357/v2.0/ endpoint-list is generating a stack trace
19:38 <rmk> AttributeError: 'TemplatedCatalog' object has no attribute 'list_endpoints'
19:39 <jaypipes> rmk: yep, that's a known bug.
19:40 <rmk> I used the devstack script to generate my keystone data
19:40 <rmk> This is my first foray into the new keystone (which already seems way better)
19:41 <rmk> service-list comes back empty, that seems wrong
19:43 <clayg> vishy: can you look at my notes: https://bugs.launchpad.net/nova/+bug/942918
19:44 <uvirtbot> Launchpad bug 942918 in nova "nova-volume doesn't log request_id" [Undecided,New]
19:45 *** bepernoot has joined #openstack-dev
19:46 <vishy> clayg: interesting, that sounds like a bug
19:47 <clayg> yeah well, I don't know about nothing fancy like that - I just want to get my request_ids on my volume creates ;)
19:50 *** bepernoot has quit IRC
19:51 *** novas0x2a|laptop has quit IRC
19:51 *** novas0x2a|laptop has joined #openstack-dev
19:53 *** belmoreira has quit IRC
20:00 *** n0ano has joined #openstack-dev
20:00 *** belmoreira has joined #openstack-dev
20:02 <vishy> clayg: on it
20:05 *** spiffxp has quit IRC
20:08 <vishy> clayg, comstud: https://review.openstack.org/5412
20:09 <jaypipes> rmk: sorry, remember to prefix comments with jaypipes: :) otherwise I get pulled off course
20:10 <jaypipes> rmk: yes, the service list and endpoint list when using templated catalog is buggy. lemme get you the bug links...
20:10 <jaypipes> rmk: bugs are both in progress waiting for code review
20:10 <YorikSar> https://review.openstack.org/4937 anyone?
20:10 *** Glace_ has joined #openstack-dev
20:11 *** dprince has quit IRC
20:11 *** Glacee has quit IRC
20:11 <jaypipes> rmk: https://bugs.launchpad.net/keystone/+bug/954087
20:11 <uvirtbot> Launchpad bug 954087 in keystone "endpoint-list with TemplatedCatalog backend raises AttributeError" [Critical,In progress]
20:11 <jaypipes> rmk: https://bugs.launchpad.net/keystone/+bug/954089
20:11 <uvirtbot> Launchpad bug 954089 in keystone "service-list returns empty set for TemplatedCatalog backend" [Critical,In progress]
20:12 *** phorgh has quit IRC
20:12 <YorikSar> The change has been hanging around waiting for approval for some time already... Can someone approve it?
20:13 *** dachary has joined #openstack-dev
20:14 <YorikSar> btw, can we make uvirtbot do the same thing with changes it does with bugs?
20:18 *** roge has joined #openstack-dev
20:20 <vishy> comstud, johan_-_, Vek: there is an issue with lockfile that looks nasty: https://bugs.launchpad.net/nova/+bug/956313
20:20 <uvirtbot> Launchpad bug 956313 in nova "Nova loses ability to build instances" [High,Triaged]
20:21 <vishy> I'm currently blaming the lock cleanup code
20:21 <vishy> but any help would be appreciated
20:21 <comstud> hrm
20:21 <comstud> k
20:22 <vishy> adam_g: do you have someone fixing bug 956366 ?
20:22 <uvirtbot> Launchpad bug 956366 in nova "self-referential security groups can not be deleted" [High,New] https://launchpad.net/bugs/956366
20:23 <adam_g> vishy: me, i guess. its one of the 10 things im trying to do at once right now.
20:24 <vishy> pvo: if you could get someone to look at https://bugs.launchpad.net/ubuntu/+source/nova/+bug/954692 and verify that removing the check referenced would be ok for xenapi?
20:24 <uvirtbot> Launchpad bug 954692 in nova "cannot detach volume from terminated instance" [Medium,In progress]
20:24 *** roge has quit IRC
20:25 <pvo> vishy: how well do you know policy.json?
20:25 <comstud> vishy: something removed the lock file
20:25 *** n0ano has quit IRC
20:26 *** n0ano has joined #openstack-dev
20:27 *** roge has joined #openstack-dev
20:30 <eglynn> jaypipes: I didn't realize that it was just a standard Amazon S3 account used by the func tests (as opposed to the cloudfiles S3 clone)
20:31 <eglynn> jaypipes: I'd tried earlier many times to repro the issue against my own S3 account, so unlikely I'll see it with just different a/c credentials
20:31 <eglynn> BTW getting ECONNREFUSED on the swift auth url, but the copy_to_file test should still repro the problem if it was gonna occur in my environment
20:31 <eglynn> jaypipes: looks like something specific to the jenkins env
20:32 <comstud> vishy: i added comments to both bugs, at least.
20:32 <eglynn> jaypipes/jeblair: I had the tests running continually for an hour with no repro of the S3 issue, so maybe time to re-enable s3/swift on the jenkins slave?
20:33 *** mnewby has quit IRC
20:33 <jaypipes> eglynn: cool with me, sure.
20:33 *** pixelbeat has quit IRC
20:33 <eglynn> jaypipes: cool, I'll see if a bunch of jenkins build re-triggering causes it to recur
20:35 *** Shrews has quit IRC
20:36 <vishy> pvo: very well
20:37 <vishy> comstud, johan_-_: commented and updated: https://review.openstack.org/#change,5412
20:38 <comstud> yeah just +2'd
20:38 <eglynn> jeblair: are you good with re-enabling s3/swift on glance tests?
20:38 *** jabbott has joined #openstack-dev
20:38 <jabbott> Hello.
20:38 <jabbott> How is everyone today?
20:38 <jeblair> eglynn: can do.
20:38 <pvo> :q
20:38 <pvo> er
20:39 <comstud> vishy: i'm going offline for a bit.. fyi. back later.
20:39 <eglynn> jeblair: thanks much!
20:39 <vishy> comstud: cool
20:40 <jeblair> eglynn: done
20:41 *** jabbott has left #openstack-dev
20:42 <eglynn> jeblair: magic!
20:48 *** tryggvil_ has quit IRC
20:51 *** gyee has joined #openstack-dev
20:55 <rmk> jaypipes: Thanks. Yeah had to step away for a bit myself also.
20:55 <rmk> I'll probably be pulling Essex down regularly now through the final release.
20:55 <rmk> Super impressed with the progress from Diablo.
20:58 <kbringard> yea, essex is big pimpin'
20:58  * kbringard is super excited about quantum
20:59  * jaypipes just wants to get TryStack off of Diablo :)
20:59 <jaypipes> vishy: Ever seen anything like this? http://paste.openstack.org/show/9788/
21:00 *** lts has quit IRC
21:00 <jaypipes> vishy: Every time anyone on TryStack does a snapshot, it fails (snapshot table is completely empty but dashboard has a record of something it thinks is a snapshot, in a "Queued" status). And logging into the compute node, I see the following (repeatable every time I click "snapshot" in the dashboard...
21:00 <vishy> jaypipes: yeah your tmp drive is too small
21:00 <vishy> i would guess
21:01 <jaypipes> vishy: aha!
21:01 <jaypipes> vishy: love the cryptic error message! :)
21:01 <kbringard> that's what you get for using tmpfs on a machine with < 4gb ram
21:01 <kbringard> :-p
21:01 <jaypipes> vishy: http://paste.openstack.org/show/9790/
21:02 <jaypipes> vishy: not so sure the tmp drive is full :)
21:02 <jaypipes> kbringard: actually, 48G tmp drive ;)
21:02 <kbringard> pfft
21:02 <vishy> jaypipes: hmm
21:02 <vishy> jaypipes: how big is your root drive? 10G ?
21:02 <jaypipes> vishy: how can I check what nova thinks is the tmp drive?
21:02 <jaypipes> vishy: nope, only 2G
21:02 <vishy> is it possible it isn't writable?
21:03 <jaypipes> vishy: lemme check the fstab, one sec
21:03 <vishy> you could try running the command manually: qemu-img convert -f qcow2 -O raw -s e7ba4fb5f6f04f99b07d1d222ada0219 /opt/openstack/nova/instances/instance-00000548/disk /tmp/tmpIuOQo0/e7ba4fb5f6f04f99b07d1d222ada0219
21:03 <jeblair> jaypipes: there's nothing mounted over /tmp so it should be writing to /, which has only .5G free, no?
21:04 <vishy> that sounds likely
21:04 <jaypipes> vishy: see the original paste there... I did try running that command manually :) got a "qemu-img: Failed to load snapshot"
21:05 <jaypipes> jeblair: yeah, that sounds about right... since the qemu command is trying /tmp/blah
21:05 <YorikSar> Will those code cleanups land in the RC or should I give up asking?
21:05 <vishy> jaypipes: oh yeah the snapshot will have been deleted after the failure
21:05  * jaypipes goes off to check our cookbooks....
21:05 *** shevek_ has joined #openstack-dev
21:06 *** n0ano has quit IRC
21:09 *** spiffxp has joined #openstack-dev
21:10 *** belmoreira has quit IRC
21:12 *** n0ano has joined #openstack-dev
21:16 <rmk> jaypipes: Interesting you brought this up, I'm having the same issue right this moment in one of our deployments.
21:16 <rmk> With snapshotting failing on Diablo.
21:17 <rmk> 512MB /tmp here as well.
21:17 <rmk> That's probably the issue. Snapshots integrate the backing file, I assume.
21:17 *** martine has quit IRC
21:19 <jaypipes> rmk: yep.
21:19 *** hhoover has joined #openstack-dev
21:19 *** hhoover has left #openstack-dev
21:19 <rmk> tmp files are supposed to be created in the instances dir but they're not for snapshots.
21:20 <jaypipes> rmk: right... that's what it looks like. either way, should have a larger /tmp dir
21:20 <vishy> rmk: correct, they end up as raw
21:20 <vishy> or qcow depending on the original image source type
21:20 <rmk> I've always kept temp sized relatively small to avoid having some process go insane and chew up the entire root disk.
21:22 <davidkranz_> jaypipes: I had an issue where the disk filled up because the image cache is never cleaned up (in diablo, fixed in essex). Don't know if that is related.
21:24 <jaypipes> davidkranz_: nah, I think this is because the root device (in which /tmp is) is too small on the compute nodes...
21:27 *** danwent has quit IRC
21:28 *** zigo has quit IRC
21:28 *** dolphm has quit IRC
21:34 <rmk> temp_dir = tempfile.mkdtemp()
21:34 <rmk> virt/libvirt/connection.py, line 467
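
(The line rmk quotes is the culprit: with no dir argument, tempfile.mkdtemp() honors TMPDIR or falls back to /tmp, so the snapshot conversion lands on the small root filesystem. A sketch of the obvious workaround, assuming nova's instances_path flag:)

    import tempfile

    # Scratch space under the (large) instances volume instead of /tmp,
    # so a qcow2 -> raw conversion can't fill the root filesystem.
    temp_dir = tempfile.mkdtemp(dir=FLAGS.instances_path)
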
21:37 *** sdake has quit IRC
21:38 <YorikSar> Who can target a bug to RC? https://bugs.launchpad.net/nova/+bug/956457
21:38 <uvirtbot> Launchpad bug 956457 in nova "Code cleanups, HACKING compliance before RC" [Undecided,New]
21:44 *** mnewby has joined #openstack-dev
21:44 *** mnewby has quit IRC
21:45 *** mnewby has joined #openstack-dev
21:45 *** bhall has joined #openstack-dev
21:47 *** rohita has joined #openstack-dev
21:49 *** rohit404 has joined #openstack-dev
21:49 *** andrewsmedina has quit IRC
21:49 *** andrewsmedina has joined #openstack-dev
21:51 <annegentle> anyone with Jenkins-fu - can you take a look at https://jenkins.openstack.org/view/Openstack-manuals/job/openstack-install-deploy-guide-trunk/configure and tell me why the FTP copy fails?
21:52 <annegentle> mtaylor: jeblair (and what's the Jedi nick?) :)
21:53 <jeblair> annegentle: because there is no webhelp/diablo directory:
21:53 <jeblair> https://jenkins.openstack.org/view/Openstack-manuals/job/openstack-install-deploy-guide-trunk/ws/doc/src/docbkx/openstack-install/target/docbkx/webhelp/
21:53 <annegentle> jeblair: hmmmming…..
21:54 <rohit404> hey guys, keystone question - where's the entry point in auth_token.py before hitting service.py?
21:54 *** ayoung has quit IRC
21:54 <annegentle> jeblair: ok, yeah I can fix that, nice.
21:54 <jeblair> woo :)
21:55  * annegentle rebuilds
21:55 *** rkukura has left #openstack-dev
21:58 *** dolphm has joined #openstack-dev
22:05 *** berendt has quit IRC
22:06 *** andrewsmedina has quit IRC
22:13 *** sandywalsh has quit IRC
22:17 *** jakedahn is now known as jakedahn_zz
22:26 *** danwent has joined #openstack-dev
22:27 <ttx> devcamcar: around?
22:27 *** sdake has joined #openstack-dev
22:30 *** devananda has quit IRC
22:37 *** gyee has quit IRC
22:37 <anotherjesse> markmcclain: looks like you can do %3f according to python docs for https://review.openstack.org/#change,5409
22:38 <anotherjesse> nevermind
22:38 *** jdg has joined #openstack-dev
22:39 <rmk> Perhaps this is a known Diablo issue. If I reboot an instance with volumes attached, it doesn't reattach the volumes after it comes back up. Nor can you detach or reattach.
22:39 <rmk> You put yourself in a state where you basically cannot do anything with the VM
22:40 *** dolphm has quit IRC
22:41 *** dolphm has joined #openstack-dev
22:41 *** zzed has quit IRC
22:44 *** sandywalsh has joined #openstack-dev
22:46 <vishy> adam_g: ping
22:46 *** deva has joined #openstack-dev
22:50 *** deva has quit IRC
22:51 <adam_g> vishy: sup
22:52 <vishy> adam_g: there are comments on the bug about detach volume
22:53 <adam_g> vishy: right. comstud and i chatted about it here, im getting ready to submit a patch here soon
22:54 <adam_g> vishy: i just submitted a very quick fix for the security group thing, https://review.openstack.org/#change,5424. im not entirely sure this is kosher, but it resolves the issues i was having.
22:56 <vishy> adam_g: nice, thanks… i will check that patch
22:57 <vishy> adam_g: hmm, can't we just ignore self in the check?
22:57 *** flaviamissi has joined #openstack-dev
22:58 <vishy> adam_g: I do think we want to avoid deleting a group that is in use, just because of the way it is implemented
22:58 <vishy> adam_g: or else we will get failures if we create a group, refer to it, then delete it
22:58 *** dtroyer is now known as dtroyer_zzz
23:00 *** dolphm has quit IRC
23:00 *** jakedahn_zz is now known as jakedahn
23:01 <adam_g> vishy: i believe it should be possible to delete groups that are being referenced by other groups, by comparing to AWS. a group that references itself is just one case where it gets wedged
23:02 *** flaviamissi has quit IRC
23:04 <vishy> adam_g: see my comment
23:04 *** zaitcev has quit IRC
23:04 <adam_g> vishy: for instance, creating two groups (secgrp1, secgrp2) and 'euca-authorize -p 25 -o secgrp1 secgrp2'. i should be able to delete either group provided there are no running instances.
23:04 <adam_g> k
23:04 *** deva has joined #openstack-dev
23:04 <vishy> adam_g: but there is no check for running instances
23:04 *** deva is now known as devananda
23:04 <vishy> adam_g: so if there are running instances, that rule stops it
23:05 <adam_g> vishy: right, and that should stay
23:06 *** novas0x2a|laptop has quit IRC
23:06 <vishy> adam_g: so in your case if secgrp1 has running instances
23:06 *** flaviamissi has joined #openstack-dev
23:06 <vishy> and you try to delete secgrp1 it will get rejected somewhere else?
23:06 *** novas0x2a|laptop has joined #openstack-dev
23:07 *** zaitcev has joined #openstack-dev
23:07 *** danwent has quit IRC
23:07 <adam_g> vishy: no, its rejected in security_group_in_use() by the rest of the function that wasn't touched by that patch
23:08 <vishy> adam_g: ok thinking through this
23:08 <vishy> if you have a secgrp2 rule in place referencing secgrp1
23:09 <vishy> and you delete secgrp1
23:09 <vishy> what happens?
23:09 <vishy> adam_g: I think you would break the rule generation due to a dangling rule
23:11 <adam_g> vishy: let me check
23:11 *** flaviamissi has quit IRC
23:11 *** flaviamissi has joined #openstack-dev
23:12 *** danwent has joined #openstack-dev
23:14 <adam_g> vishy: currently, does authorizing traffic from another security group actually do anything at the compute/network/iptables level?
23:14 <vishy> it is supposed to
23:15 <russellb> sorry if that code is broken, i added it :-/
23:16 <vishy> adam_g: 328: for instance in rule['grantee_group']['instances']:
23:16 *** flaviamissi has quit IRC
23:17 *** flaviamissi has joined #openstack-dev
23:18 <vishy> adam_g: it might be that that still works fine
23:20 <vishy> adam_g: I'm a little confused why it is using parent_group_id instead of grantee (group_id)
23:20 <vishy> but if you can verify that deleting a rule that another rule is using doesn't explode
23:20 <vishy> i'm ok
23:21 <adam_g> vishy: it looks like, in that case, deleting the group that is referenced (at least) marks the corresponding rule as deleted in the db
23:21 *** kbringard has quit IRC
23:21 *** flaviamissi has quit IRC
23:21 <vishy> adam_g: oh really?
23:22 <vishy> so it cascades?
23:22 *** flaviamissi has joined #openstack-dev
23:22 <vishy> adam_g: ah yes, I see that
23:24 <adam_g> http://paste.ubuntu.com/885656/
23:25 *** med_ has quit IRC
23:26 <vishy> adam_g: something is wrong
23:26 *** flaviamissi has quit IRC
23:28 *** rohit404 has quit IRC
23:29 *** med_ has joined #openstack-dev
23:29 *** med_ has quit IRC
23:29 *** med_ has joined #openstack-dev
23:30 *** littleidea has quit IRC
23:32 *** jakedahn is now known as jakedahn_zz
23:33 *** littleidea has joined #openstack-dev
23:33 <vishy> adam_g: there seems to be some conflation between parent_group_id and group_id
23:33 <vishy> i need to sort this out
23:34 *** pixelbeat has joined #openstack-dev
23:34 <adam_g> vishy: in the db api or firewall?
23:34 *** danwent has quit IRC
23:35 <vishy> in the way that they are created
23:35 <vishy> for example in ec2 api i don't see where parent_group_id is used at all
23:36 *** danwent has joined #openstack-dev
23:38 <vishy> adam_g: ok i think i've wrapped my head around this
23:38 *** agonella has joined #openstack-dev
23:39 *** danwent has quit IRC
23:40 <vishy> adam_g: so the first query gets all the rules that reference this group
23:41 <vishy> so we could add an exclude to the first filter
23:42 <vishy> adam_g: but i'm trying to think if there is ever a case where we can't delete the rule
23:43 <russellb> once this is sorted, i'll write a unit test to cover it tomorrow
23:43 <adam_g> vishy: an exclude to the first filter to exclude rules that reference themselves?
23:43 <vishy> adam_g: ok i can't see any reason why we can't delete a group that is in use
23:44 <vishy> if it is going to update all the rules that reference it anyway
23:44 <vishy> adam_g: it shouldn't matter if it is referenced by other groups
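
(Where the discussion lands: "in use" should mean instances only; rules in other groups that grant from the deleted group are cascade-deleted rather than blocking the delete. A schematic sqlalchemy version -- the model and session names are illustrative, not nova's actual db api:)

    def security_group_in_use(session, group_id):
        # Only instance associations make a group "in use"; grants from
        # other groups no longer veto deletion (they cascade instead).
        return session.query(SecurityGroupInstanceAssociation).\
            filter_by(security_group_id=group_id, deleted=False).\
            count() > 0
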
23:45 <russellb> but this was originally done for EC2 compatibility
23:45 <russellb> i'll check to see if groups used by groups was included in that, or if i just included it because i thought it sounded like a good idea ...
23:45 <vishy> russellb: please do
23:46 <adam_g> vishy: im not sure i parse your conclusion: the patch that entirely drops that check is okay, or it should stay but filter self-referencing groups?
23:46 <vishy> russellb: since you can reference groups owned by other users it seems odd
23:46 <vishy> adam_g: i think your patch can stay
23:46 <russellb> looks like it was just me, heh - https://bugs.launchpad.net/nova/+bug/817872
23:46 <uvirtbot> Launchpad bug 817872 in nova "Delete security group that contains instances should not be allowed" [Medium,Fix released]
23:46 <russellb> the bug was just about not removing them if they are used by instances
23:46 <russellb> so yeah, +1 on the patch
23:47 <vishy> russellb: i mean if i create a rule that references other_user:group, suddenly they can't delete their group?
23:47 <vishy> that is odd
23:47 <vishy> cool
23:47 <russellb> a unit test will likely break, but that part of the test should just get removed
23:47 <russellb> vishy: yep, agreed, my bad :)
23:47 <adam_g> vishy: okay
23:47 <vishy> adam_g: oh, did you run tests?
23:48 <adam_g> vishy: no, i will now
23:48 <vishy> russellb: any idea which suite to run?
23:48 <russellb> unit tests
23:49 <russellb> nova.tests.api.ec2.test_cloud probably ...
23:50 <russellb> and nova.tests.api.openstack.compute.contrib.test_security_groups
23:52 <russellb> actually just test_cloud looks like
23:53 <russellb> adam_g: I think this is the only one that will fail with the patch ... http://paste.openstack.org/show/9829/
23:54 <russellb> or I suppose it could just be changed to assert success instead of the exception
23:54 <adam_g> russellb: thanks
23:54 <russellb> np

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!