*** johnpostlethwait has quit IRC | 00:03 | |
*** relateable has joined #openstack-dev | 00:04 | |
ayoung | anotherjesse, yes, I think about Pam. But don't tell my wife | 00:04 |
*** roge has quit IRC | 00:07 | |
ohnoimdead | jenkins is a little backed up: https://jenkins.openstack.org/job/gate-integration-tests-devstack-vm/ | 00:12 |
ohnoimdead | oops, wrong channel | 00:12 |
adam_g | vishy: yea, turns out to be a separate bug. volumes get properly cleaned up when instance is terminated, but an instance shutting itself down (sudo poweroff) causes the volume to get stuck | 00:12 |
vishy | adam_g: interesting. I wonder why that would matter | 00:13 |
*** statik has quit IRC | 00:13 | |
*** donaldngo_hp has joined #openstack-dev | 00:27 | |
*** statik has joined #openstack-dev | 00:31 | |
*** dprince has joined #openstack-dev | 00:34 | |
adam_g | vishy: an attempt to terminate the instance after it's shut itself down ends in 'Invalid: trying to destroy already destroyed instance', which apparently does not clean up existing volume attachments | 00:36 |
vishy | adam_g: well that sounds fixable | 00:36 |
vishy | adam_g: are you going to try and patch it? | 00:36 |
adam_g | vishy: yeah, ill take a look | 00:37 |
jeblair | anotherjesse: i don't believe gerrit trigger supports doing that out of the box. we could probably add it. long term fix: jclouds | 00:37 |
*** rkukura has quit IRC | 00:38 | |
vishy | jeblair: we need our own custom openstack install to run the vms against | 00:38 |
*** rkukura has joined #openstack-dev | 00:39 | |
vishy | jeblair: bonus: recent kernel + kvm in kvm | 00:39 |
bcwaldon | jeblair: it was my idea! | 00:39 |
*** mnewby has quit IRC | 00:39 | |
bcwaldon | but vishy has obviously thought about it more... | 00:39 |
jeblair | we totally want to do that | 00:39 |
vishy | bcwaldon: paying you back for yesterday | 00:39 |
vishy | :p | 00:39 |
bcwaldon | vishy: I don't even remember what I stole from you yesterday | 00:39 |
jeblair | the next update to the devstack gate scripts will support multiple providers. we hope to add hp to that, and if we can rustle up an openstack cluster, we'll totally use it. | 00:41 |
bcwaldon | vishy: https://review.openstack.org/#change,5377 | 00:41 |
bcwaldon | -4654 lines | 00:41 |
jeblair | vishy, anotherjesse: devstack gate has nodes now. | 00:44 |
vishy | bcwaldon: i was kind of hoping to keep the db code in | 00:44 |
vishy | so that they can ship their code separately if need be | 00:44 |
bcwaldon | vishy: I considered that | 00:44 |
bcwaldon | but doesnt that qualify as 'their' code? | 00:45 |
bcwaldon | that's part of vsa to me | 00:45 |
*** davlaps has joined #openstack-dev | 00:45 | |
vishy | sure but there is no easy way for them to ship that part separately | 00:46 |
*** andrewsmedina has joined #openstack-dev | 00:47 | |
bcwaldon | ok, then what about the code in the volume manager? | 00:47 |
bcwaldon | I guess that is just notifications, not a big deal | 00:48 |
vishy | bcwaldon: ah i forgot they had a little code in the manager | 00:48 |
bcwaldon | as far as I'm concerned, the only feasible way to support this code outside of the codebase at this point is to maintain a fork | 00:50 |
vishy | bcwaldon: you may be correct | 00:51 |
vishy | i think that is what they are doing | 00:51 |
bcwaldon | vishy: then that raises the question: why not migrate the tables out? | 00:52 |
bcwaldon | vishy: it doesn't make sense to create them in new installs | 00:52 |
*** devananda has joined #openstack-dev | 00:53 | |
vishy | bcwaldon: i just hate the precedent of ripping out code with no warning | 00:53 |
vishy | bcwaldon: doesn't seem very community friendly | 00:54 |
bcwaldon | pushing up code with no docs then letting it rot for months is a LOT worse | 00:54 |
bcwaldon | its not like we're erasing it from history | 00:54 |
bcwaldon | we're removing it from our minds going forward | 00:54 |
*** adjohn has quit IRC | 00:54 | |
*** rkukura has quit IRC | 00:55 | |
vishy | bcwaldon: i suppose | 00:56 |
* vishy is sad | 00:56 | |
bcwaldon | as am I | 00:56 |
vishy | i think we need a ml announcement at least | 00:57 |
bcwaldon | ok | 00:57 |
vishy | like we did for hyper-v | 00:57 |
bcwaldon | would you like to -2 my prop and send an email? | 00:57 |
vishy | sure | 00:57 |
*** donaldngo_hp has quit IRC | 01:00 | |
*** pixelbeat has quit IRC | 01:01 | |
*** dalang has quit IRC | 01:02 | |
jog0 | vishy, bcwaldon: glad to see there is both a ML discussion and something being done about it. Shipping rotted code isn't so nice. | 01:03 |
vishy | bcwaldon: might as well do a db migration too if we are going to talk on the ml about it | 01:06 |
bcwaldon | vishy: ok | 01:07 |
vishy | i just realized getting it to work after we split volumes is going to be very difficult anyway | 01:07 |
vishy | so that is another reason for torching it | 01:07 |
bcwaldon | huzzah! | 01:07 |
jog0 | huzzah! | 01:07 |
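A migration of the sort vishy suggests just drops the orphaned tables on upgrade, so existing installs lose them and new installs never create them. A minimal sketch using stdlib sqlite3 (the table name is assumed for illustration, not checked against the real VSA schema):

```python
import sqlite3

def upgrade(conn):
    # Drop the VSA table so it disappears from existing installs and is
    # never created in new ones (table name assumed for illustration).
    conn.execute("DROP TABLE IF EXISTS virtual_storage_arrays")

def table_names(conn):
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")
    return [name for (name,) in rows]

# demo against a throwaway in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE virtual_storage_arrays (id INTEGER PRIMARY KEY)")
upgrade(conn)
assert "virtual_storage_arrays" not in table_names(conn)
```

Real nova migrations of the era were sqlalchemy-migrate scripts with paired upgrade/downgrade functions; the shape above is deliberately simplified.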
vishy | jog0: I'm planning on regenerating the sample as the last commit before rc | 01:08 |
jog0 | sounds good to me | 01:08 |
jog0 | vishy: when is rc? | 01:09 |
*** rkukura has joined #openstack-dev | 01:10 | |
*** rkukura has left #openstack-dev | 01:10 | |
*** rkukura has joined #openstack-dev | 01:11 | |
*** rkukura has left #openstack-dev | 01:13 | |
*** bencherian has joined #openstack-dev | 01:13 | |
*** jdurgin has quit IRC | 01:14 | |
vishy | as soon as bugs are fixed | 01:14 |
*** andrewsmedina has quit IRC | 01:15 | |
jog0 | vishy: good answer | 01:16 |
*** vincentricci has quit IRC | 01:18 | |
*** PotHix has quit IRC | 01:25 | |
*** danwent has joined #openstack-dev | 01:25 | |
*** shevek_ has quit IRC | 01:26 | |
*** deshantm has quit IRC | 01:30 | |
comstud | strange | 01:37 |
comstud | i haven't seen waldon's email yet | 01:37 |
comstud | assuming he's sent it | 01:37 |
comstud | ok, guess he hasn't sent one yet | 01:38 |
comstud | not in the archive | 01:38 |
*** spiffxp has quit IRC | 01:39 | |
*** dolphm_ has quit IRC | 01:39 | |
vishy | comstud: to zadara? | 01:52 |
vishy | comstud: hmm i sent from wrong email | 01:53 |
*** bencherian has quit IRC | 01:54 | |
vishy | comstud: just sent it again from the right email | 01:54 |
*** novas0x2a|laptop has quit IRC | 01:54 | |
comstud | cools | 01:56 |
comstud | it's in the archive now | 01:56 |
comstud | thanks | 01:56 |
*** jakedahn is now known as jakedahn_zz | 01:56 | |
*** Ryan_Lane has quit IRC | 01:57 | |
*** bencherian has joined #openstack-dev | 01:57 | |
*** roge has joined #openstack-dev | 01:58 | |
*** ayoung has quit IRC | 02:02 | |
*** rkukura has joined #openstack-dev | 02:08 | |
*** maplebed has quit IRC | 02:14 | |
*** misheska has quit IRC | 02:16 | |
*** hitesh__ has joined #openstack-dev | 02:17 | |
*** Mandell has quit IRC | 02:19 | |
*** danwent has quit IRC | 02:22 | |
*** markmcclain has quit IRC | 02:22 | |
*** andrewsmedina has joined #openstack-dev | 02:23 | |
*** jakedahn_zz is now known as jakedahn | 02:27 | |
*** hitesh__ has quit IRC | 02:29 | |
*** jakedahn is now known as jakedahn_zz | 02:39 | |
*** reed has quit IRC | 02:44 | |
*** devananda has quit IRC | 02:53 | |
*** dprince has quit IRC | 02:54 | |
*** hitesh__ has joined #openstack-dev | 02:54 | |
*** bencherian has quit IRC | 03:00 | |
*** deshantm has joined #openstack-dev | 03:19 | |
*** hitesh__ has quit IRC | 03:27 | |
*** relateable has quit IRC | 03:32 | |
*** jakedahn_zz is now known as jakedahn | 03:34 | |
*** dubsquared has quit IRC | 03:51 | |
*** andrewsmedina has quit IRC | 04:09 | |
*** roge has quit IRC | 04:15 | |
redbo | can someone tell me what I need to do to get my gerrit/launchpad account un screwed up? | 04:20 |
*** shang has quit IRC | 04:23 | |
anotherjesse | redbo: mtaylor / jeblair is who I usually ask for that sort of help | 04:23 |
notmyname | and LinuxJedi | 04:23 |
*** martine has joined #openstack-dev | 04:24 | |
redbo | mtaylor,jeblair,LinuxJedi: what do I need to go to get my gerrit account working? | 04:28 |
*** Mandell has joined #openstack-dev | 04:31 | |
*** rohita has quit IRC | 04:34 | |
redbo | I'm a bit sketchy on the details, but mtaylor suspected that because I merged accounts in launchpad once, my shit's all fucked up. | 04:34 |
*** littleidea has quit IRC | 04:36 | |
*** dtroyer is now known as dtroyer_zzz | 04:40 | |
*** shang has joined #openstack-dev | 04:40 | |
*** pmyers has quit IRC | 04:41 | |
*** pmyers has joined #openstack-dev | 04:44 | |
*** littleidea has joined #openstack-dev | 04:48 | |
mtaylor | redbo: ah, sorry - I thought we had you - I think you fell off the todo list | 04:57 |
mtaylor | gimme just a sec | 04:57 |
*** martine has quit IRC | 05:05 | |
*** zaitcev has quit IRC | 05:15 | |
*** davlaps has quit IRC | 05:22 | |
*** hattwick has quit IRC | 05:33 | |
*** deshantm has quit IRC | 05:42 | |
*** adjohn has joined #openstack-dev | 05:45 | |
*** Ryan_Lane has joined #openstack-dev | 06:01 | |
*** phorgh has quit IRC | 06:03 | |
*** danwent has joined #openstack-dev | 06:11 | |
*** bepernoot has joined #openstack-dev | 06:36 | |
*** deshantm has joined #openstack-dev | 06:37 | |
*** bepernoot has quit IRC | 06:49 | |
*** deshantm has quit IRC | 06:50 | |
*** phorgh has joined #openstack-dev | 07:04 | |
*** mtaylor_ has joined #openstack-dev | 07:16 | |
*** gyee has quit IRC | 07:18 | |
*** jakedahn is now known as jakedahn_zz | 07:19 | |
*** Mandell has quit IRC | 07:19 | |
*** zigo has joined #openstack-dev | 07:21 | |
*** ghe_ has quit IRC | 07:21 | |
*** mtaylor has quit IRC | 07:21 | |
*** Mandell has joined #openstack-dev | 07:26 | |
*** ghe_ has joined #openstack-dev | 07:28 | |
*** rohita has joined #openstack-dev | 07:48 | |
*** danwent has quit IRC | 07:53 | |
*** shevek_ has joined #openstack-dev | 07:54 | |
*** bepernoot has joined #openstack-dev | 07:54 | |
*** danwent has joined #openstack-dev | 07:54 | |
*** hattwick has joined #openstack-dev | 07:59 | |
*** danwent has quit IRC | 08:00 | |
rmk | hmm -- how is keystone auth enabled for nova-api with essex? | 08:02 |
*** Ryan_Lane has quit IRC | 08:02 | |
*** littleidea has quit IRC | 08:03 | |
rmk | I dont quite understand the paste config | 08:04 |
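For rmk's question: in essex, nova-api selects its auth middleware via the paste pipeline in api-paste.ini, keyed off auth_strategy in nova.conf. A rough from-memory sketch of the relevant sections (filter names and options may differ in your packaging; treat this as illustrative, not authoritative):

```
# api-paste.ini (illustrative excerpt; check your distro's shipped copy)
[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap noauth ratelimit osapi_compute_app_v2
keystone = faultwrap authtoken keystonecontext ratelimit osapi_compute_app_v2

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = %SERVICE_PASSWORD%

[filter:keystonecontext]
paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory
```

With auth_strategy=keystone set in nova.conf, the pipeline factory picks the keystone pipeline, which validates tokens via authtoken and builds the request context via keystonecontext.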
*** Mandell has quit IRC | 08:15 | |
*** reidrac has joined #openstack-dev | 08:16 | |
soren | ttx: We don't have any sort of freeze policy for packaging code, do we= | 08:20 |
soren | ? | 08:20 |
ttx | soren: no | 08:20 |
ttx | soren: note that the PPA packaging could use a refresh from the precise packaging | 08:20 |
soren | ttx: I'm doing tht right now. | 08:20 |
soren | ttx: ...but I'm also working on a bunch of other changes. | 08:20 |
ttx | soren: we somehow need to solve this duplication of work. I know we tried before :) | 08:21 |
soren | ttx: Since Ubuntu actually uses a different set of branches for their packaging, I don't actually have to worry too much about Ubuntu's FF. | 08:21 |
ttx | yep | 08:21 |
soren | ttx: I hope it'll all sort itself out in the q-timeframe. | 08:21 |
* mtaylor_ isn't going to hold his breath | 08:22 | |
*** mtaylor_ is now known as mtaylor | 08:22 | |
*** mtaylor has joined #openstack-dev | 08:22 | |
*** ChanServ sets mode: +v mtaylor | 08:22 | |
zykes- | soren: around for a non-openstack question? | 08:24 |
soren | zykes-: I'll see if I can come up with a non-openstack answer. | 08:41 |
LarsErikP | so, does anyone have a "stable" essex stack running. With VLANmanager, multihost? Just wanna know whether it's possible or not.. Running in 11.10 | 08:48 |
LarsErikP | soory, wring chan | 08:51 |
*** maploin has joined #openstack-dev | 08:51 | |
LarsErikP | wrong | 08:52 |
*** maploin has quit IRC | 08:52 | |
*** maploin has joined #openstack-dev | 08:52 | |
zykes- | soren: nvm, i found that the thing i needed for upstart was "expect fork" | 08:53 |
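The upstart detail zykes- found: a daemon that forks once into the background needs `expect fork` in its job file, so upstart tracks the forked child pid instead of the exiting parent. A minimal hypothetical job (daemon name invented for illustration):

```
# /etc/init/mydaemon.conf -- hypothetical example job
description "daemon that forks once on startup"

start on runlevel [2345]
stop on runlevel [!2345]

# the daemon calls fork() exactly once; without this stanza upstart
# would follow the parent pid and think the job died immediately
expect fork

exec /usr/sbin/mydaemon
```

Daemons that double-fork need `expect daemon` instead; getting the stanza wrong leaves upstart tracking a stale pid.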
*** My1 has joined #openstack-dev | 09:11 | |
*** My1 has left #openstack-dev | 09:11 | |
*** ipl31 has quit IRC | 09:26 | |
*** adam_g has quit IRC | 09:26 | |
*** hyakuhei has quit IRC | 09:27 | |
*** redbo has quit IRC | 09:27 | |
*** med_ has quit IRC | 09:27 | |
*** bencherian has joined #openstack-dev | 09:28 | |
*** pixelbeat has joined #openstack-dev | 09:33 | |
*** dneary has joined #openstack-dev | 09:39 | |
*** ipl31 has joined #openstack-dev | 09:43 | |
*** redbo has joined #openstack-dev | 09:44 | |
*** ChanServ sets mode: +v redbo | 09:44 | |
*** adam_g has joined #openstack-dev | 09:44 | |
*** hyakuhei has joined #openstack-dev | 09:44 | |
*** adjohn has quit IRC | 10:01 | |
*** oneiroi has joined #openstack-dev | 10:02 | |
*** bencherian has quit IRC | 10:09 | |
*** hitesh_ has joined #openstack-dev | 10:12 | |
*** tryggvil_ has joined #openstack-dev | 10:17 | |
*** PotHix has joined #openstack-dev | 10:48 | |
*** dachary has joined #openstack-dev | 10:50 | |
*** rkukura has quit IRC | 10:58 | |
*** gpernot has joined #openstack-dev | 11:07 | |
*** dneary has quit IRC | 11:16 | |
*** hitesh_ has quit IRC | 11:37 | |
*** maploin has quit IRC | 11:49 | |
*** davlaps has joined #openstack-dev | 12:02 | |
*** rods has joined #openstack-dev | 12:06 | |
eglynn | anyone know if the "Archive the artifacts" check-box was deliberately un-ticked in the Jenkins config? | 12:10 |
eglynn | e.g. in the Post-build Actions under https://jenkins.openstack.org/job/gate-glance-python26/configure | 12:11 |
eglynn | or https://jenkins.openstack.org/job/gate-glance-python27/configure | 12:11 |
* eglynn seems to remember the Build Artifacts being preserved previously | 12:11 | |
*** hugokuo has joined #openstack-dev | 12:11 | |
hugokuo | https://answers.launchpad.net/glance/+question/190763 help please , there's an issue ticket about glance to upload larger images to swift failed | 12:13 |
eglynn | hugokuo: can you reproduce against E4? | 12:16 |
hugokuo | ok | 12:17 |
hugokuo | give it a try | 12:17 |
eglynn | hugokuo: thanks! essex-4 included at least one fix for large image upload | 12:18 |
eglynn | jeblair: there? | 12:20 |
*** dachary has quit IRC | 12:22 | |
*** berendt has joined #openstack-dev | 12:25 | |
*** Shrews has joined #openstack-dev | 12:27 | |
*** bsza has joined #openstack-dev | 12:27 | |
*** lts has joined #openstack-dev | 12:31 | |
eglynn | actually, ignore my questions above about archiving log files from Jenkins builds, wouldn't help in my case as the glance logs are dumped under random temp dirs | 12:38 |
*** martine has joined #openstack-dev | 12:40 | |
*** dneary has joined #openstack-dev | 12:42 | |
*** dneary has joined #openstack-dev | 12:42 | |
*** medberry has joined #openstack-dev | 12:51 | |
*** medberry has quit IRC | 12:51 | |
*** medberry has joined #openstack-dev | 12:51 | |
*** medberry is now known as med_ | 12:52 | |
*** danwent has joined #openstack-dev | 12:53 | |
*** pixelbeat has quit IRC | 12:54 | |
*** martine has quit IRC | 13:02 | |
*** andrewsmedina has joined #openstack-dev | 13:04 | |
*** phorgh1 has joined #openstack-dev | 13:05 | |
*** phorgh has quit IRC | 13:05 | |
*** danwent has quit IRC | 13:13 | |
*** dprince has joined #openstack-dev | 13:15 | |
*** ayoung has joined #openstack-dev | 13:17 | |
*** pixelbeat has joined #openstack-dev | 13:24 | |
*** littleidea has joined #openstack-dev | 13:24 | |
*** littleidea has quit IRC | 13:25 | |
*** danwent has joined #openstack-dev | 13:29 | |
*** rkukura has joined #openstack-dev | 13:32 | |
*** maploin has joined #openstack-dev | 13:34 | |
*** maploin has quit IRC | 13:34 | |
*** maploin has joined #openstack-dev | 13:34 | |
*** dtroyer_zzz is now known as dtroyer | 13:35 | |
*** derekh has joined #openstack-dev | 13:40 | |
*** dtroyer is now known as dtroyer_zzz | 13:42 | |
*** tryggvil_ has quit IRC | 13:43 | |
*** kbringard has joined #openstack-dev | 13:44 | |
*** danwent has quit IRC | 13:50 | |
*** dachary has joined #openstack-dev | 13:52 | |
*** dachary has joined #openstack-dev | 13:52 | |
*** roge has joined #openstack-dev | 14:00 | |
*** dtroyer_zzz is now known as dtroyer | 14:08 | |
*** maploin has quit IRC | 14:17 | |
*** glenc_ has joined #openstack-dev | 14:18 | |
*** glenc has quit IRC | 14:19 | |
*** glenc_ is now known as glenc | 14:20 | |
*** danwent has joined #openstack-dev | 14:22 | |
*** zzed has joined #openstack-dev | 14:27 | |
*** sandywalsh has joined #openstack-dev | 14:34 | |
*** reed has joined #openstack-dev | 14:37 | |
*** flaviamissi has joined #openstack-dev | 14:43 | |
eglynn | jaypipes: hey | 14:51 |
*** dprince has quit IRC | 15:00 | |
*** danwent has quit IRC | 15:00 | |
*** dolphm has joined #openstack-dev | 15:00 | |
*** byeager has joined #openstack-dev | 15:01 | |
*** phorgh1 has quit IRC | 15:04 | |
*** dprince has joined #openstack-dev | 15:04 | |
*** phorgh has joined #openstack-dev | 15:06 | |
notmyname | chmouel: you saw the comments on your bin/swift + client.py patch? | 15:09 |
chmouel | yeah I saw dfg comment working on addressing them | 15:10 |
notmyname | cool | 15:10 |
*** phorgh has quit IRC | 15:13 | |
*** littleidea has joined #openstack-dev | 15:13 | |
*** zaitcev has joined #openstack-dev | 15:16 | |
*** phorgh has joined #openstack-dev | 15:17 | |
*** danwent has joined #openstack-dev | 15:18 | |
*** reed has quit IRC | 15:18 | |
*** AlanClark has joined #openstack-dev | 15:19 | |
jaypipes | eglynn: pong | 15:28 |
eglynn | jaypipes: just wanted to check that I'm not missing something obvious on https://bugs.launchpad.net/glance/+bug/955527 | 15:28 |
uvirtbot | Launchpad bug 955527 in glance "copy_from test case logic is invalid" [High,In progress] | 15:28 |
jaypipes | eglynn: sorry, slow start this morning... | 15:29 |
eglynn | jaypipes: np ;) | 15:29 |
eglynn | jaypipes: I don't follow your logic on the delete of the original image being applied directly to the backend store | 15:29 |
eglynn | jaypipes: i.e. DELETE /v1/images/original_image_id shouldn't impact on GET /v1/images/copy_image_id | 15:30 |
eglynn | jaypipes: unless there's a bizarre ID collision | 15:30 |
jaypipes | eglynn: one sec, reading... | 15:30 |
*** hhoover has joined #openstack-dev | 15:30 | |
eglynn | jaypipes: the "fix" proposed in https://review.openstack.org/5396 is just an extra test assertion to eliminate an ID collision between copy and original being the root of the issue | 15:33 |
*** hhoover has left #openstack-dev | 15:34 | |
jaypipes | eglynn: I think there's got to be something more to it... | 15:35 |
jaypipes | eglynn: there's just a feeling I have that this can't be as common a thing (id colliision) as we've seen... | 15:36 |
eglynn | jaypipes: yep, agree ... an ID collision would be bizarre, just wanted to eliminate that from the mix | 15:36 |
jaypipes | eglynn: yeah, and I generally agree with you on the bug comments... just can't quite put my finger on it.. | 15:37 |
eglynn | jaypipes: also my comments about gremlins in swift-land were unfounded I think, as the same kind of failure intermittently hits a test copying s3->file as opposed to s3->swift | 15:37 |
eglynn | I'll continue to poke at it ... | 15:37 |
jaypipes | eglynn: I have a thought. | 15:38 |
eglynn | jaypipes: shoot | 15:39 |
jaypipes | eglynn: so.. perhaps the 404 is not referring to the object in S3/swift, but instead is referring to the container or bucket. | 15:39 |
eglynn | jaypipes: but that wouldn't apply in the s3->file case, right? | 15:40 |
jaypipes | eglynn: and it is the container/bucket that is not being managed in the same way in the setup_swift/setup_s3 (and teardown) decorators that it is in, say, functional.TestSwift.setUp()? | 15:40 |
jaypipes | eglynn: yeah, it would, since the original is the one that is returning the 404 and the original is in S3 (never in file) | 15:40 |
eglynn | jaypipes: it's the second GET /v1/images/copy_image_id that fails with 404, not the original_image_id | 15:43 |
jaypipes | eglynn: it's been both :) | 15:44 |
jaypipes | eglynn: lemme grab a link to show ya.... | 15:44 |
jaypipes | eglynn: https://jenkins.openstack.org/job/gate-glance-python26/98/console | 15:44 |
*** roge has quit IRC | 15:44 | |
eglynn | jaypipes: the most recent failure sequence ... GET /v1/images/copy_image_id->200, DELETE /v1/images/original_image_id->200, GET /v1/images/copy_image_id->404 | 15:45 |
jaypipes | eglynn: oh wait, wrong link... one sec | 15:45 |
jaypipes | eglynn: damn it, you're right... as usual. | 15:46 |
jaypipes | I think I've looked over this code so many times my eyes are going bonkers | 15:47 |
eglynn | jaypipes: I hear ya ;) | 15:47 |
eglynn | jaypipes: k, I'll dig on ... I've a feeling there's something obvious lurking ... #5396 will at least eliminate one bizarre and highly unlikely cause | 15:48 |
jaypipes | eglynn: need to narrow it down to a pattern.. | 15:49 |
jaypipes | eglynn: is the pattern that the copy_from is always in S3? | 15:49 |
eglynn | jaypipes: so far I've seen s3->{file|swift}, so yeah the source is always S3 | 15:50 |
jaypipes | eglynn: k... that should be a clue at least. | 15:50 |
eglynn | jaypipes: yep, I'll dig thru' the S3 deletion logic | 15:51 |
jaypipes | eglynn: k, thx mate | 15:52 |
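The extra assertion eglynn proposes in #5396 rules out an ID collision between copy and original as the cause of the stray 404. A toy model of the flow, with invented helper names rather than the real glance functional test:

```python
import itertools

class FakeImageAPI:
    """Toy image registry used only to illustrate the failure sequence
    under discussion; none of this is actual glance code."""

    _ids = itertools.count(1)

    def __init__(self):
        self.images = {}

    def create_image(self):
        image_id = next(self._ids)
        self.images[image_id] = "active"
        return image_id

    def get(self, image_id):
        return 200 if image_id in self.images else 404

    def delete(self, image_id):
        return 200 if self.images.pop(image_id, None) else 404

api = FakeImageAPI()
original_id = api.create_image()   # the copy_from source
copy_id = api.create_image()       # the copied image

# the proposed extra assertion: a collision here would fully explain
# the observed GET 200 / DELETE 200 / GET 404 sequence
assert copy_id != original_id

assert api.get(copy_id) == 200
assert api.delete(original_id) == 200
assert api.get(copy_id) == 200     # with distinct ids, no stray 404
```

If the real test ever trips that assertion, the collision theory wins; otherwise the 404 has to come from somewhere in the S3 deletion path eglynn is digging through.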
*** utlemming has quit IRC | 15:53 | |
*** jdurgin has joined #openstack-dev | 15:56 | |
*** utlemming has joined #openstack-dev | 15:56 | |
*** LinuxJedi has quit IRC | 16:01 | |
*** sdake has quit IRC | 16:01 | |
*** davidkranz has quit IRC | 16:02 | |
*** tryggvil_ has joined #openstack-dev | 16:02 | |
*** rohita has quit IRC | 16:03 | |
*** phorgh has quit IRC | 16:03 | |
*** phorgh has joined #openstack-dev | 16:04 | |
*** LinuxJedi has joined #openstack-dev | 16:04 | |
*** bencherian has joined #openstack-dev | 16:05 | |
andrewbogott | anotherjesse: Is nova auth (w/out keystone) already broken in essex, or is it just going to be broken in folsom? | 16:05 |
*** dachary has quit IRC | 16:05 | |
andrewbogott | (I ask because I'm looking at upgrading a Diablo install to Essex, and we don't use keystone currently.) | 16:05 |
*** martine has joined #openstack-dev | 16:06 | |
andrewbogott | which, for that matter, is there's a 'how to upgrade to essex' guide someplace? | 16:06 |
*** maplebed has joined #openstack-dev | 16:06 | |
*** markmcclain has joined #openstack-dev | 16:06 | |
*** rohita has joined #openstack-dev | 16:07 | |
*** maploin has joined #openstack-dev | 16:08 | |
*** maploin has quit IRC | 16:08 | |
*** maploin has joined #openstack-dev | 16:08 | |
jaypipes | jeblair: looks like Jenkins is having some issues... the devstack-vm gate is failing. see here: https://jenkins.openstack.org/job/gate-integration-tests-devstack-vm/2593/console | 16:09 |
jaypipes | jeblair: failing for all projects it seems, not just for glance runs | 16:10 |
jeblair | jaypipes: looking into it | 16:10 |
jaypipes | jeblair: cheers | 16:10 |
*** markmcclain has quit IRC | 16:10 | |
LinuxJedi | jaypipes: we think there may be a resolver problem on the nodes | 16:11 |
*** reed has joined #openstack-dev | 16:12 | |
jeblair | anotherjesse: ping | 16:13 |
jaypipes | LinuxJedi: k, I trust you're on the case :) | 16:16 |
*** Mandell has joined #openstack-dev | 16:17 | |
*** bencherian has quit IRC | 16:21 | |
*** phorgh1 has joined #openstack-dev | 16:22 | |
*** phorgh has quit IRC | 16:24 | |
*** reidrac has quit IRC | 16:24 | |
*** mnewby has joined #openstack-dev | 16:25 | |
*** martine has quit IRC | 16:26 | |
*** LinuxJedi has quit IRC | 16:28 | |
*** sdake has joined #openstack-dev | 16:28 | |
eglynn | bcwaldon: quick question? | 16:29 |
*** utlemming has quit IRC | 16:29 | |
*** markmcclain has joined #openstack-dev | 16:30 | |
*** LinuxJedi has joined #openstack-dev | 16:30 | |
*** utlemming has joined #openstack-dev | 16:30 | |
bcwaldon | whatsup | 16:30 |
eglynn | bcwaldon: just wondering if this NotAuthorized mapping to NotFound in the registry controller is deliberate or copy'n'paste oversight? | 16:30 |
eglynn | bcwaldon: https://github.com/openstack/glance/commit/7c4d9350#L3R273 | 16:30 |
jeblair | jaypipes: I believe there was a series of nodes that were created with wrong permissions on /etc/resolv.conf | 16:31 |
eglynn | bcwaldon: (as opposed to NotAuthorized->webob.exc.HTTPForbidden, so that the client gets a 401) | 16:31 |
bcwaldon | eglynn: looking... | 16:31 |
eglynn | bcwaldon: reason I ask is that I'm trying to eliminate all unexpected sources of 404s wrt bug #955527 | 16:31 |
uvirtbot | Launchpad bug 955527 in glance "copy_from test case logic is invalid" [High,In progress] https://launchpad.net/bugs/955527 | 16:31 |
jeblair | jaypipes: as in, we got them from rackspace cloud that way (the perms were right on the template node) | 16:31 |
jaypipes | jeblair: interesting | 16:31 |
jeblair | jaypipes: and I think we're past that batch now, new ones are coming up correctly | 16:31 |
*** bepernoot has quit IRC | 16:31 | |
jeblair | jaypipes: yeah, i would not have expected something like that to happen. | 16:32 |
jaypipes | jeblair: ok. I'll keep an eye on it. | 16:32 |
bcwaldon | eglynn: that code has changed since the commit you reference, but yes, it is intended | 16:32 |
bcwaldon | eglynn: we don't want to allow users to discover images by getting a 403 back | 16:32 |
bcwaldon | eglynn: so we present unauthorized images as if they don't exist | 16:32 |
eglynn | bcwaldon: k, makes sense | 16:32 |
* eglynn was grasping at straws in any case ... | 16:33 | |
bcwaldon | eglynn: look at the latest code, we realized we had to do it slightly differently for images that are public that the user doesn't own | 16:33 |
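The behaviour bcwaldon describes (presenting images the caller isn't authorized to see as if they don't exist, so a 403 can't be used to probe for valid ids) boils down to catching the authorization error and answering 404. A toy sketch with hypothetical names, not the actual glance controller:

```python
class NotFound(Exception):
    pass

class NotAuthorized(Exception):
    pass

def fetch_image(images, image_id, user):
    image = images.get(image_id)
    if image is None:
        raise NotFound(image_id)
    if not image["is_public"] and image["owner"] != user:
        raise NotAuthorized(image_id)
    return image

def handle_get(images, image_id, user):
    try:
        fetch_image(images, image_id, user)
        return 200
    except NotFound:
        return 404
    except NotAuthorized:
        # deliberately indistinguishable from a missing image, so the
        # response leaks nothing about which ids exist
        return 404

images = {"i-1": {"owner": "alice", "is_public": False}}
assert handle_get(images, "i-1", "alice") == 200
assert handle_get(images, "i-1", "bob") == 404   # masked, not 403
assert handle_get(images, "i-2", "bob") == 404   # genuinely missing
```

Public images owned by someone else stay visible, which is the case bcwaldon notes had to be handled slightly differently in the latest code.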
*** bengrue has quit IRC | 16:34 | |
jeblair | i retriggered the failed devstack jobs | 16:36 |
jeblair | anotherjesse, sleepsonzzz: it looks like devstack clones quantum if horizon is enabled; is that used and should we start gating quantum on devstack? | 16:37 |
*** phorgh1 has quit IRC | 16:43 | |
*** phorgh has joined #openstack-dev | 16:43 | |
*** bencherian has joined #openstack-dev | 16:45 | |
*** mszilagyi has joined #openstack-dev | 16:47 | |
*** jdg has joined #openstack-dev | 16:52 | |
*** med_ has quit IRC | 16:54 | |
sleepsonzzz | jeblair - it should only clone quantum-client | 16:55 |
sleepsonzzz | (when horizon is enabled) | 16:55 |
jeblair | sleepsonzzz: sorry, that's what i meant | 16:56 |
jeblair | sleepsonzzz: so since we gate novaclient and keystoneclient, perhaps we should also gate quantumclient? | 16:57 |
*** vricci has joined #openstack-dev | 16:57 | |
*** vricci is now known as vincentricci | 16:57 | |
*** dolphm has quit IRC | 17:00 | |
*** spiffxp has joined #openstack-dev | 17:03 | |
*** medberry has joined #openstack-dev | 17:05 | |
jeblair | sleepsonzzz, anotherjesse: https://review.openstack.org/5401 | 17:05 |
*** medberry is now known as Guest35857 | 17:06 | |
jeblair | danwent: python-quantumclient is used by horizon in the devstack integration test gate. how do you feel about gating python-quantumclient on devstack? https://review.openstack.org/5401 | 17:07 |
maploin | what's the relationship between the cloudbuilders/noNVC project and the kanaka/noVNC project (which looks like an upstream?) | 17:09 |
danwent | jeblair: gating both quantum and python-quantumclient on devstack is a big goal for Folsom. An exercise script for quantum in devstack is in review currently. Are you talking about doing this for Essex or folsom? | 17:09 |
danwent | jeblair: btw, you can talk to devin, but quantum support in horizon isn't in great shape, and probably won't be supported for essex. | 17:10 |
danwent | so gating for essex may not be necessary. in Folsom, both horizon and nova will use python-quantumclient | 17:10 |
jeblair | danwent: ASAP, since it's already being pulled in by horizon. i very much doubt it's being _used_ per se since quantum isn't being used. but since it's still theoretically possible to break horizon by, say, having an import level error in python-quantumclient, it would be good to gate on it. we like to gate all the dependencies that we control. | 17:11 |
jeblair | (unless we can configure horizon in devstack to not use python-quantumclient at all) | 17:11 |
danwent | jeblair: I believe you can, but you'd have to check with the horizon folks. | 17:13 |
*** stack_ has joined #openstack-dev | 17:13 | |
danwent | jeblair: but if not, I think it would make sense. We're looking to freeze for the RC today for both quantum and python-quantumclient, so i'm a bit nervous about making CI changes. | 17:14 |
*** oneiroi has quit IRC | 17:14 | |
danwent | jeblair: I don't know if the switch to enable/disable quantum client in horizon disable it at the import level, or just in terms of what is shown in the GUI | 17:15 |
*** Ravikumar_hp has joined #openstack-dev | 17:16 | |
*** stack_ has quit IRC | 17:16 | |
jeblair | sleepsonzzz, anotherjesse: ^ do you have any thoughts on this? in devstack, python-quantumclient is always required for horizon. will horizon fail to start without it? | 17:18 |
*** dneary has quit IRC | 17:19 | |
*** davidkranz_ has joined #openstack-dev | 17:19 | |
*** hitesh__ has joined #openstack-dev | 17:20 | |
davidkranz_ | jaypipes: I got a lot of this stuff in compute and network logs http://paste.openstack.org/show/9731/ | 17:20 |
jaypipes | davidkranz_: lemme see if that stuff is in my devstack log output... | 17:20 |
sleepsonzzz | jeblair - it used to be the case that horizon would crash if it was not installed. But I cannot repro that now. I'll check with folks to see if it is possible to remove the dep. | 17:21 |
*** bepernoot has joined #openstack-dev | 17:21 | |
*** bencherian has quit IRC | 17:21 | |
*** Ryan_Lane has joined #openstack-dev | 17:22 | |
jeblair | danwent: (in case sleepsonzzz says we need to keep it): the change i'm proposing shouldn't really alter anything. right now python-quantumclient is being installed in the devstack gate test from github. either it's used by horizon, and an ungated change to quantumclient will break the gate test (so gating quantumclient will prevent that), or it's not actually used by horizon, in which case, it still won't be used even if we gate on it. | 17:22 |
adam_g | anyone around with knowledge of the xenapi compute driver? | 17:23 |
comstud | adam_g: yeah | 17:23 |
*** bencherian has joined #openstack-dev | 17:24 | |
jaypipes | davidkranz_: lol, now the tests are running fine :) | 17:24 |
danwent | jeblair: ok, go for it. i'm in favor of gating in general, and this gets us going in the right direction | 17:24 |
jaypipes | davidkranz_: I'll keep running tempest and see if it happens. | 17:24 |
jeblair | danwent: cool, thanks. looking forward to gating quatum too. :) | 17:26 |
danwent | jeblair: much appreciated, thanks | 17:26 |
*** shevek_ has quit IRC | 17:26 | |
adam_g | comstud: im looking at bug #954692. in nova.compute.manager._shutdown_instance() an exception is raised if current_power_state == power_state.SHUTOFF. that condition has been there forever. the later call to driver.destroy() appears to work just fine for the libvirt driver in the case that the instance is already shutoff. that is, it cleans up networking/volumes in its driver specific way. is the same true of xenapi? im wondering if that exception | 17:28 |
uvirtbot | Launchpad bug 954692 in nova "cannot detach volume from terminated instance" [Medium,In progress] https://launchpad.net/bugs/954692 | 17:28 |
*** bepernoot has quit IRC | 17:31 | |
comstud | adam_g: the last part of your msg was cut off.. "im wondering if that exception" is the last part | 17:33 |
comstud | adam_g: Gimme a few to look at what you're talking about | 17:33 |
*** AlanClark has quit IRC | 17:35 | |
*** novas0x2a|laptop has joined #openstack-dev | 17:35 | |
adam_g | comstud: ..wondering if raising an exception there is obsolete, if other compute drivers can call destroy() on an already POWEROFF'd instance | 17:35 |
*** Ravikumar_hp has quit IRC | 17:36 | |
comstud | adam_g: ok, sec | 17:37 |
*** bencherian has quit IRC | 17:37 | |
*** belmoreira has joined #openstack-dev | 17:38 | |
comstud | adam_g: something doesn't feel quite right with the code. i think it's assuming that if the instance is SHUTOFF, it's been destroyed from Xen. But it's certainly possible that is not the case. | 17:39 |
comstud | adam_g: Does seem like we should always try to destroy the instance from Xen | 17:40 |
adam_g | comstud: the only way ive been able to hit that code is to 'sudo poweroff' from inside an instance. in libvirt's case that kills the kvm/libvirt domain. not sure what happens in the xenserver case | 17:40 |
comstud | yep | 17:41 |
comstud | that's the case i was thinking of | 17:41 |
comstud | in the xen case... | 17:41 |
adam_g | comstud: like i said, the condition doesnt seem to make sense for libvirt. it certainly seems like something that should be raised in the driver | 17:41 |
comstud | you can 'poweroff' and it won't go away | 17:41 |
comstud | you should be able to power it back on just fine | 17:41 |
comstud | however | 17:41 |
comstud | it appears if you do a 'sudo poweroff'... | 17:42 |
comstud | and then try to delete the instance while it's in that state | 17:42 |
comstud | we'll leave the instance around in Xen | 17:42 |
adam_g | hmm | 17:44 |
adam_g | comstud: you mean, that happens currently? | 17:45 |
vishy | comstud, bcwaldon, pvo, johan_-_, tr3buchet, _cerberus_, jk0, Vek, blamar, any other core members i forgot: We have two reviews outstanding for rc-1: https://review.openstack.org/#change,5338 and the removal of VSA code (being discussed on the ml) | 17:46 |
comstud | i mean that.. that's how I read the code in master | 17:46 |
comstud | adam_g: ^ | 17:46 |
vishy | please let me know if there are any other blockers that you are aware of | 17:46 |
comstud | adam_g: and i think it's a bug.. it does appear that the whole 'if state == SHUTOFF' should go away | 17:46 |
vishy | (and review and comment on the thread pls) | 17:46 |
jk0 | vishy: I was waiting for the ML to wrap up before approving the VSA MP | 17:46 |
comstud | +1 for VSA going away | 17:46 |
tr3buchet | vishy: on it | 17:47 |
comstud | i thought i'd reviewed that s3 ssl one already | 17:47 |
*** Guest35857 is now known as med12345 | 17:47 | |
comstud | i know i looked at it before | 17:48 |
comstud | anyway, approved | 17:48 |
adam_g | comstud: thats what i figured. ill get something into gerrit shortly, thanks. | 17:48 |
tr3buchet | damn i was slowest | 17:48 |
tr3buchet | that thing is sup3r approved now | 17:48 |
* jk0 came in 2nd | 17:48 | |
comstud | adam_g: the one thing I wonder is if... there should be a 'try/except' around the 'self.driver.destroy' call then | 17:48 |
*** med12345 is now known as med___ | 17:48 | |
comstud | adam_g: so it'll do the volume cleanup even if self.driver.destroy fails | 17:49 |
*** hitesh__ has quit IRC | 17:49 | |
adam_g | comstud: well, the volume cleanup can happen from the compute manager, but actually cleaning up the sessions/connections needs to happen in the compute driver AFAICS | 17:50 |
*** jakedahn_zz is now known as jakedahn | 17:50 | |
vishy | jk0: yeah i was just wondering if anyone had comments | 17:50 |
vishy | for the ml | 17:50 |
adam_g | comstud: but, if different hypervisors handle the case of SHUTOFF differently, i think it should be up to the driver to raise that exception if needed and, ya, catch it around self.driver.destroy | 17:51 |
*** dalang has joined #openstack-dev | 17:51 | |
*** adjohn has joined #openstack-dev | 17:51 | |
comstud | adam_g: Correct. I just meant that in the compute manager, we call self.driver.destroy which could potentially fail. But I wonder if we should try/except it so we can continue running the code in compute manager that starts at line 689 | 17:51 |
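The try/except pattern comstud is describing could be sketched roughly like this. This is illustrative only; `StubDriver`, `StubComputeManager`, and the volume-detach loop are stand-ins for nova's real compute-manager code, not the actual API.

```python
# Sketch: tolerate driver.destroy() failing for an already-SHUTOFF
# instance so that volume cleanup in the compute manager still runs.
# All names here are hypothetical stand-ins for the real nova code.

class StubDriver:
    """Hypothetical driver whose destroy() fails, as in the bug report."""
    def destroy(self, instance):
        raise RuntimeError("instance already destroyed")


class StubComputeManager:
    def __init__(self, driver):
        self.driver = driver
        self.detached = []

    def _shutdown_instance(self, instance):
        try:
            self.driver.destroy(instance)
        except Exception as exc:
            # Log and continue: a failed destroy must not leave
            # volume attachments dangling.
            print("driver.destroy failed, continuing cleanup: %s" % exc)
        # Volume cleanup runs regardless of the destroy outcome.
        for vol in instance.get("volumes", []):
            self.detached.append(vol)


mgr = StubComputeManager(StubDriver())
mgr._shutdown_instance({"volumes": ["vol-1", "vol-2"]})
print(mgr.detached)  # both volumes cleaned up despite the failed destroy
```

The point of the design is that the driver-specific teardown and the manager-level volume bookkeeping are decoupled: an exception in the former should not abort the latter.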
*** gyee has joined #openstack-dev | 17:51 | |
eglynn | jaypipes/jeblair: how about temporarily re-enabling the GLANCE_TEST_S3/SWIFT_CONF env vars on jenkins slaves after https://review.openstack.org/5396 lands? | 17:52 |
eglynn | (disabled yesterday I believe) | 17:52 |
comstud | adam_g: Yeah, ok. Agree.. although nova should really have the same experience with libvirt and xenapi. if there's a behavior difference, that'll need to be addressed at some point | 17:52 |
eglynn | so that I can re-trigger a bunch of builds, hopefully repro the S3 failure, and eliminate an ID collision as the cause | 17:53 |
*** bengrue has joined #openstack-dev | 17:56 | |
*** maploin has quit IRC | 18:01 | |
*** gyee has quit IRC | 18:02 | |
*** Glace_ has joined #openstack-dev | 18:02 | |
*** gyee has joined #openstack-dev | 18:03 | |
*** dolphm has joined #openstack-dev | 18:13 | |
*** derekh has quit IRC | 18:13 | |
hugokuo | eglynn , ping . the result of Glance E4 is working | 18:15 |
eglynn | hugokuo: cool | 18:15 |
hugokuo | eglynn , but it seems we must run with E3 ..... ~"~ | 18:16 |
hugokuo | I got to find out the problem of E3 and fix it .... | 18:16 |
*** dolphm_ has joined #openstack-dev | 18:16 | |
hugokuo | and additional question , must I provide disk & container format in glance add command ? | 18:17 |
hugokuo | in E4 | 18:17 |
blamar | termie / vishy / bcwaldon: are there any docs on what roles should be? or are roles just free-form strings? looks like Glance is expecting "Admin" for an administrator and Nova is expecting "Admin" or "admin"... just curious which way to standardize | 18:17 |
bcwaldon | blamar: I actually needed to bring this up, too | 18:17 |
bcwaldon | blamar: since we are creating default rules with certain roles | 18:18 |
blamar | and some auth systems return things like identity:admin or image:admin or compute:admin | 18:18 |
*** dolphm has quit IRC | 18:18 | |
bcwaldon | blamar: yeah...once vishy and sleepsonzzz stop talking IRL maybe they'll have some thoughts | 18:19 |
blamar | bcwaldon: k | 18:19 |
*** deva has joined #openstack-dev | 18:19 | |
eglynn | hugokuo: yes, we made the disk and container formats compulsory in E4 (enforced in the API layer, as opposed to the glance add cmd) | 18:19 |
*** deva is now known as devananda | 18:20 | |
eglynn | hugokuo: are you intending to apply a custom patch to E3? surely better to run with E4 ... | 18:21 |
vishy | blamar: i prefer lowercase | 18:21 |
vishy | blamar: maybe we should make the policy check .lower() roles before checking them | 18:21 |
hugokuo | eglynn , my leader asked me to do that though. | 18:22 |
bcwaldon | blamar: we do lower() in the common policy module | 18:22 |
hugokuo | I do love the latest code, but they seem worried about the integration risk between different versions of OpenStack projects | 18:22 |
vishy | blamar: in fact we ^^ | 18:22 |
bcwaldon | blamar: but looks like glance expects 'Admin' for the context.is_admin check | 18:23 |
bcwaldon | blamar: which is probably incorrect | 18:23 |
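The case-insensitive matching vishy suggests could be sketched as below. This is a minimal illustration of the idea, not the real common policy module: normalize roles with `.lower()` before comparing, so 'Admin' and 'admin' both satisfy an is_admin-style check.

```python
# Hypothetical sketch of case-insensitive role matching.
# Real role checks live in the common policy module; this only
# illustrates the .lower() normalization being discussed.

def is_admin(roles):
    """Return True if any role, compared case-insensitively, is 'admin'."""
    return "admin" in [r.lower() for r in roles]

print(is_admin(["Admin"]))          # True
print(is_admin(["Member"]))         # False
print(is_admin(["compute:admin"]))  # False: scoped roles need their own policy rule
```

Note that a scoped role such as "compute:admin" is still a distinct string; supporting per-service admins would require explicit rules in the policy json file rather than lowercasing alone.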
blamar | vishy / bcwaldon: what about certain auth services that might want to have administrators for just glance or just nova, ideas about that? say role = "compute:admin"? just not support that? | 18:24 |
vishy | blamar: we definitely should do that ultimately | 18:24 |
vishy | blamar: I think we just tried to keep it simple initially | 18:24 |
bcwaldon | blamar: you'll need to modify the policy json file to get that | 18:24 |
vishy | blamar: roles are just strings | 18:25 |
blamar | vishy: yup, gotcha | 18:25 |
blamar | http://docs.openstack.org/api/openstack-identity-service/2.0/content/POST_authenticate_v2.0_tokens_Service_API_Client_Operations.html | 18:25 |
vishy | blamar: so you could give the glance admin user just an 'admin' role for the glance tenant | 18:25 |
bcwaldon | blamar: well, roles are referenced from everywhere by their name (a string), but you refer to them by id in keystone itself | 18:25 |
blamar | vishy: ^^ looks like example roles there | 18:25 |
vishy | blamar: and not give it for the nova tenant | 18:25 |
blamar | should we update docs to have example roles be "Admin"? | 18:26 |
bcwaldon | blamar: if anything, 'admin' | 18:26 |
blamar | oh right | 18:26 |
bcwaldon | blamar: but I wouldnt be against it if it makes this simpler to comprehend | 18:26 |
davidkranz_ | jaypipes: Just got lots of those errors on devstack/precise... | 18:26 |
jaypipes | davidkranz_: ok, so tempest ran cleanly through one time. next time I got that same error during the floating IP test. nothing bad in nova-network log but this was in nova-compute: http://paste.openstack.org/show/9744 | 18:26 |
jaypipes | davidkranz_: lol... jinx. | 18:27 |
davidkranz_ | This must be some horrible new racey type thing. We need to file a bug. | 18:28 |
vishy | jaypipes, davidkranz_: looks like perhaps the domain has been suspended and resumed between? | 18:28 |
vishy | or somehow the uuid has been regenerated | 18:28 |
vishy | while in the middle of the spawn | 18:28 |
vishy | ? | 18:28 |
davidkranz_ | jaypipes: I also got the first failure right after Positive test:Should return the list of floating IPs ... ok. Perhaps it is not random. | 18:29 |
jaypipes | vishy: yeah, it's a weird one... we're trying to track it down. think it may be due to some side effects from a previous test because running the floating IP test alone works fine.. | 18:30 |
*** dolphm_ has quit IRC | 18:30 | |
*** n0ano has quit IRC | 18:31 | |
*** dolphm has joined #openstack-dev | 18:31 | |
*** johnpostlethwait has joined #openstack-dev | 18:31 | |
jaypipes | vishy: first we need to verify that it isn't the test case at fault... | 18:31 |
vishy | jaypipes: well you managed to achieve a traceback i have never seen, so that is exciting | 18:32 |
*** med___ is now known as med__ | 18:32 | |
*** med__ is now known as med_ | 18:32 | |
*** med_ has joined #openstack-dev | 18:33 | |
davidkranz_ | jaypipes: what os and hypervisor are you using? | 18:33 |
jaypipes | vishy: glad we are exciting today :) | 18:33 |
*** gyee has quit IRC | 18:33 | |
jaypipes | davidkranz_: libvirt/KVM on Xubuntu 11.10 | 18:33 |
davidkranz_ | OK, so precise is not the problem. | 18:34 |
jaypipes | davidkranz_: nope | 18:34 |
*** jakedahn is now known as jakedahn_zz | 18:39 | |
*** bepernoot has joined #openstack-dev | 18:41 | |
*** Glace_ has quit IRC | 18:51 | |
*** jdg has quit IRC | 18:52 | |
*** Glace_ has joined #openstack-dev | 18:52 | |
*** bepernoot has quit IRC | 18:53 | |
*** gpernot has quit IRC | 18:55 | |
*** Glacee has joined #openstack-dev | 18:57 | |
*** Glace_ has quit IRC | 18:59 | |
davidkranz_ | jaypipes, vishy: I just got it to sort of reproduce and submitted https://bugs.launchpad.net/nova/+bug/956313 | 19:05 |
uvirtbot | Launchpad bug 956313 in nova "Nova loses ability to build instances" [Undecided,New] | 19:05 |
jaypipes | davidkranz_: yeah, the weird thing, though, is that I don't see that lock/semaphore error in my compute log... :( | 19:07 |
davidkranz_ | jaypipes: That is actually in the network log. | 19:08 |
*** bsza has quit IRC | 19:08 | |
jaypipes | davidkranz_: yes, but the timeout is in the compute log, no? | 19:09 |
davidkranz_ | Yes. | 19:09 |
*** devananda has quit IRC | 19:11 | |
*** devananda has joined #openstack-dev | 19:12 | |
jaypipes | davidkranz_: I'm suspecting that cleanup of instances from prior test runs is wreaking havoc. | 19:12 |
davidkranz_ | jaypipes: Perhaps, but this being a black-box test it is not really tempest's problem. I don't think it makes sense to spend more time trying to beat up on essex until this is fixed, do you? | 19:13 |
*** flaviamissi has quit IRC | 19:14 | |
*** pixelbeat has quit IRC | 19:14 | |
davidkranz_ | jaypipes: I don't think I am enough of a nova expert to help debug it productively. | 19:14 |
jaypipes | davidkranz_: it's ok, you're doing fine reporting the bugs so far :) | 19:15 |
davidkranz_ | jaypipes: OK, I will tell Adam G and those guys about the current state. | 19:16 |
jaypipes | davidkranz_: yup, sounds good. If you notice any patterns anywhere else, just log another bug :) | 19:17 |
*** PotHix has quit IRC | 19:17 | |
*** jakedahn_zz is now known as jakedahn | 19:22 | |
eglynn | jeblair: could I trouble you to temporarily re-enable those glance s3/swift env vars on the jenkins slaves? | 19:22 |
*** lts has quit IRC | 19:22 | |
vishy | davidkranz_, jaypipes: do you rerun all services in every test? | 19:25 |
notmyname | Daviey: ping (about swift releases) | 19:26 |
jaypipes | vishy: do you mean do we restart devstack/environment between each test? | 19:27 |
jaypipes | eglynn, jeblair: before you do that... eglynn, I can email you some conf files so you can test locally... | 19:28 |
vishy | jaypipes: yes that is what i mean | 19:28 |
eglynn | jaypipes: cool, thanks | 19:28 |
vishy | jaypipes: I'm trying to figure out how the lockfile error might occur | 19:28 |
jaypipes | vishy: yes, I do killall screen, stack.sh, update tempest.conf with the new image UUID and then re-run tempest | 19:28 |
vishy | jaypipes: and I'm wondering if there is an issue where it starts receiving requests before lockfile cleanup has finished | 19:29 |
*** sdake has quit IRC | 19:29 | |
jaypipes | vishy: and that error (the one here: http://paste.openstack.org/show/9744/) is reproducible in about 60% of test runs. | 19:29 |
davidkranz_ | vishy: I first saw these errors in a real cluster and only was using devstack today. | 19:29 |
vishy | davidkranz_: the lockfile issue is very strange | 19:30 |
jeblair | eglynn, jaypipes: i see jaypipes is sending you the files for local testing, so i'll hold off for now. let me know when you're ready to re-enable. | 19:30 |
eglynn | jeblair: great, thanks | 19:30 |
jaypipes | eglynn: test account confs in your inbox. | 19:32 |
eglynn | jaypipes: thanking you ... | 19:32 |
*** pixelbeat has joined #openstack-dev | 19:33 | |
rmk | I'm at a loss here. Trying to use the RC1 packages on 12.04 with keystone -- dash is throwing errors like this: Error: Error fetching security_groups: Malformed request url (HTTP 400) | 19:33 |
*** rohita has quit IRC | 19:34 | |
*** jakedahn is now known as jakedahn_zz | 19:34 | |
jaypipes | rmk: can you connect to keystone via curl? | 19:34 |
rmk | yep | 19:35 |
rmk | I'll try some of the more specific api calls to it now | 19:35 |
*** jakedahn_zz is now known as jakedahn | 19:36 | |
*** sdake has joined #openstack-dev | 19:36 | |
*** lts has joined #openstack-dev | 19:37 | |
*** martine has joined #openstack-dev | 19:37 | |
rmk | keystone --token ADMIN --endpoint http://127.0.0.1:35357/v2.0/ endpoint-list is generating a stack trace | 19:37 |
rmk | AttributeError: 'TemplatedCatalog' object has no attribute 'list_endpoints' | 19:38 |
jaypipes | rmk: yep, that's a known bug. | 19:39 |
rmk | I used the devstack script to generate my keystone data | 19:40 |
rmk | This is my first foray into the new keystone (which already seems way better) | 19:40 |
rmk | service-list comes back empty, that seems wrong | 19:41 |
clayg | vishy: can you look at my notes: https://bugs.launchpad.net/nova/+bug/942918 | 19:43 |
uvirtbot | Launchpad bug 942918 in nova "nova-volume doesn't log request_id" [Undecided,New] | 19:44 |
*** bepernoot has joined #openstack-dev | 19:45 | |
vishy | clayg: interesting that sounds like a bug | 19:46 |
clayg | yeah well, I don't know about nothing fancy like that - I just want to get my request_ids on my volume creates ;) | 19:47 |
*** bepernoot has quit IRC | 19:50 | |
*** novas0x2a|laptop has quit IRC | 19:51 | |
*** novas0x2a|laptop has joined #openstack-dev | 19:51 | |
*** belmoreira has quit IRC | 19:53 | |
*** n0ano has joined #openstack-dev | 20:00 | |
*** belmoreira has joined #openstack-dev | 20:00 | |
vishy | clayg: on it | 20:02 |
*** spiffxp has quit IRC | 20:05 | |
vishy | clayg, comstud: https://review.openstack.org/5412 | 20:08 |
jaypipes | rmk: sorry, remember to prefix comments with jaypipes: :) otherwise I get pulled off course | 20:09 |
jaypipes | rmk: yes, the service list and endpoint list when using templated catalog is buggy. lemme get you the bug links... | 20:10 |
jaypipes | rmk: bugs are both in progress waiting for code review | 20:10 |
YorikSar | https://review.openstack.org/4937 anyone? | 20:10 |
*** Glace_ has joined #openstack-dev | 20:10 | |
*** dprince has quit IRC | 20:11 | |
*** Glacee has quit IRC | 20:11 | |
jaypipes | rmk: https://bugs.launchpad.net/keystone/+bug/954087 | 20:11 |
uvirtbot | Launchpad bug 954087 in keystone "endpoint-list with TemplatedCatalog backend raises AttributeError" [Critical,In progress] | 20:11 |
jaypipes | rmk: https://bugs.launchpad.net/keystone/+bug/954089 | 20:11 |
uvirtbot | Launchpad bug 954089 in keystone "service-list returns empty set for TemplatedCatalog backend" [Critical,In progress] | 20:11 |
*** phorgh has quit IRC | 20:12 | |
YorikSar | The change has been hanging around waiting for approval for some time already... Can someone approve it? | 20:12 |
*** dachary has joined #openstack-dev | 20:13 | |
YorikSar | btw, can we make uvirtbot do the same thing with changes it does with bugs? | 20:14 |
*** roge has joined #openstack-dev | 20:18 | |
vishy | comstud, johan_-_, Vek: there is an issue with lockfile that looks nasty: https://bugs.launchpad.net/nova/+bug/956313 | 20:20 |
uvirtbot | Launchpad bug 956313 in nova "Nova loses ability to build instances" [High,Triaged] | 20:20 |
vishy | I'm currently blaming the lock cleanup code | 20:21 |
vishy | but any help would be appreciated | 20:21 |
comstud | hrm | 20:21 |
comstud | k | 20:21 |
vishy | adam_g: do you have someone fixing bug 956366 ? | 20:22 |
uvirtbot | Launchpad bug 956366 in nova "self-referential security groups can not be deleted" [High,New] https://launchpad.net/bugs/956366 | 20:22 |
adam_g | vishy: me, i guess. its one of the 10 things im trying to do at once right now. | 20:23 |
vishy | pvo: if you could get someone to look at https://bugs.launchpad.net/ubuntu/+source/nova/+bug/954692 and verify that removing the check referenced would be ok for xenapi ? | 20:24 |
uvirtbot | Launchpad bug 954692 in nova "cannot detach volume from terminated instance" [Medium,In progress] | 20:24 |
*** roge has quit IRC | 20:24 | |
pvo | vishy: how well do you know policy.json? | 20:25 |
comstud | vishy: something removed the lock file | 20:25 |
*** n0ano has quit IRC | 20:25 | |
*** n0ano has joined #openstack-dev | 20:26 | |
*** roge has joined #openstack-dev | 20:27 | |
eglynn | jaypipes: I didn't realize that it was just a standard Amazon S3 account used by the func tests (as opposed to the cloudfiles S3 clone) | 20:30 |
eglynn | jaypipes: I'd tried earlier many times to repro the issue against my own S3 account, so unlikely I'll see it with just different a/c credentials | 20:31 |
eglynn | BTW getting ECONNREFUSED on the swift auth url, but the copy_to_file test should still repro the problem if it was gonna occur in my environment | 20:31 |
eglynn | jaypipes: looks like something specific to the jenkins env | 20:31 |
comstud | vishy: i added comments to both bugs, at least. | 20:32 |
eglynn | jaypipes/jeblair: I had the tests running continually for an hour with no repro of the S3 issue, so maybe time to re-enable s3/swift on the jenkins slave? | 20:32 |
*** mnewby has quit IRC | 20:33 | |
jaypipes | eglynn: cool with me, sure. | 20:33 |
*** pixelbeat has quit IRC | 20:33 | |
eglynn | jaypipes: cool, I'll see if a bunch of jenkins build re-triggering causes it to recur | 20:33 |
*** Shrews has quit IRC | 20:35 | |
vishy | pvo: very well | 20:36 |
vishy | comstud, johan_-_: commented and updated: https://review.openstack.org/#change,5412 | 20:37 |
comstud | yeah just +2'd | 20:38 |
eglynn | jeblair: are you good with re-enabling s3/swift on glance tests? | 20:38 |
*** jabbott has joined #openstack-dev | 20:38 | |
jabbott | Hello. | 20:38 |
jabbott | How is everyone today? | 20:38 |
jeblair | eglynn: can do. | 20:38 |
pvo | :q | 20:38 |
pvo | er | 20:38 |
comstud | vishy: i'm going offline for a bit.. fyi. back later. | 20:39 |
eglynn | jeblair: thanks much! | 20:39 |
vishy | comstud: cool | 20:39 |
jeblair | eglynn: done | 20:40 |
*** jabbott has left #openstack-dev | 20:41 | |
eglynn | jeblair: magic! | 20:42 |
*** tryggvil_ has quit IRC | 20:48 | |
*** gyee has joined #openstack-dev | 20:51 | |
rmk | jaypipes: Thanks. Yeah had to step away for a bit myself also. | 20:55 |
rmk | I'll probably be pulling Essex down regularly now through the final release. | 20:55 |
rmk | Super impressed with the progress from Diablo. | 20:55 |
kbringard | yea, essex is big pimpin' | 20:58 |
* kbringard is super excited about quantum | 20:58 | |
* jaypipes just wants to get TryStack off of Diablo :) | 20:59 | |
jaypipes | vishy: Ever seen anything like this? http://paste.openstack.org/show/9788/ | 20:59 |
*** lts has quit IRC | 21:00 | |
jaypipes | vishy: Every time anyone on TryStack does a snapshot, it fails (snapshot table is completely empty but dashboard has a record of something it thinks is a snapshot, in a "Queued" status). And logging into the compute node, I see the following (repeatable every time I click "snapshot" in the dashboard... | 21:00 |
vishy | jaypipes: yeah your tmp drive is too small | 21:00 |
vishy | i would guess | 21:00 |
jaypipes | vishy: aha! | 21:01 |
jaypipes | vishy: love the cryptic error message! :) | 21:01 |
kbringard | that's what you get for using tmpfs on a machine with < 4gb ram | 21:01 |
kbringard | :-p | 21:01 |
jaypipes | vishy: http://paste.openstack.org/show/9790/ | 21:01 |
jaypipes | vishy: not so sure the tmp drive is full :) | 21:02 |
jaypipes | kbringard: actually, 48G tmp drive ;) | 21:02 |
kbringard | pfft | 21:02 |
vishy | jaypipes: hmm | 21:02 |
vishy | jaypipes: how big is your root drive? 10G ? | 21:02 |
jaypipes | vishy: how can I check what nova thinks is the tmp drive? | 21:02 |
jaypipes | vishy: nope, only 2G | 21:02 |
vishy | is it possible it isn't writable? | 21:02 |
jaypipes | vishy: lemme check the fstab, one sec | 21:03 |
vishy | you could try running the command manually: qemu-img convert -f qcow2 -O raw -s e7ba4fb5f6f04f99b07d1d222ada0219 /opt/openstack/nova/instances/instance-00000548/disk /tmp/tmpIuOQo0/e7ba4fb5f6f04f99b07d1d222ada0219 | 21:03 |
jeblair | jaypipes: there's nothing mounted over /tmp so it should be writing to /, which has only .5G free, no? | 21:03 |
vishy | that sounds likely | 21:04 |
jaypipes | vishy: see the original paste there... I did try running that command manually :) got a "qemu-img: Failed to load snapshot" | 21:04 |
jaypipes | jeblair: yeah, that sounds about right... since the qemu command is trying /tmp/blah | 21:05 |
YorikSar | Will that code cleanups land to RC or should I give up asking? | 21:05 |
vishy | jaypipes: oh yeah the snapshot will have been deleted after the failure | 21:05 |
* jaypipes goes off to check our cookbooks.... | 21:05 | |
*** shevek_ has joined #openstack-dev | 21:05 | |
*** n0ano has quit IRC | 21:06 | |
*** spiffxp has joined #openstack-dev | 21:09 | |
*** belmoreira has quit IRC | 21:10 | |
*** n0ano has joined #openstack-dev | 21:12 | |
rmk | jaypipes: Interesting you brought this up, I'm having the same issue right this moment in one of our deployments. | 21:16 |
rmk | With snapshotting failing on Diablo. | 21:16 |
rmk | 512MB /tmp here as well. | 21:17 |
rmk | That's probably the issue. Snapshots integrate the backing file, I assume. | 21:17 |
*** martine has quit IRC | 21:17 | |
jaypipes | rmk: yep. | 21:19 |
*** hhoover has joined #openstack-dev | 21:19 | |
*** hhoover has left #openstack-dev | 21:19 | |
rmk | tmp files are supposed to be created in the instances dir but they're not for snapshots. | 21:19 |
jaypipes | rmk: right... that's what it looks like. either way, should have a larger /tmp dir | 21:20 |
vishy | rmk: correct, they end up as raw | 21:20 |
vishy | or qcow depending on the original image source type | 20:20 |
rmk | I've always kept temp sized relatively small to avoid having some process go insane and chew up the entire root disk. | 21:20 |
davidkranz_ | jaypipes: I had an issue where the disk filled up because the image cache is never cleaned up (in diablo, fixed in essex). Don't know if that is related. | 21:22 |
jaypipes | davidkranz_: nah, I think this is because the root device (in which /tmp is) is too small on the compute nodes... | 21:24 |
*** danwent has quit IRC | 21:27 | |
*** zigo has quit IRC | 21:28 | |
*** dolphm has quit IRC | 21:28 | |
rmk | temp_dir = tempfile.mkdtemp() | 21:34 |
rmk | virt/libvirt/connection.py, line 467 | 21:34 |
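The bare `tempfile.mkdtemp()` rmk quotes scratches in TMPDIR (usually /tmp), which may sit on a small root partition. One way to keep snapshot scratch space on the larger instances volume would be to pass `dir=`, sketched below; `instances_path` is a hypothetical stand-in for the real configured instances directory, not nova's actual flag handling.

```python
import os
import tempfile

# Without dir=, mkdtemp() uses TMPDIR / /tmp, which can be tiny.
# Passing dir= places the scratch directory on a chosen filesystem.
# 'instances_path' here is only a stand-in for the configured
# instances directory (e.g. FLAGS.instances_path in nova).
instances_path = tempfile.gettempdir()
temp_dir = tempfile.mkdtemp(dir=instances_path)
print(temp_dir.startswith(instances_path))  # scratch dir lives under instances_path
os.rmdir(temp_dir)
```

This would avoid the failure mode discussed above, where a multi-gigabyte qcow2-to-raw conversion lands on a root filesystem with half a gigabyte free.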
*** sdake has quit IRC | 21:37 | |
YorikSar | Who can target bug to RC? https://bugs.launchpad.net/nova/+bug/956457 | 21:38 |
uvirtbot | Launchpad bug 956457 in nova "Code cleanups, HACKING compliance before RC" [Undecided,New] | 21:38 |
*** mnewby has joined #openstack-dev | 21:44 | |
*** mnewby has quit IRC | 21:44 | |
*** mnewby has joined #openstack-dev | 21:45 | |
*** bhall has joined #openstack-dev | 21:45 | |
*** rohita has joined #openstack-dev | 21:47 | |
*** rohit404 has joined #openstack-dev | 21:49 | |
*** andrewsmedina has quit IRC | 21:49 | |
*** andrewsmedina has joined #openstack-dev | 21:49 | |
annegentle | anyone with Jenkins-fu - can you take a look at https://jenkins.openstack.org/view/Openstack-manuals/job/openstack-install-deploy-guide-trunk/configure and tell me why the FTP copy fails? | 21:51 |
annegentle | mtaylor: jeblair (and what's the Jedi nick?) :) | 21:52 |
jeblair | annegentle: because there is no webhelp/diablo directory: | 21:53 |
jeblair | https://jenkins.openstack.org/view/Openstack-manuals/job/openstack-install-deploy-guide-trunk/ws/doc/src/docbkx/openstack-install/target/docbkx/webhelp/ | 21:53 |
annegentle | jeblair: hmmmming….. | 21:53 |
rohit404 | hey guys, keystone question - where's the entry point in auth_token.py before hitting service.py ? | 21:54 |
*** ayoung has quit IRC | 21:54 | |
annegentle | jeblair: ok, yeah I can fix that, nice. | 21:54 |
jeblair | woo :) | 21:54 |
* annegentle rebuilds | 21:55 | |
*** rkukura has left #openstack-dev | 21:55 | |
*** dolphm has joined #openstack-dev | 21:58 | |
*** berendt has quit IRC | 22:05 | |
*** andrewsmedina has quit IRC | 22:06 | |
*** sandywalsh has quit IRC | 22:13 | |
*** jakedahn is now known as jakedahn_zz | 22:17 | |
*** danwent has joined #openstack-dev | 22:26 | |
ttx | devcamcar: around? | 22:27 |
*** sdake has joined #openstack-dev | 22:27 | |
*** devananda has quit IRC | 22:30 | |
*** gyee has quit IRC | 22:37 | |
anotherjesse | markmcclain: looks like you can do %3f according to python docs for https://review.openstack.org/#change,5409 | 22:37 |
anotherjesse | nevermind | 22:38 |
*** jdg has joined #openstack-dev | 22:38 | |
rmk | Perhaps this is a known Diablo issue. If I reboot an instance with volumes attached, it doesn't reattach the volumes after it comes back up. Nor can you detach or reattach. | 22:39 |
rmk | You put yourself in a state where you basically cannot do anything with the VM | 22:39 |
*** dolphm has quit IRC | 22:40 | |
*** dolphm has joined #openstack-dev | 22:41 | |
*** zzed has quit IRC | 22:41 | |
*** sandywalsh has joined #openstack-dev | 22:44 | |
vishy | adam_g: ping | 22:46 |
*** deva has joined #openstack-dev | 22:46 | |
*** deva has quit IRC | 22:50 | |
adam_g | vishy: sup | 22:51 |
vishy | adam_g: there are comments on the bug about detach volume | 22:52 |
adam_g | vishy: right. comstud and i chatted about it here, im getting ready to submit a patch here soon | 22:53 |
adam_g | vishy: i just submitted a very quick fix for the security group thing, https://review.openstack.org/#change,5424. im not entirely sure this is kosher, but it resolves the issues i was having. | 22:54 |
vishy | adam_g: nice thanks…i will check that patch | 22:56 |
vishy | adam_g: hmm, can't we just ignore self in the check? | 22:57 |
*** flaviamissi has joined #openstack-dev | 22:57 | |
vishy | adam_g: I do think we want to avoid deleting a group that is in use, just because of the way it is implemented | 22:58 |
vishy | adam_g: or else we will get failures if we create a group, refer to it, then delete it | 22:58 |
*** dtroyer is now known as dtroyer_zzz | 22:58 | |
*** dolphm has quit IRC | 23:00 | |
*** jakedahn_zz is now known as jakedahn | 23:00 | |
adam_g | vishy: i believe it should be possible to delete groups that are being referenced by other groups, by comparing to AWS. a group that references itself is just one case where it gets wedged | 23:01 |
*** flaviamissi has quit IRC | 23:02 | |
vishy | adam_g: see my comment | 23:04 |
*** zaitcev has quit IRC | 23:04 | |
adam_g | vishy: for instance, creating two groups (secgrp1, secgrp2) and 'euca-authorize -p 25 -o secgrp1 secgrp2'. i should be able to delete either group provided there are no running instances. | 23:04 |
adam_g | k | 23:04 |
*** deva has joined #openstack-dev | 23:04 | |
vishy | adam_g: but there is no check for running instances | 23:04 |
*** deva is now known as devananda | 23:04 | |
vishy | adam_g: so if there are running instances, that rule stops it | 23:04 |
adam_g | vishy: right, and that should stay | 23:05 |
*** novas0x2a|laptop has quit IRC | 23:06 | |
vishy | adam_g: so in your case if secgrp1 has running instances | 23:06 |
*** flaviamissi has joined #openstack-dev | 23:06 | |
vishy | and you try to delete secgroup1 it will get rejected somewhere else? | 23:06 |
*** novas0x2a|laptop has joined #openstack-dev | 23:06 | |
*** zaitcev has joined #openstack-dev | 23:07 | |
*** danwent has quit IRC | 23:07 | |
adam_g | vishy: no, its rejected in security_group_in_use() by the rest of the function that wasn't touched by that patch | 23:07 |
vishy | adam_g: ok thinking through this | 23:08 |
vishy | if you have a secgrp2 rule in place referencing secgrp1 | 23:08 |
vishy | and you delete secgrp1 | 23:09 |
vishy | what happens? | 23:09 |
vishy | adam_g: I think you would break the rule generation due to a dangling rule | 23:09 |
adam_g | vishy: let me check | 23:11 |
*** flaviamissi has quit IRC | 23:11 | |
*** flaviamissi has joined #openstack-dev | 23:11 | |
*** danwent has joined #openstack-dev | 23:12 | |
adam_g | vishy: currently does authorizing traffic from another security group actually do anything on the compute/network/iptables-level? | 23:14 |
vishy | it is supposed to | 23:14 |
russellb | sorry if that code is broken, i added it :-/ | 23:15 |
vishy | adam_g: 328 for instance in rule['grantee_group']['instances']: | 23:16 |
*** flaviamissi has quit IRC | 23:16 | |
*** flaviamissi has joined #openstack-dev | 23:17 | |
vishy | adam_g: it might be that that still works fine | 23:18 |
vishy | adam_g: I"m a little confused why it is using parent_group_id instead of grantee (group_id) | 23:20 |
vishy | but if you can verify that deleting a rule that another rule is using doesn't explode | 23:20 |
vishy | i'm ok | 23:20 |
adam_g | vishy: it looks like, in that case, deleting the group that is referenced (at least) marks the corresponding rule as deleted in the db | 23:21 |
*** kbringard has quit IRC | 23:21 | |
*** flaviamissi has quit IRC | 23:21 | |
vishy | adam_g: oh really? | 23:21 |
vishy | so it cascades? | 23:22 |
*** flaviamissi has joined #openstack-dev | 23:22 | |
vishy | adam_g: ah yes, I see that | 23:22 |
adam_g | http://paste.ubuntu.com/885656/ | 23:24 |
*** med_ has quit IRC | 23:25 | |
vishy | adam_g: something is wrong | 23:26 |
*** flaviamissi has quit IRC | 23:26 | |
*** rohit404 has quit IRC | 23:28 | |
*** med_ has joined #openstack-dev | 23:29 | |
*** med_ has quit IRC | 23:29 | |
*** med_ has joined #openstack-dev | 23:29 | |
*** littleidea has quit IRC | 23:30 | |
*** jakedahn is now known as jakedahn_zz | 23:32 | |
*** littleidea has joined #openstack-dev | 23:33 | |
vishy | adam_g: there seems to be some conflation between parent_group_id and group_id | 23:33 |
vishy | i need to sort this out | 23:33 |
*** pixelbeat has joined #openstack-dev | 23:34 | |
adam_g | vishy: in the db api or firewall? | 23:34 |
*** danwent has quit IRC | 23:34 | |
vishy | in the way that they are created | 23:35 |
vishy | for example in ec2 api i don't see where parent_group_id is used at all | 23:35 |
*** danwent has joined #openstack-dev | 23:36 | |
vishy | adam_g: ok i think i've wrapped my head around this | 23:38 |
*** agonella has joined #openstack-dev | 23:38 | |
*** danwent has quit IRC | 23:39 | |
vishy | adam_g so the first query gets all the rules that reference this group | 23:40 |
vishy | so we could add an exclude to the first filter | 23:41 |
vishy | adam_g: but i'm trying to think if there is ever a case where we can't delete the rule | 23:42 |
russellb | once this is sorted, i'll write a unit test to cover it tomorrow | 23:43 |
adam_g | vishy: an exclude to the first filter to exclude rules that reference themselves? | 23:43 |
vishy | adam_g: ok i can't see any reason why we can't delete a group that is in use | 23:43 |
vishy | if it is going to update all the rules that reference it anyway | 23:44 |
vishy | adam_g: it shouldn't matter if it is referenced by other groups | 23:44 |
russellb | but this was originally done for EC2 compatibility | 23:45 |
russellb | i'll check to see if groups used by groups was included in that, or if i just included it because i thought it sounded like a good idea ... | 23:45 |
vishy | russellb: please do | 23:45 |
adam_g | vishy: im not sure i parse your conclusion: the patch that entirely drops that check is okay, or it should stay but filter self-referencing groups? | 23:46 |
vishy | russellb: since you can reference groups owned by other users it seems odd | 23:46 |
vishy | adam_g: i think your patch can stay | 23:46 |
russellb | looks like it was just me, heh - https://bugs.launchpad.net/nova/+bug/817872 | 23:46 |
uvirtbot | Launchpad bug 817872 in nova "Delete security group that contains instances should not be allowed" [Medium,Fix released] | 23:46 |
russellb | the bug was just about not removing them if they are used by instances | 23:46 |
russellb | so yeah, +1 on the patch | 23:46 |
vishy | russellb: i mean if i create a rule that references other_user:group, suddenly they can't delete their group? | 23:47 |
vishy | that is odd | 23:47 |
vishy | cool | 23:47 |
russellb | a unit test will likely break, but that part of the test should just get removed | 23:47 |
russellb | vishy: yep, agreed, my bad :) | 23:47 |
adam_g | vishy: okay | 23:47 |
vishy | adam_g: oh, did you run tests? | 23:47 |
adam_g | vishy: no, i will now | 23:48 |
vishy | russellb: any idea which suite to run? | 23:48 |
russellb | unit tests | 23:48 |
russellb | nova.tests.api.ec2.test_cloud probably ... | 23:49 |
russellb | and nova.tests.api.openstack.compute.contrib.test_security_groups | 23:50 |
russellb | actually just test_cloud looks like | 23:52 |
russellb | adam_g: I think this is the only one that will fail with the patch ... http://paste.openstack.org/show/9829/ | 23:53 |
russellb | or I suppose it could just be changed to assert success instead of the exception | 23:54 |
adam_g | russellb: thanks | 23:54 |
russellb | np | 23:54 |
*** ayoung has joined #openstack-dev | 23:57 | |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!