*** heckj has joined #openstack-dev | 00:01 | |
*** heckj has quit IRC | 00:03 | |
comstud | I can't tomorrow | 00:03 |
---|---|---|
comstud | and I'll be at castle next week | 00:03 |
comstud | have a wedding to go to tomorrow | 00:03 |
comstud | (fun) | 00:03 |
comstud | let's see... glance shared images is one thing we're blocked on | 00:04 |
comstud | by keystone, i mean authn/z | 00:04 |
anotherj1sse | comstud: I have public glance images working - not private images yet | 00:04 |
anotherj1sse | is that what you mean? | 00:05 |
comstud | i think i'm referring to private images mostly. | 00:05 |
comstud | being able to share images with certain privs | 00:05 |
comstud | uh, what else is in our backlog.. | 00:05 |
anotherj1sse | comstud: awesome - i want that too -- when do you think you are getting to it? | 00:06 |
anotherj1sse | i'll probably get to adding admin-ness to the keystone integration for nova soon | 00:07 |
anotherj1sse | haven't looked at glance keystone integration yet | 00:07 |
anotherj1sse | glance client will probably need to be refactored to work with auth - since it doesn't take a token or anything | 00:07 |
comstud | sirp, jay, and johannes i know are working on generating a BP this sprint | 00:07 |
comstud | for shared images | 00:07 |
comstud | I don't know how much was dependent on auth | 00:08 |
comstud | (this sprint ends next thursday/friday for us) | 00:08 |
comstud | i'm blanking on the other things that required auth | 00:08 |
comstud | we had to defer a number of stories because of it in our sprint planning last fri | 00:09 |
comstud | one of them was 'glance integration with authn/z' certainly | 00:10 |
anotherj1sse | comstud: I'll probably be in san antonio a couple days next week | 00:10 |
anotherj1sse | we should hack at the castle ... | 00:10 |
comstud | Sure! | 00:10 |
comstud | I'll be hacking either way | 00:10 |
comstud | :) | 00:10 |
anotherj1sse | seems like we have similar focus on getting an end-to-end with glance, nova, keystone | 00:11 |
comstud | I'm out there all week... | 00:11 |
comstud | i'll be sitting down near pvo and the team | 00:11 |
comstud | yeah, i have some commitments for this sprint, but I think those are about wrapped up and I can look at some future things | 00:12 |
comstud | ok, gotta take off for a bit | 00:33 |
* comstud & bbl | 00:33 | |
*** jdurgin has quit IRC | 00:36 | |
*** cloudgroups has joined #openstack-dev | 00:47 | |
*** dragondm has quit IRC | 01:04 | |
*** cloudgroups has left #openstack-dev | 01:13 | |
openstackjenkins | Project nova build #942: SUCCESS in 2 min 45 sec: http://jenkins.openstack.org/job/nova/942/ | 01:14 |
openstackjenkins | Tarmac: Adds the ability to make a call that returns multiple times (a call returning a generator). This is also based on the work in rpc-improvements + a bunch of fixes Vish and I worked through to get all the tests to pass so the code is a bit all over the place. | 01:14 |
openstackjenkins | The functionality is being added to support Vish's work on removing worker access to the database, this allows us to write multi-phase actions that yield state updates as they progress, letting the frontend update the db. | 01:14 |
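The multi-return RPC pattern the Jenkins message describes (a call whose worker yields state updates as a multi-phase action progresses, letting the frontend update the db) can be sketched in plain Python. Everything below, including the `create_volume` phases and the `multicall` driver, is illustrative and not the actual nova.rpc interface:

```python
# Illustrative sketch of a "call that returns multiple times": the worker
# is a generator that yields intermediate state, and the caller consumes
# each update (e.g. to write progress back to the database).
# All names here are hypothetical, not nova's rpc API.

def create_volume(size_gb):
    """A multi-phase worker action that yields state as it progresses."""
    yield {'status': 'creating', 'size_gb': size_gb}
    # ... allocate the volume here ...
    yield {'status': 'exporting'}
    # ... set up the export here ...
    yield {'status': 'available'}

def multicall(worker_generator, on_update):
    """Drive a multi-return call, invoking a callback for each yielded update."""
    results = []
    for update in worker_generator:
        on_update(update)          # e.g. the frontend updates the DB row
        results.append(update)
    return results

updates = multicall(create_volume(10), on_update=print)
```

The key property is that the frontend sees every intermediate state, not just the final result, which is what allows it to keep the database current on behalf of the worker.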
*** mattray has joined #openstack-dev | 02:10 | |
*** mattray has quit IRC | 02:11 | |
*** antonyy has joined #openstack-dev | 02:53 | |
*** Zangetsue has joined #openstack-dev | 03:44 | |
*** kaz_ has joined #openstack-dev | 03:49 | |
*** Zangetsue has quit IRC | 04:12 | |
*** cloudgroups has joined #openstack-dev | 04:33 | |
*** cloudgroups has left #openstack-dev | 04:33 | |
*** Zangetsue has joined #openstack-dev | 04:39 | |
*** Tv has quit IRC | 05:27 | |
dweimer | Has anyone run the swift proxy server code through a profiler? My test node's CPUs are spending ~75% of their time in user space during upload runs, and I'm wondering what is taking up all of the cycles. SSL is off on the test node. With SSL on, ~85% of the CPU time is in user space. | 05:36 |
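One way to attack dweimer's question is to run an upload-shaped workload under Python's cProfile and sort by cumulative time to see where the user-space cycles go. `handle_upload` below is a hypothetical stand-in for the real proxy request path, not swift code:

```python
# Profile a workload and report the hottest call paths by cumulative time.
import cProfile
import pstats
import io
import hashlib

def handle_upload(data):
    # Hypothetical stand-in for proxy work: hash the upload in 64 KiB
    # chunks, roughly as a request path might consume a body.
    h = hashlib.md5()
    for i in range(0, len(data), 65536):
        h.update(data[i:i + 65536])
    return h.hexdigest()

profiler = cProfile.Profile()
profiler.enable()
digest = handle_upload(b'x' * 1_000_000)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats('cumulative').print_stats(5)
report = stream.getvalue()
print(report)
```

Against a real proxy one would wrap the WSGI application (or a request handler) the same way; the top few entries of the report usually answer "what is taking up all of the cycles" directly.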
*** zaitcev has quit IRC | 06:05 | |
ttx | good morning! | 06:40 |
*** cloudgroups has joined #openstack-dev | 07:11 | |
*** cloudgroups has left #openstack-dev | 07:27 | |
*** BK_man has quit IRC | 09:41 | |
*** BK_man_ has joined #openstack-dev | 09:41 | |
*** dysinger has quit IRC | 10:05 | |
*** yamahata_lt has joined #openstack-dev | 10:06 | |
*** kaz_ has quit IRC | 10:13 | |
*** cloudgroups has joined #openstack-dev | 10:44 | |
*** cloudgroups has left #openstack-dev | 11:12 | |
*** markvoelker has joined #openstack-dev | 11:20 | |
*** markvoelker has left #openstack-dev | 11:20 | |
*** salv-orlando has joined #openstack-dev | 11:24 | |
*** adiantum has joined #openstack-dev | 11:35 | |
*** yamahata_lt has quit IRC | 11:57 | |
*** yamahata_lt has joined #openstack-dev | 11:58 | |
*** Zangetsue has quit IRC | 12:03 | |
*** Zangetsue has joined #openstack-dev | 12:03 | |
*** dysinger has joined #openstack-dev | 12:09 | |
*** Zangetsue_ has joined #openstack-dev | 12:13 | |
*** Zangetsue has quit IRC | 12:13 | |
*** Zangetsue_ is now known as Zangetsue | 12:13 | |
*** salv-orlando has quit IRC | 12:13 | |
*** salv-orlando has joined #openstack-dev | 12:13 | |
*** dprince has joined #openstack-dev | 12:19 | |
*** ameade has joined #openstack-dev | 12:29 | |
*** Zangetsue has quit IRC | 12:31 | |
*** mattray has joined #openstack-dev | 12:55 | |
*** markwash_ has quit IRC | 12:59 | |
*** dysinger has quit IRC | 13:08 | |
*** troytoman-away is now known as troytoman | 13:16 | |
jaypipes | anotherj1sse: that's blocked on keystone integration, yeah... | 13:18 |
*** thatsdone has joined #openstack-dev | 13:19 | |
jaypipes | ttx: morning | 13:19 |
ttx | jaypipes: yo | 13:19 |
ttx | jaypipes: soren and I spent a day figuring out how to handle versions around milestones | 13:20 |
jaypipes | ttx: looking at the pdf now. | 13:20 |
ttx | jaypipes: you can see the result at: http://ubuntuone.com/p/vjd/ | 13:20 |
ttx | double-check you have the right one. | 13:20 |
jaypipes | ttx: sorry, yesterday, was sitting in a car dealership most of the day (bought a new car...) | 13:20 |
jaypipes | ttx: I *just* downloaded the pdf, so should be right. | 13:21 |
jaypipes | ttx: the last slide is for swift I assume? | 13:22 |
ttx | jaypipes: yes | 13:22 |
ttx | jaypipes: the trick was to version in a way that would allow jumping from PPA to PPA while preserving upgrade paths | 13:23 |
jaypipes | ttx: and the red "explosions" are the milestone tarballs? green explosions for release tarballs? | 13:23 |
ttx | the red explosions are the branching points | 13:23 |
ttx | (a decision point) | 13:23 |
jaypipes | ttx: gotcha. | 13:23 |
jaypipes | ttx: I like the metaphor of an explosion there. ;) | 13:24 |
jaypipes | ttx: actually, I never metaphor I didn't like. | 13:24 |
* jaypipes turning into his dad... urgh. | 13:24 | |
*** thatsdone has quit IRC | 13:30 | |
ttx | jaypipes: looking good from where you stand ? | 13:31 |
ttx | jaypipes: if yes, soren can enable it for Glance today. | 13:31 |
jaypipes | ttx: most definitely. | 13:32 |
ttx | jaypipes: cool. Two down, One to go. | 13:32 |
*** foxtrotgulf has joined #openstack-dev | 13:33 | |
soren | Err... | 13:33 |
soren | Does the failure here look familiar to anyone? https://code.launchpad.net/~soren/swift/versioning/+merge/62439 | 13:34 |
soren | I can't seem to reproduce it. | 13:34 |
soren | notmyname: ^ ? | 13:34 |
notmyname | soren: no worries | 13:34 |
notmyname | I reapproved | 13:34 |
notmyname | it's a timing issue | 13:34 |
soren | It's common? | 13:35 |
notmyname | rare, but it's come up before, I believe | 13:36 |
notmyname | your branch just merged though | 13:36 |
creiht | I think that's only the second time that I have seen it happen | 13:38 |
* gholt coughs | 13:38 | |
creiht | hehe | 13:38 |
gholt | It's something we've been pestering dfg about for a while now. But yeah, it's pretty rare. :) | 13:39 |
notmyname | ttx: versioning branch landed. we'll merge the changelog and then should be good for the release branch | 13:41 |
ttx | notmyname: ack | 13:41 |
* ttx waits with anxiety for soren's magic tarball script to spit out a version number from that last merge | 13:42 |
ttx | annegentle: I tried to refactor a bit the wiki starting page, please see http://wiki.openstack.org/ProposedStartingPage | 13:43 |
soren | ttx: I've added a couple of unit tests to the script, so in theory, we should be in better shape now than ever :) | 13:43 |
ttx | soren: i'll still avoid breathing while it runs, just in case | 13:44 |
soren | ttx: Any second now. | 13:45 |
soren | ttx: The swift job apparently only polls trunk every 15 minutes. | 13:45 |
openstackjenkins | Project swift build #268: SUCCESS in 27 sec: http://jenkins.openstack.org/job/swift/268/ | 13:46 |
openstackjenkins | Tarmac: Add a __canonical_version__ attribute to the swift module. This is only the numbered part of the version string, no "-dev" or similar suffixes. | 13:46 |
openstackjenkins | Add a couple of unit tests to make sure __version__ and __canonical_version__ are generated correctly and to ensure __canonical_version__ never accidentally has anything other than numbers and points in it. | 13:46 |
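The rule those unit tests enforce, that `__canonical_version__` is only the numbered part of the version string with no "-dev" or similar suffix and nothing other than numbers and points in it, can be sketched as a standalone function. This is an illustrative reconstruction, not swift's actual version module:

```python
# Derive a canonical version from a full version string: drop any
# "-dev"-style suffix, then verify only numbers and points remain.
def canonical_version(version):
    """Return only the numbered part of a version string."""
    canonical = version.split('-', 1)[0]
    # Mirror the guard described above: never let anything other than
    # numbers and points slip into the canonical version.
    if not all(part.isdigit() for part in canonical.split('.')):
        raise ValueError('non-numeric canonical version: %r' % canonical)
    return canonical
```

So `canonical_version('1.4.1-dev')` gives `'1.4.1'`, while a malformed string raises instead of silently producing a bad tarball name.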
soren | ttx: mv dist/swift-1.4.0.tar.gz dist/swift-1.4.0~20110527.301.tar.gz | 13:47 |
soren | ttx: Looks good to me. | 13:48 |
soren | ttx: Now switch glance? | 13:48 |
* ttx doublechecks with the PDF | 13:48 | |
ttx | soren: indeed and yes | 13:48 |
soren | ttx: If I trigger a rebuild, I'll get a tarball that supersedes the current one. Shall I go for it? | 13:49 |
ttx | notmyname: let me know when you've the changelog covered. I'll prepare a branch for doing (1.4.1, False) just after the release branch is setup | 13:50 |
soren | ttx: (2011.3~bzrXXXXX being less than 2011.3~dX~2011057....) | 13:50 |
ttx | soren: that's the beauty of it. yes you can | 13:50 |
notmyname | ttx: will do | 13:50 |
soren | ttx: mv dist/glance-2011.3.tar.gz dist/glance-2011.3~d1~20110527.134.tar.gz | 13:50 |
soren | ttx: Looks exactly right. | 13:50 |
ttx | soren: cool | 13:51 |
notmyname | ttx: branch approved...waiting on lp/jenkins | 13:56 |
*** antonyy has quit IRC | 13:56 | |
notmyname | soren: so if we want to tag 1.4.0 at this point, how does that work? ie, how does my local tag get into trunk? | 13:57 |
ttx | notmyname: thinking of it, I think jenkins needs to tag it, as the owner of the branch. | 13:57 |
ttx | notmyname: also since it's technically not "1.4.0" but rather "the last in the 1.4.0 series in trunk" you might want to use something other than "1.4.0" to tag ? | 13:59 |
notmyname | soren: as I was telling ttx earlier, I'm not as concerned with having a tag as having a simple way for users to get a commit log since the release | 14:00 |
ttx | (and leave the "1.4.0" tag for the release branch ?) | 14:00 |
notmyname | ttx: changelog merged. let me know when you are ready for your merge to land | 14:00 |
openstackjenkins | Project swift build #269: SUCCESS in 28 sec: http://jenkins.openstack.org/job/swift/269/ | 14:01 |
openstackjenkins | Tarmac: changelog for 1.4.0 release | 14:01 |
ttx | notmyname: it's at https://code.launchpad.net/~ttx/swift/switch-trunk-to-1.4.1/+merge/62677 -- but maybe wait for the tag magic first | 14:02 |
*** markvoelker has joined #openstack-dev | 14:03 | |
ttx | soren: I'd either tag the last 1.4.0 "last-1.4.0" or the first 1.4.1 "1.4.1-dev" | 14:03 |
ttx | small preference for the second one, maybe | 14:04 |
Daviey | Does anyone know why (from the first commit) in nova, the novarc uses "NOVA_KEY_DIR=$(pushd $(dirname $BASH_SOURCE)>/dev/null; pwd; popd>/dev/null)" ? | 14:05 |
soren | ttx, notmyname: Why would there be a 1.4.1-anything-at-all tag now? | 14:08 |
*** jaypipes has quit IRC | 14:08 | |
soren | notmyname: I think we should hold off tagging 1.4.0 until we actually make the release. | 14:09 |
*** jaypipes has joined #openstack-dev | 14:09 | |
soren | Daviey: That does look odd on many levels. | 14:09 |
ttx | soren: there used to be a hint in changelog for users to get a list of changes. "Run this bzr log command" | 14:09 |
ttx | soren: tagging the branch point could serve the same purpose. and would also be a useful bit of info in trunk. | 14:10 |
soren | Daviey: It's in a subshell, so why bother with pushd/popd? Why do it at all, and not just rely on dirname $BASH_SOURCE?... I don't know. | 14:10 |
Daviey | soren: I expected it to be NOVA_KEY_DIR=$(dirname $(readlink -f ${BASH_SOURCE})) | 14:10 |
notmyname | soren: I'm not too particular on the tag name. to me it makes sense to have a 1.4.0 tag at the time of the release branch | 14:10 |
ttx | soren: we should just avoid calling it "1.4.0" | 14:10 |
notmyname | ttx: you don't like the 1.4.0 tag because it could be different than the final 1.4.0 version? | 14:11 |
ttx | notmyname: because it *is* different. | 14:11 |
ttx | notmyname: at the very minimum the False becomes True. | 14:11 |
Daviey | soren: I don't really want to propose the change until I understand why pushd/popd was used. | 14:11 |
soren | Daviey: I completely understand. | 14:12 |
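For reference, soren's point (the `$(...)` already runs in a subshell, so `pushd`/`popd` add nothing over a plain `cd`) and Daviey's suggested `readlink -f` form can both be demonstrated with a scratch file standing in for `$BASH_SOURCE`; the paths and variable names here are only for illustration:

```shell
# Scratch file standing in for the sourced novarc.
mkdir -p /tmp/novarc_demo
touch /tmp/novarc_demo/novarc
SRC=/tmp/novarc_demo/novarc

# Since $(...) is already a subshell, the pushd/pwd/popd dance in novarc
# is equivalent to a plain cd followed by pwd:
KEY_DIR_SUBSHELL=$(cd "$(dirname "$SRC")" && pwd)

# Daviey's suggested form: canonicalize the file, then take its directory.
KEY_DIR_READLINK=$(dirname "$(readlink -f "$SRC")")

echo "$KEY_DIR_SUBSHELL"
echo "$KEY_DIR_READLINK"
```

The two differ only when symlinks are involved: `readlink -f` resolves them, while `cd`/`pwd` keeps the logical path.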
notmyname | ttx: soren: so why don't we just use the commit/rev number in the hint? | 14:12 |
notmyname | and not worry about the tags? | 14:12 |
*** foxtrotdelta has joined #openstack-dev | 14:13 | |
ttx | notmyname: would work, since you commit it after you know the revno, really. | 14:13 |
soren | Ok, so the use case is people wanting to see what changed on trunk since the release branch was cut? | 14:13 |
soren | Is that it? | 14:13 |
notmyname | yes | 14:13 |
soren | Sorry, I'm having trouble following this. | 14:13 |
soren | Well, bzr does have magic for that. | 14:14 |
soren | bzr log -r ancestor:lp:swift/milestone | 14:14 |
soren | or whatnot. | 14:14 |
*** foxtrotgulf has quit IRC | 14:14 | |
ttx | soren: hmm | 14:14 |
soren | Shows revisions since the most recent common ancestor of your current working tree and the given branch. | 14:15 |
soren | So it's certainly possible already. | 14:15 |
ttx | except you don't have lp:swift/milestone ? | 14:15 |
soren | Yes. | 14:15 |
soren | Er.. | 14:15 |
soren | Yet. | 14:15 |
soren | There will be a release/milestone/whatever branch. | 14:15 |
soren | Surely? | 14:15 |
soren | Otherwise I have no clue what we're talking about? | 14:16 |
ttx | there will be a release branch, yes. But it will not be named lp:swift/milestone | 14:16 |
soren | Whatever. | 14:17 |
ttx | ok | 14:17 |
soren | bzr log -r ancestor:lp:swift/something_else_then | 14:17 |
ttx | bzr log -r ancestor:lp:~hudson-openstack/swift/release-branch | 14:17 |
soren | I sure hope that's going to have a lp:swift/something alias? | 14:17 |
ttx | soren: you can create any alias you want ? | 14:18 |
soren | Anyways, that's hardly the point right now. | 14:18 |
soren | ttx: Create a fake series, set it as its trunk. | 14:18 |
soren | ttx: People do it all the time. | 14:18 |
ttx | ew. | 14:18 |
ttx | agreed on it not being the problem. | 14:18 |
soren | What is the purpose of getting this log? | 14:19 |
soren | Just trying to understand.. | 14:19 |
notmyname | so when can the 1.4.1, false branch land? we can solve the commit log stuff later, I think | 14:19 |
ttx | notmyname: would that work for you ? Instructions on getting the differences since the release branch was cut (bonus points for not having to change the instructions at each milestone :) | 14:19 |
notmyname | soren: the changelog is human readable and updated at release time. the commit log is for those who want to know what is in the next release before we actually update the changelog | 14:20 |
soren | If someone could help me understand why this is important, that would be really great. When would you use it? | 14:20 |
soren | Hmm. ok. | 14:20 |
ttx | soren: can jenkins create the release branch from trunk now ? | 14:21 |
soren | ttx: Jenkins won't be creating it. | 14:21 |
soren | ttx: It's a one-off, so it'll be manual. | 14:21 |
ttx | I mean, the LP jenkins user. | 14:21 |
soren | Oh, him. | 14:21 |
*** rnirmal has joined #openstack-dev | 14:21 | |
soren | Err.. | 14:21 |
soren | Sure. | 14:21 |
soren | I really need to go now, though. :( | 14:22 |
soren | mtaylor: Around? | 14:22 |
soren | Is it really important that it happens now/today? | 14:22 |
ttx | soren: we can cut it afterwards from rev 301 even if rev 302 is in trunk, right | 14:22 |
soren | It's possible to branch off an older revision, you know :) | 14:22 |
ttx | soren: right, that's what I meant :) | 14:22 |
soren | Yeah :) | 14:23 |
ttx | soren: so actually notmyname can commit the (1.4.1, False) now | 14:23 |
soren | I agree it'd be simpler, but I really, really have to leave. | 14:23 |
soren | I should have left half an hour ago :) | 14:23 |
soren | Yeah. | 14:23 |
ttx | notmyname: go for it. We'll create the release branch from the last-revision-before-that-one | 14:23 |
dabo | anyone familiar with running the volume service? Specifically, the requirement for the 'vgs' program? | 14:24 |
ttx | notmyname: we need to work on Monday on the PPA and jenkins jobs attached to the release branch anyway | 14:24 |
notmyname | ttx: done | 14:24 |
ttx | notmyname: congratulations, you now have an always open trunk. | 14:24 |
notmyname | I won't be available on monday (if I'm included in the "we") | 14:25 |
ttx | notmyname: you're not. | 14:25 |
notmyname | ok :-) | 14:25 |
notmyname | yay | 14:25 |
notmyname | ttx: thanks for the help | 14:25 |
ttx | notmyname: basically now you can signal the trunk is ready for release-branch by switching to (1.4.n,False) to (1.4.n+1,False) | 14:26 |
ttx | I mean from...to | 14:26 |
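The (version, final) convention ttx describes, where trunk carries (1.4.n, False) and bumping to (1.4.n+1, False) signals that a release branch can be cut, can be sketched as a tiny helper; the `version_string` name and the "-dev" rendering are illustrative assumptions, not swift's actual code:

```python
# A (version, final) pair renders as a plain version on a release branch
# and a "-dev" development version on an always-open trunk.
def version_string(version, final):
    """Render a (version, final) pair as a version string."""
    return version if final else version + '-dev'

# Trunk just after the 1.4.0 release branch was cut:
trunk = version_string('1.4.1', False)    # development version on trunk
release = version_string('1.4.0', True)   # what the release branch ships
```

Flipping only the boolean on the release branch is what makes the "at the very minimum the False becomes True" distinction above: trunk and release never share an identical version string.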
notmyname | I have only one more request: ttx, soren, mtaylor(, myself?) -- please write this down so we don't forget what needs to be done at each release | 14:26 |
*** jkoelker has joined #openstack-dev | 14:26 | |
ttx | indeed. I plan to document this very thoroughly | 14:26 |
ttx | the PDF drawing is just the first part in a series :) | 14:27 |
annegentle | ttx: looks great - thanks for the refresh on the front page. | 14:27 |
ttx | annegentle: it's not refreshed yet | 14:27 |
notmyname | ttx: ya, I think it will be dev work, changelog, version bump, dev work, etc (where the release is cut after every version bump) | 14:27 |
ttx | annegentle: it's just proposed, if it's ok I can switch StartingPage to it | 14:28 |
ttx | annegentle: then we can also separate the InstallInstructions per-project. That's my next step :) | 14:28 |
* ttx bbls | 14:29 | |
notmyname | swift 1.4.0 is scheduled for release on Tuesday. trunk is open for 1.4.1 dev work | 14:31 |
openstackjenkins | Project swift build #270: SUCCESS in 27 sec: http://jenkins.openstack.org/job/swift/270/ | 14:31 |
openstackjenkins | Tarmac: Switch Swift trunk to 1.4.1, now that the 1.4.0 release branch is branched out. | 14:31 |
sandywalsh | python-novaclient 2.4.3 available on PyPi http://pypi.python.org/pypi/python-novaclient (adds POST /zones/select command for cross-zone scheduling) | 14:48 |
*** antonyy has joined #openstack-dev | 14:56 | |
ttx | annegentle: so should I just push the new page ? | 14:56 |
annegentle | ttx: yeah, go right ahead | 14:57 |
annegentle | you're so literal with my use of "refresh" I like it. :) | 14:58 |
*** dragondm has joined #openstack-dev | 14:59 | |
*** foxtrotdelta has quit IRC | 15:03 | |
*** foxtrotgulf has joined #openstack-dev | 15:04 | |
*** salv-orlando has quit IRC | 15:09 | |
*** blamar has quit IRC | 15:11 | |
*** markwash has quit IRC | 15:11 | |
*** markwash has joined #openstack-dev | 15:12 | |
*** blamar has joined #openstack-dev | 15:12 | |
*** koolhead17 has joined #openstack-dev | 15:17 | |
annegentle | mtaylor: you around? Q for you on some doc automation | 15:18 |
dabo | Hey, anyone familiar with running the volume service? Specifically, the requirement for the 'vgs' program? | 15:31 |
*** dysinger has joined #openstack-dev | 15:39 | |
*** koolhead17 has left #openstack-dev | 15:43 | |
rnirmal | dabo: nothing beyond that 'vgs' is used to get info on host LVM volume groups, and that we need it to run by default. I haven't actually used the volume service to provision any volumes | 15:46 |
dabo | rnirmal: I was also wondering how to install it. 'sudo apt-get install vgs' comes up empty | 15:47 |
dabo | it didn't come standard on maverick | 15:47 |
rnirmal | I think it's part of lvm2 | 15:49 |
rnirmal | dabo: just checked, yeah it's part of lvm2 | 15:49 |
dabo | rnirmal: thanks - I'll grab that and see if I can make some progress | 15:50 |
rnirmal | cool, you may need to run vgcreate at some point | 15:50 |
dabo | rnirmal: ok. guess I have some reading up to do first. :) | 15:51 |
markwash | jaypipes: thanks for leading the whole pagination thing! it's going to feel great to remove the "blocked" stickers from the team titan taskboard | 15:58 |
jaypipes | markwash: no probs! just finishing up a ML post that summarizes the conclusions. | 15:59 |
*** Tv has joined #openstack-dev | 16:09 | |
*** foxtrotgulf has quit IRC | 16:21 | |
*** foxtrotgulf has joined #openstack-dev | 16:21 | |
jaypipes | dabo: what, too mean? :) | 16:29 |
dabo | jaypipes: positively sinister! | 16:29 |
jaypipes | hehe | 16:29 |
* jaypipes cackles | 16:29 | |
*** zaitcev has joined #openstack-dev | 16:31 | |
mtaylor | annegentle: herro | 16:44 |
mtaylor | annegentle: (I suck - totally dropped the ball on your docs automation - putting it high up on my list right now) | 16:46 |
*** bcwaldon has joined #openstack-dev | 16:53 | |
glenc | jaypipes - isn't there a movement to deprecate the Nova /images resource and encourage people to use Glance directly? | 16:57 |
sandywalsh | jaypipes, you on skype? | 16:57 |
*** adiantum has quit IRC | 16:59 | |
jaypipes | glenc: /images in Nova is a kind of wrapper around Glance's /images endpoint. Big difference: Glance doesn't XML, Nova does ;) | 17:08 |
jaypipes | sandywalsh: yeppers. jaypipes. | 17:08 |
glenc | jaypipes: i'm getting requests for enhancements to Nova's /images and it worries me; seems like we need to reconcile these | 17:09 |
annegentle | mtaylor: ok, cool, then I can haz doc automation! Whoopie. | 17:09 |
sandywalsh | jaypipes, got a sec to talk about this api thingy? might be faster? | 17:09 |
jaypipes | sandywalsh: surely. | 17:09 |
jaypipes | glenc: want to chat about it? | 17:09 |
annegentle | mtaylor: another automation item that isn't in that email thread is to create swift.openstack.org/1.3 - automate the archiving of the point releases on the RST sites. Can we do that? | 17:10 |
jaypipes | glenc: skype as soon as I finish with sandywalsh? | 17:10 |
glenc | sure | 17:10 |
*** jdurgin has joined #openstack-dev | 17:10 | |
jaypipes | glenc: or cell me (614 406 1267) | 17:10 |
glenc | let me know when you're available | 17:10 |
mtaylor | annegentle: yes to both! | 17:10 |
jaypipes | glenc: will do. | 17:11 |
sandywalsh | annegentle, I'm working on dev-level docs for zones/dist-scheduler ... should that be in a separate file or can it go as an addendum to the admin zones docs? | 17:12 |
annegentle | mtaylor: sweet. | 17:13 |
jaypipes | glenc: should we bring erik carlin into the conversation, too? I've been having a private email thread with him about the /images endpoint | 17:13 |
annegentle | sandywalsh: thinking... | 17:13 |
sandywalsh | annegentle, I'm thinking separate. Is there a particular format to follow? | 17:13 |
*** adiantum has joined #openstack-dev | 17:13 | |
annegentle | sandywalsh: yeah, I'm thinking separate too, though still next to each other in the index.rst | 17:14 |
mtaylor | annegentle: would you mind sending me an email with just a small little pseudo-code-ish description of what you want archived to where? | 17:14 |
glenc | jaypipes let's see if he's available | 17:14 |
annegentle | mtaylor: sure, and the "when" is part of it too, as in, what triggers an archive and how long do archives last. | 17:14 |
mtaylor | annegentle: yes | 17:15 |
annegentle | sandywalsh: I think it'd fit in here: http://nova.openstack.org/devref/index.html | 17:16 |
annegentle | sandywalsh: and then if you don't mind, I'll take what you write, edit for sysadmins, and put it on docs.openstack too | 17:16 |
sandywalsh | annegentle, sounds like a plan ... stay tuned | 17:16 |
annegentle | sandywalsh: cool. I found an RST > DocBook online tool (pandoc) and I found it's a great starting point, but I'll still have to ask you questions as if I were a sysadmin to round out the topics | 17:17 |
sandywalsh | annegentle, no problem at all | 17:18 |
*** ecarlin has joined #openstack-dev | 17:22 | |
*** RobertLaptop has quit IRC | 17:26 | |
sandywalsh | annegentle, 9pm CST is going to be tough for EU | 17:27 |
annegentle | sandywalsh: yeah, but in my docs-list of about 35 members, more are in Asia than the EU. | 17:28 |
annegentle | but yeah, we'll adjust as needed, gotta start with somethin' | 17:28 |
jaypipes | sandywalsh: on phone with glenc and ecarlin right now... | 17:29 |
*** dragondm has quit IRC | 17:29 | |
BK_man_ | vishy: around? | 17:30 |
BK_man_ | vishy: what did you mean under "NOTE(vish): if with_lockmode isn't supported, as in sqlite, then this has concurrency issues"? nova/db/sqlalchemy/api.py line 1163? | 17:32 |
vishy | i meant exactly what it says | 17:33 |
vishy | sqlite doesn't support with_lockmode | 17:33 |
BK_man_ | vishy: it happens for me even on MySQL: http://paste.openstack.org/show/1450/ | 17:33 |
BK_man_ | vishy: doesn't matter which backend I'm using (tried with MyISAM and InnoDB) | 17:34 |
BK_man_ | vishy: instance 2 always (smile here: tried two times) fails to start | 17:34 |
BK_man_ | vishy: on different hosts (separate cc node and two compute nodes) | 17:35 |
vishy | i don't see how that error is related? | 17:35 |
sandywalsh | jaypipes, no panic ... ping me when you're free | 17:37 |
BK_man_ | I'm trying to start 200 or more instances using several threads concurrently on an empty installation. And the second instance will fail with the error in the trace above. | 17:37 |
BK_man_ | vishy: do we need another lock? | 17:37 |
sandywalsh | annegentle, gotcha | 17:37 |
vishy | BK_man_: does instance 2 have a different project? | 17:37 |
vishy | BK_man_: hmm I've never seen that happen | 17:38 |
BK_man_ | vishy: all other N-1 instances are starting OK. Same net (/16), same project | 17:38 |
BK_man_ | vishy: Yep, indeed. That's why we (Grid Dynamics) exist - "Scaling mission critical systems" :-) | 17:38 |
BK_man_ | vishy: Let me try it again and run my scenario again to see if instance #2 fails again with the same error | 17:39 |
*** RobertLaptop has joined #openstack-dev | 17:39 | |
BK_man_ | vishy: stats from 2nd try: | 17:40 |
BK_man_ | # ./stat.sh | 17:40 |
BK_man_ | Fri May 27 10:40:01 PDT 2011 | 17:40 |
BK_man_ | 399 running | 17:40 |
BK_man_ | 1 shutdown | 17:40 |
vishy | hmm | 17:40 |
vishy | ah yes concurrency could give you an error there | 17:40 |
BK_man_ | vishy: have an idea how to fix? | 17:41 |
vishy | sure | 17:41 |
BK_man_ | # ./pping | 17:42 |
BK_man_ | $VAR1 = { | 17:42 |
BK_man_ | 'gd-4' => 191, | 17:42 |
BK_man_ | 'gd-3' => 208 | 17:42 |
BK_man_ | }; | 17:42 |
BK_man_ | pingable: 399 | 17:42 |
* BK_man_ going to 3rd try | 17:43 | |
vishy | i'm guessing that you have only one network | 17:45 |
vishy | i can see a way to fix it in the case of exactly one nova-network | 17:46 |
vishy | fixing it more generally is a little annoying | 17:46 |
BK_man_ | vishy: yep, one network at the moment | 17:48 |
vishy | hmm i think if you have multiple networks it will work | 17:49 |
vishy | because it will fail with an IntegrityError instead of NoMoreNetworks | 17:49 |
vishy | BK_man_: the only fix i can think of can still race in some situations | 17:52 |
vishy | BK_man_: so I think a second network is probably the easiest fix | 17:53 |
BK_man_ | vishy: 3rd try, this time instance 5 failed with same error | 17:54 |
vishy | just create more than one when you use nova-manage network create | 17:54 |
BK_man_ | vishy: ok, will try | 17:54 |
BK_man_ | vishy: it looks like that error is gone. Need to repeat scenario at least several times to be 99.99% sure :) | 18:11 |
vishy | :) | 18:11 |
vishy | dprince: ping | 18:12 |
dprince | vishy: hello sir | 18:15 |
vishy | dprince: i would like to be able to re-run a couple smokestack tasks | 18:16 |
vishy | dprince: possible to get a login? | 18:16 |
dprince | vishy: One sec and I'll get you a login. | 18:17 |
BK_man_ | vishy: nice, another bug. Got dnsmasq running on 2nd network -> instances on 1st network can't get an IP address. | 18:17 |
BK_man_ | vishy: but NoMoreNetworks error has gone :) | 18:18 |
vishy | dprince: btw have you looked at kmitchell's testing stuff? | 18:18 |
vishy | BK_man_: um, nothing should actually be getting the second network... | 18:18 |
BK_man_ | vishy: should my two networks share br100 or not? | 18:19 |
vishy | BK_man_: it shouldn't matter, nothing should get allocated the second network because it should be throwing IntegrityError | 18:19 |
dprince | vishy: not closely. I'd like to take advantage of that too. Most of the smokestack stuff has been me sort of hacking after hours, etc. I'd like to get a little more sprint time on it. Working with the guys on the team to make that possible. :) | 18:19 |
BK_man_ | vishy: there is no Integrity error (checked all three hosts with fgrep Integrity /var/log/nova/*) | 18:22 |
vishy | BK_man_: IntegrityError is caught, so it won't show up in the logs | 18:23 |
vishy | BK_man_: hmm, perhaps network host is getting set, in any case, your networks should have different bridges | 18:24 |
vishy | how are you creating them? | 18:24 |
BK_man_ | vishy: two runs of nova-manage | 18:24 |
BK_man_ | vishy: nova-manage network create 10.20.0.0/16 1 65530 | 18:25 |
dprince | vishy: check your email | 18:25 |
BK_man_ | vishy: and nova-manage network create 10.30.0.0/24 1 255 | 18:25 |
BK_man_ | vishy: got following in MySQL | 18:26 |
BK_man_ | vishy: http://paste.openstack.org/show/1452/ | 18:26 |
vishy | BK_man_ yes don't do it that way | 18:26 |
vishy | BK_man_: this is vlan mode right? | 18:26 |
BK_man_ | vishy: yep | 18:27 |
vishy | you could manually change the bridge | 18:27 |
vishy | but something like | 18:28 |
BK_man_ | vishy: so, why should I use nova-manage network create only once? | 18:28 |
vishy | BK_man_: it is designed to create multiple networks of the same size and automatically up the bridge and vlan for each one | 18:28 |
vishy | so the general way you would use it is something like | 18:29 |
vishy | nova-manage-network create 10.20.0.0/16 8 256 | 18:29 |
vishy | which would create 8 /24s for projects | 18:29 |
vishy | (ignoring the extra - that i put in there :) | 18:30 |
BK_man_ | ok | 18:30 |
*** agarwalla has joined #openstack-dev | 18:30 | |
BK_man_ | will test on Monday, it's 10:30PM in Moscow, need to go home now | 18:30 |
BK_man_ | everybody in US - have a nice Memorial Day weekend! | 18:31 |
vishy | dprince: nifty | 18:32 |
*** BK_man_ is now known as BK_man[away] | 18:32 | |
dprince | dprince: You checking out smokestack? | 18:32 |
dprince | dprince: likes to talk to himself. | 18:32 |
dprince | vishy: feel free to fire away on branches. | 18:33 |
*** adiantum has quit IRC | 18:33 | |
dprince | vishy: I'd like to get keystone and swift in there too. | 18:33 |
dprince | vishy: I'd just need the Chef cookbooks for it and that is pretty much it I think. | 18:33 |
vishy | dprince: awesome, we are doing a similar type of thing using nova.sh which is running everything | 18:33 |
vishy | including keystone | 18:34 |
dprince | vishy: Yeah. I saw the radioedit thing. Cool idea. | 18:34 |
vishy | dprince: I have one possibly two new networking modes to hack in :) | 18:36 |
dprince | vishy: I'm happy to add in the configuration stuff. Obviously we are somewhat limited in what we can actually do in the cloud. Bare metal would give more flexibility. | 18:37 |
dprince | vishy: Also. As far as the radioedit stuff we do lots of our development in a similar manner on titan (spinning up a server in the cloud with everything on it) using a Openstack VPC. Kind of like vagrant for the cloud. | 18:39 |
vishy | gotcha | 18:39 |
*** Tv has quit IRC | 18:42 | |
*** dragondm has joined #openstack-dev | 18:51 | |
*** antonyy has quit IRC | 19:06 | |
*** koolhead17 has joined #openstack-dev | 19:12 | |
*** koolhead17 has left #openstack-dev | 19:18 | |
vishy | sandywalsh: ping | 19:37 |
sandywalsh | hey! | 19:37 |
sandywalsh | vishy, hey! | 19:38 |
vishy | sandywalsh: hi, I wanted to discuss some things with you regarding tasks code I've been working on | 19:38 |
*** markvoelker has quit IRC | 19:39 | |
sandywalsh | vishy, sure thing | 19:39 |
vishy | sandywalsh: essentially I'm trying to allow tracking of tasks that need to be sent from api to workers | 19:40 |
vishy | sandywalsh: allowing them to be re-run | 19:40 |
sandywalsh | vishy, hmm, tricky | 19:41 |
vishy | sandywalsh: i have it working, but I'm a little unsure as to how it would work with the distributed scheduler | 19:41 |
sandywalsh | vishy, I didn't think there was a requirement of idempotence with the workers ... is there? | 19:41 |
vishy | sandywalsh: the basic idea is that if a task to create volume is run again, that it is sent to the original host | 19:42 |
vishy | sandywalsh: there isn't currently. This would require changing that. I think it is the right way to go | 19:42 |
sandywalsh | vishy, well, with the request_id's I think we may be in good shape (don't reuse one) | 19:43 |
dabo | vishy: the original request would not go to the same host. The internally-generated request would, except that it's not preserved | 19:43 |
vishy | sandywalsh: ah interesting...so if we 'persist' request id then it will go to the same host? | 19:44 |
vishy | dabo: ^^ | 19:44 |
dabo | vishy: no | 19:44 |
*** dprince has quit IRC | 19:44 | |
dabo | since it will now try to figure the best host again | 19:44 |
dabo | we'd need to persist the request and the internally-generated data | 19:45 |
sandywalsh | vishy, hmm <thinking> there are two places where retry could be a problem: 1. The request from compute.api to the Scheduler and 2. the request from the Scheduler to the Compute hosts. | 19:45 |
vishy | so my strategy in the simple version of schedule is to simply persist the host in the task and if it is set, don't run the driver code that tries to find a host... | 19:46 |
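Annotator's sketch of the strategy vishy describes: persist the chosen host on the task so a re-run is routed to the original host instead of re-running the scheduler driver. `FakeDriver`, `cast_to_host`, and `schedule` are hypothetical stand-ins, not actual nova APIs.

```python
class FakeDriver:
    """Stand-in for a scheduler driver that picks a host."""
    def __init__(self):
        self.calls = 0

    def pick_host(self, request):
        self.calls += 1
        return 'host-%d' % self.calls

def cast_to_host(host, request):
    # Stand-in for an rpc.cast to the chosen host's topic.
    return (host, request)

def schedule(task, driver, request):
    host = task.get('host')
    if host is None:
        # First run: let the driver pick a host, then persist it.
        host = driver.pick_host(request)
        task['host'] = host
    # Re-runs skip the driver and go straight to the original host.
    return cast_to_host(host, request)

driver = FakeDriver()
task = {}
first = schedule(task, driver, {'volume': 'v1'})
again = schedule(task, driver, {'volume': 'v1'})
```
Both calls land on the same host, and the driver's placement logic only ran once, which is what makes the retry safe.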
dabo | vishy: yeah, something like that. The "build plan" that's created has the selected host information | 19:47 |
sandywalsh | vishy, we may have to track state on the request_id (started, planning, executing, etc) | 19:47 |
*** yamahata_lt has quit IRC | 19:48 | |
vishy | sandywalsh: the way the tasks are implemented currently, you can update the task with data, and at any point in the call chain you have access to a kwarg called progress which has the last updated data | 19:48 |
vishy | task.update({'status': 'xxxx'}) for example | 19:49 |
sandywalsh | vishy, ah, so the Task becomes my state machine data | 19:49 |
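A rough guess at the Task interface being described, for context: `update()` merges new data, and downstream calls receive the latest snapshot as a `progress` kwarg. This is a sketch of the shape of the idea, not the code in lp:~vishvananda/nova/tasks.

```python
class Task:
    """Hypothetical task record whose data doubles as state-machine state."""
    def __init__(self, task_id):
        self.task_id = task_id
        self._data = {}

    def update(self, data):
        self._data.update(data)

    @property
    def progress(self):
        # Snapshot of the last updated data.
        return dict(self._data)

def create_volume(task, size, progress=None):
    task.update({'status': 'creating', 'size': size})
    # Pass the current snapshot down the call chain.
    return export_volume(task, progress=task.progress)

def export_volume(task, progress=None):
    task.update({'status': 'exported'})
    # The kwarg holds the caller's last-updated view of the task.
    return progress['status']

t = Task('req-123')
result = create_volume(t, 10)
```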
sandywalsh | vishy, sounds like you're reinventing a BPM | 19:49 |
vishy | sandywalsh: perhaps, I tried to keep it super simple | 19:50 |
vishy | the funny thing is as i'm working on these changes i think there are some parts of the system we can cut out | 19:50 |
sandywalsh | vishy, you may want to consider https://github.com/knipknap/SpiffWorkflow | 19:50 |
bcwaldon | win 3 | 19:51 |
bcwaldon | :( | 19:51 |
vishy | for example, I'm removing db code from the manager | 19:52 |
sandywalsh | vishy, well, if you consider soren's idea for eliminating the scheduler completely, yes :) | 19:52 |
vishy | suddenly the manager doesn't really do anything useful | 19:52 |
sandywalsh | which manager? | 19:52 |
vishy | well i started with VolumeManager | 19:52 |
vishy | but essentially without db code it just proxies calls into the driver | 19:53 |
vishy | I also am starting to think that there is no real need for scheduler to run as a separate daemon | 19:53 |
sandywalsh | right, the managers can go away if we just spawn the driver directly | 19:53 |
vishy | or be shared across different components | 19:53 |
vishy | i think volume.api should be renamed to volume.supervisor and be running a scheduler in process | 19:54 |
sandywalsh | I'd agree with that, but it puts a burden on the API servers (if that's an issue) | 19:54 |
sandywalsh | ah, move the scheduler into the worker | 19:54 |
vishy | then we need a separate volume api wsgi app | 19:55 |
vishy | (if we want to scale api's separately from supervisors) | 19:55 |
vishy | but honestly I don't know if it is horrible to have them together | 19:55 |
vishy | the worker could just be a VolumeDriver | 19:56 |
vishy | supervisor makes calls directly to driver via rpc | 19:56 |
sandywalsh | the only issue I see (at first blush, may not be an issue) is the ZoneManager in the schedulers which store data in-memory. If the servers are flakey ... | 19:56 |
vishy | and driver is idempotent | 19:56 |
sandywalsh | did you read soren's idea for using amqp as the scheduler (topic per request type)? | 19:57 |
vishy | re-architecting like this is starting to become challenging | 19:57 |
sandywalsh | yeah, it's getting very rigid in there now | 19:58 |
vishy | i did, i think it is reasonable but perhaps doesn't cover all use cases | 19:58 |
sandywalsh | I think the idea of a state machine library/service/whatever is key at this stage. Love the idea of it. Understanding the requirements for one is the challenge | 19:59 |
vishy | sandywalsh: if you get a chance, take a look at lp:~vishvananda/nova/tasks | 20:00 |
sandywalsh | if we can enforce idempotence using a state machine, then we can set ourselves up for refactorings like removing the managers, etc. | 20:00 |
sandywalsh | (will do) | 20:00 |
vishy | it is a pretty simple implementation but if we can enforce idempotency it will give us a lot of recovery options | 20:00 |
vishy | I don't like the way it interacts with the scheduler but aside from that it is pretty cool | 20:01 |
sandywalsh | yup, I've been saying BPM for a long time now (just not a heavy weight one) | 20:01 |
vishy | see esp. test_task | 20:01 |
sandywalsh | like dabo said, if we persist the build-plan, add a state machine and key off the request ID, we can make it idempotent. | 20:03 |
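A toy version of the idea sandywalsh summarizes here: persist a state machine keyed off the request ID so a replayed request never repeats completed work. The states and the `RequestTracker` store are illustrative assumptions, not nova code.

```python
# Forward-only request states, per sandywalsh's "started, planning,
# executing, etc." suggestion (names here are made up).
STATES = ('received', 'planned', 'executing', 'done')

class RequestTracker:
    """Persists per-request state and build plan, keyed by request_id."""
    def __init__(self):
        self._store = {}  # request_id -> {'state': ..., 'build_plan': ...}

    def advance(self, request_id, state, **data):
        rec = self._store.setdefault(request_id, {'state': 'received'})
        # Only move forward; replaying an already-reached state is a no-op,
        # which is what makes retries idempotent.
        if STATES.index(state) > STATES.index(rec['state']):
            rec['state'] = state
            rec.update(data)
        return rec

tracker = RequestTracker()
tracker.advance('req-1', 'planned', build_plan={'host': 'vol-3'})
# A retry replays the same step; the persisted plan is kept.
rec = tracker.advance('req-1', 'planned', build_plan={'host': 'vol-9'})
```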
vishy | i wonder if we could consider task_id == request_id | 20:04 |
dabo | sandywalsh: vishy: not sure about the case for multiple instances, but for any given instance, it seems possible | 20:04 |
vishy | agreed, multiple instances might be a challenge :) | 20:04 |
sandywalsh | no reason resource_id couldn't be task_id, I think. | 20:05 |
sandywalsh | vishy, the spiffworkflow is a lean and mean little library | 20:07 |
vishy | sandywalsh: ok this will require some changes later. I'm thinking that no-db changes should go in first | 20:07 |
sandywalsh | do you mean remove the db from the manager first? | 20:08 |
markwash | rpc-multicall folks: ping. . . lp789205 | 20:10 |
sandywalsh | vishy, I have to run, but I'll have a closer look over the wkend | 20:10 |
vishy | ok | 20:10 |
markwash | uvirtbot: halp? | 20:10 |
uvirtbot | markwash: Error: "halp?" is not a valid command. | 20:10 |
vishy | markwash: huh? | 20:11 |
markwash | uvirtbot: lp:789205 | 20:11 |
uvirtbot | markwash: Error: "lp:789205" is not a valid command. | 20:11 |
vishy | bug 789205 | 20:11 |
uvirtbot | Launchpad bug 789205 in nova "test_service.py tests fail because of incompatible use of mox" [Undecided,New] https://launchpad.net/bugs/789205 | 20:11 |
vishy | ah yes | 20:11 |
markwash | uvirtbot: why do you hate me? | 20:11 |
uvirtbot | markwash: Error: "why" is not a valid command. | 20:11 |
vishy | I think we should just require the new mox | 20:11 |
bcwaldon | vishy: so I don't know enough about the pip-requires business to know why it looks like we have mox 0.5.0 listed twice | 20:16 |
bcwaldon | vishy: can you explain this? | 20:16 |
vishy | merge error | 20:19 |
vishy | :) | 20:19 |
bcwaldon | Ah, can the url go away, and just leave the mox==...? | 20:19 |
*** antonyy has joined #openstack-dev | 20:19 | |
*** adiantum has joined #openstack-dev | 20:20 | |
markwash | bcwaldon: I think so. . 0.5.3 is definitely on pypi | 20:21 |
bcwaldon | markwash: thanks, I should have just gone with it. No need for you guys to do the work for me :) | 20:22 |
*** troytoman is now known as troytoman-away | 20:24 | |
*** adiantum has quit IRC | 20:32 | |
vishy | just mox without a version number is fine | 20:34 |
vishy | or we could say >0.5.3 | 20:34 |
bcwaldon | vishy: good, that's exactly what I just merge prop'd | 20:34 |
*** foxtrotgulf has quit IRC | 20:36 | |
*** ameade has quit IRC | 20:36 | |
*** antonyy has quit IRC | 20:38 | |
*** antonyy has joined #openstack-dev | 20:39 | |
vishy | comstud: why not if text[-1] != '\n' instead of if text[len(text)-1:] != '\n': ? | 20:41 |
markwash | comstud: oh also, sorry I forgot to mention that you don't need to add the '\n' if you use -A for the decryption | 20:43 |
comstud | ahh | 20:44 |
comstud | vishy: good call, just stupidness | 20:44 |
comstud | i can fix that up.. it's a bit redundant | 20:44 |
comstud | :) | 20:45 |
comstud | i'll test that it's even needed per mark | 20:45 |
comstud | cool, works without | 20:46 |
comstud | lot of work for such a simple issue ;) | 20:47 |
comstud | diff updating | 20:47 |
comstud | really i'd like to replace calling openssl with using pycrypto | 20:51 |
comstud | which is what i do in the unix guest agent | 20:51 |
comstud | but I didn't want to introduce another dependency right now | 20:51 |
vishy | comstud: um, did you mean to change dec to -d? | 20:52 |
comstud | yes | 20:52 |
vishy | comstud: ah nm, i was looking at red | 20:52 |
vishy | :) | 20:52 |
comstud | 2nd arg is now 'extra arguments to openssl' | 20:52 |
comstud | got it :) | 20:53 |
dabo | vishy: comstud: ...or you could use text.endswith("\n") | 20:54 |
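The three trailing-newline checks from this exchange, side by side. All agree on non-empty strings; the indexing form is the only one that raises on an empty string, which is worth noting when picking between them.

```python
text = 'secret-payload'

a = text[len(text) - 1:] != '\n'   # original, verbose slicing (safe for '')
b = text[-1] != '\n'               # vishy's version (IndexError on '')
c = not text.endswith('\n')        # dabo's version (also safe for '')
assert a == b == c
```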
*** bcwaldon has quit IRC | 20:54 | |
comstud | ah yeah.. i use that elsewhere...my mind is somewhere else today | 20:55 |
comstud | anyway, that code is gone | 20:55 |
*** adiantum has joined #openstack-dev | 20:56 | |
openstackjenkins | Project nova build #943: SUCCESS in 2 min 43 sec: http://jenkins.openstack.org/job/nova/943/ | 20:58 |
openstackjenkins | Tarmac: Fixes the bug introduced by rpc-multicall that caused some test_service.py tests to fail by pip-requiring a later version of mox | 20:58 |
* comstud & bbl | 21:01 | |
*** adiantum has quit IRC | 21:01 | |
jaypipes | markwash: around? | 21:05 |
jaypipes | markwash: I'm wondering your thoughts on jorge's links to the previous/next pages in the returned results set... | 21:05 |
openstackjenkins | Project nova build #944: SUCCESS in 2 min 43 sec: http://jenkins.openstack.org/job/nova/944/ | 21:18 |
openstackjenkins | Tarmac: When encrypting passwords in xenapi's SimpleDH(), we shouldn't send a final newline to openssl, as it'll use that as encryption data. However, we do need to make sure there's a newline on the end when we write the base64 string for decoding. Made these changes and updated the test. | 21:18 |
*** clayg has quit IRC | 21:19 | |
*** clayg has joined #openstack-dev | 21:20 | |
*** agarwalla has quit IRC | 21:54 | |
*** jkoelker has quit IRC | 22:29 | |
*** rnirmal has quit IRC | 22:30 | |
*** mattray has quit IRC | 22:36 | |
anotherj1sse | glenc: around? | 22:58 |
anotherj1sse | glenc / pvo - was looking at http://wiki.openstack.org/SystemUsageData, updating to use concept of tenant instead of project, and noted "account ID" is *not* the same as the existing project_id; there may be multiple projects under a single account id. | 23:00 |
anotherj1sse | I was under the impression that the auth changes would mean only tenant and user were sent into nova and other services | 23:00 |
anotherj1sse | if so, requiring usage data to be aggregated by account not tenant is incorrect | 23:00 |
*** dragondm has quit IRC | 23:23 | |
*** bcwaldon has joined #openstack-dev | 23:34 | |
*** antonyy has quit IRC | 23:37 | |
anotherj1sse | emailed list instead | 23:46 |
anotherj1sse | nite! | 23:46 |
openstackjenkins | Project swift build #271: SUCCESS in 30 sec: http://jenkins.openstack.org/job/swift/271/ | 23:46 |
openstackjenkins | Tarmac: Don't track names on PUT failure to avoid extra failures in GET/DELETE because those objects weren't stored in the cluster. | 23:46 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!