Friday, 2011-05-27

*** heckj has joined #openstack-dev  00:01
*** heckj has quit IRC  00:03
<comstud> I can't tomorrow  00:03
<comstud> and I'll be at castle next week  00:03
<comstud> have a wedding to go to tomorrow  00:03
<comstud> (fun)  00:03
<comstud> let's see... glance shared images is one thing we're blocked on  00:04
<comstud> by keystone, i mean authn/z  00:04
<anotherj1sse> comstud: I have public glance images working - not private images yet  00:04
<anotherj1sse> is that what you mean?  00:05
<comstud> i think i'm referring to private images mostly.  00:05
<comstud> being able to share images with certain privs  00:05
<comstud> uh, what else is in our backlog..  00:05
<anotherj1sse> comstud: awesome - i want that too -- when do you think you are getting to it?  00:06
<anotherj1sse> i'll probably get to adding admin-ness to the keystone integration for nova soon  00:07
<anotherj1sse> haven't looked at glance keystone integration yet  00:07
<anotherj1sse> glance client will probably need to be refactored to work with auth - since it doesn't take a token or anything  00:07
<comstud> sirp, jay, and johannes i know are working on generating a BP this sprint  00:07
<comstud> for shared images  00:07
<comstud> I don't know how much was dependent on auth  00:08
<comstud> (this sprint ends next thursday/friday for us)  00:08
<comstud> i'm blanking on the other things that required auth  00:08
<comstud> we had to defer a number of stories because of it in our sprint planning last fri  00:09
<comstud> one of them was 'glance integration with authn/z' certainly  00:10
<anotherj1sse> comstud: I'll probably be in san antonio a couple days next week  00:10
<anotherj1sse> we should hack at the castle ...  00:10
<comstud> Sure!  00:10
<comstud> I'll be hacking either way  00:10
<comstud> :)  00:10
<anotherj1sse> seems like we have similar focus on getting an end-to-end with glance,nova,glance,keystone  00:11
<comstud> I'm out there all week...  00:11
<comstud> i'll be sitting down near pvo and the team  00:11
<comstud> yeah, i have some commitments for this sprint, but I think those are about wrapped up and I can look at some future things  00:12
<comstud> ok, gotta take off for a bit  00:33
* comstud & bbl  00:33
*** jdurgin has quit IRC  00:36
*** cloudgroups has joined #openstack-dev  00:47
*** dragondm has quit IRC  01:04
*** cloudgroups has left #openstack-dev  01:13
<openstackjenkins> Project nova build #942: SUCCESS in 2 min 45 sec: http://jenkins.openstack.org/job/nova/942/  01:14
<openstackjenkins> Tarmac: Adds the ability to make a call that returns multiple times (a call returning a generator). This is also based on the work in rpc-improvements + a bunch of fixes Vish and I worked through to get all the tests to pass so the code is a bit all over the place.  01:14
<openstackjenkins> The functionality is being added to support Vish's work on removing worker access to the database, this allows us to write multi-phase actions that yield state updates as they progress, letting the frontend update the db.  01:14
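The pattern that merge describes — a call that "returns multiple times" by yielding state updates as a multi-phase action progresses — can be sketched roughly like this (a toy illustration, not the actual nova.rpc code; all names here are hypothetical):

```python
# Toy sketch of a multicall-style rpc pattern: the worker yields state
# updates as it progresses, and the caller consumes them as a generator,
# persisting each update (e.g. to the db) as it arrives.

def create_volume(size_gb):
    """Hypothetical multi-phase worker action yielding progress."""
    yield {'status': 'creating', 'size_gb': size_gb}
    yield {'status': 'exporting'}
    yield {'status': 'available'}

def multicall(method, **kwargs):
    """Stand-in for the rpc layer: relay each yielded update to the caller."""
    for update in method(**kwargs):
        yield update

if __name__ == '__main__':
    # The frontend sees every intermediate state, not just the final one.
    for state in multicall(create_volume, size_gb=10):
        print(state['status'])
```

The point of the design is that only the frontend touches the database; the worker just streams its progress back over rpc.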
*** mattray has joined #openstack-dev  02:10
*** mattray has quit IRC  02:11
*** antonyy has joined #openstack-dev  02:53
*** Zangetsue has joined #openstack-dev  03:44
*** kaz_ has joined #openstack-dev  03:49
*** Zangetsue has quit IRC  04:12
*** cloudgroups has joined #openstack-dev  04:33
*** cloudgroups has left #openstack-dev  04:33
*** Zangetsue has joined #openstack-dev  04:39
*** Tv has quit IRC  05:27
<dweimer> Has anyone run the swift proxy server code through a profiler? My test node cpus are spending ~75% of their time in user space during upload runs and I'm wondering what is taking up all of the cycles. SSL is off on the test node. With SSL on ~85% of the cpu time is in user space.  05:36
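One way to answer dweimer's question is the stdlib cProfile/pstats pair: wrap the hot path, then sort by cumulative time to see where the user-space cycles go. A minimal sketch — `handle_upload` here is a hypothetical stand-in for the proxy's request path, not swift code:

```python
import cProfile
import io
import pstats

def handle_upload(data):
    """Hypothetical stand-in for the proxy server's upload path."""
    return sum(data) % 257  # burn some user-space CPU

profiler = cProfile.Profile()
profiler.enable()
for _ in range(100):
    handle_upload(range(10000))
profiler.disable()

# Print the five functions consuming the most cumulative time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats('cumulative').print_stats(5)
print(out.getvalue())
```

For a long-running WSGI server the same idea applies: enable the profiler around request handling and dump stats periodically, rather than profiling the whole process from startup.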
*** zaitcev has quit IRC  06:05
<ttx> good morning!  06:40
*** cloudgroups has joined #openstack-dev  07:11
*** cloudgroups has left #openstack-dev  07:27
*** BK_man has quit IRC  09:41
*** BK_man_ has joined #openstack-dev  09:41
*** dysinger has quit IRC  10:05
*** yamahata_lt has joined #openstack-dev  10:06
*** kaz_ has quit IRC  10:13
*** cloudgroups has joined #openstack-dev  10:44
*** cloudgroups has left #openstack-dev  11:12
*** markvoelker has joined #openstack-dev  11:20
*** markvoelker has left #openstack-dev  11:20
*** salv-orlando has joined #openstack-dev  11:24
*** adiantum has joined #openstack-dev  11:35
*** yamahata_lt has quit IRC  11:57
*** yamahata_lt has joined #openstack-dev  11:58
*** Zangetsue has quit IRC  12:03
*** Zangetsue has joined #openstack-dev  12:03
*** dysinger has joined #openstack-dev  12:09
*** Zangetsue_ has joined #openstack-dev  12:13
*** Zangetsue has quit IRC  12:13
*** Zangetsue_ is now known as Zangetsue  12:13
*** salv-orlando has quit IRC  12:13
*** salv-orlando has joined #openstack-dev  12:13
*** dprince has joined #openstack-dev  12:19
*** ameade has joined #openstack-dev  12:29
*** Zangetsue has quit IRC  12:31
*** mattray has joined #openstack-dev  12:55
*** markwash_ has quit IRC  12:59
*** dysinger has quit IRC  13:08
*** troytoman-away is now known as troytoman  13:16
<jaypipes> anotherj1sse: that's blocked on keystone integration, yeah...  13:18
*** thatsdone has joined #openstack-dev  13:19
<jaypipes> ttx: morning  13:19
<ttx> jaypipes: yo  13:19
<ttx> jaypipes: soren and I spent a day figuring out how to handle versions around milestones  13:20
<jaypipes> ttx: looking at the pdf now.  13:20
<ttx> jaypipes: you can see the result at: http://ubuntuone.com/p/vjd/  13:20
<ttx> double-check you have the right one.  13:20
<jaypipes> ttx: sorry, yesterday, was sitting in a car dealership most of the day (bought a new car...)  13:20
<jaypipes> ttx: I *just* downloaded the pdf, so should be right.  13:21
<jaypipes> ttx: the last slide is for swift I assume?  13:22
<ttx> jaypipes: yes  13:22
<ttx> jaypipes: the trick was to version in a way that would allow jumping from PPA to PPA while preserving upgrade paths  13:23
<jaypipes> ttx: and the red "explosions" are the milestone tarballs? green explosions for release tarballs?  13:23
<ttx> the red explosions are the branching points  13:23
<ttx> (a decision point)  13:23
<jaypipes> ttx: gotcha.  13:23
<jaypipes> ttx: I like the metaphor of an explosion there. ;)  13:24
<jaypipes> ttx: actually, I never metaphor I didn't like.  13:24
* jaypipes turning into his dad... urgh.  13:24
*** thatsdone has quit IRC  13:30
<ttx> jaypipes: looking good from where you stand ?  13:31
<ttx> jaypipes: if yes, soren can enable it for Glance today.  13:31
<jaypipes> ttx: most definitely.  13:32
<ttx> jaypipes: cool. Two down, One to go.  13:32
*** foxtrotgulf has joined #openstack-dev  13:33
<soren> Err...  13:33
<soren> Does the failure here look familiar to anyone? https://code.launchpad.net/~soren/swift/versioning/+merge/62439  13:34
<soren> I can't seem to reproduce it.  13:34
<soren> notmyname: ^ ?  13:34
<notmyname> soren: no worries  13:34
<notmyname> I reapproved  13:34
<notmyname> it's a timing issue  13:34
<soren> It's common?  13:35
<notmyname> rare, but it's come up before, I believe  13:36
<notmyname> your branch just merged though  13:36
<creiht> I think that's only the second time that I have seen it happen  13:38
* gholt coughs  13:38
<creiht> hehe  13:38
<gholt> It's something we've been pestering dfg about for a while now. But yeah, it's pretty rare. :)  13:39
<notmyname> ttx: versioning branch landed. we'll merge the changelog and then should be good for the release branch  13:41
<ttx> notmyname: ack  13:41
* ttx waits with anxiety for soren's magic tarball script to spit a version number from that last merge  13:42
<ttx> annegentle: I tried to refactor the wiki starting page a bit, please see http://wiki.openstack.org/ProposedStartingPage  13:43
<soren> ttx: I've added a couple of unit tests to the script, so in theory, we should be in better shape now than ever :)  13:43
<ttx> soren: i'll still avoid breathing while it runs, just in case  13:44
<soren> ttx: Any second now.  13:45
<soren> ttx: The swift job apparently only polls trunk every 15 minutes.  13:45
<openstackjenkins> Project swift build #268: SUCCESS in 27 sec: http://jenkins.openstack.org/job/swift/268/  13:46
<openstackjenkins> Tarmac: Add a __canonical_version__ attribute to the swift module. This is only the numbered part of the version string, no "-dev" or similar suffixes.  13:46
<openstackjenkins> Add a couple of unit tests to make sure __version__ and __canonical_version__ are generated correctly and to ensure __canonical_version__ never accidentally has anything other than numbers and points in it.  13:46
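The invariant that merge describes — a canonical version that is the numbered part of the version string, with any "-dev"-style suffix stripped — has roughly this shape (a sketch of the idea, not the actual swift source):

```python
import re

def canonical_version(version):
    """Strip everything from the first character that is not a digit or dot.

    '1.4.1-dev' -> '1.4.1', '2011.3~d1' -> '2011.3', '1.4.0' unchanged.
    """
    return re.sub(r'[^0-9.].*', '', version)

# The safety property the unit tests enforce: only numbers and points.
assert re.match(r'^[0-9.]+$', canonical_version('1.4.1-dev'))
```

Keeping the canonical form purely numeric is what lets the packaging side append suffixes like `~20110527.301` and still sort versions correctly.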
<soren> ttx: mv dist/swift-1.4.0.tar.gz dist/swift-1.4.0~20110527.301.tar.gz  13:47
<soren> ttx: Looks good to me.  13:48
<soren> ttx: Now switch glance?  13:48
* ttx doublechecks with the PDF  13:48
<ttx> soren: indeed and yes  13:48
<soren> ttx: If I trigger a rebuild, I'll get a tarball that supersedes the current one. Shall I go for it?  13:49
<ttx> notmyname: let me know when you've got the changelog covered. I'll prepare a branch for doing (1.4.1, False) just after the release branch is set up  13:50
<soren> ttx: (2011.3~bzrXXXXX being less than 2011.3~dX~2011057....)  13:50
<ttx> soren: that's the beauty of it. yes you can  13:50
<notmyname> ttx: will do  13:50
<soren> ttx: mv dist/glance-2011.3.tar.gz dist/glance-2011.3~d1~20110527.134.tar.gz  13:50
<soren> ttx: Looks exactly right.  13:50
<ttx> soren: cool  13:51
<notmyname> ttx: branch approved...waiting on lp/jenkins  13:56
*** antonyy has quit IRC  13:56
<notmyname> soren: so if we want to tag 1.4.0 at this point, how does that work? ie, how does my local tag get into trunk?  13:57
<ttx> notmyname: thinking of it, I think jenkins needs to tag it, as the owner of the branch.  13:57
<ttx> notmyname: also since it's technically not "1.4.0" but rather "the last in the 1.4.0 series in trunk" you might want to use something other than "1.4.0" to tag ?  13:59
<notmyname> soren: as I was telling ttx earlier, I'm not as concerned with having a tag as having a simple way for users to get a commit log since the release  14:00
<ttx> (and leave the "1.4.0" tag for the release branch ?)  14:00
<notmyname> ttx: changelog merged. let me know when you are ready for your merge to land  14:00
<openstackjenkins> Project swift build #269: SUCCESS in 28 sec: http://jenkins.openstack.org/job/swift/269/  14:01
<openstackjenkins> Tarmac: changelog for 1.4.0 release  14:01
<ttx> notmyname: it's at https://code.launchpad.net/~ttx/swift/switch-trunk-to-1.4.1/+merge/62677 -- but maybe wait for the tag magic first  14:02
*** markvoelker has joined #openstack-dev  14:03
<ttx> soren: I'd either tag the last 1.4.0 "last-1.4.0" or the first 1.4.1 "1.4.1-dev"  14:03
<ttx> small preference for the second one, maybe  14:04
<Daviey> Does anyone know why (from the first commit) in nova, the novarc uses "NOVA_KEY_DIR=$(pushd $(dirname $BASH_SOURCE)>/dev/null; pwd; popd>/dev/null" ?  14:05
<soren> ttx, notmyname: Why would there be a 1.4.1-anything-at-all tag now?  14:08
*** jaypipes has quit IRC  14:08
<soren> notmyname: I think we should hold off tagging 1.4.0 until we actually make the release.  14:09
*** jaypipes has joined #openstack-dev  14:09
<soren> Daviey: That does look odd on many levels.  14:09
<ttx> soren: there used to be a hint in changelog for users to get a list of changes. "Run this bzr log command"  14:09
<ttx> soren: tagging the branch point could serve the same purpose. and would also be a useful bit of info in trunk.  14:10
<soren> Daviey: It's in a subshell, so why bother with pushd/popd? Why do it at all, and not just rely on dirname $BASH_SOURCE?... I don't know.  14:10
<Daviey> soren: I expected it to be NOVA_KEY_DIR=$(dirname $(readlink -f ${BASH_SOURCE}))  14:10
<notmyname> soren: I'm not too particular on the tag name. to me it makes sense to have a 1.4.0 tag at the time of the release branch  14:10
<ttx> soren: we should just avoid calling it "1.4.0"  14:10
<notmyname> ttx: you don't like the 1.4.0 tag because it could be different than the final 1.4.0 version?  14:11
<ttx> notmyname: because it *is* different.  14:11
<ttx> notmyname: at the very minimum the False becomes True.  14:11
<Daviey> soren: I don't really want to propose the change until i understand why pushd/popd was used.  14:11
<soren> Daviey: I completely understand.  14:12
<notmyname> ttx: soren: so why don't we just use the commit/rev number in the hint?  14:12
<notmyname> and not worry about the tags?  14:12
*** foxtrotdelta has joined #openstack-dev  14:13
<ttx> notmyname: would work, since you commit it after you know the revno, really.  14:13
<soren> Ok, so the use case is people wanting to see what changed on trunk since the release branch was cut?  14:13
<soren> Is that it?  14:13
<notmyname> yes  14:13
<soren> Sorry, I'm having trouble following this.  14:13
<soren> Well, bzr does have magic for that.  14:14
<soren> bzr log -r ancestor:lp:swift/milestone  14:14
<soren> or whatnot.  14:14
*** foxtrotgulf has quit IRC  14:14
<ttx> soren: hmm  14:14
<soren> Shows revisions since the most recent common ancestor of your current working tree and the given branch.  14:15
<soren> So it's certainly possible already.  14:15
<ttx> except you don't have lp:swift/milestone ?  14:15
<soren> Yes.  14:15
<soren> Er..  14:15
<soren> Yet.  14:15
<soren> There will be a release/milestone/whatever branch.  14:15
<soren> Surely?  14:15
<soren> Otherwise I have no clue what we're talking about?  14:16
<ttx> there will be a release branch, yes. But it will not be named lp:swift/milestone  14:16
<soren> Whatever.  14:17
<ttx> ok  14:17
<soren> bzr log -r ancestor:lp:swift/something_else_then  14:17
<ttx> bzr log -r ancestor:lp:~hudson-openstack/swift/release-branch  14:17
<soren> I sure hope that's going to have a lp:swift/something alias?  14:17
<ttx> soren: you can create any alias you want ?  14:18
<soren> Anyways, that's hardly the point right now.  14:18
<soren> ttx: Create a fake series, set it as its trunk.  14:18
<soren> ttx: People do it all the time.  14:18
<ttx> ew.  14:18
<ttx> agreed on it not being the problem.  14:18
<soren> What is the purpose of getting this log?  14:19
<soren> Just trying to understand..  14:19
<notmyname> so when can the 1.4.1, false branch land? we can solve the commit log stuff later, I think  14:19
<ttx> notmyname: would that work for you ? Instructions on getting the differences since the release branch was cut (bonus points for not having to change the instructions at each milestone :)  14:19
<notmyname> soren: the changelog is human readable and updated at release time. the commit log is for those who want to know what is in the next release before we actually update the changelog  14:20
<soren> If someone could help me understand why this is important, that would be really great. When would you use it?  14:20
<soren> Hmm. ok.  14:20
<ttx> soren: can jenkins create the release branch from trunk now ?  14:21
<soren> ttx: Jenkins won't be creating it.  14:21
<soren> ttx: It's a one-off, so it'll be manual.  14:21
<ttx> I mean, the LP jenkins user.  14:21
<soren> Oh, him.  14:21
*** rnirmal has joined #openstack-dev  14:21
<soren> Err..  14:21
<soren> Sure.  14:21
<soren> I really need to go now, though. :(  14:22
<soren> mtaylor: Around?  14:22
<soren> Is it really important that it happens now/today?  14:22
<ttx> soren: we can cut it afterwards from rev 301 even if rev 302 is in trunk, right  14:22
<soren> It's possible to branch off an older revision, you know :)  14:22
<ttx> soren: right, that's what I meant :)  14:22
<soren> Yeah :)  14:23
<ttx> soren: so actually notmyname can commit the (1.4.1, False) now  14:23
<soren> I agree it'd be simpler, but I really, really have to leave.  14:23
<soren> I should have left half an hour ago :)  14:23
<soren> Yeah.  14:23
<ttx> notmyname: go for it. We'll create the release branch from the last-revision-before-that-one  14:23
<dabo> anyone familiar with running the volume service? Specifically, the requirement for the 'vgs' program?  14:24
<ttx> notmyname: we need to work on Monday on the PPA and jenkins jobs attached to the release branch anyway  14:24
<notmyname> ttx: done  14:24
<ttx> notmyname: congratulations, you now have an always open trunk.  14:24
<notmyname> I won't be available on monday (if I'm included in the "we")  14:25
<ttx> notmyname: you're not.  14:25
<notmyname> ok :-)  14:25
<notmyname> yay  14:25
<notmyname> ttx: thanks for the help  14:25
<ttx> notmyname: basically now you can signal the trunk is ready for release-branch by switching to (1.4.n,False) to (1.4.n+1,False)  14:26
<ttx> I mean from...to  14:26
<notmyname> I have only one more request: ttx, soren, mtaylor(, myself?) -- please write this down so we don't forget what needs to be done at each release  14:26
*** jkoelker has joined #openstack-dev  14:26
<ttx> indeed. I plan to document this very thoroughly  14:26
<ttx> the PDF drawing is just the first part in a series :)  14:27
<annegentle> ttx: looks great - thanks for the refresh on the front page.  14:27
<ttx> annegentle: it's not refreshed yet  14:27
<notmyname> ttx: ya, I think it will be dev work, changelog, version bump, dev work, etc (where the release is cut after every version bump)  14:27
<ttx> annegentle: it's just proposed, if it's ok I can switch StartingPage to it  14:28
<ttx> annegentle: then we can also separate the InstallInstructions per-project. That's my next step :)  14:28
* ttx bbls  14:29
<notmyname> swift 1.4.0 is scheduled for release on Tuesday. trunk is open for 1.4.1 dev work  14:31
<openstackjenkins> Project swift build #270: SUCCESS in 27 sec: http://jenkins.openstack.org/job/swift/270/  14:31
<openstackjenkins> Tarmac: Switch Swift trunk to 1.4.1, now that the 1.4.0 release branch is branched out.  14:31
<sandywalsh> python-novaclient 2.4.3 available on PyPI http://pypi.python.org/pypi/python-novaclient (adds POST /zones/select command for cross-zone scheduling)  14:48
*** antonyy has joined #openstack-dev  14:56
<ttx> annegentle: so should I just push the new page ?  14:56
<annegentle> ttx: yeah, go right ahead  14:57
<annegentle> you're so literal with my use of "refresh" I like it. :)  14:58
*** dragondm has joined #openstack-dev  14:59
*** foxtrotdelta has quit IRC  15:03
*** foxtrotgulf has joined #openstack-dev  15:04
*** salv-orlando has quit IRC  15:09
*** blamar has quit IRC  15:11
*** markwash has quit IRC  15:11
*** markwash has joined #openstack-dev  15:12
*** blamar has joined #openstack-dev  15:12
*** koolhead17 has joined #openstack-dev  15:17
<annegentle> mtaylor: you around? Q for you on some doc automation  15:18
<dabo> Hey, anyone familiar with running the volume service? Specifically, the requirement for the 'vgs' program?  15:31
*** dysinger has joined #openstack-dev  15:39
*** koolhead17 has left #openstack-dev  15:43
<rnirmal> dabo: nothing beyond that 'vgs' is used to get info on host lvm groups. and that we need to default to run it, I haven't actually used the volume service to provision any volumes  15:46
<dabo> rnirmal: I was also wondering how to install it. 'sudo apt-get install vgs' comes up empty  15:47
<dabo> it didn't come standard on maverick  15:47
<rnirmal> I think it's part of lvm2  15:49
<rnirmal> dabo: just checked, yeah it's part of lvm2  15:49
<dabo> rnirmal: thanks - I'll grab that and see if I can make some progress  15:50
<rnirmal> cool, you may need to run vgcreate at some point  15:50
<dabo> rnirmal: ok. guess I have some reading up to do first. :)  15:51
<markwash> jaypipes: thanks for leading the whole pagination thing! its going to feel great to remove the "blocked" stickers from the team titan taskboard  15:58
<jaypipes> markwash: no probs! just finishing up a ML post that summarizes the conclusions.  15:59
*** Tv has joined #openstack-dev  16:09
*** foxtrotgulf has quit IRC  16:21
*** foxtrotgulf has joined #openstack-dev  16:21
<jaypipes> dabo: what, too mean? :)  16:29
<dabo> jaypipes: positively sinister!  16:29
<jaypipes> hehe  16:29
* jaypipes cackles  16:29
*** zaitcev has joined #openstack-dev  16:31
<mtaylor> annegentle: herro  16:44
<mtaylor> annegentle: (I suck - totally dropped the ball on your docs automation - putting it high up on my list right now)  16:46
*** bcwaldon has joined #openstack-dev  16:53
<glenc> jaypipes - isn't there a movement to deprecate the Nova /images resource and encourage people to use Glance directly?  16:57
<sandywalsh> jaypipes, you on skype?  16:57
*** adiantum has quit IRC  16:59
<jaypipes> glenc: /images in Nova is a kind of wrapper around Glance's /images endpoint. Big difference: Glance doesn't do XML, Nova does ;)  17:08
<jaypipes> sandywalsh: yeppers. jaypipes.  17:08
<glenc> jaypipes: i'm getting requests for enhancements to Nova's /images and it worries me; seems like we need to reconcile these  17:09
<annegentle> mtaylor: ok, cool, then I can haz doc automation! Whoopie.  17:09
<sandywalsh> jaypipes, got a sec to talk about this api thingy? might be faster?  17:09
<jaypipes> sandywalsh: surely.  17:09
<jaypipes> glenc: want to chat about it?  17:09
<annegentle> mtaylor: another automation item that isn't in that email thread is to create swift.openstack.org/1.3 - automate the archiving of the point releases on the RST sites. Can we do that?  17:10
<jaypipes> glenc: skype as soon as I finish with sandywalsh?  17:10
<glenc> sure  17:10
*** jdurgin has joined #openstack-dev  17:10
<jaypipes> glenc: or cell me (614 406 1267)  17:10
<glenc> let me know when you're available  17:10
<mtaylor> annegentle: yes to both!  17:10
<jaypipes> glenc: will do.  17:11
<sandywalsh> annegentle, I'm working on dev-level docs for zones/dist-scheduler ... should that be in a separate file or can it go as an addendum to the admin zones docs?  17:12
<annegentle> mtaylor: sweet.  17:13
<jaypipes> glenc: should we bring erik carlin into the conversation, too? I've been having a private email thread with him about the /images endpoint  17:13
<annegentle> sandywalsh: thinking...  17:13
<sandywalsh> annegentle, I'm thinking separate. Is there a particular format to follow?  17:13
*** adiantum has joined #openstack-dev  17:13
<annegentle> sandywalsh: yeah, I'm thinking separate too, though still next to each other in the index.rst  17:14
<mtaylor> annegentle: would you mind sending me an email with just a small little pseudo-code-ish description of what you want archived to where?  17:14
<glenc> jaypipes let's see if he's available  17:14
<annegentle> mtaylor: sure, and the "when" is part of it too, as in, what triggers an archive and how long do archives last.  17:14
<mtaylor> annegentle: yes  17:15
<annegentle> sandywalsh: I think it'd fit in here: http://nova.openstack.org/devref/index.html  17:16
<annegentle> sandywalsh: and then if you don't mind, I'll take what you write, edit it for sysadmins, and put it on docs.openstack too  17:16
<sandywalsh> annegentle, sounds like a plan ... stay tuned  17:16
<annegentle> sandywalsh: cool. I found an RST > DocBook online tool (pandoc) and I found it's a great starting point, but I'll still have to ask you questions as if I were a sysadmin to round out the topics  17:17
<sandywalsh> annegentle, no problem at all  17:18
*** ecarlin has joined #openstack-dev  17:22
*** RobertLaptop has quit IRC  17:26
<sandywalsh> annegentle, 9pm CST is going to be tough for EU  17:27
<annegentle> sandywalsh: yeah, but in my docs-list of about 35 members, more are Asia than EU.  17:28
<annegentle> but yeah, we'll adjust as needed, gotta start with somethin'  17:28
<jaypipes> sandywalsh: on phone with glenc and ecarlin right now...  17:29
*** dragondm has quit IRC  17:29
<BK_man_> vishy: around?  17:30
<BK_man_> vishy: what did you mean by "NOTE(vish): if with_lockmode isn't supported, as in sqlite, then this has concurrency issues"? nova/db/sqlalchemy/api.py line 1163?  17:32
<vishy> i meant exactly what it says  17:33
<vishy> sqlite doesn't support with_lockmode  17:33
<BK_man_> vishy: it happens for me even on MySQL: http://paste.openstack.org/show/1450/  17:33
<BK_man_> vishy: doesn't matter which backend I'm using (tried with MyISAM and InnoDB)  17:34
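The note BK_man_ is asking about concerns row-level locking (SQLAlchemy's `with_lockmode('update')`, i.e. `SELECT ... FOR UPDATE`), which SQLite lacks. The hazard is the classic read-then-claim race: two workers read the same "free" row and both claim it. A toy in-process illustration, with a `threading.Lock` standing in for the database row lock (all names here are hypothetical, not nova code):

```python
import threading

free = list(range(100))   # pool of unallocated "fixed ips"
claimed = []
lock = threading.Lock()   # stands in for SELECT ... FOR UPDATE

def allocate():
    # Without the lock, two threads can both read the same head of the
    # free list and claim the same address - the race the NOTE warns about.
    with lock:
        addr = free.pop(0)
        claimed.append(addr)
    return addr

threads = [threading.Thread(target=allocate) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert len(claimed) == len(set(claimed)) == 100  # every claim is unique
```

With a real database the lock is per-row and held only for the transaction, which is why a backend without `FOR UPDATE` support (SQLite) reintroduces the race.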
<BK_man_> vishy: instance 2 always (smile here: tried two times) fails to start  17:34
<BK_man_> vishy: on different hosts (separate cc node and two compute nodes)  17:35
<vishy> i don't see how that error is related?  17:35
<sandywalsh> jaypipes, no panic ... ping me when you're free  17:37
<BK_man_> I'm trying to start 200 or more instances using several threads concurrently on an empty installation. And the second instance will fail with the error in the trace above.  17:37
<BK_man_> vishy: do we need another lock?  17:37
<sandywalsh> annegentle, gotcha  17:37
<vishy> BK_man_: does instance 2 have a different project?  17:37
<vishy> BK_man_: hmm I've never seen that happen  17:38
<BK_man_> vishy: all other N-1 instances are starting Ok. Same net (/16), same project  17:38
<BK_man_> vishy: Yep, indeed. That's why we (Grid Dynamics) exist - "Scaling mission critical systems" :-)  17:38
<BK_man_> vishy: Let me prove it again and run my scenario again to see if instance #2 fails again with the same error  17:39
*** RobertLaptop has joined #openstack-dev  17:39
<BK_man_> vishy: stats from 2nd try:  17:40
<BK_man_> # ./stat.sh  17:40
<BK_man_> Fri May 27 10:40:01 PDT 2011  17:40
<BK_man_>     399 running  17:40
<BK_man_>       1 shutdown  17:40
<vishy> hmm  17:40
<vishy> ah yes concurrency could give you an error there  17:40
<BK_man_> vishy: have an idea how to fix it?  17:41
<vishy> sure  17:41
<BK_man_> # ./pping  17:42
<BK_man_> $VAR1 = {  17:42
<BK_man_>           'gd-4' => 191,  17:42
<BK_man_>           'gd-3' => 208  17:42
<BK_man_>         };  17:42
<BK_man_> pingable: 399  17:42
* BK_man_ going to 3rd try  17:43
<vishy> i'm guessing that you have only one network  17:45
<vishy> i can see a way to fix it in the case of exactly one nova-network  17:46
<vishy> fixing it more generally is a little annoying  17:46
<BK_man_> vishy: yep, one network at the moment  17:48
<vishy> hmm i think if you have multiple networks it will work  17:49
<vishy> because it will fail with an IntegrityError instead of no more networks  17:49
<vishy> BK_man_: the only fix i can think of can still race in some situations  17:52
<vishy> BK_man_: so I think a second network is probably the easiest fix  17:53
<BK_man_> vishy: 3rd try, this time instance 5 failed with the same error  17:54
<vishy> just create more than one when you use nova-manage network create  17:54
<BK_man_> vishy: ok, will try  17:54
<BK_man_> vishy: it looks like that error is gone. Need to repeat the scenario at least several times to be 99.99% sure :)  18:11
<vishy> :)  18:11
<vishy> dprince: ping  18:12
<dprince> vishy: hello sir  18:15
<vishy> dprince: i would like to be able to re-run a couple smokestack tasks  18:16
<vishy> dprince: possible to get a login?  18:16
<dprince> vishy: One sec and I'll get you a login.  18:17
<BK_man_> vishy: nice, another bug. Got dnsmasq running on 2nd network -> instances on 1st network can't get an IP address.  18:17
<BK_man_> vishy: but the NoMoreNetworks error has gone :)  18:18
<vishy> dprince: btw have you looked at kmitchell's testing stuff?  18:18
<vishy> BK_man_: um, nothing should actually be getting the second network...  18:18
<BK_man_> vishy: should my two networks share br100 or not?  18:19
<vishy> BK_man_: it shouldn't matter, nothing should get allocated the second network because it should be throwing IntegrityError  18:19
<dprince> vishy: not closely. I'd like to take advantage of that too. Most of the smokestack stuff has been me sort of hacking after hours, etc. I'd like to get a little more sprint time on it. Working with the guys on the team to make that possible. :)  18:19
<BK_man_> vishy: there is no IntegrityError (checked all three hosts with fgrep Integrity /var/log/nova/*)  18:22
<vishy> BK_man_: IntegrityError is caught so it won't show up in the logs  18:23
<vishy> BK_man_: hmm, perhaps network host is getting set, in any case, your networks should have different bridges  18:24
<vishy> how are you creating them?  18:24
<BK_man_> vishy: two runs of nova-manage  18:24
<BK_man_> vishy: nova-manage network create 10.20.0.0/16 1 65530  18:25
<dprince> vishy: check your email  18:25
<BK_man_> vishy: and nova-manage network create 10.30.0.0/24 1 255  18:25
<BK_man_> vishy: got the following in MySQL  18:26
<BK_man_> vishy: http://paste.openstack.org/show/1452/  18:26
<vishy> BK_man_: yes, don't do it that way  18:26
<vishy> BK_man_: this is vlan mode right?  18:26
<BK_man_> vishy: yep  18:27
<vishy> you could manually change the bridge  18:27
<vishy> but something like  18:28
<BK_man_> vishy: so, why should I use nova-manage network create only once?  18:28
<vishy> BK_man_: it is designed to create multiple networks of the same size and automatically bring up the bridge and vlan for each one  18:28
<vishy> so the general way you would use it is something like  18:29
<vishy> nova-manage-network create 10.20.0.0/16 8 256  18:29
<vishy> which would create 8 /24s for projects  18:29
<vishy> (ignoring the extra - that i put in there :)  18:30
<BK_man_> ok  18:30
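The subnetting arithmetic behind vishy's example (8 networks of 256 addresses carved out of 10.20.0.0/16, i.e. eight /24s) can be checked with the stdlib ipaddress module:

```python
import ipaddress

# nova-manage network create 10.20.0.0/16 8 256 carves 8 networks of
# 256 addresses each (i.e. /24s) out of the 10.20.0.0/16 range.
base = ipaddress.ip_network('10.20.0.0/16')
subnets = list(base.subnets(new_prefix=24))[:8]

for net in subnets:
    print(net)  # 10.20.0.0/24, 10.20.1.0/24, ... 10.20.7.0/24

assert all(net.num_addresses == 256 for net in subnets)
```

This is why BK_man_'s two hand-made networks of different sizes confused the allocator: the tool expects to lay out equally-sized networks itself.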
*** agarwalla has joined #openstack-dev  18:30
<BK_man_> will test on Monday, it's 10:30PM in Moscow, need to go home now  18:30
<BK_man_> everybody in US - have a nice Memorial Day weekend!  18:31
<vishy> dprince: nifty  18:32
*** BK_man_ is now known as BK_man[away]  18:32
<dprince> dprince: You checking out smokestack?  18:32
<dprince> dprince: likes to talk to himself.  18:32
<dprince> vishy: feel free to fire away on branches.  18:33
*** adiantum has quit IRC  18:33
<dprince> vishy: I'd like to get keystone and swift in there too.  18:33
<dprince> vishy: I'd just need the Chef cookbooks for it and that is pretty much it I think.  18:33
<vishy> dprince: awesome, we are doing a similar type of thing using nova.sh which is running everything  18:33
<vishy> including keystone  18:34
<dprince> vishy: Yeah. I saw the radioedit thing. Cool idea.  18:34
<vishy> dprince: I have one, possibly two, new networking modes to hack in :)  18:36
<dprince> vishy: I'm happy to add in the configuration stuff. Obviously we are somewhat limited in what we can actually do in the cloud. Bare metal would give more flexibility.  18:37
<dprince> vishy: Also, as far as the radioedit stuff, we do lots of our development in a similar manner on titan (spinning up a server in the cloud with everything on it) using an Openstack VPC. Kind of like vagrant for the cloud.  18:39
<vishy> gotcha  18:39
*** Tv has quit IRC  18:42
*** dragondm has joined #openstack-dev  18:51
*** antonyy has quit IRC  19:06
*** koolhead17 has joined #openstack-dev  19:12
*** koolhead17 has left #openstack-dev  19:18
<vishy> sandywalsh: ping  19:37
<sandywalsh> hey!  19:37
<sandywalsh> vishy, hey!  19:38
<vishy> sandywalsh: hi, I wanted to discuss some things with you regarding tasks code I've been working on  19:38
*** markvoelker has quit IRC  19:39
<sandywalsh> vishy, sure thing  19:39
<vishy> sandywalsh: essentially I'm trying to allow tracking of tasks that need to be sent from api to workers  19:40
<vishy> sandywalsh: allowing them to be re-run  19:40
<sandywalsh> vishy, hmm, tricky  19:41
<vishy> sandywalsh: i have it working, but I'm a little unsure as to how it would work with the distributed scheduler  19:41
<sandywalsh> vishy, I didn't think there was a requirement of idempotence with the workers ... is there?  19:41
<vishy> sandywalsh: the basic idea is that if a task to create a volume is run again, it is sent to the original host  19:42
<vishy> sandywalsh: there isn't currently. This would require changing that. I think it is the right way to go  19:42
<sandywalsh> vishy, well, with the request_ids I think we may be in good shape (don't reuse one)  19:43
<dabo> vishy: the original request would not go to the same host. The internally-generated request would, except that it's not preserved  19:43
<vishy> sandywalsh: ah interesting...so if we 'persist' the request id then it will go to the same host?  19:44
<vishy> dabo: ^^  19:44
<dabo> vishy: no  19:44
*** dprince has quit IRC  19:44
<dabo> since it will now try to figure out the best host again  19:44
<dabo> we'd need to persist the request and the internally-generated data  19:45
<sandywalsh> vishy, hmm <thinking> there are two places where retry could be a problem: 1. The request from compute.api to the Scheduler and 2. the request from the Scheduler to the Compute hosts.  19:45
<vishy> so my strategy in the simple version of schedule is to simply persist the host in the task and, if it is set, not run the driver code that tries to find a host...  19:46
<dabo> vishy: yeah, something like that. The "build plan" that's created has the selected host information  19:47
<sandywalsh> vishy, we may have to track state on the request_id (started, planning, executing, etc)  19:47
*** yamahata_lt has quit IRC  19:48
<vishy> sandywalsh: the way the tasks are implemented currently, you can update the task with data, and at any point in the call chain you have access to a kwarg called progress which has the last updated data  19:48
<vishy> task.update({'status': 'xxxx'}) for example  19:49
<sandywalsh> vishy, ah, so the Task becomes my state machine data  19:49
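A minimal sketch of the idea being discussed — a task object whose update() merges new state, with the scheduler reusing a persisted host on retry so the operation is idempotent. The names here are hypothetical illustrations, not the contents of vishy's branch:

```python
class Task:
    """Toy version of the task-tracking object described above."""
    def __init__(self, **initial):
        self.progress = dict(initial)   # last-updated data, visible downstream

    def update(self, data):
        """Merge new state; later calls in the chain see it via progress."""
        self.progress.update(data)

def schedule_create_volume(task):
    # Idempotent retry: if a host was already chosen and persisted in the
    # task, reuse it instead of running host selection again.
    if 'host' not in task.progress:
        task.update({'host': 'volume-host-1', 'status': 'scheduled'})
    return task.progress['host']

task = Task(action='create_volume', size_gb=10)
first = schedule_create_volume(task)
task.update({'status': 'creating'})
# A re-run of the same task goes back to the original host.
second = schedule_create_volume(task)
assert first == second
```

This captures why persisting the chosen host matters: a naive retry would re-run host selection and could land the half-finished volume on a different node.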
sandywalshvishy, sounds like you're reinventing a BPM19:49
vishysandywalsh: perhaps, I tried to keep it super simple19:50
vishythe funny thing is as i'm working on these changes i think there are some parts of the system we can cut out19:50
sandywalshvishy, you may want to consider https://github.com/knipknap/SpiffWorkflow19:50
bcwaldonwin 319:51
bcwaldon:(19:51
vishyfor example, I'm removing db code from the manager19:52
sandywalshvishy, well, if you consider soren's idea for eliminating the scheduler completely, yes :)19:52
vishysuddenly the manager doesn't really do anything useful19:52
sandywalshwhich manager?19:52
vishywell i started with VolumeManager19:52
vishybut essentially without db code it just proxies calls into the driver19:53
vishyI also am starting to think that there is no real need for scheduler to run as a separate daemon19:53
sandywalshright, the managers can go away if we just spawn the driver directly19:53
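[Editor's aside: the point about managers becoming hollow once the db code is removed can be illustrated with a hypothetical proxy — not nova's real VolumeManager, just a sketch of "it just proxies calls into the driver".]

```python
class FakeVolumeDriver:
    """Stand-in for a driver holding the actual volume logic."""

    def create_volume(self, size):
        return {'size': size, 'status': 'available'}


class ProxyManager:
    """A manager stripped of db calls is just a pass-through to its driver."""

    def __init__(self, driver):
        self.driver = driver

    def __getattr__(self, name):
        # Forward any unresolved attribute lookup straight to the driver,
        # which is all that's left for the manager to do without db code.
        return getattr(self.driver, name)


manager = ProxyManager(FakeVolumeDriver())
vol = manager.create_volume(10)
```

Once a class is reduced to this shape, callers may as well spawn the driver directly, which is the refactoring being floated here.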
vishyor be shared across different components19:53

vishyi think volume.api should be renamed to volume.supervisor and be running a scheduler in process19:54
sandywalshI'd agree with that, but it puts a burden on the API servers (if that's an issue)19:54
sandywalshah, move the scheduler into the worker19:54
vishythen we need a separate volume api wsgi app19:55
vishy(if we want to scale api's separately from supervisors)19:55
vishybut honestly I don't know if it is horrible to have them together19:55
vishythe worker could just be a VolumeDriver19:56
vishysupervisor makes calls directly to driver via rpc19:56
sandywalshthe only issue I see (at first blush, may not be an issue) is the ZoneManager in the schedulers which store data in-memory. If the servers are flakey ...19:56
vishyand driver is idempotent19:56
sandywalshdid you read soren's idea for using amqp as the scheduler (topic per request type)?19:57
vishyre-architecting like this is starting to become challenging19:57
sandywalshyeah, it's getting very rigid in there now19:58
vishyi did, i think it is reasonable but perhaps doesn't cover all use cases19:58
sandywalshI think the idea of a state machine library/service/whatever is key at this stage. Love the idea of it. Understanding the requirements for one is the challenge19:59
vishysandywalsh: if you get a chance, take a look at lp:~vishvananda/nova/tasks20:00
sandywalshif we can enforce idempotence using a state machine, then we can set ourselves up for refactorings like removing the managers, etc.20:00
sandywalsh(will do)20:00
vishyit is a pretty simple implementation but if we can enforce idempotency it will give us a lot of recovery options20:00
vishyI don't like the way it interacts with the scheduler but aside from that it is pretty cool20:01
sandywalshyup, I've been saying BPM for a long time now (just not a heavy weight one)20:01
vishysee esp. test_task20:01
sandywalshlike dabo said, if we persist the build-plan, add a state machine and key off the request ID, we can make it idempotent.20:03
vishyi wonder if we could consider task_id == request_id20:04
dabosandywalsh: vishy: not sure about the case for multiple instances, but for any given instance, it seems possible20:04
vishyagreed, multiple instances might be a challenge :)20:04
sandywalshno reason resource_id couldn't be task_id, I think.20:05
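[Editor's aside: a minimal sketch of the idempotency idea in this exchange — key completed work off the request id (treating task_id == request_id, as floated above), so a redelivered message returns the stored result instead of re-executing. The names here are hypothetical, not from nova.]

```python
# Cache of completed work, keyed by request_id.
_completed = {}


def run_idempotent(request_id, operation):
    """Run operation() at most once per request_id; replay the result after."""
    if request_id in _completed:
        return _completed[request_id]
    result = operation()
    _completed[request_id] = result
    return result
```

A real version would persist `_completed` (e.g. in the task record dabo mentions), but the retry-safety property is the same.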
sandywalshvishy, the spiffworkflow is a lean and mean little library20:07
vishysandywalsh: ok this will require some changes later.  I'm thinking that no-db changes should go in first20:07
sandywalshdo you mean remove the db from the manager first?20:08
markwashrpc-multicall folks: ping. . . lp78920520:10
sandywalshvishy, I have to run, but I'll have a closer look over the wkend20:10
vishyok20:10
markwashuvirtbot: halp?20:10
uvirtbotmarkwash: Error: "halp?" is not a valid command.20:10
vishymarkwash: huh?20:11
markwashuvirtbot: lp:78920520:11
uvirtbotmarkwash: Error: "lp:789205" is not a valid command.20:11
vishybug 78920520:11
uvirtbotLaunchpad bug 789205 in nova "test_service.py tests fail because of incompatible use of mox" [Undecided,New] https://launchpad.net/bugs/78920520:11
vishyah yes20:11
markwashuvirtbot: why do you hate me?20:11
uvirtbotmarkwash: Error: "why" is not a valid command.20:11
vishyI think we should just require the new mox20:11
bcwaldonvishy: so I don't know enough about the pip-requires business to know why it looks like we have mox 0.5.0 listed twice20:16
bcwaldonvishy: can you explain this?20:16
vishymerge error20:19
vishy:)20:19
bcwaldonAh, can the url go away, and just leave the mox==...?20:19
*** antonyy has joined #openstack-dev20:19
*** adiantum has joined #openstack-dev20:20
markwashbcwaldon: I think so. . 0.5.3 is definitely on pypi20:21
bcwaldonmarkwash: thanks, I should have just gone with it. No need for you guys to do the work for me :)20:22
*** troytoman is now known as troytoman-away20:24
*** adiantum has quit IRC20:32
vishyjust mox without a version number is fine20:34
vishyor we could say >0.5.320:34
bcwaldonvishy: good, that's exactly what I just merge prop'd20:34
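[Editor's aside: per the exchange above, the fix collapses the duplicated mox entries in pip-requires into a single versioned line, presumably something like:]

```
mox>=0.5.3
```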
*** foxtrotgulf has quit IRC20:36
*** ameade has quit IRC20:36
*** antonyy has quit IRC20:38
*** antonyy has joined #openstack-dev20:39
vishycomstud: why not if text[-1] != '\n' instead of if text[len(text)-1:] != '\n': ?20:41
markwashcomstud: oh also, sorry I forgot to mention that you don't need to add the '\n' if you use -A for the decryption20:43
comstudahh20:44
comstudvishy: good call, just stupidness20:44
comstudi can fix that up.. it's a bit redundant20:44
comstud:)20:45
comstudi'll test that it's even needed per mark20:45
comstudcool, works without20:46
comstudlot of work for such a simple issue ;)20:47
comstuddiff updating20:47
comstudreally i'd like to replace calling openssl with using pycrypto20:51
comstudwhich is what i do in the unix guest agent20:51
comstudbut I didn't want to introduce another dependency right now20:51
vishycomstud: um, did you mean to change dec to -d?20:52
comstudyes20:52
vishycomstud: ah nm, i was looking at red20:52
vishy:)20:52
comstud2nd arg is now 'extra arguments to openssl'20:52
comstudgot it :)20:53
dabovishy: comstud: ...or you could use text.endswith("\n")20:54
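[Editor's aside: the three trailing-newline checks mentioned in this exchange behave differently on empty input, which is one reason to prefer `endswith`. A quick illustration, separate from the nova patch itself:]

```python
def has_newline_slice(text):
    # comstud's original form: slicing never raises, and an empty string
    # yields '' (which compares unequal to '\n').
    return text[len(text) - 1:] == '\n'


def has_newline_index(text):
    # vishy's shorter form: text[-1] raises IndexError on an empty string,
    # so it needs a length guard to be equivalent.
    return len(text) > 0 and text[-1] == '\n'


def has_newline_endswith(text):
    # dabo's suggestion: the clearest, and safe on empty strings.
    return text.endswith('\n')
```

All three agree on non-empty input; only the first and third are safe without a guard.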
*** bcwaldon has quit IRC20:54
comstudah yeah.. i use that elsewhere...my mind is somewhere else today20:55
comstudanyway, that code is gone20:55
*** adiantum has joined #openstack-dev20:56
openstackjenkinsProject nova build #943: SUCCESS in 2 min 43 sec: http://jenkins.openstack.org/job/nova/943/20:58
openstackjenkinsTarmac: Fixes the bug introduced by rpc-multicall that caused some test_service.py tests to fail by pip-requiring a later version of mox20:58
* comstud & bbl21:01
*** adiantum has quit IRC21:01
jaypipesmarkwash: around?21:05
jaypipesmarkwash: I'm wondering about your thoughts on jorge's links to the previous/next pages in the returned results set...21:05
openstackjenkinsProject nova build #944: SUCCESS in 2 min 43 sec: http://jenkins.openstack.org/job/nova/944/21:18
openstackjenkinsTarmac: When encrypting passwords in xenapi's SimpleDH(), we shouldn't send a final newline to openssl, as it'll use that as encryption data.  However, we do need to make sure there's a newline on the end when we write the base64 string for decoding.  Made these changes and updated the test.21:18
*** clayg has quit IRC21:19
*** clayg has joined #openstack-dev21:20
*** agarwalla has quit IRC21:54
*** jkoelker has quit IRC22:29
*** rnirmal has quit IRC22:30
*** mattray has quit IRC22:36
anotherj1sseglenc: around?22:58
anotherj1sseglenc / pvo - was looking at http://wiki.openstack.org/SystemUsageData, updating to use concept of tenant instead of project, and noted "account ID" is *not* the same as the existing project_id; there may be multiple projects under a single account id.23:00
anotherj1sseI was under the impression that the auth changes would mean only tenant and user were sent into nova and other services23:00
anotherj1sseif so, requiring usage data to be aggregated by account not tenant is incorrect23:00
*** dragondm has quit IRC23:23
*** bcwaldon has joined #openstack-dev23:34
*** antonyy has quit IRC23:37
anotherj1sseemailed list instead23:46
anotherj1ssenite!23:46
openstackjenkinsProject swift build #271: SUCCESS in 30 sec: http://jenkins.openstack.org/job/swift/271/23:46
openstackjenkinsTarmac: Don't track names on PUT failure to avoid extra failures in GET/DELETE because those objects weren't stored in the cluster.23:46

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!