Friday, 2011-01-07

vishymaybe try to change that to logging.exception and see if you can get a traceback for the underlying exception causing the auth failure00:00
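vishy's suggestion is the standard pattern: `logging.exception()` logs at ERROR level and attaches the traceback of the exception currently being handled, which a plain `logging.error()` would not. A minimal sketch (the `authenticate` function here is a hypothetical stand-in, not swift's actual auth code):

```python
import io
import logging

def authenticate(token):
    # hypothetical stand-in for the failing auth call under discussion
    raise ValueError("bad token: %r" % token)

# capture log output in a buffer so the traceback is visible as a string
buf = io.StringIO()
logger = logging.getLogger("demo")
logger.addHandler(logging.StreamHandler(buf))

try:
    authenticate("deadbeef")
except Exception:
    # logger.exception() logs at ERROR level and appends the traceback of
    # the in-flight exception; logger.error() alone would omit it
    logger.exception("auth failure")

log_text = buf.getvalue()
```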
nelson__creiht: well, it's a help when you say that /myfiles shouldn't be going to the account-server ... that gives me some place to look.00:00
creihtyeah, I'm trying to figure out how that got there00:00
creihtnelson__: I've lost history, can you paste the st command again that you are running when you upload the object?00:01
*** kevnfx has quit IRC00:01
termiejk0: updated00:01
nelson__oh urgh. Am I being bit by swift-init's silence again??  When I run this: swift-account-server account-server.conf00:01
nelson__I get this: Error trying to load config account-server.conf: Cannot resolve relative uri 'config:account-server.conf'; no context keyword argument given00:02
dubscreiht: you guys should rename `st` to `swiftly` or something.   st is so boring (and probably a bash alias for a lot of people :) )00:02
nelson__I'm *pretty* sure that I ought to be able to run any swift server from the command line in the form swift-$SERVICE-server /etc/swift/$SERVICE-server.conf right?00:02
dubsswiftly download ...00:03
creihtcorrect00:03
creihtdubs: lol00:03
creihtI like it :)00:03
creihtgholt: -^00:03
nelson__I like it too.00:03
nelson__and 'st' is already a command, as 'man st' (scsi tape) will tell you.00:03
creihtwho uses scsi tape any more? :)00:03
nelson__who uses scsi any more?00:04
creihtnelson__: that isn't a command00:04
nelson__ST(4)         Linux Programmer's Manual            ST(4)00:04
creiht" Linux Programmer's Manual" not "User Commands"00:05
nelson__haha, yes, of course, man 4 isn't for commands, you're right.00:05
creihthehe00:05
dubsstill though :)00:05
nelson__still, I like the punnnage of swiftly00:05
creihtI still like swiftly regardless :)00:05
nelson__check it into bzr and tell annegentle to start renaming in the docs.00:06
creihtnelson__: make sure you add /etc/swift in front of account-server.conf00:06
creihtthe error you listed above was due to it not finding the account-server.conf00:07
nelson__I'm in /etc/swift.  It must chdir first or something.00:07
creihthrm00:07
nelson__cuz it's running now.00:07
creihtk00:07
*** ctennis has joined #openstack00:07
creihtgotta run... will look at it more tomorrow00:08
nelson__st -A https://a.ra-tes.org:11000/v1.0 -U system:root -K testpass upload myfiles bigfile1.tgz00:08
nelson__okay, thanks for your help.00:08
dubsquaredvishy: yeah odd…objectstore is hosed00:08
dubsquaredvishy:  i cant even terminate these broken instances etc etc00:08
dubsquaredgoing to roll home…but ill leave a message with soren too00:09
dubsquaredill chat with you tonight if you're around00:09
dabook, all tests are passing, pep8 is happy; please review https://code.launchpad.net/~ed-leafe/nova/internal_id_fix/+merge/45460 so we can push our last few merges00:10
dabosandywalsh: OK, I tried to get this pushed into trunk, but it looks like it ain't gonna happen, so feel free to take what you can for your own branch00:13
*** dubsquared has quit IRC00:14
sandywalshk, thanks dabo. I'll give password reset a review while I'm at it00:14
termiejk0: why are you using 'date'?00:15
daboi've been here too long, and my patience is gone. I'll try to get the password reset pushed later tonight, but now it looks doubtful00:15
termiejk0: it is not a date, at the very least it is a datetime, and the real type is 'created_at'00:15
jk0termie: fair enough00:15
edaydabo: it looks good minus the one extra ctxt line in there00:16
tr3bucheteday you around?00:22
edaytr3buchet: just reviewed00:22
tr3buchetsweet00:22
* tr3buchet refreshes page00:22
jk0I wonder if launchpad could possibly take any longer to update a diff00:22
jk0doesn't take quite long enough00:23
edayjk0: I'm sure there are ways :)00:23
tr3buchetyeah maybe a sleep 500 would help...00:23
jk0termie: how's it look now?00:23
*** littleidea has quit IRC00:23
tr3buchetit's even slower across vpn00:23
*** littleidea has joined #openstack00:25
uvirtbotNew bug: #699654 in nova "i18n  - Terminating instance with invalid instance id causes error in the response" [Undecided,New] https://launchpad.net/bugs/69965400:26
termiejk0: looks great :) approvedz0r00:30
jk0thanks :)00:31
jk0eday: I feel like making a few more changes, would you mind reviewing it too? :P00:31
edayjk0: how much is it worth to you? :)00:36
jk0a beer on me at the next summit00:37
edayjk0: you still working on it, or should I look now?00:42
jk0it's ready to go00:42
jk0termie gave the approval00:42
edayso, one issue here... why filter at the DB level? Why not return the full list of actions directly from the query (like other methods) and filter/format in the openstack API code (like other data types)?00:45
edayie, what if a future API call wanted more of the data that's in the action record?00:46
jk0hm, I'm not sure I follow you00:46
edayjk0: for example, remove the for loop around actions, and just return the list directly00:47
jk0oh, are you on the latest diff?00:47
edaylet the servers.py code filter out created/action/error00:47
edayyup00:47
jk0that's what it's doing -- sending all of the instance_action records for that instance_id00:48
openstackhudsonProject nova build #364: SUCCESS in 1 min 18 sec: http://hudson.openstack.org/job/nova/364/00:49
edayI see the diff for r521, which filters out everything but created_at, action, and error00:49
jk0oh, I see what you mean00:50
jk0that's really the only data that's being stored00:50
jk0but I see where you're going00:50
edaybut if it changes.. and other thigns don't filter at that layer, they filter/format in the API layers (outside the core API)00:51
jk0I'll update that quick00:51
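eday's point is a layering one: the data layer returns whole records, and the API layer (servers.py in this discussion) decides which fields a given response exposes, so a future caller that needs more of the record isn't blocked. A rough sketch with made-up record fields and function names, not the actual nova code:

```python
# hypothetical action records as the data layer might return them, untouched
ACTIONS = [
    {"instance_id": 42, "created_at": "2011-01-07T00:48:00",
     "action": "rebuild", "error": None, "internal_note": "retry #2"},
]

def instance_get_actions(instance_id):
    # data layer: return the full rows, no filtering here
    return [a for a in ACTIONS if a["instance_id"] == instance_id]

def format_actions(records):
    # API layer: pick only the fields this particular API exposes
    keep = ("created_at", "action", "error")
    return [{k: r[k] for k in keep} for r in records]
```

A caller that wants `internal_note` later can reuse `instance_get_actions` unchanged and apply its own formatting.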
*** jc_smith has quit IRC00:51
rluciois HOL still broken?00:54
rluciofor nova00:54
rlucio?00:54
*** jc_smith has joined #openstack00:59
*** johnpur has quit IRC00:59
jk0eday: how about now?01:03
*** dirakx has joined #openstack01:04
*** MarkAtwood has quit IRC01:06
*** maplebed has quit IRC01:07
*** rlucio has quit IRC01:07
openstackhudsonProject nova build #365: SUCCESS in 1 min 18 sec: http://hudson.openstack.org/job/nova/365/01:19
*** daleolds has quit IRC01:26
*** adiantum has joined #openstack01:33
*** jc_smith has quit IRC01:35
*** joearnold has quit IRC01:36
*** dragondm has quit IRC01:41
*** kevnfx has joined #openstack01:43
*** dfg_ has quit IRC01:47
*** littleidea has quit IRC01:48
*** dfg_ has joined #openstack01:52
*** dfg_ has quit IRC01:57
*** zul has joined #openstack02:33
*** Jordandev has quit IRC02:47
*** jdurgin has quit IRC02:47
*** gasbakid|2 has quit IRC02:47
*** trin_cz has quit IRC02:53
*** ArdRigh has joined #openstack02:55
*** littleidea has joined #openstack03:04
*** styro has left #openstack03:05
*** lvaughn_ has quit IRC03:23
*** littleidea has quit IRC03:33
winston-done question about nova03:45
creihtwinston-d: howdy :)03:46
creihtany luck with the performance issues?03:46
winston-dwhat happens to the images when the instance is shut down?03:46
winston-dcreiht: I thought you left03:46
* creiht never leaves03:47
creiht:)03:47
creihtwell I did leave, but was just checking back in03:47
creiht:)03:47
winston-dcreiht: I see.03:48
winston-dcreiht: how long will you be around? i need to get a quick lunch.03:51
creihtnot much03:52
creihtsorry03:52
*** kevnfx has quit IRC03:52
winston-dOK, never mind.  Let me just skip this lunch. :) here are the newest results.  http://paste.openstack.org/show/419/03:52
creihthah03:53
creihthrm03:53
creihtcan you paste another section of proxy logs from a bench run?03:54
winston-dsure. all of them or just one operation PUT/GET/DEL?03:54
creihtjust a good sampling03:54
creihtof puts03:54
winston-dOK03:54
creihtI imagine that if we solve the put problem, everything else will follow :)03:55
winston-dcreiht: no error this time.  http://paste.openstack.org/show/435/03:57
winston-dthe average latency for PUT seems to be ~0.45 s.  So 20.2/s PUT looks reasonable?03:58
creihtwinston-d: hrm... can you post a larger cross-section?03:58
creihtall of those were .05ish seconds03:58
winston-dOK03:58
creihtwell if it was just a single thread, then that would make sense04:00
creihtIt should by default run with a concurrency of 1004:00
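The arithmetic behind creiht's single-thread remark: with N concurrent workers each spending the average latency per request, throughput tops out near N divided by the latency. PUTs of ~0.05 s at 20.2/s match one thread, while swift-bench's default concurrency of 10 should land closer to 200/s. A quick sanity check:

```python
def expected_put_rate(concurrency, avg_latency_s):
    # N concurrent workers, each spending avg_latency_s per PUT, give a
    # steady-state ceiling of about N / latency requests per second
    return concurrency / avg_latency_s

# ~0.05 s per PUT at 20.2 PUT/s matches a single thread...
single_thread = expected_put_rate(1, 0.05)
# ...but at the default concurrency of 10 the ceiling should be
# closer to 200 PUT/s, so something appears to be serializing
concurrent = expected_put_rate(10, 0.05)
```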
winston-dthis is full one, 1000 PUTS http://paste.openstack.org/show/436/04:01
creihtcool04:01
creihtlooking...04:01
creihtwinston-d: what rev are you running again?04:02
winston-dswift-1.1.004:02
creihtand are you running swift-bench with the defaults, or are you overriding any of them?04:02
winston-dwith defaults04:03
winston-dcreiht: i've some questions about the consistency of swift.  how can i know swift (data/account) is consistent?  Any way to find out?04:05
creihtjust sec, I might be on to something :)04:06
winston-dK04:06
creihtwell I thought I was... there was once a bug where swift-bench wouldn't write to multiple containers, but it is fixed in that version04:11
creihtand it would cause poor perf numbers like you are seeing04:11
creihtso consistency04:11
creihtwhich data do you want to know is consistent?04:12
creihtthe objects, container listings, or...?04:12
winston-dfirst, account04:13
winston-dwhat can i expect from 'swift-account-audit' ?04:13
creihtwinston-d: out of curiosity, can you run swift-bench the same but add -c 20 as a command line option, and let me know if things change any?04:13
winston-dcreiht: let me try04:14
creihtthat will bump the concurrency up to 2004:14
creihtswift-account-audit is something that can be used to verify the consistency of the account, its containers, and objects04:15
creihtbut it is more of a one off tool04:15
creihtor more for debugging04:15
winston-dnew results w/ -c 20: http://paste.openstack.org/show/437/04:15
winston-dhttp://paste.openstack.org/compare/437/419/ not much difference04:16
creihtheh04:16
creihtok04:16
creihtdo you want to know consistency in general, accross the cluster? or for a specific account?04:17
*** ccustine has quit IRC04:18
winston-dcreiht: you know, yesterday i destroyed two storage drives, and i used 'swift-auth-recreate-accounts' to recreate accounts, but i never found out whether it fixed the destroyed drives04:18
creihtswift-auth-recreate-accounts will let you know if it can't recreate the account04:19
creihtyou could use swift-account-audit also in that case04:19
*** kashyapc has joined #openstack04:20
winston-d"1 accounts, failures []" is the result of swift-auth-recreate-accounts04:20
creihtthen it was successful04:20
creihtif any accounts show up in [], then those failed04:20
winston-di see04:21
creihtin swift, everything has 3 replicas04:21
creihtfor any PUT operation (account, container, or object), the operation only returns success if at least 2 of the replicas were written04:21
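The write path creiht describes is a simple majority quorum: with 3 replicas, a PUT succeeds when at least 2 acknowledge. In sketch form:

```python
def put_succeeded(replica_acks, replica_count=3):
    # swift attempts the write on every replica and reports success to
    # the client only if a majority (2 of 3 by default) acknowledged it
    quorum = replica_count // 2 + 1
    return sum(1 for ok in replica_acks if ok) >= quorum
```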
winston-dwell, i've turned off replicator of account/container/object04:22
creihtahh04:22
creihtthat's right, we did that for testing04:23
winston-dthis is the result of swift-account-audit http://paste.openstack.org/show/438/04:23
creihtI've actually never used it myself04:24
winston-dwell. :)04:25
*** ArdRigh has quit IRC04:25
winston-dnever mind.04:25
creihthehe04:25
creihtjust sec04:25
creihttrying it now :)04:25
creihtof course it runs fine here :/04:27
creihtI get the same error that you do if I try an account that doesn't exist04:27
creihtdid you enter the right account string?04:28
creihttry04:28
anticwlooks like it works here too04:28
creihtswift-account-audit AUTH_40d3ae6f824a980cd8c01a5f93f5e44704:28
anticwthough apparently some people have REALLY large containers04:28
creihthehe04:28
creihtit can be a bit intensive for a large account04:29
winston-d:)04:29
winston-dhttp://paste.openstack.org/show/439/ new one with right account.04:31
winston-dit says Container mismatch: 1, and missing replicas: 8704:32
winston-dthat's fine, right? since I turned off the replicator services04:32
creihtwinston-d: yup04:34
winston-dback to the perf issue. so right now we can conclude there's no perf issue with swift, but there might be some issue with swift-bench.  am i right?04:36
creihtor possibly settings04:37
creihtwinston-d: can you paste your /etc/swift/proxy-server.conf and /etc/swift/object-server.conf from one of the storage nodes?04:37
winston-dsure. right away04:38
winston-dhttp://paste.openstack.org/show/440/04:39
*** littleidea has joined #openstack04:41
creihtwinston-d: did you restart all the services after making the config changes to add the workers?04:42
winston-dsure did04:43
winston-di can do it again right now and test to confirm04:43
winston-dshall i turn on replicator this time?04:43
creihtI would leave replicator off still04:44
creihtyeah try restarting04:44
creihtjust to be sure04:44
winston-dhmm, don't know what happened, but this time it's back to super slow mode (PUT 2.6/s)04:48
creihthah04:51
*** mdomsch has joined #openstack04:54
creihtcan you paste the proxy log?04:56
winston-dwait a sec.  i repeated the test 3 times, and the results are NOT consistent04:59
winston-dtwo rounds are super slow, one is fine04:59
*** littleidea has quit IRC04:59
creihtodd05:09
creihthow many objects are you putting?05:09
creihtwinston-d: out of curiosity, can you run05:10
creihtswift-ring-builder /etc/swift/object.builder05:10
winston-dsure05:10
creihtand all the other ring builder files05:10
creihtand paste the output05:11
winston-dOK. here's result for slow run and PUT log first: http://paste.openstack.org/show/441/05:14
*** ArdRigh has joined #openstack05:15
creihtwinston-d: yeah... that is similar to what we saw before, most calls are pretty fast, and then a couple take 5-10 seconds05:16
winston-dcreiht: http://paste.openstack.org/show/442/ ring builder05:17
creihtwinston-d: btw, if you have 6 machines, I would go ahead and make the 6th another zone05:19
creihtinstead of having 1 zone have 2 machines, while all the others just have 105:19
creihtbut that shouldn't have anything to do with what we are seeing05:19
winston-doh, ok05:20
creihtotherwise the rings look good05:20
creihtin general it is a good idea to have the same number of devices in each zone05:21
winston-dis there any specific reason that the swift docs recommend at least 5 zones?05:22
anticwoh, that's documented now? neat05:23
creihtwinston-d: mainly it is best for handling failure scenarios05:23
creihtthat allows for 3 replicas plus 2 zones to use as handoff nodes in the case of failure05:24
winston-danticw: swift doc is way better than Nova :)05:24
anticwwinston-d: i know, they got a lot better too05:24
anticwbut swift is also somewhat more mature05:24
winston-dcreiht: what if i have 8 nodes, 10, 12?05:24
creihtif you have only 3 zones and a device fails, there isn't another zone that can be used to hand off to05:24
creihtI would shoot for a multiple of the number of zones05:25
winston-danticw: i heard that before.05:25
creihtso if you are going to have 5 zones, then I would start with 5, 10, 15, etc.05:25
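creiht's sizing advice boils down to keeping zones equal: pick a node count that divides evenly by the number of zones, otherwise one zone ends up heavier than the rest (as in the 6-node, 5-zone setup under discussion). A toy check:

```python
def nodes_per_zone(node_count, zones=5):
    # an even layout gives every zone the same number of nodes;
    # any remainder means some zone carries an extra node
    per_zone, leftover = divmod(node_count, zones)
    return per_zone, leftover
```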
*** damon__ has quit IRC05:25
winston-dcreiht: OK. understand, but now i have 6... I am likely to get another 4 later...05:26
creihtk05:26
creihtI would just use 5 then05:26
*** MarkAtwood has joined #openstack05:26
creihtand actually it would probably be a better use of the leftover node to make it another proxy05:26
creihtsince you only have one proxy05:26
winston-dhmm. i should seriously consider the deployment05:28
creihtwinston-d: is this just for testing, or is this also going to be production hardware?05:29
winston-djust for testing. :)05:29
creihtfor a production setup, a lot of thought needs to be put into the deployment05:29
creihtok05:29
winston-dbut for performance test in future05:29
creihtthen I would recommend starting with 2 proxies and 5 storage nodes05:30
winston-dcreiht: can i also use these nodes as Nova compute nodes? LOL05:32
creihthah05:32
creihtactually you probably have enough memory/cpu to do that05:33
creihtthough you should get some more disks before you do :)05:33
winston-dcertainly05:34
creihtdevcamcar: Are you guys still looking at deploying swift alongside your nova installs?05:34
creihtwinston-d: do you have any sas drives around that you could put in the storage nodes to test?05:35
creihtI would be curious to see if you still have the same perf problems running off the sas drives rather than the ssds05:35
winston-dnope, only SATA interface is available05:35
creihtor SATA05:35
winston-di don't have enough SATA disks right now. :*05:36
creihtsince they were ssds, I just assumed a SAS interface for some reason05:36
creihtk05:36
winston-di can try that once i do05:36
creihtwinston-d: what machine are you running swift-bench from?05:37
winston-done of the storage nodes05:37
winston-dthe one in zone 105:37
creihtk05:38
creihtby default, swift-bench uses 0byte files, so we shouldn't be hitting any network limitations05:39
creihtwinston-d: I'll have to sleep on it some more05:39
creihtfeel free to leave any notes if you make any more progress05:39
winston-dOK.  i may skip this issue for a while and dive into Nova mess. :)05:39
winston-dcreiht: thank you so much for your help.05:40
winston-dcreiht: have a good nite.05:41
*** mdomsch has quit IRC05:45
*** f4m8_ is now known as f4m805:49
*** adiantum has quit IRC05:57
*** kevnfx has joined #openstack06:00
*** adiantum has joined #openstack06:03
*** zaitcev has quit IRC06:18
*** Ryan_Lane is now known as Ryan_Lane|sleep06:20
*** ramkrsna has joined #openstack06:22
*** opengeard has quit IRC06:35
*** kevnfx has quit IRC06:39
*** adiantum has quit IRC06:50
*** adiantum has joined #openstack06:55
*** opengeard has joined #openstack07:21
*** MarkAtwood has quit IRC07:29
*** 36DAAZJLY has joined #openstack07:32
*** miclorb has quit IRC07:33
ttxsirp-: lp:~rconradharris/nova/xs-snap-return-image-id-before-snapshot is just the first step to xs-snapshots, right ?07:42
*** adiantum has quit IRC07:42
zykes-|/j indefero07:46
*** adiantum has joined #openstack07:54
*** MarkAtwood has joined #openstack07:57
*** allsystemsarego has joined #openstack07:58
*** brd_from_italy has joined #openstack08:00
*** befreax has joined #openstack08:02
*** adiantum has quit IRC08:06
*** adiantum has joined #openstack08:06
*** MarkAtwood has quit IRC08:19
*** adiantum has quit IRC08:34
*** trin_cz has joined #openstack08:35
*** arcane has quit IRC08:36
*** adiantum has joined #openstack08:41
*** skrusty has quit IRC08:44
*** calavera has joined #openstack08:47
*** Guest61782 has joined #openstack08:48
*** Guest61782 has quit IRC08:50
*** skrusty has joined #openstack08:59
*** nijaba has quit IRC09:00
*** adiantum has quit IRC09:02
*** nijaba has joined #openstack09:03
*** nijaba has joined #openstack09:03
*** adiantum has joined #openstack09:09
*** MarkAtwood has joined #openstack09:14
*** arcane has joined #openstack09:19
ttxsoren: we should use this to track the lp:nova/trunk PPA downloads : http://ftagada.wordpress.com/2011/01/05/ppa-stats-initial-impressions/09:20
* ttx gets some coffee09:20
*** fabiand_ has joined #openstack09:30
*** adiantum has quit IRC09:31
*** Abd4llA has joined #openstack09:35
*** Abd4llA is now known as Guest3218909:35
*** adiantum has joined #openstack09:36
*** Guest32189 has quit IRC09:42
*** adiantum has quit IRC09:56
*** adiantum has joined #openstack09:56
*** arcane has quit IRC10:00
*** arcane has joined #openstack10:04
*** gasbakid has joined #openstack10:15
*** irahgel has joined #openstack10:16
*** adiantum has quit IRC10:22
*** adiantum has joined #openstack10:27
*** adiantum has quit IRC10:48
*** adiantum has joined #openstack10:48
*** trin_cz has quit IRC10:49
sorenttx: Yeah, I saw that.10:55
sorenttx: I asked him about it yesterday. He's working on releasing the code.10:55
*** kashyapc has quit IRC11:11
*** Abd4llA has joined #openstack11:38
* soren lunches11:40
*** MarkAtwood has quit IRC11:47
* soren returns12:05
*** MarkAtwood has joined #openstack12:05
*** trin_cz has joined #openstack12:18
sandywalsho/12:23
sandywalshhey guys, trunk openstack api is busted until we get this bug fix in ... can someone approve please? https://code.launchpad.net/~ed-leafe/nova/internal_id_fix12:30
sandywalshdoes this need a merge prop?12:30
sorensandywalsh: Yeah, can't merge stuff without a merge proposal.12:31
sorenI could have sworn I saw an mp for that this morning, though.12:31
dabosandywalsh: It got rejected because it contained some code cleanup stuff that wasn't specific to the internal_id question. That's why I pulled it.12:32
sandywalshhmm12:32
sandywalshcan we rip out those other changes?12:32
sandywalsh(I can branch it and do so if you'd like)12:32
dabosandywalsh: yeah, we can. It's just that I was trying to help you out yesterday by going on this tangent, and after 13 hours straight I got fed up with the nitpicking12:33
daboIt caused me to miss the freeze for password reset, so I didn't see the point in continuing.12:34
sandywalshhmm, not sure where we stand now12:35
daboI took my password reset branch, which contained the new stuff along with a lot of code cleanup around those changes, and in order to help you with the internal_id problems introduced by eday's change, I removed the password-specific stuff12:36
dabothat apparently wasn't good enough, and there are only so many hours in a day12:36
sorendabo: If you have a branch you'd like to land, feel free to propose it.12:36
sandywalshbut these are two difference branches, correct?12:36
sorendabo: Any core-dev can approve exceptions to the freeze.12:37
dabosandywalsh: yes, but you understand that the fix branch was derived from the password branch, right? It wasn't created fresh from trunk12:37
sandywalshpassword-reset and instance-id? Password reset used to have instance-id in it, but you pulled it out. And password reset was merged with trunk recently enough, no?12:38
dabosoren: seems silly that exceptions can be made after the deadline, but not before.12:38
sorendabo: Eh?12:38
sandywalshlet's just get it done :)12:38
sorendabo: Before deadline, there's no point in making exceptions, since you don't need any to begin with.12:39
dabosandywalsh: no, password reset was almost ready to propose. I stopped work on that to make the internal_id only fix12:39
sandywalshwhat can I do to the branch to get things on track?12:39
*** arcane has quit IRC12:39
sandywalshsoren, would it be easier just to push on password-reset and kill two birds?12:39
dabosoren: sorry, I've been doing agile too long to adapt my thinking to this style of development.12:39
sorensandywalsh: The internal-id-fix should be a much easier review.12:40
sandywalshI agree12:40
dabosandywalsh: it would have been done much earlier yesterday if I hadn't gone off on a tangent to create a "pure" patch12:40
sorenI don't really understand the problem here? There's a branch that fixes the internal id thing, and there's a branch that implement reset password.12:40
sorenRight?12:41
*** ctennis has quit IRC12:41
dabosoren: no. There was the branch that implements password stuff12:41
sorenRight..12:41
dabothen yesterday the internal_id problem came up12:41
soren...right..12:41
daboI fixed it in my branch and was working to finish the password branch12:41
sorenOk.12:42
dabowhen the bug was holding up sandywalsh.12:42
sorenRight.12:42
sandywalsh(and trunk was broken)12:42
sorenRight, right.12:42
daboso instead of finishing password, I tried to create a branch from password minus the password-specific stuff12:42
daboso that sandywalsh could more easily merge into his stuff12:43
sandywalshdid you do this by branching password-reset?12:43
sorendabo: Right.12:43
sandywalshor by ripping out password-reset branch12:43
dabowhen it was proposed for merge, it was rejected because it also contained some code cleanup that wasn't specific to the internal_id bug12:43
dabocode that was going to be in the password branch anyway12:44
sorenWell, not technically "Rejected", but eday votes "Needs fixing" on it, yes.12:44
dabowe were trying to beat the deadline and were rushing to get stuff done in time.12:44
sorens/votes/voted/12:44
dabovishy also objected here on irc12:45
sorenOk.12:45
sorenOk, so those were the problems /yesterday/. I still don't really understand what the problem is today.12:45
daboso now that the deadline's passed, I'm going to start over and do it right12:45
sorenWhy not just let sandywalsh remove the "superfluous" cleanups and propose a clean internal-id fix?12:45
dabosoren: because that's what I'm doing already12:46
sorenAh.12:46
sorenSo there's no problem?12:46
sorenOr?12:46
sorenIf there's problems I'd like to help solve them, that's all.12:47
sorenNot trying to stir up new ones :)12:47
*** zykes- has quit IRC12:47
*** nijaba has quit IRC12:47
*** Xenith has quit IRC12:47
dabosoren: when a bug is introduced and has broken trunk, and a fix is being prepared, it seems silly to argue over trivial name changes as a reason to hold up the fix12:48
daboe.g., compute_api.get() vs. compute_api.get_instance()12:48
sorendabo: You're arguing with the wrong person :)12:48
dabowell, you asked if there were problems... :)12:49
sorenTrue :)12:49
sorenAbout the password reset stuff... There's a blueprint for it, it was approved, etc. Feel free to propose it for merge.12:49
*** nijaba has joined #openstack12:49
sorendabo: FeatureFreeze is *next* Thursday.12:50
sandywalshso, is there anything I can do to help here?12:50
dabosoren: i was planning on it. This stuff about exceptions to the freeze was news to me when I read ttx's email this morning12:50
sandywalshdabo, I can review password-reset and instance-id once there are merge-props12:50
dabosandywalsh: just be patient. I need to be in the code window instead of the irc window12:50
soren:)12:50
sandywalshsoren, do bug fixes require merge-props? I assume yes?12:51
dabosandywalsh: how else would they get merged?12:51
*** Xenith has joined #openstack12:51
sandywalshwell, it gets into that blurry area of merge-prop freeze12:51
sandywalshor is it just blueprint merge-prop freeze?12:52
*** rogue780 has joined #openstack12:52
sandywalshand dabo, sorry if I seemed impatient, I wasn't aware of the gory details of your events yesterday. :)12:53
dabosandywalsh: np12:53
daboI'm just frustrated at unfamiliar processes über alles12:54
sorensandywalsh: You can always propose bug fixes for merge.12:55
sorensandywalsh: All the way up until release.12:55
sorensandywalsh: ...and get them merged, that is.12:55
sandywalshsoren, so the merge-prop freeze is just regarding blueprints, correct?12:56
sorensandywalsh: The freezes only really pertain to new features or stuff that alters behaviour (other than fixing bugs).12:56
sandywalshsoren, thanks for the clarification.12:56
sorensandywalsh: Well, yes.12:56
sorensandywalsh: It's for everything that ought to have a corresponding blueprint.12:56
sandywalshright12:57
sorensandywalsh: If a snazzy, new feature came along that was well written, had good tests etc., but didn't have a blueprint, I wouldn't mind approving it in spite of the lack of blueprint.12:57
sandywalshsoren, and thus my "blurry area" comment above and, I think, the source of dabo's frustration.12:58
*** ctennis has joined #openstack12:58
sandywalshsoren, but I understand your rationale12:58
dabosandywalsh: exactly. Why have a freeze deadline if you can get around it12:58
*** henrichrubin has joined #openstack13:01
*** MarkAtwood has quit IRC13:01
*** anotherjesse has joined #openstack13:02
anotherjesseanyone know why /var/log/messages and /var/log/user.log would be size 0 on a debian/ubuntu box?  I've rebooted13:03
*** henrichrubin has quit IRC13:06
*** calavera has quit IRC13:07
uvirtbotNew bug: #699814 in nova "nova-compute tries to connect to the wrong database at startup" [Undecided,New] https://launchpad.net/bugs/69981413:11
*** adiantum has quit IRC13:14
ttxdabo: I thought my email explained it clearly... but apparently not13:14
dabottx: today's email did. Up until today I had no idea that the deadline was fluid.13:15
ttxdabo: this particular freeze is really about organizing reviews in the last week before FF13:16
ttxdabo: for branches proposed at least one week before, we can ensure review. For branches proposed closer to FF, we can't promise anything13:16
ttxso the freeze says "if you want to get your branch merged, propose it before BMPFreeze, which is one week before"13:17
dabottx: thanks. I said that your email today explained things clearly13:17
ttxthe name "freeze" is probably not the best chosen one. It's really more a deadline :)13:17
ttxdabo: I explained it during Tuesday's meeting as well, fwiw13:17
ttxNote that FeatureFreeze is much stricter.13:18
dabosandywalsh: looks like someone beat me to it. The latest trunk has no use of internal_id13:18
*** WonTu has joined #openstack13:18
*** WonTu has left #openstack13:19
sandywalshhaha13:19
* sandywalsh mergin13:19
*** hggdh has quit IRC13:19
dabowell, this was an extremely productive use of the last 4 hours of my development time.13:20
* ttx wonders how that confusion happened. Usually you have a bug and only one person being assigned to it13:20
*** westmaas has joined #openstack13:21
sandywalshttx perhaps that's where the problem stemmed from, I don't think there was a bug filed for instance_id or the bugs that followed.13:22
sandywalshbut, as is often the case, I'm probably wrong.13:22
ttxsandywalsh: yep -- not filing a bug can be seen as a gain of time, but sometimes triggers catastrophic loss of time13:23
dabothe problem stemmed from the urgency of trying to get everything done by the end of yesterday13:23
daboI had already fixed it in my branch and moved on13:23
ttxdabo: I'm sorry about that. I'll try to communicate better next time13:24
ttxThough all Freezes and Exceptions are already quite documented13:25
*** hggdh has joined #openstack13:25
sandywalshJust tried pycharm ... it's actually quite handy for browsing all the code if nothing else.13:28
sorenanotherjesse: Maybe it just got rotated?13:34
*** nelson__ has quit IRC13:49
*** Abd4llA has quit IRC13:49
*** nelson__ has joined #openstack13:49
*** ramkrsna has quit IRC13:53
*** 36DAAZJLY is now known as guigui13:54
*** westmaas has quit IRC13:57
daboFor our marketing team: http://www.dilbert.com/strips/comic/2011-01-07/14:03
rogue780Does anyone know if the blog post by Anne Gentle on 30 November regarding consolidated docs and tutorials is still fairly accurate or not?14:04
ttxannegentle: ^14:05
annegentlerogue780: the blog post with the goals for Bexar?14:12
rogue780specifically, 1. Docs site – OpenStack needs a central docs.openstack.org site that curates the content from various other sources and gives a good user experience upon landing on it. My goal is to implement this in time for Bexar (February).14:13
rogue780and 5. Tutorials – Now that we have virtual boxes and stack on a stick in progress, we need tutorials that are meaningful and simple enough to step through while still giving an impressive demonstration of the power of OpenStack clouds.14:13
annegentlerogue780: yes, for 1. I have docs.openstack.org set up and am working on seeding content.14:13
*** gondoi has joined #openstack14:13
annegentlerogue780: for 5. I had a volunteer want to write a tutorial for LAMP stack but haven't heard from him for a while. We still have need there.14:14
rogue780cool. we're experimenting with openstack here, but haven't ever had any experience with cloud computing before, so we're finding the learning curve to be rather steep. a consolidated documentation site and a few useful tutorials would be fantastic for people like us who have no cloud experience14:15
rogue780dabo, I just printed that comic off this morning and put it up here at work. It perfectly describes our management at the moment14:16
*** ppetraki has joined #openstack14:17
rogue780bb in 1014:18
ttxlol14:19
*** allsystemsarego_ has joined #openstack14:19
annegentlerogue780: totally grok that sensation "what the heck can this do?"14:20
annegentlerogue780: working on it furiously :) it's a ton of work so collaborators, even newbies, are always welcome. Start writing on the wiki for an easy start, esp. if you're the type who learns by writing down.14:21
*** allsystemsarego has quit IRC14:22
*** allsystemsarego_ has quit IRC14:24
rogue780will do.14:25
*** matclayton has joined #openstack14:25
tr3bucheto/14:26
dabohey trey14:28
*** lvaughn has joined #openstack14:28
sandywalshtrunk still busted ... investigating14:32
*** rogue780 has quit IRC14:32
sandywalshinternal_id http://paste.openstack.org/show/443/14:35
*** mdomsch has joined #openstack14:37
*** dendrobates is now known as dendro-afk14:37
anotherjesseany rackers around who know about performance of slicehost - I run userscripts.org and the server needs replacing - want to chat about net/disk i/o14:37
*** rogue780 has joined #openstack14:37
dabosandywalsh: do you have the latest? My n/c/api.py looks like this: http://paste.openstack.org/show/444/14:38
sandywalshhmm, I just merged from trunk14:38
sandywalshbut will change just in case14:38
*** dendro-afk is now known as dendrobates14:38
sandywalshseems like a bad idea exposing database IDs to the outside world14:40
dabosandywalsh: there's an old saying: intelligent keys aren't14:40
daboPKs in a database should be used for nothing other than relational purposes14:40
sandywalshso why are we exposing instance id ?14:41
dabobecause having one ID column is cleaner than two!14:41
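The trade-off dabo and sandywalsh are debating can be sketched in a few lines. This is an illustration only, not Nova's actual schema: the names `pk` and `public_id` are hypothetical. The idea is to keep the integer primary key purely relational and hand API clients a separate opaque identifier.

```python
import uuid

# Hypothetical sketch: keep the integer primary key internal (joins only)
# and expose an opaque identifier to API clients instead of the row id.
class Instance:
    _next_pk = 1

    def __init__(self):
        self.pk = Instance._next_pk        # internal: relational use only
        Instance._next_pk += 1
        self.public_id = uuid.uuid4().hex  # external: reveals nothing

instances = [Instance(), Instance()]
# Lookup table from the public id back to the internal row.
by_public_id = {i.public_id: i for i in instances}
```

Exposing the PK couples the API to row-allocation order; an opaque id can be remapped or regenerated without touching any relationships.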
*** jdarcy has joined #openstack14:50
*** comstud has quit IRC14:51
sandywalshtrunk is going again \o/14:51
*** vishy has quit IRC14:51
*** sleepson- has quit IRC14:51
*** devcamcar has quit IRC14:51
*** vishy has joined #openstack14:52
*** devcamcar has joined #openstack14:52
*** ChrisAM has quit IRC14:52
*** sleepsonthefloor has joined #openstack14:53
*** ChrisAM1 has joined #openstack14:54
* sandywalsh finds it odd his merge with trunk didn't grab that change14:54
* sandywalsh shrugs and moves on14:54
notmyname5555555514:58
*** ChrisAM1 is now known as ChrisAM14:59
notmynameaaaaaaawwwwweeeeeeeeeeeeeeeeeeeeyyyyyyyyyyybbbbbbbb55555ppppppppppp88888uu4yt5tuyy4ygey4yftyfghgbedbgehdytdgtewdt3rtedtdr3tdrtd3tdtd15:01
notmynameaugh! sorry. 2 year-old attacking15:01
sorenOh, I thought it was your password.15:02
sandywalshnotmyname, thought you were having a stroke15:02
*** f4m8 is now known as f4m8_15:04
xtoddxsoren, vishy: i merged newlog2 to trunk, but should be good for merge again.  had to remove references to internal_id and swap in nova.log for logging in a few places.15:04
notmynamesoren: hunter2 is my pw15:04
*** ArdRigh has quit IRC15:04
*** spectorclan has joined #openstack15:07
sorennotmyname: Mine too!15:08
sorenxtoddx: I'm about to leave for now. Want to have a quick chat about the version branch?15:08
xtoddxsoren: sure, let me pull up your branch real quick15:09
sorenCool.15:09
sandywalshdabo, are you going to reissue your merge prop for password reset?15:11
dabosandywalsh: I never proposed it. I was too busy trying to fix the internal_id stuff15:11
xtoddxsoren: so yours isolates the code generation to vcsversion, and the rest stays the same?15:11
daboworking on it today. One test still breaks.15:12
sandywalshdabo, thx15:12
sorenxtoddx: I leave all code generation to bzr.15:12
xtoddxsoren: yea, i like that better15:12
sorenxtoddx: That's the primary change, really.15:12
xtoddxsoren: want to just file a merge prop for yours?15:13
sorenxtoddx: Sure, I could do that.15:13
xtoddxsoren: i'll mark mine as rejected15:13
sorenxtoddx: Cool beans.15:13
annegentlesoren, xtoddx, how do you retrieve the version info after installing?15:14
xtoddxannegentle: import nova.version ; print nova.version.version_string()15:15
rogue780openstack@openstack:~$ euca-authorize default -P tcp -p 22 -s 0.0.0.0/015:15
rogue780OperationalError: (OperationalError) attempt to write a readonly database u'INSERT INTO security_groups (created_at, updated_at, deleted_at, deleted, name, description, user_id, project_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?)' ('2011-01-07 15:14:47.529301', None, None, False, 'default', 'default', u'ttx', u'myproject')15:15
annegentlextoddx: great, thanks15:15
sorenxtoddx: https://code.launchpad.net/~soren/nova/version/+merge/4551715:19
* soren has to run. May be back this evening.15:19
*** mdomsch has quit IRC15:29
*** glenc has joined #openstack15:31
*** befreax has quit IRC15:31
*** glenc_ has quit IRC15:32
*** kevnfx has joined #openstack15:43
*** arthurc has joined #openstack15:45
*** kevnfx has quit IRC15:46
*** dfg_ has joined #openstack15:47
*** dragondm has joined #openstack15:55
*** dragondm has joined #openstack15:56
uvirtbotNew bug: #699878 in nova "Compute worker silently fails when XS host runs out of space" [Undecided,New] https://launchpad.net/bugs/69987816:02
*** dubsquared has joined #openstack16:14
*** enigma has joined #openstack16:14
*** kevnfx has joined #openstack16:19
*** guigui has quit IRC16:20
*** piken has joined #openstack16:26
pikenHey all.16:26
pikenSo I am thinking of trying something odd with openstack and wanted to get a few pointers.16:26
pikenWe have a non-cloud system that we need to provision software to in a bsd jail style environment that we spawn on demand.16:27
pikenI have written a script engine in Python to do the provisioning and it works well.16:27
*** dubsquared has quit IRC16:27
pikenI was wondering what the possibility of using the nova components and message queue to do the calls to the provisioning system for it.16:27
pikenie a provision that doesn't hit nova-network or nova-compute, but my internal system instead.16:28
pikenwould that be feasible?16:28
*** dubsquared has joined #openstack16:34
dubsquaredmorning/afternoon #openstack!16:34
*** jaypipes has joined #openstack16:34
dubsquaredanyone privy of nova-objectstore getting fixed in the packages?16:35
vishypiken: seems like there was a blueprint for lxc containers that may do exactly that.16:38
jaypipeshello again, #openstack. :)16:38
KnightHackergholt: I just submitted my patch. Should I "propose for merging"?16:38
vishypiken: as i understand it lxc is roughly equivalent to a bsd jail and is supported by libvirt.16:40
vishypiken: if you want to implement your scripting version, it might be good to implement it as a new compute driver with possibly a new network manager as well16:41
vishypiken: blueprint is here https://blueprints.launchpad.net/nova/+spec/bexar-nova-containers looks like it hasn't been started yet16:41
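vishy's suggestion (implement piken's scripted provisioning as a new compute driver) would have roughly this shape. This is a heavily hedged sketch: `BaseDriver`, `spawn`, and `destroy` are stand-ins for nova's actual compute driver contract, which is not reproduced here, and `provision-jail` is an invented name for piken's script engine's entry point.

```python
class BaseDriver:
    # Stand-in for nova's compute driver interface (hypothetical names).
    def spawn(self, instance):
        raise NotImplementedError

    def destroy(self, instance):
        raise NotImplementedError


class JailProvisionDriver(BaseDriver):
    """Drives an external bsd-jail-style provisioning engine instead of
    a hypervisor backend; builds the command line rather than running it."""

    def __init__(self, provision_cmd="provision-jail"):
        # Hypothetical CLI entry point for the provisioning script engine.
        self.provision_cmd = provision_cmd

    def spawn(self, instance):
        return [self.provision_cmd, "create", instance["name"]]

    def destroy(self, instance):
        return [self.provision_cmd, "delete", instance["name"]]
```

The point of the shape: nova's manager/queue plumbing stays untouched, and only the driver knows it is talking to jails instead of a hypervisor.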
*** Ryan_Lane|sleep is now known as Ryan_Lane16:42
*** rlucio has joined #openstack16:44
*** rlucio has quit IRC16:45
*** troytoman has joined #openstack16:52
dabowhat's the deal with the copyright comments at the top of code files? Do we change 'em to 2010-2011, or just 2011, or leave 'em as 2010?16:58
ttxhey jay, nice vacation ?16:58
ttxjaypipes: ^16:58
ttxjaypipes: I marked i18n-support and image-service-use-glance-clients "slow progress" because I assumed they still needed some code merged. Feel free to correct me16:59
jk0dabo: I think annegentle started changing them to 2010-2011 in her docs17:00
dabobut if I'm submitting a new script... do I make them 2011?17:00
*** maplebed has joined #openstack17:00
jaypipesttx: yup, that's correct17:01
jaypipesttx: and, yes, great vacation thx :)17:01
dubsquarednew nova install (using packages), using UEC ubuntu image…following errors —> http://paste.openstack.org/show/445/  http://paste.openstack.org/show/446/17:02
rogue780the instance I'm trying to launch is stuck on pending17:03
*** ccustine has joined #openstack17:03
uvirtbotNew bug: #699910 in nova "Nova RPC layer silently swallows exceptions" [Undecided,In progress] https://launchpad.net/bugs/69991017:06
creihtnelson__: btw, in order to contribute code to openstack, we need you to sign a CLA, as described here: http://wiki.openstack.org/HowToContribute17:07
nelson__already done.17:07
creihtcool17:07
creihtdid you add yourself here: http://wiki.openstack.org/Contributors17:07
creiht?17:07
vishyxtoddx: pep8 on newlog217:08
*** brd_from_italy has quit IRC17:09
uvirtbotNew bug: #699912 in nova "When failing to connect to a data store, Nova doesn't log which data store it tried to connect to" [Undecided,In progress] https://launchpad.net/bugs/69991217:12
nelson__yes, I added myself seconds ago.17:16
creihtnelson__: awesome... thanks!17:17
creihtand thanks for your contribution17:17
*** glenc has left #openstack17:22
* annegentle does a docs happy dance around nelson__ 17:22
nelson__:)17:22
*** glenc has joined #openstack17:22
*** kashyapc has joined #openstack17:23
*** dirakx has quit IRC17:23
jaypipesgah, holy full inbox Batman :(17:24
* jaypipes can't imagine how a man or woman coming back from maternity/paternity leave ever gets back into the swing of work...17:25
edayjaypipes: welcome back!17:25
jaypipeseday: thx mate! :)17:26
jaypipeseday: how was your holiday?17:26
edayjaypipes: pretty uneventful, which is good in its own way :)17:26
jaypipeseday: indeed17:26
edayglad to be back in the cold weather?17:26
jaypipeseday: was good to see the "kids". I missed my puglet. :)17:26
edayhehe17:27
jaypipeseday: heh, yeah, stepping off the plane in Columbus to 20 degrees was, well, ass-baggery.17:27
edaynice17:27
*** dragondm has quit IRC17:28
*** kashyapc has quit IRC17:29
*** Abd4llA has joined #openstack17:38
*** Abd4llA is now known as Guest3807917:38
daboJust pushed the merge prop for the password reset blueprint: https://code.launchpad.net/~ed-leafe/nova/xs-password-reset/+merge/4553717:39
*** befreax has joined #openstack17:39
*** dragondm has joined #openstack17:43
spectorclanjaypipes: I assume it was GREY as well in Columbus17:43
jaypipesspectorclan: yup!17:45
jaypipesdabo: well played, Mr Leafe.17:45
jaypipesdabo: spoken with Sean Connery accent...17:45
dabojaypipes: how so?17:45
*** ccustine has quit IRC17:46
jaypipesdabo: xs-password-reset17:46
*** brd_from_italy has joined #openstack17:47
jaypipesdabo: I was attempting to say nice work on the password reset stuff :)17:47
dabojaypipes: ah. Thought you were commenting on my submitting the day after the freeze. :)17:48
jaypipesdabo: heh, no, I will let ttx fret about such things ;)17:48
dubsquaredlol17:51
*** leted has joined #openstack17:52
*** joearnold has joined #openstack17:53
*** Guest38079 has quit IRC17:55
*** comstud has joined #openstack17:58
*** ChanServ sets mode: +v comstud17:58
*** jdurgin has joined #openstack17:58
*** arthurc has quit IRC18:02
uvirtbotNew bug: #699929 in nova "All the nova- services show up as [?] with services --status-all" [Undecided,New] https://launchpad.net/bugs/69992918:02
*** fabiand_ has left #openstack18:03
*** spsneo has joined #openstack18:04
spsneojaypipes: ping18:04
*** ccustine has joined #openstack18:09
*** trin_cz has quit IRC18:09
jaypipesspsneo: well, hi there Sid :)18:10
spsneojaypipes: I was just reading about nova and swift18:10
spsneoboth of them sound interesting18:10
spsneothough I think I should start with nova18:11
spsneoI just signed the CLA18:11
spsneojaypipes: what are you working on currently?18:12
jaypipesspsneo: awesome. let us know when you have questions on stuff as you're reading through the code18:12
jaypipesspsneo: I work on Glance and a bit on Nova right now.18:12
spsneojaypipes: what's glance?18:13
spsneojaypipes: I could only see nova and swift18:13
spsneojaypipes: so should I start with bugs?18:13
jaypipesspsneo: it is the image registry and delivery server (glance.openstack.org)18:13
spsneojaypipes: ok18:13
jaypipesspsneo: you can, sure, or documentation would also be good...18:13
spsneojaypipes: where's the documentation?18:13
spsneojaypipes: ok got it18:14
jaypipesspsneo: google is your friend ;)18:14
spsneojaypipes: :D18:14
*** hash9 has joined #openstack18:14
*** daleolds has joined #openstack18:15
*** dirakx has joined #openstack18:20
*** hadrian has joined #openstack18:20
*** hash9 has quit IRC18:22
Ryan_Lanehow do the floating IPs work? I added a single IP using "nova-manage floating create <ipaddress> <computenode>", but when I go to allocate it, I get a "NoMoreAddresses" error18:22
*** kevnfx has quit IRC18:31
*** Jordandev has joined #openstack18:34
Ryan_Laneah. I need to use the network-node as a host, not the compute-node :)18:37
openstackhudsonProject nova build #366: SUCCESS in 1 min 17 sec: http://hudson.openstack.org/job/nova/366/18:39
*** matclayton has left #openstack18:43
openstackhudsonProject nova build #367: SUCCESS in 1 min 18 sec: http://hudson.openstack.org/job/nova/367/18:44
*** soliloquery has joined #openstack18:44
*** MarkAtwood has joined #openstack18:49
sandywalshRyan_Lane, I think I was talking to you yesterday about the mapping json file in the api-parity branch?18:50
sandywalshRyan_Lane, we were talking about 2-way functions, yes?18:50
Ryan_Lanesandywalsh: we were talking about display_name in that branch :)18:51
sandywalshRyan_Lane, ah, rats ... sorry18:51
Ryan_LaneI do remember you having that conversation with someone18:51
* sandywalsh ponders18:51
Ryan_Lanewas it eday?18:51
sandywalshcould have been. eday did we talk about two-way functions for api-parity yesterday?18:51
edaysandywalsh: I was talking to you about 2-way function for those ids18:53
sandywalsheday, phew18:53
sandywalsheday, so one proposal I received was utf-8 -> bigint18:53
sandywalsheday, still makes for BUN (big ugly number)18:53
sandywalsheday, but, it's reversible18:54
sandywalsheday, personally, I think for the short term we live with it.18:55
sandywalsheday, and let our hatred of it grow until we can contain it no longer18:56
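The "utf-8 -> bigint" idea sandywalsh mentions can be sketched as a trivial reversible encoding: interpret the UTF-8 bytes of the name as one big-endian integer. This is only an illustration of why the scheme is reversible yet yields a BUN; it is not code that landed anywhere.

```python
def name_to_bigint(name: str) -> int:
    # Interpret the UTF-8 bytes of the name as a single big-endian integer.
    return int.from_bytes(name.encode("utf-8"), "big")


def bigint_to_name(value: int) -> str:
    # Reverse the mapping: unpack the integer back into its original bytes.
    length = (value.bit_length() + 7) // 8
    return value.to_bytes(length, "big").decode("utf-8")


# Reversible, but even a short name becomes a "big ugly number".
encoded = name_to_bigint("instance-0001")
```

A 13-character name already encodes to a number on the order of 10^31, which is exactly the "live with it until our hatred grows" objection.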
vishywho is mdragon on here?18:56
sandywalshMonsyne Dragon ... ozone dev18:57
sandywalshworking on the xs-console bp18:57
vishysandywalsh: is he on irc?18:57
sandywalshvishy, yes, but I think he's at lunch right now (SAT)18:57
vishysandywalsh: I have a question about the way xen consoles work18:57
sandywalshvishy, I just did a review, I can take a stab at it?18:58
iRTermitevishy: yea, he's not around.  did you want me to go see if he's here?18:58
vishysandywalsh: does the console proxy have to run on the same host as the instance?18:58
vishysandywalsh: or does xen have some proxying magic that it does under the covers18:59
sandywalshvishy, heh, that was one of my questions too.18:59
sandywalshvishy, sorry, don't have an answer, but my gut says it runs on the same server as the console application18:59
vishysandywalsh: because for ajaxconsole we had to create a little proxy to send traffic from the api-server to the compute host18:59
vishysandywalsh: in most deployments the compute hosts aren't publicly accessible19:00
sandywalshvishy, right ... the api server seems to be the right place for it19:00
vishysandywalsh: so we will need something similar if xen isn't doing it magically19:00
sandywalshvishy, so you may need two proxies? dom0 -> api proxy : api proxy -> public ?19:01
vishysomething like that19:01
*** leted has quit IRC19:02
*** irahgel has left #openstack19:02
vishyyou can see how anthony addressed it in the ajaxconsole branch19:02
sandywalshvishy, hmm, the instance listens for vnc connections, so wouldn't just one proxy work?19:02
vishythe instance itself?19:02
vishyyou mean the hypervisor?19:02
sandywalshah, no, you're right. the hypervisor runs the vnc service19:03
vishyjust need a proxy from public into host port that the hypervisor is running on19:03
sandywalshbut still, it listens for connections, so wouldn't one proxy work?19:03
sandywalshyes19:03
vishyiRTermite: if you see him, ask him to read through this scrollback and we can discuss19:04
sandywalshvishy, dragon is here now19:04
dragondmya19:04
sandywalshdragondm, do you have the thread ^^19:04
dragondmya19:04
* iRTermite bows out and goes back to eating19:05
dragondmsandywalsh: about the driver class, yes that is the base class for the console drivers19:08
dragondm(driver.ConsoleProxy)19:08
edaysandywalsh: thats fine, it's just unusable for anything else besides testing/small installation purposes :)19:12
sandywalshdragondm, I guess I'm getting confused by the term "console" when it's really a "console proxy". Driver doesn't really match either. (more questions in the merge comments)19:14
dragondmyah,  I am typing up a comment now in reply ...19:14
sandywalsheday, yeah. I just figured tweaking objectstore at this stage was not worthwhile.19:15
*** trin_cz has joined #openstack19:17
sandywalshso vishy, dragondm will answer the "where does the proxy run" question in the merge prop comments.  https://code.launchpad.net/~mdragon/nova/xs-console/+merge/4532419:17
vishysandywalsh: sounds good19:18
*** hadrian has quit IRC19:18
dragondmvishy: what do you need to know? The way rackspace runs things w/ xenserver is thus:19:19
dragondmWe have separate console hosts (which are set up for public access). For xenserver there is a proxy daemon called xvp (Xenserver Vnc Proxy) which sets up the password access, and can multiplex vnc connections if the client supports that.19:20
vishydragondm: I'm about to head out for lunch.  I'm just trying to figure out how a user connects to a console that is running on a compute host when the compute host is generally not publicly accessible19:21
dragondmit's thru the console host, which is a separate node19:21
vishydragondm: ah so xvp is is a vnc proxy19:21
dragondmyup19:21
vishydoes it work with non xen vnc consoles?19:21
vishydragondm: ok makes sense to me.19:22
dragondmbtw, my xs-console stuff is designed to be fairly flexible. You could use another vnc proxy w/  say kvm, if you added a different driver19:22
jk0for xen classic we do something similar, but use ajaxterm instead19:22
dragondmyup19:22
dragondmsandywalsh: did you see the comment I added?19:23
sandywalshdragondm, doing so now19:23
*** jc_smith has joined #openstack19:23
sandywalshdragondm, so what I'm proposing is that driver.py be renamed to proxy.py, since that's what it is.19:24
*** rlucio has joined #openstack19:24
sandywalshdragondm, console is implied by the namespace19:25
dragondmvishy:  are you asking about xvp working w/ non xen?  I don't think so, it talks to the xenserver via xenapi.19:25
dragondmsandywalsh: ??  er actually it's the driver.19:25
sandywalshdragondm, re: 574 you can use stubout to achieve the same results. no magic flags required19:25
dragondmstubout?19:26
dragondmsandywalsh: actually I suppose I might rename the ConsoleProxy class to DriverBase or somesuch.19:27
sandywalshdragondm, look at nova/tests/api/openstack/test_servers19:27
sandywalshdragondm, I think I'm getting confused with the term driver in this context19:27
sandywalshdragondm, what is it a driver of? console proxies?19:27
*** hadrian has joined #openstack19:28
dragondmit's the driver for the manager class.  Like the xen_conn  for libvirt is the driver for the compute manager19:28
dragondmer like xen_conn or libvirt_conn ...19:28
sandywalshyes, but it represents a proxy entry in the xvp daemon19:30
dragondmto be specific it's the base class for the driver, of which there is 1 currently (XVPConsoleProxy)19:30
sandywalshyes, I've followed that. The naming was confusing (but something I can easily get over)19:31
dragondm??  I'm confused...  represents the proxy entry?19:31
sandywalshok, so ... the manager is a factory of drivers.19:32
dragondmI can rename that class to something like DriverBase or somesuch19:32
sandywalshdrivers are facades over proxy entries (which get handed down to xvp) and go in the db.19:33
*** fabiand_ has joined #openstack19:33
sandywalshthere are different drivers (ideally) for different console mechanisms19:33
*** rogue780 has quit IRC19:33
dragondmyes there are different drivers for different mechanisims19:33
sandywalshso my confusion was the filename didn't match the class.19:33
dragondmah19:34
sandywalsheither the filename was wrong or the class was wrong19:34
sandywalshat first I was going to suggest the class being ConsoleDriver19:34
dragondmI was following the example of the other workers; they have standard module names (manager/driver/api)19:34
sandywalshbut then it seemed that manager was a factory for proxy entries, so the filename was wrong.19:34
dragondmno the driver is the driver, not a factory.19:35
sandywalshright, the manager is the factory of drivers19:35
dragondmI can rename that class19:35
sandywalshanywho ... it caused a schism in my brain. So I asked :)19:36
dragondmok19:36
sandywalsh:)19:36
sandywalshso, the Manager is instantiated by the service, correct?19:37
dragondmyup19:37
sandywalshok, so that answers my other question. :)19:37
sandywalshand the xvp daemon runs, where? api server or dom0?19:38
dragondmneither.  it runs on  a separate console server19:38
sandywalshso how do public requests get to it?19:38
dragondmwhich is where the nova-console service will run too19:38
sandywalshnova-console wouldn't be accessible to the outside world though, would it?19:39
sandywalshonly api is19:39
dragondmthe console servers will have public interfaces. And no, nova-console is not accessible to the outside world19:40
dragondmyou call the openstack api, and get back a host/port/password combo that you can connect to w/ a vnc client19:41
uvirtbotNew bug: #700015 in nova "Headers in virt/images.py function _fetch_s3_image are sent improperly to curl" [Undecided,New] https://launchpad.net/bugs/70001519:41
dragondm(in our case the client will  be a java applet)19:41
*** nelson__ has quit IRC19:41
sandywalshah, ok, so console server and nova-console are different machines.19:41
*** nelson__ has joined #openstack19:42
dragondmEr?19:42
dragondmdefine console-server ?19:42
*** Ryan_Lane is now known as Ryan_Lane|food19:42
sandywalshyou said above "the console servers will have public interfaces"19:42
dragondmyes19:42
sandywalshthese are different machines than the machines that nova-console runs on?19:43
*** leted has joined #openstack19:43
dragondmno19:43
sandywalshok, so they offer public interfaces on a separate nic?19:44
dragondmbut the worker does not expose a public interface (and presumably would talk over a separate network interface from the public)19:44
dragondmyes19:44
dragondmthis is how, afaik, rackspace's  current 'orange' code works19:44
sandywalshcool, can they run on different machines?19:45
dragondmno. the nova-console  manages the daemon process19:45
dragondm(that's it's basic purpose)19:45
sandywalshok, cool. do you think that will fly with other deployments?19:46
sandywalshbut I suppose they can make a patch if they need it :)19:46
dragondmif they are using xenserver...19:46
*** leted has quit IRC19:47
sandywalshok, that's a great help. Thank for the clarifications dragondm!19:47
dragondmor if they add a driver for a different vnc proxy19:47
dragondmok19:47
sandywalshI'll summarize in the merge19:48
dragondmyah. I was hoping the  info I put in the blueprint  spec explained that architecture19:48
sandywalshyou should try the stubout thing mentioned ^^19:48
dragondmok19:48
sandywalshwould be nice to take the magic flags out19:48
sandywalshI may have missed that point19:49
sandywalshmy biggest hurdle was the pool vs. proxy vs. driver thing19:49
*** befreax has quit IRC19:51
dragondmya. Really the whole xvp+nova-console server is the proxy. 'consoles' are entries in the proxy's config, and 'pools' are collections of consoles proxied *from* a specific hypervisor host.19:51
sandywalshcan one xvp proxy from multiple hypervisors?19:52
dragondmyes. that is why there are pools19:53
sandywalshperfect ... thx19:53
*** aliguori has quit IRC19:55
sandywalshdragondm, approved19:57
sandywalsh(I still have to get it running though)19:57
dragondmstuill can't get xvp installed?19:58
dragondmer, still19:59
EdwinGrubbsletterj: are you still having login problems on Launchpad?20:00
sandywalshdragondm, haven't gotten to it yet20:01
dragondmah ok20:01
sandywalshdragondm, but there's nothing in your code that should bust the rest of trunk, so it's low risk20:01
sandywalshvishy, that was rather a long thread above, but did you get the gist of it?20:02
sandywalshvishy, the nova-console server runs the proxy as well and the assumption is that machine is dual-nic (one public, one private)20:02
*** dfg_ has quit IRC20:02
dragondmalso, nova workers don't expose an interface anyway (since they make the connection to rabbit)20:04
dubsquaredryan_lane in the house?20:05
*** Ryan_Lane|food is now known as Ryan_Lane20:05
Ryan_Lanejust got back :)20:05
sandywalshdragondm, good point20:05
dubsquarednice20:05
dubsquaredryan_lane:  bug 700015 - this is causing issues with the UEC images not being able to boot….and i suppose every other image as well20:06
uvirtbotLaunchpad bug 700015 in nova "Headers in virt/images.py function _fetch_s3_image are sent improperly to curl" [Undecided,New] https://launchpad.net/bugs/70001520:06
dubsquaredthat one20:06
dubsquaredlol20:06
Ryan_Lanedubsquared: I'd imagine :)20:06
Ryan_Lanenot sure about public images20:06
Ryan_Lanedefinitely for private ones20:06
Ryan_Lanethere's a few other bugs like this I'm tracking down20:06
edaytr3buchet: you around?20:07
dragondmsandywalsh: so the console box could be single nic, as well, with a firewall in front of it that only allowed public access to the vnc port(s)20:07
tr3bucheteday yes20:07
dubsquarednice, i just noticed yesterday there was a merge that broke the packages…and been swimming through logs trying to see what happened20:07
Ryan_Lanefor instance, in this pastebin: http://pastebin.com/VB0kFV5Y20:07
edaytr3buchet: is set_admin_password a function that should not happen if a lock on the instance is active?20:07
Ryan_Lane-append root=/dev/vda1 console=ttyS020:08
Ryan_Laneshould be:20:08
Ryan_Lane-append 'root=/dev/vda1 console=ttyS0'20:08
dubsquaredhttp://paste.openstack.org/show/445/ and http://paste.openstack.org/show/446/ is the errors im dealing with20:08
tr3bucheteday, that's a good question. i had originally planned on lock preventing changes of state20:08
sandywalshdragondm, but it needs access to the private network to talk to the rabbit cluster doesn't it?20:08
openstackhudsonProject nova build #368: SUCCESS in 1 min 20 sec: http://hudson.openstack.org/job/nova/368/20:09
tr3buchetbut i think it should also apply to set_admin password20:09
Ryan_Lanedubsquared: my fix would solve at least part of your problem20:09
edaytr3buchet: ok20:09
dragondmyes20:09
Ryan_Lanethis is invalid to send to curl: -H Authorization: AWS d1fb4c58-7cd0-4e18-80dc-f362a71c0390:dubproj:L+9AEtMYn3PA0zUEhhQgSG14CqE=20:09
dubsquaredryan_lane:  /me cheers20:10
dragondmsandywalsh: thus the firewall20:10
dubsquaredyeah, that is what i was thinking20:10
Ryan_Laneit must be wrapped in quotes20:10
sandywalshright20:10
dubsquaredsooooo…'part' of my problem…20:10
Ryan_LaneI have a feeling you'll run into more problems after patching that ;)20:11
Ryan_LaneI am20:11
Ryan_Lanedue to other things that need to be quoted20:11
dubsquaredcrap20:12
Ryan_Lanetracking em down :)20:12
dubsquaredI have a handful of images built…9.04, 10.04, 10.10, and centos 5.4 and centos 5.5 ...20:13
dubsquaredso i just wiped a box to restart the entire process just to ensure it was seamless...20:13
dubsquaredand kaboom20:13
dubsquared:|20:13
letterjEdwinGrubbs: Yes20:15
*** lvaughn has quit IRC20:16
*** lvaughn has joined #openstack20:17
letterjEdwinGrubbs: I just did the "Forgot Password" process again20:17
*** kevnfx has joined #openstack20:18
EdwinGrubbsletterj: do you have multiple email addresses on your account?20:19
letterjYes20:20
EdwinGrubbsletterj: can you try "Forgot my password" with the other ones. Unfortunately, the login.launchpad.net accounts don't always match up exactly with the launchpad accounts.20:20
EdwinGrubbsletterj: I just noticed that you have 666 karma. I think that is considered really bad karma.20:22
*** aliguori has joined #openstack20:22
gholtWe are talking about letterj here. ;)20:23
letterjEdwinGrubbs: Using the other email address worked.   Thanks for taking the time to help me.20:24
EdwinGrubbsletterj: If you go back to https://login.launchpad.net/   , you should see a management screen with a "Manage email addresses" link so you can get your other email address correctly connected to the account.20:27
*** kevnfx has quit IRC20:27
*** joearnold has quit IRC20:27
vishyRyan_Lane what is that b64 encoded string?20:28
vishysandywalsh: followed thanks20:29
creihtlol20:29
openstackhudsonProject nova build #369: SUCCESS in 1 min 21 sec: http://hudson.openstack.org/job/nova/369/20:29
creihtEdwinGrubbs: is there any way to lock letterj's karma to 666? :)20:29
sandywalshvishy, does that work for your deployment model?20:30
EdwinGrubbscreiht: I believe you need to use an incantation.20:30
creihtmaybe an IncantationFactory20:30
* EdwinGrubbs casts fireball at that joke20:31
vishysandywalsh: we aren't using xen so no20:32
vishy:)20:32
vishysandywalsh: but i think it could work with an equivalent proxy service.20:32
sandywalshvishy, right, I just mean if you had a restriction of having to put the proxy on the console-server with dual-nics. Would that fly?20:34
creihtheh20:34
vishysandywalsh: yes probably, I don't think there is any reason why you couldn't run it on the network or api host20:35
sandywalshvishy, excellent. thx20:37
dragondmvishy: yah, you could run the console service on one of the other hosts20:43
dragondmwe (rs) just separate them for security + scaling20:43
Ryan_Lanevishy: which encoded string?20:44
vishyRyan_Lane: at the end of the curl header you pasted above?20:45
Ryan_Lanevishy: pasted from what curl was trying to run20:46
Ryan_Laneauthorization header20:46
Ryan_Lanerelated to the last bug/merge I made20:46
vishyRyan_Lane: ah i guess it is the b64 encoded hmac for the request to s3.  I was just curious why it never failed before.  Perhaps it only fails if there happens to be a + in the header?20:50
Ryan_Lanelikely20:50
Ryan_Laneit's a special char causing the problem20:50
Ryan_Laneit's something getting interpreted by bash20:50
vishymaybe =20:50
vishybecause that code hasn't changed recently, so it was odd to see it suddenly fail20:51
* Ryan_Lane nods20:51
Ryan_LaneI'm having other issues too :(20:51
Ryan_Lanesomething bad must be getting passed to qemu on my nodes20:52
dubsquared+120:52
Ryan_LaneI can't get instances to start20:52
vishybad qemu20:52
Ryan_Lanethere is a bug in the template20:52
Ryan_Lanefor sure20:52
*** joearnold has joined #openstack20:52
vishyoh?20:52
Ryan_Laneor libvirt20:52
vishyare you running kvm or qemu?20:52
Ryan_Laneqemu20:52
vishymight be out of ram20:53
vishythat causes things to fail20:53
Ryan_Lanecould be that too :)20:53
vishyvery badly20:53
Ryan_Lanebut...20:53
Ryan_Lane <cmdline>root=/dev/vda1 console=ttyS0</cmdline>20:53
dubsquaredvishy:  full logs from the issue yesterday —> http://paste.openstack.org/show/445/ and http://paste.openstack.org/show/446/20:53
Ryan_Lane^^ seems to be problematic20:53
uvirtbotRyan_Lane: Error: "^" is not a valid command.20:53
Ryan_Laneon the command line that is turned into:  -append root=/dev/vda1 console=ttyS020:54
Ryan_Lanewhere it should likely be -append 'root=/dev/vda1 console=ttyS0'20:54
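Both failures in this thread (the curl Authorization header above and the qemu -append argument) are the same class of bug: a value containing spaces interpolated unquoted into a shell command line, so the shell splits it into separate arguments. A sketch of the general fix using Python's shlex.quote (illustrative, not the actual Nova patch; the header value below is a stand-in, not a real signature):

```python
import shlex

# Stand-in values mirroring the two broken cases from the logs.
auth_header = "Authorization: AWS access-key:project:c2lnbmF0dXJl+b64="
kernel_args = "root=/dev/vda1 console=ttyS0"

# Unquoted, the shell splits each value at the spaces, so curl and qemu
# receive truncated arguments.
bad_curl = "curl -H " + auth_header

# shlex.quote wraps each value so the shell passes it through verbatim.
good_curl = "curl -H " + shlex.quote(auth_header)
good_qemu = "qemu -append " + shlex.quote(kernel_args)
```

Building the command as an argument list (and skipping the shell entirely) avoids the problem altogether, which is the more robust fix when the calling code allows it.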
Ryan_Lanememory was definitely a problem :)20:57
*** ctennis has quit IRC21:00
Ryan_Lanetemplate wasn't a problem at all it seems :)21:02
Ryan_Lane\o/21:02
*** pothos_ has joined #openstack21:02
*** spsneo has left #openstack21:03
*** nelson__ has quit IRC21:03
*** nelson__ has joined #openstack21:03
*** pothos has quit IRC21:04
*** pothos_ is now known as pothos21:04
uvirtbotNew bug: #700106 in glance "DescribeRegions is not working" [Undecided,New] https://launchpad.net/bugs/70010621:06
Ryan_Lanebah. UEC images seem to require UEC :(21:11
Ryan_Lane2011-01-07 21:02:40,499 - DataSourceEc2.py[WARNING]: waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id21:12
nelson__Is there a program which just tests the auth server?21:13
nelson__Or has that not been viewed as a priority because it's just a dev tool?21:13
creihtnelson__: well there are unit and functional tests, but beyond that, I don't think so21:14
creihtnelson__: Is there something in particular that you were wanting to test?21:15
*** ctennis has joined #openstack21:16
*** ctennis has joined #openstack21:16
vishysoren, ttx, dendrobates: the ppa is broken i have a fix here21:17
nelson__Still trying to figure out what's going on with my upload problem.21:18
vishyhttp://pastie.org/143839421:18
nelson__it's almost as if something is contacting the account server when it should be contacting the container server.21:18
nelson__but I checked the logs and the port numbers.21:18
vishyRyan_Lane: which networking mode are you using?21:19
Ryan_Lanevishy: flat21:19
nelson__So I was wondering if there were stand-alone tests for the container service or account service.21:19
vishyRyan_Lane, ok then you'll have to set up a manual forward for metadata from your gateway21:19
Ryan_Laneoh. it can access it?21:19
vishyapi server provides metadata21:19
Ryan_Laneahhh. ok21:20
creihtnelson__: Well there is a check that is done to see if the account exists, and that part is failing21:20
Ryan_Lanehow do I go about that?21:20
vishyforwarding rules are set up automatically for flatdhcp and vlan21:20
vishyyou need to create an iptables type rule on your gateway21:20
creihtfor some reason it is appending your container name to the account, when it is checking to see if the account exists21:20
creihtand that is why it 404s21:20
creihtI traced all that down yesterday evening just to verify21:20
creihthow it happens, I'm not entirely sure21:20
nelson__right. that's why I was wondering if somehow I misconfigured the account server on the container service's address.21:21
creihthrm21:21
nelson__I think I'll look at the proxy next to see what it's calling....21:21
vishyiptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination <api_ip>:877321:22
Ryan_Laneawesome. thanks21:22
creihtnelson__: can you run swift-ring-builder account.builder21:22
creihtand21:22
vishyRyan_Lane: np21:22
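Vishy's DNAT rule above, shown in context as a hedged sketch (the API address `10.0.0.1:8773` is illustrative, the rule must run as root on the gateway, and it is not persisted across reboots):

```shell
# Forward EC2 metadata requests (169.254.169.254:80) arriving at this
# gateway to the nova-api host, which serves the metadata service.
iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp \
  --dport 80 -j DNAT --to-destination 10.0.0.1:8773

# Verify the rule is in place:
iptables -t nat -L PREROUTING -n | grep 169.254.169.254
```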
creihtswift-ring-builder container.builder21:22
creihtand paste the output?21:22
nelson__GASP!21:23
Ryan_Lanevishy: in this mode do I also need to handle NAT manually for floating IPs?21:23
nelson__oh, wait, no,21:23
nelson__they're all accessing devices on port 6002, but ... that's storage.21:23
vishyRyan_Lane: floating IP natting will probably be nasty21:24
creihtaccount, container, and storage should all have different ports21:24
Ryan_Laneis that pretty much only supported well in vlan mode?21:24
vishyRyan_Lane: it all works fine as long as the network node is also the gateway21:24
Ryan_Laneyeah21:24
vishyRyan_Lane: it should work fine in flatdhcp21:24
Ryan_LaneI'm already planning on doing that :)21:25
creihtnelson__: by default, object is 6000, container is 6001, and account is 600221:25
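The default port layout creiht describes, as a toy lookup helper (this function is purely illustrative, not part of swift). A ring that points every service at 6002, the account port, would explain container requests 404ing:

```shell
# Default Swift storage service ports, per creiht:
#   object: 6000, container: 6001, account: 6002
swift_default_port() {
  case "$1" in
    object)    echo 6000 ;;
    container) echo 6001 ;;
    account)   echo 6002 ;;
  esac
}

swift_default_port container   # prints 6001
```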
vishysomeone was supposed to make a set of instructions about using flatdhcp with only one interface, can't remember who know21:25
vishys/know/now21:25
Ryan_LaneI'd love to be able to have more than one interface ;)21:26
Ryan_Lanethat's likely coming in cactus, right?21:26
nelson__http://paste.openstack.org/show/447/21:27
Ryan_LaneGeneral error mounting filesystems.21:27
Ryan_LaneA maintenance shell will now be started.21:27
Ryan_Lane:D21:27
creihtnelson__: yeah your container ring isn't right21:27
nelson__cool!21:27
creihtI would also check object.builder just to be sure21:27
nelson__bad is good!21:27
creihtindeed21:27
nelson__:)21:27
creihtI was tossing and turning last night, racking my brain trying to figure out what was causing the issue21:28
creihtnelson__: so you have everything going to one hard drive on each machine?21:28
nelson__yes, 5 machines.21:29
creihtk21:29
nelson__dedicated filesystem. XFS21:29
Ryan_Lanevishy: is there some way around natting the floating IPs?21:29
*** leted has joined #openstack21:30
colinnichcreiht: I'm getting slick at setting up swift machines now - on my 3rd storage server of the session and finished from a clean install of ubuntu on a 4 drive machine in about 8 minutes :-)21:32
vishyRyan_Lane: not really, it would require a lot of code changes21:32
* Ryan_Lane nods21:33
Ryan_Laneso is the natting something that nova takes care of, or something I need to do manually?21:33
*** westmaas has joined #openstack21:33
vishyRyan_Lane: it does the natting, but the outgoing route will be all messed up21:33
vishyRyan_Lane: so outgoing packets will look like they are coming from the wrong ip which i suspect will cause all sorts of strange issues.21:34
creihtcon:)21:34
creihterm21:34
creihtcolinnich: :)21:34
Ryan_Lanevishy: how do I avoid this issue?21:35
vishyuse flatdhcp and let nova create the rules for you?21:35
vishy:)21:35
Ryan_LaneI think that'll be more than doable :)21:35
Ryan_Lanenova.network.manager.FlatDhcpManager?21:36
vishyaye, the tricky thing is setting up multiple hosts with that guy21:36
Ryan_Laneheh21:36
vishyif you have 2 eth devices it is easier21:37
nelson__ creiht: okay, so I'm going to go through everything checking the port numbers. probably swapped something during the config. Thanks for your help. I'll let you know when I've got it figured out.21:37
Ryan_Lanethere's always a catch!21:37
Ryan_Lanevishy: I have 4, in fact :D21:37
creihtnelson__: sounds good21:37
vishyoh in that case it shouldn't be too bad21:37
vishyjust make sure that when you set --flat_interface in your flagfile that it is an interface that doesn't already have an ip on it21:38
vishyit can be a vlan or whatever21:38
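Pulling vishy's advice together, a hypothetical flagfile sketch for flat DHCP mode. The manager class name is taken from Ryan_Lane's line above; the interface and bridge names are assumptions, not from the log:

```
# Hypothetical nova flagfile fragment for FlatDHCP networking.
--network_manager=nova.network.manager.FlatDhcpManager
# Must be an interface that does not already carry an IP:
--flat_interface=eth1
--flat_network_bridge=br100
```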
Ryan_Laneis there documentation for this mode?21:39
Ryan_Laneno examples on the doc site :(21:39
vishyRyan_Lane: the tricky part is if  you only have one interface on the host, and you can't make vlans.  You have to add the hosts public ip to the bridge and bridge it into the interface and use that bridge for the private ip as well21:40
vishyRyan_Lane: examples are slim, the guy who I was working with on the one interface version was supposed to write something21:41
vishyRyan_Lane: i think it was patri0t???21:41
Ryan_Lanewell, I have more than one interface...21:41
vishyRyan_Lane: there are some nice docstrings in nova/network/manager.py21:41
Ryan_Laneis there a multiple interface version of the doc?21:41
Ryan_Laneok21:41
vishyRyan_Lane: it is mostly just setting a few flags21:42
Ryan_Laneoh. ok21:42
Ryan_Laneshouldn't be too bad then21:42
Ryan_Lanevishy: thanks :)21:42
*** maplebed has quit IRC21:53
*** maplebed has joined #openstack21:53
*** lvaughn_ has joined #openstack21:54
*** soliloquery has quit IRC21:55
*** fabiand_ has quit IRC21:56
*** lvaughn has quit IRC21:56
*** ccustine has quit IRC21:57
*** westmaas has quit IRC21:58
*** arcane has joined #openstack22:00
*** jdarcy has quit IRC22:02
nelson__creiht: oops. swift-ring-builder gets unhappy when you try to remove everything.22:04
nelson__I may just discard and rebuild from scratch, to get going. but I'll fix the bug later.22:05
creihtnelson__: yeah it would probably be better to rebuild from scratch22:05
colinnichcreiht: swauth.... how do you authenticate against it? ie what url for st?22:05
creihtgholt: -^22:06
colinnichcreiht: :-)22:06
creihtcolinnich: I'm reviewing the merge prop right now, but haven't gotten that far yet :)22:06
colinnichcreiht: I tried to read the built docs, but they are on a remote server and I'm reading an html file using more. Perhaps not the best plan :-)22:07
*** cynb has left #openstack22:08
creihtcolinnich: in your source tree, it will probably be easier to read doc/source/overview_auth.rst22:08
colinnichcreiht: looks to have created users ok, can see them on the storage nodes and swauth-list works22:09
colinnichcreiht: will do22:09
vishyeday: want to take a quick look, I don't want to approve until your Needs Information has been changed.22:10
rlucioanyone know how often the new builds are pushed into the trunk ppa?  its still showing build 527, and we are on 529 now22:10
vishyeday: https://code.launchpad.net/~soren/nova/iptables-security-groups/+merge/4376722:10
vishyrlucio: ppa is broken, trying to find someone who can fix it22:11
rluciovishy: ok thx for the info22:11
nelson__oh fail, rebalance REALLY doesn't like rebalancing down to one storage server.22:13
creihtlol22:13
creihtum yeah... don't do that :)22:13
nelson__yeah, probably not a real use case.22:13
nelson__in my copious spare time I'll fix it. probably when I add test cases to the docstrings.22:14
sorenvishy: PPA is broken how?22:17
creihtcolinnich: My first guess is that you point your auth to a proxy:808022:17
gholtcolinnich: For an saio, I use st -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat22:17
sorenvishy: eday's concerns have already been adressed on that mp.22:17
gholtBut the main part is that you put auth in front of whatever the path was.22:18
creihtahh22:18
vishysoren: yes the nova-manage patch is failing22:18
creihtcolinnich: ignore what I said :)22:18
vishysoren: http://pastie.org/143839422:18
creiht8080 is the saio port for the proxy22:18
vishysoren: yeah i figured, you also still have the branch marked wip22:18
sorenvishy: Good point.22:19
sorenvishy: Set back to "Needs review".22:19
gholtcolinnich, creiht: I went ahead and added the example st line to the description of the merge proposal. :)22:20
vishysoren: so you think i can go ahead and approve it?22:20
creihtgholt: cool, thanks22:20
sorenvishy: If you want to wait for eday's approval, that's fine with me.22:20
gholtBut letterj has found a couple other issues I need to work through. One is internal/external urls and the other is purging tokens.22:20
creihtk22:21
sorenvishy: Ah. /me looks at the ppa thing22:22
sorenvishy, rlucio: PPA build fixed. New packages on their way.22:27
vishysoren: wh00t22:27
rluciosoren: thanks for the heads up22:27
sorenARGH!22:27
sorenThere's  2709 jobs in the PPA queue.22:27
vishysoren: you noticed that one of your merges failed with no test output?22:28
sorenestimated 14 hours.22:28
sorenvishy: Nope.22:28
*** spectorclan has quit IRC22:28
vishysoren: the version one22:28
sorenWeird.22:28
* soren investigates22:28
uvirtbotNew bug: #700140 in nova "DescribeRegions is not working" [Undecided,New] https://launchpad.net/bugs/70014022:31
sorenuvirtbot: I have a patch for that!22:32
uvirtbotsoren: Error: "I" is not a valid command.22:32
jk0soren: you snooze, you lose22:32
jk0:P22:32
vishysoren: i thought the trunk ppa was built and uploaded by hudson?22:34
sorenvishy: It is?22:34
sorenvishy: Oh, I see what you mean.22:34
sorenvishy: Hudson builds source packages.22:34
sorenvishy: Uploads them to Launchpad which then builds the packages.22:34
vishysoren: i see22:34
vishysoren: is your patch the same as the one in the bug?22:35
sorenvishy: Maybe. Haven't looked at the bug.22:36
*** aliguori has quit IRC22:36
sorenvishy: No.22:36
sorenvishy: Mine is wrong. His isn't. :)22:36
*** glenc_ has joined #openstack22:37
vishysoren: loflcopter22:37
Ryan_Lanevishy: would you happen to see anything blatantly wrong with this config: http://pastebin.com/fzzjESyY22:39

Ryan_Lane?22:39
*** glenc has quit IRC22:39
vishyyou don't need vlan start22:40
vishyflat_interface needs to be an interface22:40
Ryan_Laneoh?22:40
Ryan_Laneah. can't be a bridge?22:41
vishynot a bridge22:41
vishycorrect22:41
Ryan_Laneon the compute nodes as well?22:41
vishyit creates br100 and bridges into an interface22:41
vishycorrect22:41
Ryan_Laneok22:41
Ryan_Laneand if I wanted it to use vlan 103?22:41
vishyflat_interface=vlan10322:41
vishyshould be fine22:42
colinnichgholt, creiht: that's it now, thanks22:42
Ryan_Laneah. so I can make a vlan interface, and tell it to use that22:42
Ryan_Lanethanks22:42
vishyyes22:42
vishyand you need to specify the same on the compute hosts22:43
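A hedged sketch of creating the VLAN interface discussed above, so that `--flat_interface=vlan103` has something to bind to. The parent interface `eth0` is an assumption, the commands need root, and the interface is deliberately left without an IP so nova can bridge `br100` into it:

```shell
# Create an 802.1q interface for VLAN 103 on eth0 (iproute2 form;
# names are illustrative) and bring it up without assigning an IP.
ip link add link eth0 name vlan103 type vlan id 103
ip link set vlan103 up
```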
Ryan_Laneawesome :)22:43
*** ppetraki has quit IRC22:44
*** enigma has left #openstack22:44
*** gasbakid has quit IRC22:46
*** adiantum has joined #openstack22:47
vishysoren: we want to set up our own ppa.  Is there a good set of instructions somewhere?22:48
*** troytoman has quit IRC22:54
*** brd_from_italy has quit IRC22:59
Ryan_Lanevishy: for this do I need to have the network service installed on all compute nodes too?23:01
vishynope23:01
vishyjust one network host needed23:01
Ryan_Lanehmm23:01
jeremybwill this have much HA?23:02
jeremybtesla's 1 box now?23:02
Ryan_Lanevishy: any reason instances may be stuck in "networking" state?23:03
Ryan_Lanejeremyb: well, it won't be very HA without migration and other HA features either ;)23:03
jeremybRyan_Lane: i meant the controller itself23:03
vishydo you get any errors in the log23:03
Ryan_Lanenope23:03
vishysounds like the call to network to get the ip is failing23:03
jeremyband if there's a "networking" service that too23:04
vishyoh23:04
vishyyou need to set --network_host flag on compute hosts23:04
Ryan_Laneahhhhhh23:04
Ryan_Laneok23:04
vishyso they know which one to use23:04
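Vishy's fix, as a hypothetical flagfile fragment for each compute host (the hostname `network1` is illustrative):

```
# Point compute hosts at the single nova-network host so the call
# to fetch instance IPs goes to the right place.
--network_host=network1
```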
*** trin_cz has quit IRC23:05
Ryan_Lanejeremyb: dunno. that's a good question23:06
Ryan_Lanevishy: with this configuration, I'm pretty dependent on the network node, right? is there any way to have redundancy on it?23:06
jeremybdoes it tunnel everything through there or it just hands out IPs?23:06
Ryan_Lanejeremyb: tunnel23:06
Ryan_LaneNAT23:06
jeremybhuh23:06
jeremybwhy not ospf or some such?23:07
Ryan_Laneall vms are on a private subnet by default23:07
vishyRyan_Lane: that is something that we are looking at.  We're thinking hot spare might be the best way23:07
Ryan_Laneyeah. likely23:07
Ryan_Lanejeremyb: when I discussed this with mark, his first idea was "well, we'll just use NAT" :D23:07
sorenvishy: I don't know any off the top of my head. I could google for them for you, but you're a big boy. You can work it out :D23:08
colinnichgholt: cosmetic copy and paste error in your overview-auth doc - "sat": "http://ord.storage.com:8080/v1/AUTH_8980f74b1cda41e483cbe0a925f448a9" for storage and servers23:08
* soren heads bedwards23:09
vishysoren: anthony tried one and it failed miserably, hence the q23:10
jeremybRyan_Lane: my network foo needs some work and i think there may also be some confusing black magic somewhere23:12
uvirtbotNew bug: #700151 in nova "nova volume checks for /dev/vg which doesn't always exist" [Undecided,New] https://launchpad.net/bugs/70015123:12
* jeremyb heads to datacenter23:12
Ryan_Laneheh23:12
sleepsonthefloorI notice now that our instance_ids are short, like "i-1" - is that intentional?23:14
*** dirakx has quit IRC23:14
*** jdurgin has quit IRC23:15
*** aliguori has joined #openstack23:17
*** hggdh has quit IRC23:18
*** aliguori has joined #openstack23:20
edaysleepsonthefloor: yes, we removed the random secondary id23:22
openstackhudsonProject nova build #370: FAILURE in 10 sec: http://hudson.openstack.org/job/nova/370/23:23
sleepsontheflooreday: ok cool, looked a little funny so I wanted to ask.  Definitely easier to type though :)23:23
Ryan_Laneis there supposed to be a dhcp process running on the network server?23:26
Ryan_Lanecause I'm seeing arps on the flat dhcp network, but nothing is answering it23:27
Ryan_Lanemissing dnsmasq :(23:30
colinnichgholt: swift-bench doesn't seem to work. It is looking for NamedLogger in utils.py which isn't in the Bexar code. From what I can make out, the version in your branch is out of date - the trunk swift-bench uses NamedFormatter23:33
colinnichgholt: probably not a real problem, but thought I'd let you know23:33
colinnichgholt: (your swauth2 branch)23:33
colinnichgholt: tried copying the trunk swift-bench in in the hope that it worked, but of course it didn't - failed with a formatting error23:37
* colinnich is off to bed23:41
jeremybgood night23:41
colinnichjeremyb: :-)23:41
*** hggdh has joined #openstack23:43
*** reldan has joined #openstack23:43
*** gondoi has quit IRC23:46
*** dubsquared has quit IRC23:51
*** hggdh has quit IRC23:51
*** hggdh has joined #openstack23:52
uvirtbotNew bug: #700162 in glance "Can't upload image more than 2GB using glance.client" [Undecided,New] https://launchpad.net/bugs/70016223:56
*** dwight_ has quit IRC23:57

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!