Wednesday, 2012-08-01

asalkeldso s34n change your devstack to rabbit00:00
*** halfss has joined #openstack-dev00:00
asalkeld(not qpidd)00:00
s34nasalkeld: it was rabbit by default00:00
asalkeldI changed mine00:00
asalkeldso go with default00:00
s34nasalkeld: you changed to qpid, right?00:01
asalkeldyip00:01
asalkeldfedora default00:01
asalkeldfor openstack00:01
s34nasalkeld: ok. beam.smp seems to be there for rabbit00:03
s34nI stopped rabbit, but beam is lingering. I'm not sure how to make it go away nicely00:04
*** steveb_ has quit IRC00:05
jkoelkerthe shutdown script from rabbit should stop it00:05
jkoelkerany epmd will linger00:05
jkoelkers/any/only/00:05
jkoelkerif you're not concerned about messages in it, you can just kill it00:06
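
A minimal cleanup sketch along the lines described above, assuming rabbitmqctl and epmd are on the PATH; killing epmd is only safe when nothing else on the host still needs the Erlang port mapper:

    import subprocess

    # Ask RabbitMQ to shut down cleanly; this should take beam.smp down with it.
    subprocess.call(['sudo', 'rabbitmqctl', 'stop'])

    # epmd (the Erlang port mapper daemon) lingers by design; kill it only if
    # no other Erlang application on the box is using it.
    if subprocess.call(['pgrep', '-x', 'epmd']) == 0:
        subprocess.call(['epmd', '-kill'])
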
*** mikal has quit IRC00:09
*** halfss has quit IRC00:11
*** mikal has joined #openstack-dev00:11
*** novas0x2a|laptop has quit IRC00:12
*** Aaton is now known as Aaton_off00:16
*** mnewby has quit IRC00:16
*** spiffxp has quit IRC00:17
*** ecarlin has quit IRC00:21
*** zhuadl has joined #openstack-dev00:22
*** Ryan_Lane has quit IRC00:25
*** littleidea has quit IRC00:26
*** halfss has joined #openstack-dev00:28
*** jakedahn_zz is now known as jakedahn00:28
*** johnpostlethwait has quit IRC00:29
*** anniec has quit IRC00:32
*** anniec has joined #openstack-dev00:32
*** koolhead17 has joined #openstack-dev00:36
*** halfss has quit IRC00:37
*** maoy has joined #openstack-dev00:39
*** littleidea has joined #openstack-dev00:41
*** pixelbeat has quit IRC00:42
*** jakedahn is now known as jakedahn_zz00:45
*** nati_uen_ has joined #openstack-dev00:48
*** nati_uen_ has quit IRC00:48
*** nati_uen_ has joined #openstack-dev00:48
*** amotoki_ has joined #openstack-dev00:48
*** spiffxp has joined #openstack-dev00:49
*** sandywalsh has quit IRC00:49
*** nati_ueno has quit IRC00:50
*** amotoki has quit IRC00:50
*** edygarcia has quit IRC00:51
*** spiffxp has quit IRC00:52
*** hemna has quit IRC00:53
*** mikal has quit IRC00:56
*** mikal has joined #openstack-dev00:58
*** steveb_ has joined #openstack-dev01:02
*** Mandell_ has quit IRC01:07
*** ncode has joined #openstack-dev01:08
*** Dr_Who has joined #openstack-dev01:13
*** ncode has quit IRC01:15
*** cloudvirt has joined #openstack-dev01:18
*** mfer has joined #openstack-dev01:19
*** johnpur has quit IRC01:19
*** mfer has quit IRC01:20
*** zhuadl has quit IRC01:22
*** cloudvirt has quit IRC01:23
*** danwent has joined #openstack-dev01:24
*** PotHix has quit IRC01:26
*** jtran has quit IRC01:32
*** rkukura has joined #openstack-dev01:42
*** nati_ueno has joined #openstack-dev01:42
*** issackelly has quit IRC01:45
*** roge has joined #openstack-dev01:45
*** nati_uen_ has quit IRC01:45
*** littleidea has quit IRC01:46
*** Dr_Who has quit IRC01:47
*** danwent has quit IRC01:49
*** nunosantos has quit IRC01:49
*** primeministerp has quit IRC01:52
*** blamar has joined #openstack-dev01:53
*** nati_ueno has quit IRC01:55
*** anniec has quit IRC01:59
*** primeministerp has joined #openstack-dev01:59
*** roge has quit IRC02:00
*** jog0 has quit IRC02:01
*** jdurgin has quit IRC02:02
*** roge has joined #openstack-dev02:03
*** gongys has quit IRC02:06
*** bencherian has quit IRC02:07
*** amotoki_ has quit IRC02:11
*** littleidea has joined #openstack-dev02:12
*** sunxin has joined #openstack-dev02:15
*** bencherian has joined #openstack-dev02:15
*** zhuadl has joined #openstack-dev02:18
*** Dr_Who has joined #openstack-dev02:19
*** Dr_Who has joined #openstack-dev02:20
*** Dr_Who has quit IRC02:22
*** littleidea has left #openstack-dev02:23
*** halfss has joined #openstack-dev02:24
*** halfss has quit IRC02:26
*** nati_ueno has joined #openstack-dev02:30
*** troytoman is now known as troytoman-away02:31
*** dolphm has joined #openstack-dev02:31
*** nati_ueno has quit IRC02:31
*** nati_ueno has joined #openstack-dev02:32
*** Dr_Who has joined #openstack-dev02:33
*** koolhead17 has quit IRC02:35
*** n0ano has quit IRC02:38
*** rkukura has quit IRC02:42
*** jakedahn_zz is now known as jakedahn02:43
*** nati_ueno has quit IRC02:43
*** bencherian has quit IRC02:46
*** koolhead17 has joined #openstack-dev02:48
*** blamar has quit IRC02:50
*** bencherian has joined #openstack-dev02:52
*** blamar has joined #openstack-dev02:57
*** koolhead17 has quit IRC02:57
*** novas0x2a|laptop has joined #openstack-dev02:59
*** sunxin has quit IRC03:02
*** blamar has quit IRC03:09
*** bencherian has quit IRC03:10
*** Mandell has joined #openstack-dev03:10
*** blamar has joined #openstack-dev03:10
*** amotoki has joined #openstack-dev03:10
*** sunxin has joined #openstack-dev03:12
*** blamar has quit IRC03:18
*** blamar has joined #openstack-dev03:19
*** bcwaldon has quit IRC03:28
*** bcwaldon has joined #openstack-dev03:28
*** Dr_Who has quit IRC03:31
*** nati_ueno has joined #openstack-dev03:32
*** blamar has quit IRC03:34
*** nati_ueno has quit IRC03:35
*** blamar has joined #openstack-dev03:36
*** nati_ueno has joined #openstack-dev03:36
*** roge has quit IRC03:36
*** mdomsch has joined #openstack-dev03:39
*** blamar has quit IRC03:42
*** blamar has joined #openstack-dev03:43
*** blamar has quit IRC03:45
*** blamar has joined #openstack-dev03:46
*** blamar has joined #openstack-dev03:47
*** danwent has joined #openstack-dev03:47
*** blamar has quit IRC03:47
*** danwent has quit IRC03:50
*** lzyeval has quit IRC03:51
*** danwent has joined #openstack-dev03:53
*** jaypipes has quit IRC04:01
*** jakedahn is now known as jakedahn_zz04:15
*** jakedahn_zz is now known as jakedahn04:17
*** jakedahn is now known as jakedahn_zz04:20
*** nati_uen_ has joined #openstack-dev04:24
*** nati_uen_ has quit IRC04:24
*** nati_uen_ has joined #openstack-dev04:24
*** thingee is now known as thingee_zz04:25
*** nati_ueno has quit IRC04:27
*** mokas has quit IRC04:27
*** sacharya has quit IRC04:31
*** mokas has joined #openstack-dev04:37
*** Ryan_Lane has joined #openstack-dev04:48
*** nati_ueno has joined #openstack-dev05:12
*** nati_ueno has quit IRC05:12
*** nati_ueno has joined #openstack-dev05:13
*** anniec has joined #openstack-dev05:14
*** nati_uen_ has quit IRC05:15
*** mnewby has joined #openstack-dev05:30
*** mnewby has quit IRC05:30
*** spiffxp has joined #openstack-dev05:37
*** littleidea has joined #openstack-dev05:37
*** johnpur has joined #openstack-dev05:41
*** ChanServ sets mode: +v johnpur05:41
*** johnpur has quit IRC05:42
*** avishay has joined #openstack-dev05:45
*** hemna has joined #openstack-dev05:52
*** steveb_ has quit IRC05:54
*** nati_uen_ has joined #openstack-dev06:01
*** hattwick has quit IRC06:01
*** nati_ueno has quit IRC06:04
*** spiffxp has quit IRC06:05
*** sunxin has quit IRC06:08
*** nati_ueno has joined #openstack-dev06:09
*** nati_uen_ has quit IRC06:11
*** nati_ueno has quit IRC06:21
*** nati_ueno has joined #openstack-dev06:22
*** steveb_ has joined #openstack-dev06:38
*** steveb_ has quit IRC06:47
amotokidanwent: hello!06:56
danwenthello06:57
amotokiI could not attend the IRC network meeting yesterday.06:57
amotokiIn the meeting agenda, my patch about quantum keystone support is listed.06:57
amotokiIs there any action to work on it?06:58
*** xchu_ has quit IRC06:58
danwentamotoki: i'm actually testing it out right now...06:58
danwentbut ran into an unrelated issue06:59
danwenti should be able to +2 it tonight06:59
amotokithanks.06:59
danwentthen we'll need to ask dean or some other devstack core to approve it.06:59
amotokiI understand the situation.07:00
amotokiBTW, I have two questions about subnet creation in the v2 API.07:00
danwentok, i have to run for a sec, but I will be back in 5 minutes.07:00
danwentfeel free to ask, and I will respond then07:00
amotokiIn the API spec on the wiki, we can specify subnet info in create_network(). Is it planned to be supported in F-3?07:02
amotokiIn horizon work, I plan to specify network and subnet info in a single operation. So I am interested in the status of create_network().07:04
amotokiThe other is related to multiple subnets for a network.07:05
amotokiIn the current implementation two IPv4 subnets can be created on a network, but create_port() assigns only one IPv4 address to a port.07:06
amotokiIs it a bug? I think it is confusing for a user.07:06
*** nati_uen_ has joined #openstack-dev07:10
*** reidrac has joined #openstack-dev07:12
*** m4xmr has joined #openstack-dev07:12
*** m4xmr has left #openstack-dev07:12
danwenton first question07:13
amotokihi07:13
*** nati_ueno has quit IRC07:13
danwentwe had originally planned on letting someone specify a list of full subnet dictionary values during network create, and quantum would automatically create the subnets on their behalf.07:13
danwentwe never implemented that, and currently, there's no bug to do so.07:14
danwentthough I think from a UI perspective, that's the right workflow07:14
danwentbut we can always have the UI make it a single step, which breaks down into multiple API calls07:14
*** shang has quit IRC07:14
amotokiMy current implementation makes multiple API calls.07:14
danwenti think that's fine, personally07:15
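
A minimal sketch of that two-call workflow, assuming the python-quantumclient v2 API; the credentials, endpoint, and CIDR are placeholders:

    from quantumclient.v2_0 import client

    quantum = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://127.0.0.1:5000/v2.0/')

    # First API call: create the network.
    net = quantum.create_network({'network': {'name': 'net1'}})
    net_id = net['network']['id']

    # Second API call: create a subnet on that network.
    quantum.create_subnet({'subnet': {'network_id': net_id,
                                      'ip_version': 4,
                                      'cidr': '10.0.0.0/24'}})
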
danwenton the second question07:15
*** erikzaadi has joined #openstack-dev07:15
danwentif no list of fixed_ips is specified, the port gets only a single IP.07:16
danwentif we want the port to get an IP on each subnet, we could specify a fixed_ip list of [ { subnet_id: XXX}, { subnet_id: YYY} ]07:16
danwentand it would be allocated an IP on both subnets.07:17
*** alex88 has joined #openstack-dev07:17
*** alex88 has joined #openstack-dev07:17
danwentthe reason it was designed this way is that the common use case for having multiple subnets for a network is that you run out of space on a subnet (e.g., a public network).  In this scenario, you really just want the port to get a single available IP, not an IP from each subnet.07:18
amotokiI see. That sounds reasonable.07:18
danwentboth very good questions, thanks for asking07:19
amotokiIt sounds like multiple IP subnet pools for a single network07:19
amotokiThanks very much. I am happy to know the background of the design.07:20
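
A sketch of the fixed_ips behaviour described above, reusing the hypothetical quantum client from the earlier snippet; the port only gets an address on each subnet when both subnets are listed explicitly (subnet IDs below are placeholders):

    # 'quantum' and 'net_id' are as in the earlier sketch.
    subnet_a = 'SUBNET-A-UUID'   # placeholder
    subnet_b = 'SUBNET-B-UUID'   # placeholder

    # With no fixed_ips the port would get a single IP from one subnet;
    # listing both subnets requests an address on each.
    port = quantum.create_port({'port': {
        'network_id': net_id,
        'fixed_ips': [{'subnet_id': subnet_a},
                      {'subnet_id': subnet_b}],
    }})
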
danwentbtw, gabriel said that things are coming along well on the horizon work.  I'm very happy to hear that :)07:20
amotokii will send a status update on Horizon work. I think it is coming near review-ready.07:22
*** davidha has quit IRC07:28
*** nati_uen_ has quit IRC07:29
*** nati_ueno has joined #openstack-dev07:29
*** EmilienM has joined #openstack-dev07:29
*** Ryan_Lane has quit IRC07:40
*** dachary has joined #openstack-dev07:48
*** mancdaz has quit IRC07:52
*** mancdaz has joined #openstack-dev07:55
*** sulochan has joined #openstack-dev07:55
alex88hello, has anyone tried openstack with a kernel > 3.4? i've tried it to get O_DIRECT working with fuse, and in my configuration VMs weren't getting an IP; rolling back to 3.2 works fine.07:57
*** sulochan has quit IRC07:57
*** sulochan has joined #openstack-dev08:01
*** darraghb has joined #openstack-dev08:01
*** nati_ueno has quit IRC08:04
*** Mandell has quit IRC08:06
*** pixelbeat has joined #openstack-dev08:17
*** erikzaadi has quit IRC08:24
*** kzhen has left #openstack-dev08:31
*** derekh has joined #openstack-dev08:34
*** hattwick has joined #openstack-dev08:43
*** derekh has quit IRC08:48
*** davidha has joined #openstack-dev09:08
*** eglynn has joined #openstack-dev09:15
*** johngarbutt has quit IRC09:17
*** derekh has joined #openstack-dev09:18
*** derekh has quit IRC09:21
*** derekh has joined #openstack-dev09:22
*** troytoman-away has quit IRC09:28
*** troytoman-away has joined #openstack-dev09:31
*** mcaway is now known as markmc09:31
*** dolphm_ has joined #openstack-dev09:33
*** dolphm has quit IRC09:36
*** halfss has joined #openstack-dev10:03
*** apevec has joined #openstack-dev10:04
*** cyberbob has joined #openstack-dev10:14
*** halfss has quit IRC10:24
*** danwent has quit IRC10:30
*** zhuadl has quit IRC10:31
*** alexpilotti has joined #openstack-dev10:59
*** mancdaz has quit IRC11:06
*** GheRivero has quit IRC11:09
*** alexpilotti has quit IRC11:10
*** troytoman-away is now known as troytoman11:16
*** rods has joined #openstack-dev11:16
*** GheRivero has joined #openstack-dev11:22
*** cyberbob has quit IRC11:23
*** sunxin has joined #openstack-dev11:27
*** wiliam has joined #openstack-dev11:30
*** gargya has joined #openstack-dev11:47
*** chrisfer has joined #openstack-dev11:47
*** huats has quit IRC11:54
*** maurosr has joined #openstack-dev11:57
*** davidha has quit IRC12:07
*** davidha has joined #openstack-dev12:07
*** rkukura has joined #openstack-dev12:14
*** sandywalsh has joined #openstack-dev12:14
*** zhuadl has joined #openstack-dev12:22
*** cloudvirt has joined #openstack-dev12:23
*** Shrews has joined #openstack-dev12:24
*** dprince has joined #openstack-dev12:31
*** lts has joined #openstack-dev12:35
*** salgado has joined #openstack-dev12:38
*** salgado has joined #openstack-dev12:38
*** huats has joined #openstack-dev12:40
*** huats has joined #openstack-dev12:40
*** sacharya has joined #openstack-dev12:41
*** sunxin has quit IRC12:42
*** sunxin has joined #openstack-dev12:44
*** lorin1 has joined #openstack-dev12:45
dprinceayoung: I assigned https://bugs.launchpad.net/keystone/+bug/1031022 to you sir.12:47
uvirtbotLaunchpad bug 1031022 in keystone "update auth_token to default signing_dir w/ os USER as suffix" [High,In progress]12:47
*** GheRivero has quit IRC12:54
*** sacharya has quit IRC12:55
*** timello has joined #openstack-dev12:58
*** glenc_ has joined #openstack-dev13:00
*** shang has joined #openstack-dev13:02
*** glenc has quit IRC13:03
*** thovden has joined #openstack-dev13:14
*** tgall_foo has joined #openstack-dev13:15
*** roge has joined #openstack-dev13:15
*** ayoung has quit IRC13:16
*** ayoung has joined #openstack-dev13:17
eglynnrussellb: if you get a chance, can you give https://review.openstack.org/10130 another look? (corresponding tempest change has landed)13:19
*** kbringard has joined #openstack-dev13:20
*** markmcclain has joined #openstack-dev13:20
*** thovden has quit IRC13:22
*** halfss has joined #openstack-dev13:23
asalkelddhellmann, you up and about?13:29
*** sacharya has joined #openstack-dev13:29
asalkeld(about to head off to bed - thought I'd catch you ...)13:29
*** garyk has quit IRC13:30
mtaylormorning chmouel13:30
chmouelmtaylor: hello13:30
*** flaviamissi has joined #openstack-dev13:30
mtaylorchmouel: sorry for using your email to complain :)13:30
chmouelmtaylor: ah no I totally agree with you heh :)13:30
mtaylorchmouel: hehe. good! :)13:31
mtaylorchmouel: I sent you a pkg_resources-based patch (it's not a large patch)13:31
chmouelmtaylor: yeah saw that it's missing a import :)13:32
mtaylorchmouel: BAH. imports should import themselves!13:32
chmouelheh13:33
mtaylorme hides13:33
*** koolhead17 has joined #openstack-dev13:33
chmouelbut I definitely like the idea13:33
mtaylorchmouel: I can make a POC plugin that uses it to install too if you like13:33
chmouelI am trying to find a way so HP/RAX can provide an auth_url endpoint directly in the plugin without the user needing to specify it13:34
mtaylorchmouel: ++13:34
dansmitheglynn: I like it! thanks for that.. I was actually punting on how that got reported (see my other comments).. I'll work on that change13:34
chmouelmtaylor: for the plugin yes please if you can do this quickly13:34
eglynndansmith: cool, thanks!13:35
chmouelmtaylor: i mean if that's not something that would take you too much time13:35
mtaylorchmouel: and I think, different from random env vars in the code, we _do_ have a history of having plugins in the tree...13:38
mtaylorchmouel: it'll just be a few minutes...13:38
*** dolphm_ has quit IRC13:38
chmouelmtaylor: you mean having the rax/hp auth plugin directly in tree? i guess it would then need to be in openstack-common as this would be shared across clients13:39
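
A rough sketch of the pkg_resources approach under discussion; the entry-point group name and plugin interface here are illustrative, not the final novaclient API:

    import pkg_resources

    def load_auth_plugin(auth_system):
        # Scan installed packages for an auth plugin advertised under a
        # (hypothetical) entry-point group and return the callable it exposes.
        for ep in pkg_resources.iter_entry_points('openstack.client.auth_plugin'):
            if ep.name == auth_system:
                return ep.load()
        raise ValueError('unknown auth system: %s' % auth_system)

An external package such as rackspace-auth-openstack would then only need to declare that entry point in its setup.py to be discovered automatically.
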
* chmouel think his english is getting worse but he's multitasking by watching france-sweden women's handball at the same time13:41
*** maoy has quit IRC13:41
*** zhuadl has quit IRC13:42
*** alex88 has quit IRC13:43
*** matiu has joined #openstack-dev13:45
mtaylorchmouel: https://github.com/emonty/python-novaclient/commit/6af2c914c3eb40bfcd6d35286bf691d8a7c5f4ad13:53
mtaylorchmouel: ah, hadn't thought about htat - yeah, inside of novaclient would be silly :)13:54
mtaylorchmouel: I stuck it in tree for now for sake of argument13:54
chmouelyeah13:54
dhellmannasalkeld: looks like we're on different schedules, my day is just starting13:54
mtaylorchmouel: in fact, ooh! instead of my original patch to remove the plugin, we could make one patch first which moves rax auth to a plugin but would not break end users13:55
dhellmannasalkeld: I'll reply to your emails13:55
mtaylorchmouel: and then we could work on making an externally installable version of the plugin with the same results, and then remove it from the tree13:55
mtaylorchmouel: so that people depending on the current behavior don't get screwed13:55
chmouelwell the nova under rax cloud has just been released today13:56
chmouelso it doesn't matter to break compatibility13:56
mtaylorI've been using that nova for openstack build farm for 6 months13:56
* mtaylor points out he was submitting a patch that was going to make his own life hell13:57
chmouelyeah13:57
chmouelwas going to says that :)13:57
*** glenc has joined #openstack-dev13:58
chmouelwe would just need to have people move from the env variable NOVA_AUTH_RAX to OS_AUTH_SYSTEM=rackspace2_0 (or whatever it's called)13:58
mtaylorchmouel: I like your idea about having plugins be able to provide the auth_url too ... that actually makes it nicer than the previous scheme, where you had to know the url, but you also had to know to do NOVA_RAX_AUTH13:58
mtayloryea13:58
chmouelit should not be too hard to handle NOVA_RAX_AUTH and auth_url would always be overridable13:59
mtaylor++13:59
chmouelI can package all this and works with your patches and propose for a review if you don't mind14:00
mtaylorsounds great to me14:00
chmouelkeep everything in python-novaclient for now14:00
chmoueland would move them to openstack.common when i'll get that in swiftclient after14:01
mtayloryeah. it's a smoother starting point, I think14:01
mtayloryeah14:01
*** glenc_ has quit IRC14:01
mtaylorwell, moving to openstack.common will be hard before openstack.common becomes a library...14:01
chmouelyeah, it sucks we probably would have to duplicate until we have library14:02
*** mnewby has joined #openstack-dev14:05
*** PotHix has joined #openstack-dev14:05
*** mdomsch has quit IRC14:08
mtaylorchmouel: actually, that might get problematic with how entry points work. we might need to either just go ahead and make external hp and rackspace auth plugin packages14:12
mtaylorchmouel: or make an openstack-auth-plugins package14:13
chmouelpersonally I would say that should be the job from the rax/hp to do that14:13
*** Shrews has quit IRC14:13
mtaylortotally14:13
mtayloras soon as we're happy with the entrypoints interface we want to use, it should be easy to hand that off - I'm happy to write the first version of code for each of them :)14:14
*** ayoung has quit IRC14:14
chmouelI don't think we need more authenticate14:16
chmouelmaybe a post or a pre_authenticate hook for some use case14:16
*** edygarcia has joined #openstack-dev14:16
*** ayoung has joined #openstack-dev14:18
*** dolphm has joined #openstack-dev14:19
*** mnewby has quit IRC14:21
*** zhuadl has joined #openstack-dev14:21
*** andrewbogott_afk is now known as andrewbogott14:21
*** mnewby has joined #openstack-dev14:23
*** Shrews has joined #openstack-dev14:23
mtaylorkk14:25
*** AlanClark has joined #openstack-dev14:25
mtaylorchmouel: so, this may be getting ahead of ourselves, but setuptools made a naming convention for setuptools plugins that they asked people to follow - so the git plugin is "setuptools-git"14:26
mtaylorchmouel: I think we don't want these to start with openstack-, because that's what all of our official stuff does... what about "hp-auth-openstack" and "rackspace-auth-openstack" as the python packages14:27
chmouelyeah i like that14:27
*** rnirmal has joined #openstack-dev14:28
chmouelI think people would take whatever we will give them tbh14:28
*** mnewby has quit IRC14:29
mtayloryup14:29
*** mnewby has joined #openstack-dev14:29
*** amotoki has quit IRC14:31
*** cloudvirt has quit IRC14:33
*** maurosr is now known as maurosr_meeting14:33
*** maoy has joined #openstack-dev14:33
*** maurosr_meeting is now known as maurosr14:33
*** cloudvirt has joined #openstack-dev14:33
ayoungdprince, I think you missed the point on https://review.openstack.org/#/c/10627/14:36
ayoungObviously, the right value for the cache dir is in /var/cache14:36
ayoungbut that will not work for development, etc.  The default value is one that will work, securely, if left unset14:36
*** Shrews has quit IRC14:39
dprinceayoung: Okay. I think that should be fine then.14:41
*** Shrews has joined #openstack-dev14:41
ayoungdprince, thanks.14:41
dprinceayoung: looks like it failed for SmokeStack though. Any idea why?14:42
ayoungdprince, no.  It might be a permissions issue14:42
ayoungI just triggered a recheck14:42
dprinceayoung: My user for keystone uses /var/lib/keystone.14:42
ayounglets see if it happens again.14:42
dprinceas HOM14:42
dprinceHOME14:42
ayoungdprince, what sets that up?14:42
dprincepackages.14:42
*** dspano has joined #openstack-dev14:43
ayoungwell, it probably is close enough to correct that we can use it.  Technically, that should be for files generated by your user, and /var/cache is for cached files, but I think I am OK with not splitting that hair.  The thing is, we need a dir for every service that uses Keystone14:43
ayoungKeystone itself is actually the exception14:44
ayoungas it reads the files out of /etc/keystone/14:44
dprinceYep.14:44
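
A sketch of the per-user default being discussed for signing_dir; the directory layout here is illustrative only, not the actual auth_token code:

    import os

    def default_signing_dir(configured=None):
        # Honour an explicit setting; otherwise fall back to a per-user
        # directory so a development run does not need /var/cache.
        if configured:
            return configured
        user = os.environ.get('USER', 'keystone')
        return os.path.join(os.path.expanduser('~'), 'keystone-signing-' + user)
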
ayoungdprince, torpedo error was Error: test_get(Torpedo::Compute::Flavors)14:45
ayoung  OpenStack::Compute::Exception::Connection: Unable to connect to localhost14:45
ayoung/usr/local/share/gems/gems/openstack-compute-1.1.10/lib/openstack/compute/connection.rb:333:in `rescue in start_http'14:45
*** guitarzan_ has quit IRC14:46
*** Gordonz has joined #openstack-dev14:47
ayoungdprince, my concern is that http://smokestack.openstack.org/?go=/jobs/27143  error is coming from trying to create or read a dir in $HOME14:47
dprinceayoung: Doesn't auth_token try to create it though?14:47
ayoungdprince, yes, and if it fails, it would explain the error.  But I thought I would see it in the log14:48
ayoung2012-07-31 20:55:12 INFO keystone.middleware.auth_token [-] Starting keystone auth_token middleware14:48
ayounglooks OK to me14:48
dprinceayoung: Yeah. Well. Keep in mind *all* SmokeStack runs started failing yesterday due to a Nova bug just fixed this morning. So that could have been the issue.14:49
dprinceayoung: let me have a look at it...14:49
mtaylorchmouel: https://github.com/emonty/rackspace-auth-openstack <-- there it is split out into its own repo14:50
*** timello has quit IRC14:50
bcwaldondprince: can you run through stuart's patches today?14:50
bcwaldondprince: should be super easy14:50
dprincebcwaldon: yes.14:51
bcwaldondprince: and you've got comments on your next-to-last swift storage patch14:52
*** alex88 has joined #openstack-dev14:52
bcwaldondprince: I would love to get it in today14:52
*** cloudvirt has quit IRC14:53
chmouelmtaylor: cool, so I have fixed a bit your load_plugin in https://github.com/chmouel/python-novaclient so we basically need to remove the setup.py entry_point part from novaclient?14:53
*** sunxin has quit IRC14:54
mtaylorchmouel: yup14:54
mtaylorchmouel: well, the plugin part, but yeah14:54
chmouelmtaylor: cool, will remove that14:55
mtaylorchmouel: and for testing before we publish, just doing "python setup.py develop" in the rackspace-auth-openstack should get things installed right - I'll grab your tree and test it out locally when you've done that14:55
*** sunxin has joined #openstack-dev14:55
chmouelmtaylor: cool thanks14:56
chmouelmtaylor: I have tested already with my latest github branch and this works perfectly now let's see with your package14:56
mtaylorchmouel: cool - did you add an env var yet?14:58
*** datsun180b has joined #openstack-dev14:58
mtaylorlike, does OS_AUTH_SYSTEM work?14:58
chmouelmtaylor: yeah14:58
*** hemna has quit IRC14:58
mtaylorchmouel: w00t!14:58
mtaylorchmouel: it worked for me14:58
chmouelhey hey lucky :)14:59
*** alex88 has quit IRC14:59
chmouelmtaylor: btw: you want to make sure to use --no_cache15:00
chmouelmtaylor: it has been bothering me with my tests15:01
*** salgado has quit IRC15:02
mtaylorchmouel: ok. cool15:03
mtaylorchmouel: btw- we might need to refactor something more...15:03
mtaylorchmouel: we're currently requiring that this plugin call cls._authenticate which is a private method15:03
mtaylorchmouel: we might want to make there be an actual api for this plugin to call to :)15:03
*** reidrac has quit IRC15:04
chmouelmtaylor: yeah agreed... it is such a fine line between private and public methods in python :)15:04
mtaylor++15:04
*** nunosantos has joined #openstack-dev15:05
*** salgado has joined #openstack-dev15:06
*** salgado has joined #openstack-dev15:06
*** jaypipes has joined #openstack-dev15:07
*** mdomsch has joined #openstack-dev15:09
*** dubsquared has joined #openstack-dev15:09
*** Gordonz has quit IRC15:10
*** Gordonz has joined #openstack-dev15:11
*** winston-d has joined #openstack-dev15:13
*** heckj has joined #openstack-dev15:13
*** thingee_zz is now known as thingee15:14
*** cloudvirt has joined #openstack-dev15:15
*** andrewbogott is now known as andrewbogott_afk15:17
*** e1mer has quit IRC15:19
*** alex88 has joined #openstack-dev15:20
*** andrewbogott_afk is now known as andrewbogott15:21
mtaylorchmouel: one more pull req your way - ability to leave auth_url blank if you're using an auth_system plugin15:24
*** jtran has joined #openstack-dev15:24
chmouelmtaylor: awesome i was just about to test that15:24
chmouelmtaylor: using your repo and removing the entry_point from novaclient works perfectly15:25
mtaylorchmouel: yah. works for me too15:25
dprinceayoung: So the reason https://review.openstack.org/#/c/10627/ fails for me is because nova-api failed to startup due to a permissions issue. For Nova we set $HOME to /home/nova via Fedora packages... but we don't actually create /home/nova. So fail.15:25
mtaylorchmouel: do you think I should just publish that repo to pypi myself and then hand the keys to someone later?15:25
*** andrewbogott is now known as andrewbogott_afk15:26
*** nati_ueno has joined #openstack-dev15:26
*** sunxin has quit IRC15:26
chmouelmtaylor: yeah I think that's reasonable15:26
dprinceayoung: I'm not sure offhand why we aren't using /var/lib/nova for HOME by default. So maybe that is the fix for my setup. Anyway. I left you some more feedback that logging the signing directory would be nice. Otherwise I think this looks good.15:27
chmouelmtaylor: historically I've owned the python-cloudfiles repo on pypi for the last 4 years and nobody wanted to take it from me :) so expect to keep it for a while :)15:27
*** sulochan has quit IRC15:27
*** panpengjun has joined #openstack-dev15:27
*** danwent has joined #openstack-dev15:30
*** nati_ueno has quit IRC15:30
*** andrewbogott_afk is now known as andrewbogott15:33
*** gargya has quit IRC15:36
mtaylorchmouel: hehe. ok. well, I've pushed up a small change to rackspace_auth_openstack and have also pushed up an hpcloud_auth_openstack repo15:43
chmouelmtaylor: cool, so for uk rax there is a twist15:43
chmouelmtaylor: more suckage to be honest, but the auth_url is different15:44
mtaylorchmouel: wow, really?15:44
chmouelmtaylor: yeah believe me i did yell on that a few years ago15:44
*** zhuadl has quit IRC15:44
chmouelmtaylor: but lost the fight15:44
mtaylorchmouel: should that be a different auth_system?15:44
*** ayoung has quit IRC15:45
mtaylorchmouel: or some other mechanism15:45
chmouelmtaylor: no it's exactly the same as the us but the endpoint url starts with lon.15:45
mtaylorchmouel: wow.15:45
mtaylorchmouel: it's almost like we're one step away from making a general cloud provider plugin here ... ;)15:46
chmouelmtaylor: ahah we just made a new jcloud15:46
mtaylorchmouel: yay! :)15:46
chmouelmtaylor: I can make the uk plugin available15:47
chmouelmtaylor: as it's not really a big change and I would not expect the interface to change anyway15:47
*** halfss has quit IRC15:48
*** samkottler has joined #openstack-dev15:49
annegentlebcwaldon: or jaypipes: do you know if all extensions in the Compute API are controlled by policy.json listings? Or are some controlled in code, others not pre-controlled for the user/admin?15:51
mtaylorchmouel: https://github.com/emonty/rackspace-auth-openstack/commit/8c9c3d0efe9520d4ec49a77f0082faf654b1db1315:51
mtaylorchmouel: what about that?15:51
jaypipesannegentle: they should all be controlled by plicy.json by this point15:52
chmouelmtaylor: perfect!15:52
jaypipesannegentle: the issue I was referring to was just calling things "admin extensions" when they technically could be non-admin extensions by simply modiofying the policy.json file15:52
annegentleok, at least in folsom, but who knows in essex?15:52
chmouelmtaylor: so people would just do pip install the_package_name;export OS_AUTH_SYSTEM=rackspace_$REGION and that would just work with their username/passwords15:52
annegentlejaypipes: yeah, totally get what  you mean, just wasn't sure if the code reflects that, happy to change the docs though15:53
annegentlejaypipes: probably the docs should say "access controlled by policy.json"15:53
mtaylorchmouel: should do!15:53
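
In Python terms the end state being aimed at looks roughly like this; whether the keyword ends up being called auth_system (and the plugin name 'rackspace') is still an assumption at this point:

    from novaclient.v1_1 import client

    # With the plugin installed (e.g. pip install rackspace-auth-openstack),
    # the plugin can supply the auth_url itself.
    nova = client.Client('myuser', 'myapikey', 'myproject',
                         auth_url=None, auth_system='rackspace')
    print nova.servers.list()
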
jaypipesannegentle: ++15:53
chmouelmtaylor: cool I have added a small fix to your last patch and will add tomorrow the tests and stuff and would give that for review/docs etc..15:54
chmouelmtaylor: and get the RAX guys aware of updating their documentation15:54
jaypipessdague: FYI, made excellent progress on the whitebox stuff last night... pushing a new patch for review shortly.15:54
mtaylorchmouel: awesome. I'll publish the plugins so that it will work as soon as your thing lands15:54
*** samkottler has quit IRC15:55
*** danwent has quit IRC15:57
sdaguejaypipes: awesome15:58
*** dprince has quit IRC15:59
*** alex88 has quit IRC15:59
mtaylorchmouel: ok. I have pushed them both up to pypi16:01
*** gargya has joined #openstack-dev16:03
*** issackelly has joined #openstack-dev16:03
*** jdurgin has joined #openstack-dev16:03
*** ayoung has joined #openstack-dev16:06
*** salgado is now known as salgado-lunch16:07
*** lloydde has joined #openstack-dev16:12
*** markvoelker has joined #openstack-dev16:13
*** sacharya has quit IRC16:17
*** matwood has joined #openstack-dev16:20
*** thovden has joined #openstack-dev16:23
*** spiffxp has joined #openstack-dev16:25
*** shang has quit IRC16:25
davidhahey swifters, I wonder if people have noticed the drastic slowdown in ring loading from Python 2.6 to 2.7 (also takes 100% CPU while loading)16:26
*** shang has joined #openstack-dev16:28
davidhachmouel around?16:29
davidhaAnyone from the swift core team?16:29
*** derekh has quit IRC16:30
s34ndevstack should run just fine in a vm, right?16:31
*** andrewsmedina has joined #openstack-dev16:31
*** bencherian has joined #openstack-dev16:32
*** mokas has quit IRC16:32
jaypipess34n: yup.16:34
*** eglynn has quit IRC16:34
*** timello has joined #openstack-dev16:40
*** anniec has quit IRC16:47
*** shang_ has joined #openstack-dev16:52
*** mokas has joined #openstack-dev16:52
*** sacharya has joined #openstack-dev16:53
*** Mandell has joined #openstack-dev16:54
*** koolhead17 has quit IRC16:55
*** shang has quit IRC16:55
dolphmheckj: are we still going forward with page / per_page, or sticking with marker / limit?16:56
*** n0ano has joined #openstack-dev16:56
*** salgado-lunch is now known as salgado16:56
*** markmc is now known as mcaway16:57
*** hemna has joined #openstack-dev16:57
*** dprince has joined #openstack-dev16:59
heckjdolphm: I'm still on the page/per-page plan - was there greater consensus to not do that I missed while being distracted?17:00
*** danwent has joined #openstack-dev17:00
*** nati_ueno has joined #openstack-dev17:00
*** nati_uen_ has joined #openstack-dev17:01
*** nati_uen_ is now known as nati_ueno_17:02
*** Aaton_off is now known as Aaton17:03
*** nati_ueno has quit IRC17:04
*** gargya has quit IRC17:04
*** nati_ueno_ is now known as nati_ueno17:05
*** davidha has quit IRC17:07
*** davidha has joined #openstack-dev17:07
*** matwood_ has joined #openstack-dev17:09
*** johnpur has joined #openstack-dev17:09
*** ChanServ sets mode: +v johnpur17:09
*** matwood has quit IRC17:09
*** matwood_ is now known as matwood17:09
*** cloudvirt has quit IRC17:10
*** davidha has quit IRC17:13
*** Ryan_Lane has joined #openstack-dev17:13
*** jog0 has joined #openstack-dev17:13
*** anniec has joined #openstack-dev17:16
*** anniec_ has joined #openstack-dev17:16
*** sstent has quit IRC17:17
*** epim has joined #openstack-dev17:17
*** sstent has joined #openstack-dev17:18
dolphmheckj: not necessarily -- seems like a fairly even divide on opinion from what i've seen17:18
dolphmheckj: i implemented page / per page for policy crud -- was just wondering if i should revise before posting for review17:18
*** anniec has quit IRC17:20
*** anniec_ is now known as anniec17:20
heckjdolphm: I don't think you need to - with the V3 api, the desire from the folks who're using it (horizon) is page/per-page17:20
heckj(dolphm: welcome back from vaca!)17:20
*** matwood has quit IRC17:21
*** matwood has joined #openstack-dev17:25
*** thovden has quit IRC17:27
*** cloudvirt has joined #openstack-dev17:29
ayoungheckj, dolphm so which one of you guys are going to implement the LDAP paging support?17:29
heckjayoung: ahh! there's the kicker17:32
ayoungheckj, yep17:32
ayoungand, I think, unsolvable.17:32
ayoungAs different LDAP servers do paging different ways17:33
ayoungheckj, so,  what I suggest is that we fake it17:33
Ryan_Lanewell, there should be an rfc for a standard way17:33
Ryan_Laneand we could check for the standard support by oid17:33
ayoungRyan_Lane  the problem is that LDAP does not specify the order of results17:34
ayoungand so in order to do paging, you need to have the equivalent of a cursor17:34
ayoungwhich is different for different LDAP servers17:34
ayoungcreated differently17:34
Ryan_Lanehttp://tools.ietf.org/html/rfc269617:35
*** Mandell_ has joined #openstack-dev17:35
ayoungRyan_Lane, and, since the writers are from Microsoft, I am guessing that is what Active Directory implements17:35
Ryan_Lanelikely17:36
Ryan_Laneit's supported in openldap too17:36
Ryan_Laneand opendj17:36
ayoungRyan_Lane, the thing is,  we probably should not use it, if I understand correctly17:36
ayoungbasically, it puts a huge load on the DS17:37
ayoungunlike SQL,  which has offset/limit17:37
ayoungyou need to keep the cursor alive in order to get paging.  Which means you have state maintained on both the DS and the Keystone side17:37
ayoungThe only way to know when to free it up is session time out17:37
* Ryan_Lane nods17:38
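
For reference, RFC 2696 paging with python-ldap looks roughly like the following (the control API shown is the python-ldap 2.4 style and differs between versions); it illustrates the cookie, i.e. the per-connection cursor, that has to be kept alive between requests:

    import ldap
    from ldap.controls import SimplePagedResultsControl

    conn = ldap.initialize('ldap://ldap.example.com')
    conn.simple_bind_s('cn=admin,dc=example,dc=com', 'secret')

    page = SimplePagedResultsControl(True, size=100, cookie='')
    while True:
        msgid = conn.search_ext('dc=example,dc=com', ldap.SCOPE_SUBTREE,
                                '(objectClass=person)', serverctrls=[page])
        _, entries, _, serverctrls = conn.result3(msgid)
        for dn, attrs in entries:
            pass  # consume one page of results here
        cookies = [c.cookie for c in serverctrls
                   if c.controlType == SimplePagedResultsControl.controlType]
        if not cookies or not cookies[0]:
            break  # server reports no further pages
        page.cookie = cookies[0]

The cookie is only valid on the connection that produced it, which is exactly the state-management problem described above for a stateless REST API.
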
Ryan_Lanehm. I had to head off. have a meeting17:38
*** Mandell has quit IRC17:38
*** Ryan_Lane has quit IRC17:39
*** anniec has quit IRC17:40
*** japage has quit IRC17:47
zulbcwaldon:  ping17:49
*** andrewbogott is now known as andrewbogott_afk17:50
zulbcwaldon:  looks like your tidying up of nova.images.glance caused a regression in s317:51
*** reed has joined #openstack-dev17:53
*** blamar has joined #openstack-dev17:54
*** panpengjun has left #openstack-dev17:54
*** Mandell_ has quit IRC17:57
*** Mandell has joined #openstack-dev17:57
*** littleidea has quit IRC17:58
*** littleidea has joined #openstack-dev18:00
bcwaldonzul: oh no!18:00
bcwaldonzul: looks like we're missing tests18:00
*** AlanClark has quit IRC18:00
zulbcwaldon:  when features got removed then s3 stopped working...getting tracebacks http://pastebin.ubuntu.com/1123587/18:01
*** apevec has quit IRC18:02
bcwaldonzul: ah, ok, I'll fix it18:04
*** utlemming has quit IRC18:04
zulbcwaldon:  i've got a commit here locally that re-adds it and works for me; if you want me to submit that i'm ok with it18:05
bcwaldonzul: sure, push it up18:05
bcwaldonzul: we need to write a test that exercises that code, though18:05
zulbcwaldon: sure np18:05
zulbcwaldon: yeah18:05
bcwaldonsince I shouldnt have been allowed to break that :)18:05
*** Adri2000 has quit IRC18:07
*** utlemming has joined #openstack-dev18:07
*** Adri2000 has joined #openstack-dev18:07
*** zaitcev has joined #openstack-dev18:10
zulbcwaldon: no worries s3 is kind of screwy18:10
*** gabrielhurley has joined #openstack-dev18:16
*** gabrielhurley has quit IRC18:16
*** dhellmann has quit IRC18:17
*** negronjl has quit IRC18:19
*** darraghb has quit IRC18:21
*** maoy_ has joined #openstack-dev18:21
*** maoy_ has quit IRC18:22
*** maoy has quit IRC18:22
*** maoy_ has joined #openstack-dev18:22
*** heckj has quit IRC18:22
*** blamar has quit IRC18:23
*** blamar has joined #openstack-dev18:23
*** johnpostlethwait has joined #openstack-dev18:23
*** johnpostlethwait has quit IRC18:24
vishymtaylor: what is it going to take to get stable-diablo gating working again?18:25
vishymtaylor: maybe we should just turn it off so we can merge the security fixes that are in the queue?18:25
vishyjeblair, LinuxJedi: ^^18:25
*** andrewbogott_afk is now known as andrewbogott18:26
*** negronjl has joined #openstack-dev18:26
*** eglynn has joined #openstack-dev18:27
jeblairvishy: as far as i am aware, the issue is that the tests no longer pass.18:28
jeblairvishy: (I'm reading latest on change 9093 to get caught up)18:28
vishyjeblair: that is correct, there is some env / tox weirdness18:29
vishyrussellb: what was berrange's irc nick again?18:30
russellbdanpb18:30
russellbguess he's out for the day (UK)18:31
vishyrussellb: he's not online. I'm trying to find someone who knows if there has been any work done on passing a dev through virtio so we can actually control where the guest puts it18:32
russellbcould try the ML, dan will see it there18:32
mtaylorvishy: yeah - it's that the tests don't pass - and as best I can tell it's related to dependencies - I believe I tried running the tests with run_tests.sh as well and that also did not work18:33
dansmithvishy: you mean other than udev rules in the guest?18:33
jeblairjtran: ping18:33
vishydansmith: yeah, xen manages it by passing the data somehow, I figure they are using a plug and play vendor id or something18:33
jtranjeblair, hi18:34
jeblairjtran: i was looking at your latest comment on https://review.openstack.org/#/c/9093/18:34
vishydansmith: perhaps someone needs to write a fancy udev rule that can put the volume at the right location based on the metadata server18:34
dansmithdoes xen use virtio now? my xen knowledge is really old, but they used to be able to do it because they passed xenbus info in and that driver handled registration18:34
jeblairjenkins doesn't run devstack when running unit tests18:34
jtranok, so that's why the gate still fails even though it passes when I try to reproduce it?18:35
jeblairjtran: adapting the commands you pasted, i think it should be more like this: http://paste.openstack.org/show/19905/18:35
jeblairmtaylor: ^ double check me18:35
mtaylorjeblair: yup18:36
dansmithvishy: I think virtio must pass the index in, otherwise I'm not sure how the current behavior of not being able to detach without acpiphp loaded would be possible18:36
jtranjeblair,  would need to git checkout stable/diablo tho right18:36
jeblairjtran: wouldn't hurt, but i think the git-review cmd will do the right thing18:37
jtranjeblair, i'll spin up a clean oneiric vm and try that out18:37
vishydansmith: it passes an index, but that means nothing to udev and is solely based on the order in the xml18:37
vishydansmith: it happily ignores the device name that you specify18:37
dansmithI know it does now, but I thought the index would be the vd{a-z} index, since libvirt uses that to generate a unique id (i.e. virtio3 for vdc)18:38
dansmithvishy: but after poking around in /sys, I suppose not :(18:42
*** davidha has joined #openstack-dev18:42
vishydansmith: yeah :(18:43
vishydansmith: magic udev rule required. Who wants to write it?18:44
dansmithvishy: you mean a magic rule to query out to openstack infrastructure to make the determination?18:44
vishyaye18:44
dansmithI think that'd be highly unconventional :)18:44
*** novas0x2a|laptop has quit IRC18:45
vishydansmith: but at least when we get complaints we can say, hey install this udev rule!18:45
vishydansmith: it would use the metadata server so it could work on amazon as well (although it already does because they use xen)18:45
zykes-udev rule for what ?18:46
dansmithI'm surprised that virtio doesn't expose it, but if there's a reason, I wonder if we could provide some more local channel..18:46
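
A sketch of what such a helper could look like: a script that a udev rule might invoke to translate the metadata service's block-device-mapping into a stable name. The metadata paths follow the EC2-style layout; the matching heuristic and the udev wiring are illustrative only:

    import sys
    import urllib2

    META = 'http://169.254.169.254/latest/meta-data/block-device-mapping/'

    def label_for(kernel_name):
        # kernel_name is e.g. 'vdb'.  The metadata maps labels such as
        # 'ebs1' or 'ephemeral0' to the device name the cloud intended,
        # which may be spelled 'sdb' or '/dev/sdb'; match on the suffix.
        suffix = kernel_name[2:]
        for label in urllib2.urlopen(META).read().split():
            device = urllib2.urlopen(META + label).read().strip()
            if device.split('/')[-1][2:] == suffix:
                return label
        return None

    if __name__ == '__main__':
        label = label_for(sys.argv[1])
        if label:
            print label  # a udev rule could use this as a symlink name
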
jog0vishy: I am trying to reproduce the sqlalchemy problem you found with my migration script18:47
jog0vishy: what version of sqlalchemy was it 0.7.4?18:47
vishyi believe 0.7.418:49
vishymay have been .318:50
jog0vishy: thanks just realized pypi only has 7.8, so running 'tox -efull' on Ubuntu 12.0418:50
vishyjog0: it was whatever version is packaged in precise18:50
*** johnpostlethwait has joined #openstack-dev18:51
*** danwent_ has joined #openstack-dev18:52
andrewsmedinadanwent_: hi18:52
*** danwent has quit IRC18:55
*** danwent_ is now known as danwent18:55
danwentandrewsmedina: hey, about to run off to a meeting, but can chat quickly18:57
andrewsmedinadanwent: we talk later18:58
andrewsmedinadanwent: call me when you return18:59
danwentk18:59
*** spiffxp has quit IRC19:06
*** matwood has quit IRC19:09
*** andrewbogott is now known as andrewbogott_afk19:22
*** dhellmann has joined #openstack-dev19:22
*** ewindisch has quit IRC19:22
*** dubsquared has quit IRC19:28
*** jtran has quit IRC19:33
*** matwood has joined #openstack-dev19:38
*** matwood has quit IRC19:40
*** andrewbogott_afk is now known as andrewbogott19:44
*** ewindisch has joined #openstack-dev19:47
*** mdomsch has quit IRC19:49
jog0vishy:  when I install python-sqlalchemy from 12.04 and run the 'tox -efull' I cannot even get the unit tests to run.19:56
*** AlanClark has joined #openstack-dev19:57
*** jtran has joined #openstack-dev19:58
vishyhehe19:59
vishyjog0: you're probably missing other dependencies19:59
vishyjog0: fyi i was using ./run_tests.sh -N19:59
jog0vishy: tox -efull installs missing deps20:02
*** steveb_ has joined #openstack-dev20:03
*** novas0x2a|laptop has joined #openstack-dev20:03
danwentandrewsmedina: ping, i'm back20:08
*** Ryan_Lane has joined #openstack-dev20:11
andrewsmedinadanwent: hi20:11
*** eglynn has quit IRC20:12
andrewsmedinadanwent: I will try to end support for xml up Monday20:12
andrewsmedinadanwent: and20:12
andrewsmedinadanwent: the fake plugin are working with api v2?20:13
danwentandrewsmedia, fake plugin was only for v220:15
danwentsorry v120:15
danwentbut there is something similar for v220:15
davidhaHi, Anyone from the swift core team?20:15
notmynamedanwent: what's up?20:15
davidhanotmyname: Hi.20:16
danwentlook in quantum.db.db_base_plugin_v220:16
danwentandrewsmedina:  test_db_plugin.py uses this extensively, and all other v2 plugins are subclasses of it20:16
danwentnotmyname: hey20:16
davidhaI found a problem in Ring Loading  under 2.720:17
*** spiffxp has joined #openstack-dev20:17
danwentnotmyname: sorry, were you looking to chat with me about something?  my above comments were directed at andrewsmedina20:17
danwentah, you probably meant davidha, not danwent20:18
notmynamedanwent: sorry. danwent v. davidha. silly tab autocomplete ;-)20:18
danwentgot it20:18
davidhanotmyname: I got confused also :) anyhow see ^^20:18
notmynamedavidha: what details do you have?20:18
davidhanotmyname: Unpickling a ring with partition power 18 is two orders of magnitude slower in 2.7 compared to 2.620:19
notmynamethat's not good. do you have any ideas as to why?20:19
davidhaI also found out why with the help of the #python20:19
davidhaYep - a change made in 2.720:20
notmynamedo tell :-)20:20
davidhaThe change is meant to align it to 3.x20:20
davidha__reduce__ of array.array was changed20:20
davidhanow instead of a single BINSTRING it is many many many (did I say many already) BININTs20:21
davidhaI think it is time to move away from pickle20:22
davidha:)20:22
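
A quick way to see the effect described above, and the shape of a workaround, is to time the pickle round trip of an array about the size of a partition-power-18 ring; exact numbers will vary by interpreter:

    import array
    import cPickle
    import time

    a = array.array('I', range(2 ** 18))

    start = time.time()
    cPickle.loads(cPickle.dumps(a, protocol=2))
    print 'array round trip: %.3fs' % (time.time() - start)

    # Workaround direction: serialize the raw bytes instead of the array
    # object, so unpickling is a single BINSTRING again.
    start = time.time()
    restored = array.array('I')
    restored.fromstring(cPickle.loads(cPickle.dumps(a.tostring(), protocol=2)))
    print 'bytes round trip: %.3fs' % (time.time() - start)
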
notmynameis that something you could work on?20:23
andrewsmedinadanwent: but the fake plugin doesn't have a fields attr in the get_network method20:24
andrewsmedinadanwent: and it is expected by v2 api20:24
danwentandrewsmedina: fake plugin does not work with v2, and will be removed20:24
danwentso you can just ignore it20:24
davidhaNot right away. For now I can: 1. describe the problem in some details. 2. suggest an intermediate resolution for people using the system in production and 3. Suggest a direction for resolution20:25
notmynamedavidha: that sounds great20:25
davidhanotmyname: I will not be able to work on in Q3 for sure.20:25
danwentif you need a simple plugin for testing, I would use either the quantum.db.db_base_plugin_v2.QuantumDbPluginV220:26
andrewsmedinadanwent: ok, thanks20:26
davidhaok.20:26
*** wiliam has quit IRC20:26
danwentandrewsmedina: or quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV220:26
andrewsmedinadanwent: would be nice to create a issue to fix the fake plugin20:27
*** EmilienM has left #openstack-dev20:27
davidhanotmyname, there is a mechanism for reloading the ring every X hours right?20:27
notmynamedavidha: not exactly like that. the ring is reloaded based on its mtime (but not more than every X seconds, where X is set in a config file)20:28
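
The reload behaviour being described is roughly this pattern (illustrative, not the actual swift ring code):

    import os
    import time

    class CachedRing(object):
        def __init__(self, path, reload_min_time=15):
            self.path = path
            self.reload_min_time = reload_min_time  # the configurable "X seconds"
            self._mtime = 0
            self._next_check = 0

        def maybe_reload(self):
            now = time.time()
            if now < self._next_check:
                return  # checked too recently
            self._next_check = now + self.reload_min_time
            mtime = os.path.getmtime(self.path)
            if mtime != self._mtime:
                self._mtime = mtime
                self._load()  # re-read the ring file from disk

        def _load(self):
            pass  # deserialization of self.path would go here
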
*** bcwaldon has quit IRC20:28
*** bcwaldon has joined #openstack-dev20:28
danwentandrewsmedina: the fake plugin is v1 only, and all v1 code will be removed next week.   is there something that you need to do with the fake plugin that can't be done by using quantum.db.db_base_plugin_v2.QuantumDbPluginV2 with a in-memory sqlite database?20:28
*** bcwaldon has quit IRC20:28
davidhaok, great - I thought it is measured in hours - this is what I found in the documentation....20:29
*** bcwaldon has joined #openstack-dev20:29
danwentandrewsmedina: so far, the latter has been sufficient for us.20:29
notmynamedavidha: ah, you may be referring to min_part_hours which is in hours and is the fastest a ring can be updated20:29
notmynamedavidha: thanks for doing the detective work. even if someone else gets to the code before you, any info you give will be quite helpful20:29
andrewsmedinadanwent: ok, you're right20:31
andrewsmedinadanwent: thanks20:31
danwentandrewsmedina: no problem.  thanks for working on this bug!20:31
davidhaAfter finding that the startup became very slow, I was intrigued to find out that my old ring file loaded 100 times faster than the new ones I was creating - although they all had the same size....20:32
andrewsmedinadanwent: I intend to contribute more to the project20:32
notmynamedavidha: where are you planning on documenting this? a bug report in launchpad is one possibility20:32
davidhaI will do that and send an email for pepole to be aware20:33
notmynamedavidha: thanks20:33
davidhaWhere are you now?20:33
danwentandrewsmedina: great.20:33
davidha:-X20:33
notmynamedavidha: like my physical location?20:34
davidhaLast we spoke you were heading to a new post20:34
notmynamedavidha: ah, yes. I'm in San Francisco now, working at SwiftStack20:35
davidhaOk, sorry for being out of touch :)20:35
notmynamenot too much. it's still quite new20:37
zykes-notmyname: is georeplication getting into swift?20:39
*** dubsquared has joined #openstack-dev20:40
*** lorin1 has quit IRC20:41
notmynamezykes-: I think (hope) someone will probably try to tackle it in the grizzly timeframe. there are a lot of people asking about it, but so far nobody has stepped up to figure it out and code it20:41
*** lorin1 has joined #openstack-dev20:51
*** lloydde has quit IRC20:52
*** Mandell has quit IRC20:52
*** novas0x2a|laptop has quit IRC20:53
dolphmayoung: this is technically a blocker for policy crud impl https://review.openstack.org/#/c/10682/20:54
*** ewindisch has quit IRC20:56
ayoungdolphm, OK.  I'll review it.20:56
*** AlanClark has quit IRC20:56
ayoungdolphm, looks reasonable.  So long as it passes the tests, I should be able to merge20:58
ayoungdolphm, the only change in the code base is to raise an exception.  What is the justification for that?20:58
dolphmayoung: test_get_endpoint_404 would fail otherwise (the rest of the driver raises explicit exceptions)21:00
dolphmayoung: however, get_endpoint isn't exposed through to the rest API, so there's no client tests to check for a 404 -- (but the driver still needs to raise the appropriate exception, so i can catch it in the policy driver =) )21:01
*** lloydde has joined #openstack-dev21:01
*** cloudvirt has quit IRC21:01
ayoungdolphm, OK.  Looks good to me.  Shouldn't break anything....21:02
zykes-vishy: ping21:02
*** lts has quit IRC21:02
ayoungdolphm, quid pro quo:  https://review.openstack.org/#/c/10544/21:04
jtranjeblair, so i installed a fresh oneiric vm, using the exact sequence here:  http://paste.openstack.org/show/19905/ ,  did not use devstack at all.  Results :21:05
jtran py26: commands succeeded21:05
jtran  congratulations :)21:05
jtran-  py26: commands succeeded21:05
jtranRan 1780 tests in 743.175s21:05
dolphmayoung: revise keystone.sample.conf as well?21:06
*** dprince has quit IRC21:06
ayoungdolphm, good catch21:08
*** cloudvirt has joined #openstack-dev21:08
dolphmayoung: ping me when it's patched and i'll +221:08
*** Mandell has joined #openstack-dev21:08
*** maurosr has quit IRC21:11
*** novas0x2a|laptop has joined #openstack-dev21:13
*** ewindisch has joined #openstack-dev21:14
*** dolphm has quit IRC21:14
*** cloudvirt has quit IRC21:22
*** cloudvirt has joined #openstack-dev21:22
notmynamezykes-: watch out. don't feed the trolls :-)21:24
*** matwood has joined #openstack-dev21:29
*** winston-d has quit IRC21:29
*** markvoelker has quit IRC21:30
zykes-vishy: in cinder, will it be possible to set some kind of reasonable default / policy for where volumes go for different things?21:30
zykes-if you didnt answer it already21:30
*** lorin1 has quit IRC21:32
*** salgado has quit IRC21:33
jgriffithzykes-: What do you mean by that?21:34
jgriffithzykes-: You mean something to use a sort of tiering?21:35
jgriffithbackend selection?21:35
jgriffithzykes-: geographical location....21:35
*** eglynn has joined #openstack-dev21:44
zykes-jgriffith: tiering yes21:44
zykes-or basically: if I have a SAN and an iSCSI storage backend in my cluster21:44
zykes-and I wanna say that all vm disks go to SAN but all data goes to ISCSI ?21:44
zykes-by default but still be choosable21:44
jgriffithzykes-: I'm hoping to have multiple back-end support available in cinder which would let you do something like this21:45
jgriffithzykes-: Don't know if it'll make it though, we'll have to see21:45
zykes-:D21:45
jgriffithzykes-: Definitely something on the roadmap21:45
zykes-it's an enterprise thing as well :p21:45
zykes-kinda21:45
jgriffithzykes-: although when you say SAN, if you're talking FC that's a whole separate deal21:45
jgriffithzykes-: That's debatable, but I understand what you're saying21:46
*** jgriffith has quit IRC21:49
*** jgriffith has joined #openstack-dev21:52
*** johnpur has left #openstack-dev21:52
*** cloudvirt has quit IRC21:54
*** dolphm has joined #openstack-dev21:56
*** avishay has quit IRC21:56
*** Shrews has quit IRC22:00
*** flaviamissi has quit IRC22:00
*** johnpostlethwait has quit IRC22:00
*** eglynn has quit IRC22:01
*** maoy_ has quit IRC22:01
*** lloydde has quit IRC22:02
*** lloydde has joined #openstack-dev22:03
*** johnpostlethwait has joined #openstack-dev22:03
*** andrewsmedina has quit IRC22:04
*** _mancdaz has quit IRC22:05
*** mancdaz has joined #openstack-dev22:05
jeblairjtran: i have reproduced your results, thanks.22:07
jeblairmtaylor: ^22:07
jtranjeblair, you're welcome ;)22:08
jeblairjtran: i'm kicking jenkins again on that change; when it's done, i'll take that done offline and see if i can reproduce the result manually on the host and dig further into it there.22:09
notmynamedavidha: thanks for posting the bug.22:09
jeblairs/take that done/take that node/22:09
jtranjeblair, sounds like an excellent plan22:10
*** andrewbogott is now known as andrewbogott_afk22:10
*** markmcclain has quit IRC22:10
*** dspano has quit IRC22:13
*** datsun180b has quit IRC22:14
*** rnirmal has quit IRC22:24
*** kbringard has quit IRC22:27
jaypipeswow, neverending foundation thread today...22:27
*** dubsquared has quit IRC22:29
cloudflyi am running some lengthy qa checks today22:31
cloudflyso i have a lot of free time to read and comment on that thread22:31
*** cloudvirt has joined #openstack-dev22:31
cloudflyi should probably try to cut myself off a bit22:31
jaypipescloudfly: and you are? :)22:34
*** danwent has quit IRC22:37
*** tgall_foo has quit IRC22:37
cloudflymatt joyce22:37
*** danwent has joined #openstack-dev22:38
jaypipescloudfly: oh, duh. I knew that :)22:38
cloudfly=D22:38
*** danwent has quit IRC22:39
jaypipescloudfly: yeah, much like soren, I'm having a bit of chuckle from the conversation about people representing their company in the foundation board... since I've worked for 3 of them in the past year ;)22:39
jaypipescloudfly: and frankly, my representation in the community or the PPB hasn't changed at all regardless of employer22:39
bcwaldoncloudfly: I dont remember seeing a recent response RE python-glanceclient22:40
bcwaldoncloudfly: how do you feel aboutt a temporary compatibility layer in the cli?22:41
jaypipesbcwaldon: what about just calling the CLI glance2?22:42
bcwaldonjaypipes: we already have people using it as 'glance'22:42
bcwaldonjaypipes: and I dont want to carry that baggage forever22:42
bcwaldonjaypipes: this is just glance client v222:42
jaypipesbcwaldon: well, yeah, but we already have a lot more using bin/glance as glance :)22:42
*** dachary has quit IRC22:43
bcwaldonjaypipes: I do understand that22:43
jaypipesbcwaldon: it's a tough situation, I know.22:43
*** Gordonz has quit IRC22:43
bcwaldonjaypipes: I do want to talk to someone who is really feeling this pain22:43
bcwaldonjaypipes: because this upgrade is going to do *nothing* to nodes that aren't running openstack services22:44
jaypipesbcwaldon: understood.22:44
bcwaldonjaypipes: And I understand all the concerns that were brought up, I just want to determine which ones are *real* concerns22:44
jaypipesyup, ++22:45
cloudflyjaypipes sure but you guys are different you are core technical talent22:45
*** edygarcia has quit IRC22:45
bcwaldoncloudfly: that doesnt matter right now22:45
cloudflythe guy working third-tier support for hp cloud isn't going to want to 'rock the boat' i doubt he even cares who gets elected22:45
cloudflyhe votes to look good22:45
jaypipesbcwaldon: no, I think cloudfly was referring to a different conversation :)22:45
cloudflyand that introduces a bias22:45
bcwaldonOH22:45
cloudflysorry yes i was idle for a bit22:45
bcwaldonwhew22:45
jaypipescloudfly: understood.22:45
cloudflybcwaldon yeah i wish the user community would be more engaged but i get why they aren't.22:47
cloudflyit's a tough situation22:47
*** mnewby_ has joined #openstack-dev22:47
bcwaldoncloudfly: so do you have an ideal path forard for this?22:48
bcwaldonforward*22:49
*** mnewby has quit IRC22:49
*** mnewby_ is now known as mnewby22:49
*** thingee is now known as thingee_zz22:49
*** sacharya has quit IRC22:50
*** maurosr has joined #openstack-dev22:55
*** heckj has joined #openstack-dev22:55
*** PotHix has quit IRC22:56
mtaylorjaypipes: dude. I haven't had a email thread this fun since marten mickos wanted to make portions of MySQL closed-source :)23:01
jaypipesmtaylor: oooh. another one I was involved in. :)23:01
mtaylorjaypipes: it all comes down to you23:01
Davieyjaypipes and his freedom hating agenda.23:02
mtaylorjaypipes: I should dig up some of my emails on that thread and send them to this one ... and then ask who thinks my company's voice is mine :)23:02
mtaylorDaviey: jaypipes hates so many things, freedom doesn't even make the list23:02
mikalI'm missing a ranty thread on the internet?23:03
mikalWhy was I not informed?23:03
dolphmayoung: heckj: just put policy CRUD up for review -- it's still on /v2.0/ but otherwise ready to rock23:03
jaypipesDaviey: :)23:03
Davieymikal: If you were not included on the thread, it's probably ABOUT you...23:04
jaypipesmikal: because freedom hates you, that's why. don'tcha know? ;P23:04
mikal:(23:04
mikalI can't help it that I'm a communist23:04
DavieySo, including you in the thread may have hurt your feelings.23:04
mikalIts just the way I was brought up23:04
Daviey</troll>23:04
jgriffitheverybody quiet... mikal is here, act busy23:05
heckjdolphm: just saw it hit in email - need to change that to v3, but otherwise sounds good. I'd really like to get client tests written to force the driving of it, so that's first off.23:05
dolphmheckj: client-side tests: https://review.openstack.org/#/c/9237/8/tests/v2_0/test_policies.py23:06
dolphmheckj: client-dependent tests: https://review.openstack.org/#/c/9744/4/tests/test_keystoneclient_sql.py23:06
*** e1mer has joined #openstack-dev23:07
*** mnewby has quit IRC23:07
*** lloydde has quit IRC23:07
heckjdolphm: awesome dude!~23:08
cloudflybcwaldon i support the glance2 binary23:09
cloudfly=/23:09
cloudflyi don't like it23:09
cloudflybut i see it as the lesser evil23:09
bcwaldoncloudfly: I think thats a little silly, no?23:10
jaypipeslol, at this rate the foundation thread might take the all-time thread count title.23:10
*** roge has quit IRC23:10
cloudflyi think its a little silly for this.  but i think it sets a great precedent and it does say we tried to keep compatibility for users.23:10
cloudflywe can always deprecate it later when a year from now someone revamps the client again23:11
cloudfly=/23:11
bcwaldoncloudfly: are you planning on doing that?23:11
bcwaldoncloudfly: what's wrong with thinking of this as a major version change?23:12
cloudflyit comes down to breaking compatibility of shell scripts on users between versions with no warning23:12
cloudflyas far as they are concerned it is no warning unless you feed them "THIS IS DEPRECATED STOP USING IT" in their STDERR they aren't going to know.23:12
bcwaldoncloudfly: how would you feel about making the same interface work in python-glanceclient?23:12
dolphmheckj: my only api concern after playing with the impl... {'policy': {'policy': '...'}} is a bit awkward imo; more explicitly: {'policy': {'blob': '...'}}23:13
cloudflyi think the only concern i have is shell scripts calling glance and glance breaking23:13
bcwaldoncloudfly: and I am suggesting we fix that23:13
cloudflyif it's easy enough to support all the old calls glance made.23:13
cloudflysure23:13
bcwaldoncloudfly: anything is possible, and I'm going to be doing all the freaking work anyways23:13
cloudflywith deprecation warnings etc.23:13
bcwaldoncloudfly: yep23:14
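
The compatibility layer plus stderr deprecation warning being agreed on could look something like this; the legacy-to-new command mapping is illustrative:

    import sys

    # Map legacy bin/glance subcommands onto (illustrative) new CLI names.
    LEGACY = {'index': 'image-list', 'show': 'image-show', 'add': 'image-create'}

    def translate_argv(argv):
        if argv and argv[0] in LEGACY:
            print >> sys.stderr, ('WARNING: "%s" is deprecated and will be '
                                  'removed; use "%s" instead'
                                  % (argv[0], LEGACY[argv[0]]))
            argv = [LEGACY[argv[0]]] + argv[1:]
        return argv
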
cloudflyi mean yeah that'd be bananas23:14
bcwaldoncloudfly: bananas == bad?23:14
jaypipesfrom ewan: "In general I hate customers, and the last thing I would ever be fanatical about is support.". Classic.23:14
cloudflygood23:14
bcwaldoncloudfly: good23:14
cloudflymore exceptional23:14
cloudflyjaypipes i was dying when i read that23:14
bcwaldoncloudfly: so how about I do that and take the issue back to the ML23:14
cloudflyi'll +`23:15
cloudflyerr 123:15
bcwaldonok23:15
bcwaldonI think that gets everybody what they want23:15
bcwaldonI want to move everyone to python-glanceclient23:15
bcwaldonand I'm sorry it's taking a while, but the resources we have to do things in glance are extremely small23:15
bcwaldonwe should have had this done as soon as we released essex23:16
*** andrewsmedina has joined #openstack-dev23:17
*** dolphm has quit IRC23:18
jeblairjtran: curiouser and curiouser -- as a user on the jenkins slave, i can run the tests successfully.23:22
jeblair(still digging)23:22
jeblairmtaylor: ^23:22
heckjdolphm: that's why I left the spec open, so we could tweak it to make it more sensible for implementation :-)23:24
jeblairjtran, mtaylor: here is the output at the end of run_tests.log from the (failed) jenkins run: http://paste.openstack.org/show/19913/23:24
*** heckj has quit IRC23:24
bcwaldonjaypipes: https://review.openstack.org/#/c/10518/223:34
*** markwash has joined #openstack-dev23:34
bcwaldonmarkwash: https://review.openstack.org/#/c/10520/23:34
*** halfss has joined #openstack-dev23:36
*** matwood has quit IRC23:40
*** sacharya has joined #openstack-dev23:46
*** mnewby has joined #openstack-dev23:46
*** davidha has quit IRC23:58
*** davidha has joined #openstack-dev23:59
