Friday, 2011-07-29

*** negronjl has joined #openstack-dev00:07
*** alfred_mancx has quit IRC00:10
*** rjimenez has quit IRC00:37
*** mattray has joined #openstack-dev01:11
*** mattray has quit IRC01:11
*** BK_man_ has quit IRC01:22
*** martine has joined #openstack-dev01:28
*** Tushar has quit IRC01:33
*** alfred_mancx has joined #openstack-dev01:58
*** mfer has joined #openstack-dev02:22
*** BK_man_ has joined #openstack-dev02:22
*** rohitk has quit IRC02:34
*** mfer has quit IRC02:35
*** mfer has joined #openstack-dev02:36
*** mfer has quit IRC02:38
*** nci has left #openstack-dev03:10
*** pyhole has quit IRC03:10
*** pyhole has joined #openstack-dev03:11
*** negronjl has quit IRC03:19
*** creiht has quit IRC03:26
*** creiht has joined #openstack-dev03:28
*** ChanServ sets mode: +v creiht03:28
*** alfred_mancx has quit IRC03:50
*** zaitcev has quit IRC04:14
openstackjenkinsProject keystone build #55: SUCCESS in 52 sec: http://jenkins.openstack.org/job/keystone/55/04:21
openstackjenkinsMonty Taylor:  #16 Changes to remove unused group clls.04:21
openstackgerritZiad Sawalha proposed a change to openstack/keystone: Add unittest2 to pip requires for testing  https://review.openstack.org/11004:53
*** nci has joined #openstack-dev05:01
*** martine has quit IRC05:06
*** nmistry has joined #openstack-dev05:26
*** mnaser has quit IRC05:30
*** nmistry has quit IRC05:36
*** openpercept_ has joined #openstack-dev06:19
ttxjaypipes, vishy: starting diablo-3 release process in 10min.06:21
openstackjenkinsProject nova-milestone build #15: FAILURE in 7.5 sec: http://jenkins.openstack.org/job/nova-milestone/15/06:32
*** mnaser has joined #openstack-dev06:32
*** mnaser has quit IRC07:20
*** BK_man_ has quit IRC08:06
ttxNova and Glance diablo-3 are out!08:28
*** BK_man_ has joined #openstack-dev08:30
*** chomping has quit IRC08:38
*** royh has joined #openstack-dev08:38
royho/ hi guys :)08:38
*** mnaser has joined #openstack-dev08:42
*** mnaser has quit IRC09:02
*** darraghb has joined #openstack-dev09:05
*** mnaser has joined #openstack-dev10:33
*** thickskin has quit IRC10:38
*** thickskin has joined #openstack-dev10:41
BK_manttx: khm... got an exception with released version of Glance. Let me check it again...10:41
BK_manttx: it might be a regression here.10:42
BK_manttx: yep. rolling back glance - and it's working again10:49
BK_manttx: this is a trace from released version: http://paste.openstack.org/show/1972/10:49
*** TimR has quit IRC10:51
BK_manttx: that tarball is working ok: http://glance.openstack.org/tarballs/glance-2011.3~d3~20110727.r144.tar.gz10:52
*** markvoelker has joined #openstack-dev11:28
*** BK_man has quit IRC11:32
*** BK_man_ is now known as BK_man11:32
*** openpercept_ has quit IRC11:40
*** lorin1 has joined #openstack-dev11:42
ttxBK_man: there is an added dependency, python-xattr11:44
*** martine has joined #openstack-dev11:45
ttxBK_man: maybe that's the issue ?11:45
*** mfer has joined #openstack-dev11:58
*** markvoelker has quit IRC12:31
*** markvoelker has joined #openstack-dev12:36
*** mnaser has quit IRC12:39
*** bsza has joined #openstack-dev12:49
*** bcwaldon has joined #openstack-dev13:08
zulsoren: ping are you around?13:21
zulsoren: unping13:22
*** donald650 has joined #openstack-dev14:00
*** BK_man has quit IRC14:04
*** kbringard has joined #openstack-dev14:04
*** amccabe has joined #openstack-dev14:06
*** donald650 has quit IRC14:11
*** donald650 has joined #openstack-dev14:11
*** gaitan has joined #openstack-dev14:12
*** amccabe has quit IRC14:14
*** amccabe has joined #openstack-dev14:15
*** donald650 has quit IRC14:28
*** jkoelker has joined #openstack-dev14:28
*** tudamp has joined #openstack-dev14:43
*** cp16net_ has joined #openstack-dev14:47
*** dragondm has joined #openstack-dev14:58
*** rnorwood has joined #openstack-dev15:04
tr3buchetvishy: care to revisit https://code.launchpad.net/~klmitch/nova/glance-private-images/+merge/6966115:14
ttxjaypipes: saw BK_man's issue from ~3 hours ago ?15:14
blamarvishy: functionally testing keystone-migration, hopefully should be able to get it in15:30
*** chemikadze has quit IRC15:32
jaypipesttx: no, looking now15:40
jaypipesttx: just need to add python-xattr to package dependencies15:40
ttxjaypipes: yes, that's what I thought. The exception looked a bit funny15:41
ttxi.e. <'module' object has no attribute 'xattr'> instead of <cannot find module xattr> -- that's why I wanted you to double-check15:42
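A hedged illustration of the distinction ttx draws here: if python-xattr were missing outright, `import xattr` would raise an ImportError, whereas `'module' object has no attribute 'xattr'` means a module named `xattr` was importable but lacked the expected attribute (for example a stale or shadowing module). The function and names below are illustrative only, not Glance code:

```python
import types

def diagnose(modules, name, attr):
    """Classify why modules[name].attr might fail (illustrative only)."""
    if name not in modules:
        return "ImportError: No module named %s" % name
    if not hasattr(modules[name], attr):
        return "AttributeError: 'module' object has no attribute '%s'" % attr
    return "ok"

# A stale or shadowing "xattr" module is importable but incomplete:
stale = types.ModuleType("xattr")
print(diagnose({"xattr": stale}, "xattr", "xattr"))
# Whereas a genuinely missing dependency fails at import time:
print(diagnose({}, "xattr", "xattr"))
```

This is why the funny-looking exception hinted at something other than a plain missing package dependency.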
*** vladimir3p has joined #openstack-dev15:45
jaypipesttx: when I see BK_man reappear, I'll address it15:52
ttxjaypipes: I told him already, before he left15:53
ttxno ack though15:53
*** mnaser has joined #openstack-dev15:53
kbringarddo I need to do nova-manage network create for flat DHCP?15:53
vishykbringard: yup15:55
kbringardcool, and it just ignores the vlan stuff, or I set that argv entry to 0?15:56
vishyit ignores vlan15:58
kbringardah, sweet, thanks :-)15:58
kbringardis this still accurate, as far as the flags and whatnot that are required?15:59
kbringardhttp://docs.openstack.org/cactus/openstack-compute/admin/content/configuring-flat-dhcp-networking.html15:59
kbringardI would think that if the network is created and fixed_ips is populated, you wouldn't need the flat_network_dhcp_start option...15:59
kbringardalso, since it's flat, the gateway is the actual upstream router, and not some address that is going to come up with the network controller, right?16:04
*** tudamp has left #openstack-dev16:06
openstackgerritYogeshwar Srikrishnan proposed a change to openstack/keystone:      #16 Fixing pylint issues.Changes to remove unused group calls.  https://review.openstack.org/11116:20
vishyblamar: doesn't it make more sense to get rid of _convert_timeformat and replace the calls with utils.isotime16:28
blamarvishy: sure, just offering up a fix without really looking into it :)16:29
vishyok16:29
kbringardit looks like you have to create a project, even if you're not using vlan mode... is that correct?16:29
vladimir3pguys, sorry for repeating it, but is there any chance to get more eyes on our VSA proposal? https://code.launchpad.net/~vladimir.p/nova/vsa/+merge/6898716:31
*** lorin1 has left #openstack-dev16:37
*** lorin1 has joined #openstack-dev16:38
vishyblamar: fix pushed and tests pass16:39
vishyvladimir3p: I will be looking at it again today16:39
blamarvishy: k, running tests again then it'll be good16:39
vishyvladimir3p: been busy with d3 milestone so haven't had a chance to dive in16:39
vladimir3pvishy: thanks a lot. I will update the BP spec and will add some implementation details16:40
vladimir3pvishy: sure, np16:40
openstackgerritYogeshwar Srikrishnan proposed a change to openstack/keystone:      #16 Changes to remove unused group calls.Pylint fixes.  https://review.openstack.org/11216:40
*** zaitcev has joined #openstack-dev16:43
openstackgerritYogeshwar Srikrishnan proposed a change to openstack/keystone:  #16 Changes to remove unused group clls.  https://review.openstack.org/11316:43
jaypipesdolphm: heyo. got a few minutes today to check out Keystone testing?17:05
jaypipesbcwaldon: I thought *you* were Brian II and blamar was Brian I? :)17:12
blamarjaypipes: that is correct17:13
jaypipesblamar: :)17:13
*** yogirackspace has joined #openstack-dev17:14
bcwaldonjaypipes: don't you start17:29
jaypipeslol17:29
jaypipesannegentle: is the openstack.org wiki CSS editable on LP or GH? :)17:38
annegentlenope, good idea though17:39
annegentlenot that I know of anyway17:39
*** darraghb has quit IRC17:42
vishyvladimir3p: ping17:43
vladimir3pvishy: pong :-)17:43
vishyso i want to go over some questions here because it is faster17:43
vishydid you rename volume_type to drive_type?17:44
vladimir3pvishy: sure17:44
vladimir3pvishy: hmm... it was drive type from the beginning17:44
vishyok I thought there was a volume_type in there at some point17:44
vishyapparently i missed it17:44
vladimir3pvishy: but it is pretty close to volume type16:45
vladimir3pvishy: what is going on is: we create a volume from drives of particular type17:45
vladimir3pvishy: the idea is to create a VSA (comput instances) and associate multiple drives to them17:45
vladimir3pvishy: for these drives inside of nova we create special volumes17:46
vishyvladimir3p: so I'm concerned that these additions are creating a bunch of specific code paths throughout the system for one driver17:46
vladimir3pvishy: it is a nova-volume + special SW responsibility to decide either to dedicate full physical drive or some virtualized form of it17:46
vishyvladimir3p: and I'd like to figure out how we can somehow decouple things a bit more17:47
vladimir3pvishy: ok17:47
vladimir3pvishy: in general we thought that VSAs might be provided by different vendors17:47
vishyvladimir3p: it would be great if we could be at the point where there are generalizations in nova to support all of the stuff that you need17:47
vladimir3pvishy: exactly17:47
vishyvladimir3p: and nova-vsa + driver could be a separate component17:48
vishya plug-in if you will17:48
vladimir3pvishy: ok... plug-in works ... but it would be better to see it as part of nova16:48
vladimir3pvishy: but I definitely understand your concerns17:49
vishyvladimir3p: core will need to decide whether it is a nova-component or not, but I think we could do it in a way17:49
vishythat would allow you to still do what you need whether or not it is in core17:49
vishyso i'd like to get it into multiple proposals17:50
vishy1) all of the stuff in nova needed to make it work17:50
vishy2) the added nova-vsa / schedulers / drivers17:51
vishyso let me go through the integration points step by step17:51
vishy1) cloud.py -> not sure about the addition here.  Are you planning on writing command line tools to use these commands via the ec2 api?17:52
vladimir3pvishy: we have our application using Openstack APIs, ... and have all the changes in nova-manage using cloud... we were thinking about creating a separate CLI utility for all our stuff17:53
vladimir3pvishy: (or to integrate it with nova-manage completely)17:53
vladimir3pvishy: in general we can discard ec2 part ... probably.17:54
vishyvladimir3p: cloud.py is supposed to be a compatibility layer for ec2, so if they aren't ec2 supported methods they shouldn't go there.17:54
vishyperhaps those methods could move into their own file that is called via nova-manage or even a separate cli17:55
vishy2) -> the additions in compute_api.  Why is that necessary?17:55
vladimir3pvishy: I will need to consult with our PRD folks, but in general our GUI & apps are using Openstack APIs for now17:56
vladimir3pvishy: (it was for EC2)17:57
vladimir3pvishy: for compute APIs... Currently the main one is vsa_id17:58
vladimir3pvishy: we can try to move it to meta-data, but it is way easier to operate when it is part of instances17:58
vladimir3pvishy: also, the next step will be to perform all necessary filterings and hide admin (VSA) instances17:58
vishyvladimir3p: i think we should do that via metadata17:59
*** lorin1 has quit IRC17:59
vladimir3pvishy: ok... I suppose it depends on whether we will introduce VSAs as part of nova or if it will be some separate plug-in18:01
vladimir3pvishy: IMHO, if (when) VSA will go inside, it will make sense to have this on top-level (instances) and not in meta-data18:02
vishyvladimir3p: why do you think it should be a field in instances?18:02
vladimir3pvishy: I suppose that in general VSA could replace Lunr and provide block storage for cloud18:03
vladimir3pvishy: wrt: field in instances vs metadata ... such instances will not be regular instances18:03
vladimir3p(that user can operate with).18:03
vishyit is a specific kind of instance not regular instance though18:03
vladimir3pyeah, in general we wanted to introduce the concept of instance categories ... but not sure if/when it will go through18:04
vishyif we have nova-db for dbaas do we add db_id?18:04
vladimir3pif categories will be added to nova, such a field (vsa_id) might be category-dependent18:04
vladimir3pgood point :-)18:05
vishyso categories are just a type of metadata18:05
vishyso we need InstanceMetadata and VolumeMetadata18:05
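The approach vishy is suggesting can be sketched roughly as follows: driver-specific fields such as `vsa_id` stay out of the core instances table and live in generic key/value metadata rows instead. The table shapes and names below are assumptions for illustration, not nova's actual schema:

```python
# Core, driver-agnostic columns stay in the instances table:
instances = {1: {"hostname": "vsa-gw"}}
# Driver-specific fields like vsa_id live as per-instance key/value rows:
instance_metadata = {1: {"vsa_id": "7"}}

def get_meta(instance_id, key):
    """Look up one metadata value for an instance, or None if absent."""
    return instance_metadata.get(instance_id, {}).get(key)

print(get_meta(1, "vsa_id"))  # -> 7
```

The design point being argued: a `vsa_id` column would be one driver's concern baked into every deployment's schema, while a metadata row costs nothing for deployments that never use VSAs.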
openstackgerritYogeshwar Srikrishnan proposed a change to openstack/keystone:  #16 Changes to remove unused group clls.  https://review.openstack.org/10718:06
vladimir3pvishy: ok, makes sense - we can move it out there to special metadata18:06
openstackjenkinsProject nova build #1,170: SUCCESS in 4 min 24 sec: http://jenkins.openstack.org/job/nova/1170/18:06
openstackjenkinsTarmac: --Stolen from https://code.launchpad.net/~cerberus/nova/lp809909/+merge/6860218:06
openstackjenkinsFixes lp80990918:06
openstackjenkinsMigrate of instance with no local storage fails with exception18:06
openstackjenkinsSimply checks to see if the instance has any local storage, and if not, skips over the resize VDI step.18:06
*** lorin1 has joined #openstack-dev18:07
*** lorin1 has left #openstack-dev18:07
vladimir3pvishy: for volume meta-data ... we've added drive_type ref there18:08
vladimir3pvishy: if you want this one to go to metadata as well it will be quite problematic to perform joins18:08
vishyvladimir3p: what type of join might you need to do?18:08
vladimir3pvolume to drive_type18:09
vladimir3pvishy: the idea is to create volumes from particular drive_types18:09
vladimir3pactually our volumes also have "categories"18:09
vladimir3pFrontEnd volumes vs BackEnd volumes18:09
vladimir3pfirst ones are presented by VSA and latter are consumed by VSAs18:10
vishyi'm looking at this to try and figure out how we could do this...18:12
*** rnorwood has quit IRC18:12
vladimir3pthe request to allocate storage for VSA is translated into volumes, goes to scheduler (who knows what drive types are present where) and based on that sent to appropriate node18:13
vladimir3pin general, we could perform an additional query to DB (or two) and retrieve this info18:13
vladimir3pbut wanted to avoid it18:14
vishyi'm considering the way that computes report capabilities18:14
vishyand then the scheduler makes decisiions based on the reported capabilities18:14
vladimir3pwe implemented it exactly in the same way18:15
vladimir3pnova-volume retrieves capabilities from driver and reports to scheduler18:15
vishyso the drive_type is used by the driver?18:15
vladimir3pby nova-volume driver - yes. It provides it to SW responsible for volume creation18:16
vladimir3peach node may have different types of storage18:16
vladimir3pand when volume is created it should know from what pool (with what QoS props) it should do it18:17
vishyso get_all_by_vsa is troublesome18:19
vishyif the volumes are reporting capabilities to the scheduler18:20
vishycan't the scheduler determine this without making a db call?18:20
vladimir3plet me see where scheduler is calling the DB ...18:21
dabovishy: don't know if the volume capabilities is done yet, but yes, that's the idea18:21
vladimir3pvishy: in our version each nova-volume reports its capabilities to the scheduler. Based on that, the scheduler knows where (and how many) drives of a particular type are18:23
vladimir3pwhen a volume creation request reaches the scheduler, it checks the requested drive type and performs selection based on that18:24
vladimir3phere it operates with specified drive type18:24
vladimir3pwhere drive type includes type (SATA/SAS/etc), capacity, etc...18:24
vladimir3pwe have 2 entry points for scheduler: create_volume & create_volumes18:25
vladimir3pby default the latter one is used (as scheduler should create a complex decision based on # of requested drives/volumes)18:25
vladimir3pin a single create_volume call we go to the DB in order to retrieve all vol's info ... we can probably avoid it by putting all volume parameters into the message, but we need the drive type info there as well18:26
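The flow vladimir3p describes can be sketched like this; every name and data shape below is an assumption for illustration, not nova's actual interface. Each nova-volume node periodically reports its drive-type capabilities, and the scheduler can then place a volume request without a DB round-trip:

```python
reported = {}  # node name -> {drive_type: free drive count}, via periodic reports

def report_capabilities(node, drive_types):
    """A nova-volume node announcing its storage (hypothetical shape)."""
    reported[node] = dict(drive_types)

def pick_node(requested_type, count=1):
    """Choose a node with enough free drives of the requested type."""
    for node, drives in sorted(reported.items()):
        if drives.get(requested_type, 0) >= count:
            return node
    return None  # no node can satisfy the request

report_capabilities("volume1", {"SATA": 10})
report_capabilities("volume2", {"SATA": 2, "SAS": 4})
print(pick_node("SAS", 2))   # -> volume2
print(pick_node("SATA", 5))  # -> volume1
```

This is the same pattern vishy alludes to for compute: decisions made from reported capabilities held in the scheduler's memory rather than from per-request database queries.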
vishyvladimir3p: in general i feel like there is some underlying changes we could make to make all this stuff work more generally18:27
vladimir3pvishy: it would be nice. I agree that we need to generalize it a bit18:27
vishyi think i'm going to have to dive in a little more and come up with a specific set of changes and suggestions18:27
vladimir3pvishy: but we also want it to be the part of nova...18:28
vladimir3pAwesome!!!18:28
vladimir3pvishy: pls let me know if you need any clarifications18:29
*** markvoelker has quit IRC18:30
*** mnaser has quit IRC18:31
*** mnaser has joined #openstack-dev18:31
dolphmis there a pylint command to reduce its output to just the total message count?18:34
dolphmjaypipes: i'm back, if you want to work on tests18:35
jaypipesdolphm: cool, in about 30 minutes/18:35
jaypipes?18:35
dolphmjaypipes: that works18:35
bcwaldonjaypipes: will jenkins kick back your S3 branch if I send it in?18:45
jaypipesbcwaldon: see latest comment on MP... just pushed a fix. I ran a param build and saw the problem.18:46
bcwaldonjaypipes: Ah, I ran the tests and Approved before you commented18:47
jaypipesbcwaldon: approve again then please :) grr. boto bugs.18:48
jaypipesbcwaldon: http://code.google.com/p/boto/issues/detail?id=54018:48
jaypipesbcwaldon: I will be *so* happy when 713154 lands.18:49
bcwaldonjaypipes: oh, I know18:50
bcwaldonjaypipes: me, too18:50
bcwaldonme too...18:50
*** mwhooker_ has joined #openstack-dev18:51
bcwaldonjaypipes: on its way in18:57
Vekvishy: you around?19:01
*** jhtran has joined #openstack-dev19:01
bcwaldonjaypipes: !19:02
jhtrananyone using osx as their dev env w/ xcode 4.x or higher?  ever since i got xcode 4, greenlet has probs compiling every time i have to do virtualenv for a branch19:05
*** mdomsch has joined #openstack-dev19:10
bcwaldonjaypipes: need to add python-boto to glance packages19:13
openstackgerritDolph Mathews proposed a change to openstack/keystone: Added support for versioned openstack MIME types  https://review.openstack.org/11419:19
jaypipesbcwaldon: gotcha. thx.19:22
*** rnorwood has joined #openstack-dev19:23
vishyVek: heyo19:24
openstackjenkinsProject nova build #1,171: SUCCESS in 4 min 16 sec: http://jenkins.openstack.org/job/nova/1171/19:31
openstackjenkinsTarmac: Fixes issue with OSAPI passing compute API a flavorid instead of an instance identifier. Added tests.19:31
*** AhmedSoliman has joined #openstack-dev19:32
*** negronjl has joined #openstack-dev19:33
openstackjenkinsProject nova build #1,172: SUCCESS in 4 min 27 sec: http://jenkins.openstack.org/job/nova/1172/19:43
openstackjenkinsTarmac: This change creates a minimalist API abstraction for the nova/rpc.py code so that it's possible to use other queue mechanisms besides Rabbit and/or AMQP, and even use other drivers for AMQP rather than Rabbit.  The change is intended to give the least amount of interference with the rest of the code, fixes several bugs in the tests, and works with the current branch.  I also have a small demo driver+server for19:43
blamardolphm: "pylint -rn <files> | wc -l" might do it19:53
openstackjenkinsProject nova build #1,173: SUCCESS in 4 min 20 sec: http://jenkins.openstack.org/job/nova/1173/19:56
openstackjenkinsTarmac: Fix various errors discovered by pylint and pyflakes.19:56
openstackjenkinsProject nova-tarball-bzr-delta build #404: FAILURE in 25 sec: http://jenkins.openstack.org/job/nova-tarball-bzr-delta/404/19:57
openstackjenkins* Tarmac: Round 1 of changes for keystone integration.19:57
openstackjenkins* Modified request context to allow it to hold all of the relevant data from the auth component.19:57
openstackjenkins* Pulled out access to AuthManager from as many places as possible19:57
openstackjenkins* Massive cleanup of unit tests19:57
openstackjenkins* Made the openstack api fakes use fake Authentication by default19:57
openstackjenkinsThere are now only a few places that are using auth manager:19:57
openstackjenkins* Authentication middleware for ec2 api (will move to stand-alone middleware)19:57
openstackjenkins* Authentication middleware for os api (will be deprecated in favor of keystone)19:57
openstackjenkins* Accounts and Users apis for os (will be switched to keystone or deprecated)19:57
openstackjenkins* Ec2 admin api for users and projects (will be removed)19:57
openstackjenkins* Nova-manage user and project commands (will be deprecated and removed with AuthManager)19:57
openstackjenkins* Tests that test the above sections (will be converted or removed with their relevant section)19:57
*** AhmedSoliman has quit IRC19:57
openstackjenkins* Tests for auth manager19:57
openstackjenkins* Pipelib (authman can be removed once ec2 stand-alone middleware is in place)19:57
openstackjenkins* xen_api (for getting images from old objectstore. I think this can be removed)19:57
openstackjenkinsVish19:57
openstackjenkins* Tarmac: Fix various errors discovered by pylint and pyflakes.19:57
vishysandywalsh: ping19:58
* Vek waves at vishy19:59
Veknow that your branch is merged, mind reviewing mine?  :)19:59
vishysure sure20:00
Vekcool, thanks :)20:00
vishyi just noticed that the tarball failed20:00
openstackjenkinsProject nova build #1,174: SUCCESS in 4 min 32 sec: http://jenkins.openstack.org/job/nova/1174/20:02
openstackjenkinsTarmac: Round 1 of changes for keystone integration.20:02
openstackjenkins* Modified request context to allow it to hold all of the relevant data from the auth component.20:02
openstackjenkins* Pulled out access to AuthManager from as many places as possible20:02
openstackjenkins* Massive cleanup of unit tests20:02
openstackjenkins* Made the openstack api fakes use fake Authentication by default20:02
openstackjenkinsThere are now only a few places that are using auth manager:20:02
openstackjenkins* Authentication middleware for ec2 api (will move to stand-alone middleware)20:02
openstackjenkins* Authentication middleware for os api (will be deprecated in favor of keystone)20:02
openstackjenkins* Accounts and Users apis for os (will be switched to keystone or deprecated)20:02
openstackjenkins* Ec2 admin api for users and projects (will be removed)20:02
openstackjenkins* Nova-manage user and project commands (will be deprecated and removed with AuthManager)20:02
openstackjenkins* Tests that test the above sections (will be converted or removed with their relevant section)20:02
openstackjenkins* Tests for auth manager20:02
openstackjenkins* Pipelib (authman can be removed once ec2 stand-alone middleware is in place)20:02
openstackjenkins* xen_api (for getting images from old objectstore. I think this can be removed)20:02
openstackjenkinsVish20:02
vishyVek: there are merge errors20:04
vishyVek: perhaps you merged a little too quickly?  Try grabbing trunk and merging again20:04
Vekvishy: quite possible...20:04
Vekand indeed it was a little too quickly.20:05
*** mdomsch has quit IRC20:05
*** mwhooker_ has quit IRC20:06
vishydabo: ping20:08
dabovishy: pong20:09
vishydabo: question about current state of the scheduler code20:09
daboi'll try20:09
vishydabo: say i want to have two different hypervisors in the same zone20:09
vishyis there enough information reported to the scheduler to make a decision about where to send a vm20:10
dabovishy: hmmm... I've only seen the code for how xen handles this20:11
daboi don't think 'hypervisor_type' is included20:11
vishyi'm asking because there is a request for volumes to allow different drivers in the same zone20:11
dabothe idea was that each hypervisor would report its own state in its own way20:11
dabobut I can see how having some required data points (i.e., type) would make sense20:12
vishyi'm considering something like the following: adding an optional required_hypervisor to the InstanceTypes table20:12
dabowould there ever be an "optional" hypervisor for an instance type?20:13
vishywell a type could be only xen hypervisor20:13
daboiow, would 'hypervisor' be sufficient for a name20:13
vishyor it could run on all20:13
daboah, I didn't know that the same type could run on all20:14
vishybut as i was thinking about it i've been trying to generalize it20:14
vishyso say you could have an arbitrary blob associated with a type that would be passed to the scheduler20:14
vishyinside this blob you could put things like required hypervisor, etc.20:15
kbringardquick ? about flat DHCP... can public_interface and flat_interface be the same?20:15
dabovishy: that sounds like sandywalsh's json-based requirements matching20:16
vishykbringard: yes20:16
vishykbringard: actually no20:16
kbringardyea, that's what I was starting to wonder20:16
vishykbringard: you will want to set public_interface to br10020:16
kbringardah so20:17
vishyor whatever bridge you are using20:17
kbringardeven if br100 is attached to eth0, which is the flat_interface?20:17
dabovishy: my concern is that currently there is no defined requirements that each host must report20:17
dabos/is no/are no/20:17
dolphmblamar: ended up using "pylint --output-format=parseable keystone | grep 'keystone/' | wc -l" thanks!20:17
vishyhmm, the requirements matching could work, but where do the requirements come from?20:18
dabovishy: so hosts of one hypervisor may report a 'hypervisor_type', while another may not20:18
vishysure but you can default to unknown20:18
*** cp16net_ has quit IRC20:19
daborequirements would be the flexible part that each deployment could customize20:19
vishybasically i think we need to be able to pass some arbitrary data to the scheduler from the type20:19
vishyand also to the driver20:19
vishyso maybe I should just call it scheduler_data20:19
daboyou may want to talk with sandywalsh about this. He's the one that did the work on that part. He could explain the thought behind it better20:19
daboExcept that I think he's gone for the day20:19
vishyyeah i pinged him but he isn't around :)20:20
daboHave you looked at nova/scheduler/least_cost.py?20:20
Vekvishy: conflicts fixed.20:20
vladimir3pvishy/dabo: there should be some sort of compromise. either we define a type of scheduler per type of data, or standardize on requirements, in which case one scheduler can handle the standardized data20:20
vladimir3pvishy: thanks a lot for looking at it20:21
dabovladimir3p: What I was thinking was a set of basic data points that each hypervisor type should report. Each type could then add any additional information that it wanted20:21
dabovladimir3p: things like 'hypervisor_type', 'enabled', etc.20:22
vladimir3pdabo: exactly. this scenario is more aligned with "standardizing" the types of data reported20:22
dabovladimir3p: the other side of this was that at the summit, it was made very clear that different people had very different needs20:23
vladimir3pdabo: we can go another route and define different schedulers... but it means that we need to have multiple scheds in the system20:23
dabothe ability to customize the selection criteria is critical20:23
dabovladimir3p: not separate schedulers20:23
daboscheduler gets info about its compute nodes (or volumes) from those nodes20:24
kbringardthanks vishy, that fixed it20:24
sandywalshsorry vishy just following thread now20:24
daboeach node would report its own capabilities20:24
vishyoh hey sandywalsh20:24
dabowe hadn't dealt with heterogeneous hypervisor deployments20:25
sandywalshvishy, each hypervisor reports their status to the ZoneManager periodically.20:25
sandywalshvishy, currently they don't say exactly what hypervisor they are, but they could easily enough20:25
sandywalshI filed a bug to standardize what gets reported20:25
vishyso we have requirements in volume to support volume types that can a) go to specific backends and b) pass arbitrary metadata to backends20:25
vishyso I was looking at the stuff that was done for instances and see how much we have already20:26
vishythis would be the equivalent of putting some data in instance type that would do the same thing20:26
sandywalshyes, it's very compute-centric right now, but could easily be expanded to support volume/network I think20:26
sandywalshafter all, it's just key-value matching essentially20:26
vishysandywalsh: where does the metadata come from?20:27
sandywalsheach virt layer reports it20:27
vishysandywalsh: is the plan to expose this to the user?20:27
vishyi mean from the request side20:27
sandywalshah20:27
sandywalshcurrently it's all InstanceType based, but we added /zones/create to handle new request formats20:27
sandywalsh(like a JSON query-based request)20:28
sandywalshkind of a lisp-like grammar for more expressive requests20:28
vishyhow do you feel about putting a json blog in instancetype table that is passed to the scheduler20:28
Vekdon't you think a full-up blog is overkill?  :)20:28
* Vek ducks rapidly20:28
vishys/blog/blob20:29
vishy:p20:29
vishythx Vek20:29
sandywalshwell, I thought instancetype was supposed to be rather rigid20:29
sandywalshthere was a change recently to add extra-attributes to InstanceType table20:29
vishysandywalsh: I'm thinking for third party schedulers and drivers20:29
Vek:)20:30
vishybasically I'm thinking of: adding some extra metadata that is stuffed into request context20:30
vishyso that the scheduler and driver can do the right thing20:30
sandywalshcould you put it in the extra-attributes?20:31
sandywalshand that makes it optional20:31
sandywalshrather than extend InstanceType table for the blob?20:31
vishydid extra-attributes actually show up?20:31
sandywalshlooking20:32
vishywe have instance_metadata20:32
vishyi don't see anything else20:32
sandywalshInstanceTypeExtraSpecs20:32
vishyah cool that is what i was looking for20:33
sandywalshthat should work and not upset the apple cart. then your custom drivers/etc can pick it up and use it20:33
sandywalshand if you wanted it more dynamic you could modify OS API /zones/boot20:34
sandywalshto accept the JSON query20:34
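The InstanceTypeExtraSpecs idea sandywalsh points to can be sketched as follows. All names here (the flavor, the `required_hypervisor` key, the helper) are hypothetical; the point is only that optional scheduler hints ride along as key/value extra specs rather than as new InstanceType columns, and a custom scheduler or driver reads just the keys it understands:

```python
instance_type = {
    "name": "m1.xen.small",                          # hypothetical flavor
    "memory_mb": 2048,
    "extra_specs": {"required_hypervisor": "xen"},   # optional scheduler hint
}

def host_satisfies(host_caps, extra_specs):
    """True if a host's reported capabilities meet the type's hints."""
    required = extra_specs.get("required_hypervisor")
    # Types with no hint can run anywhere; hinted types need a matching host.
    return required is None or host_caps.get("hypervisor_type") == required

print(host_satisfies({"hypervisor_type": "xen"}, instance_type["extra_specs"]))  # True
print(host_satisfies({"hypervisor_type": "kvm"}, instance_type["extra_specs"]))  # False
```

Because unknown keys are simply ignored, this stays optional and "doesn't upset the apple cart" for deployments that never set the hint.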
vishysandywalsh: how do the extra specs get to the scheduler?20:34
vishyare they loaded from the db? One sec, checking20:35
sandywalshI'm pretty sure they're loaded from the db ... perhaps not in nova.compute.api, but it may need to be done explicitly in the scheduler20:35
sandywalshInstanceType is passed to the scheduler via the RPC call from compute.api20:35
vishydoesn't look like there is a scheduler that is doing this20:35
sandywalshI think most of that is done in HostFilter currently20:36
sandywalshthat's where the hard requirements are filtered out (can_boot_windows, etc)20:37
vishysandywalsh: ok i think this will work for VolumeTypes.  I think we'll have to use a default flavor for ec220:37
vishysandywalsh: next part: metadata for the driver20:38
sandywalshscheduling (using the zone scheduler) is in two parts: host filtering and host weighing20:38
vishyi could query at the driver level for extra specs too i guess20:38
sandywalshyes, I was going to say ... you can stuff what you need in extra-specs20:38
vishybut it seems like the scheduler should be able to pass extra data to the driver20:38
sandywalshyes, correct, or the driver can look it up itself20:39
sandywalshpie/cake really20:39
vishyso I guess the larger question is: is there any shared concept of volume_type20:40
vishyor is volume_type just id / name and everything else goes in VolumeExtraSpecs20:40
sandywalshcurrently there is no concept of volume in the zone aware scheduler ... it's compute only right now. But it shouldn't be hard to change.20:42
sandywalshand actually, I wonder if it needs to be zone-aware20:43
sandywalshsince the volume will stem from where the host is placed20:43
vishysandywalsh: it will at the very least need to be able to be aware of zones20:43
vishyso that someone can create a volume local to a compute20:43
sandywalshyes, certainly, and the more complex schedulers are available in that framework20:44
vishybut I'm more concerned about using the spec stuff from schedulers20:44
vishyspeaking of which, the model of using one scheduler class for both compute and volume is terrible20:44
dolphmmtaylor: is jenkins asleep or what?20:44
mtaylordolphm: aroo?20:44
sandywalshvishy, well, now's a good time for us to address that if need be.20:45
dolphmmtaylor: we have a review (114) that's been at +2 for about 40 minutes, and jenkins hasn't stepped in20:45
mtaylordolphm: looking20:45
dolphmmtaylor: https://review.openstack.org/#change,11420:45
mtaylordolphm: +1 and +1 != +2 ... (I'm not crazy about the +2 naming, but oh well)20:46
mtaylordolphm: you need someone in keystone-core to actually give it a +2 vote20:46
dolphmmtaylor: oh cool, didn't know that rule had been implemented :)20:47
mtaylordolphm: yup!20:47
sandywalshvishy sorry, but I have to run ... chat later on this?20:47
vishysandywalsh: sure20:47
mtaylordolphm: should you or yogi be able to do that? right now only ziad and jesse have that power20:47
sandywalshcool, thx20:47
vishyVek: where is auth_token actually passed into RequestContext?20:47
dolphmmtaylor: i think it's fine as-is, we talked about at least one us and one of them having to review everything20:48
Vekvishy: in the nova_auth_token.py plug-in in keystone.20:48
vishyah20:48
VekI have a patch waiting to go in20:48
vishyno wonder i couldn't find it :)20:48
Vek(i.e., I'm sitting on it until this goes into nova)20:48
vishyVek: is it a string?20:48
Vek:)20:48
Vekyep.20:48
mtaylordolphm: k. ziad should probably re-approve 107 as well now that yogi re-submitted20:49
vishyok cool, just making sure it would get in20:49
vishys/in/transferred through the queue properly20:49
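vishy's check — that auth_token is a plain string — matters because the context has to survive a serialize/deserialize round trip over the message queue. A minimal illustrative sketch (not nova's real RequestContext, just a stand-in with the same idea):

```python
import json

# Illustrative stand-in for nova's RequestContext: auth_token must be a
# plain string so the context can be dumped onto the queue as JSON and
# rebuilt intact on the receiving worker.
class RequestContext:
    def __init__(self, user_id, auth_token=None):
        self.user_id = user_id
        self.auth_token = auth_token

    def to_dict(self):
        return {"user_id": self.user_id, "auth_token": self.auth_token}

    @classmethod
    def from_dict(cls, d):
        return cls(**d)

# Simulate the queue hop with a JSON round trip.
ctx = RequestContext("alice", auth_token="abc123")
wire = json.dumps(ctx.to_dict())
rebuilt = RequestContext.from_dict(json.loads(wire))
```

If auth_token were some richer object, the JSON step would fail or lose information, which is exactly the "transferred through the queue properly" concern above.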
dolphmmtaylor: will gerrit eventually send a reminder to reviewers?20:49
vishyVek: this doesn't fix the glance support for kvm right?20:50
Vek"kvm"?20:50
mtaylordolphm: it _should_ be sending emails already BUT, I'll put that on the list20:50
Vekno, I don't think it does.20:50
Vek(libvirt, right?)20:50
VekI pass the context in to those functions, now, but I didn't know where to send it from there.20:50
vishyVek: right20:50
vishylemme look real quick20:51
Veksame with vmwareapi, of course.20:51
dolphmmtaylor: I'm just used to Code Collaborator, which will send daily reminders if you're needed as a reviewer20:51
mtaylordolphm: it's a good idea - and I see no reason why we can't do it20:52
vishyhehe you called it cxt?20:52
kbringardisn't that what that weiner guy did that got him in trouble?20:52
openstackgerritDolph Mathews proposed a change to openstack/keystone: Tweaked the sample dtest to avoid pylint errors  https://review.openstack.org/11520:53
openstackgerritDolph Mathews proposed a change to openstack/keystone: pylint fix: avoiding overriding the built in next() function  https://review.openstack.org/11620:54
vishyVek: doesn't rescue need a context as well?21:00
* Vek looks...21:03
vishyit does21:03
vishyrescue -> spawn_rescue -> spawn21:03
bcwaldonvishy: I'm not quite sure how to move forward with https://code.launchpad.net/~rackspace-titan/nova/remove_xenapi_image_service_flag_lp708754/+merge/6919321:04
bcwaldonthoughts?21:04
Vekwhere does it call spawn?  What file?21:05
Vekyeah, you're right, I totally missed that instance of spawn21:07
Vekof course, there's no way that it could work anymore21:07
Vekeven minus my addition of the ctx argument, the network_info argument isn't passed to it.21:07
vishyVek: lp:~vishvananda/nova/glance-private-images21:09
vishythere is libvirt support21:09
vishywe need to make a bug for hyperv and vmware21:09
vishybcwaldon: we should get in touch with the citrix team and make sure they don't mind.  Also it will need a trunk merge as well21:10
Vek*nod*21:10
openstackgerritDolph Mathews proposed a change to openstack/keystone: simple pylint fixes  https://review.openstack.org/11721:10
bcwaldonvishy: sounds good. Do you have a contact I can hit up?21:10
vishyi will ping someone21:11
bcwaldonvishy: thank you21:11
*** dolphm has quit IRC21:13
Vekvish: I recommend using "ctx" in place of "context" as function arguments, btw, just to make sure you don't accidentally shadow the context module.21:14
bcwaldonctxt21:14
dabocontxt21:14
dabo:)21:14
vishyVek: the problem is that we are using context everywhere and some things are called via kwargs21:15
* Vek has been using "ctx" since he worked on Kerberos21:15
Vekvishy: yeah :/21:15
bcwaldonvishy: that doesn't make it right ;)21:16
vishyso it is dangerous. can we stay consistent and change them all at once?21:16
bcwaldonI mean Vek21:16
vishyi was using ctxt where needed, but I think it is confusing to have context in most places but then ctx in the driver21:16
*** negronjl has quit IRC21:17
Vekthe other alternative is to import the context module as something else21:17
*** negronjl has joined #openstack-dev21:17
Vek"from nova import context as nova_context"21:17
Vekthoughts?21:21
VekI know of at least one place where both the context and the context module are referred to within a single def...21:22
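The shadowing hazard Vek describes, and his alias workaround, can be shown in a few lines. A `types.SimpleNamespace` stands in for nova's `context` module here so the sketch is self-contained:

```python
import types

# Stand-in for "from nova import context": a module-like object exposing
# get_admin_context(), as nova's context module does.
context = types.SimpleNamespace(get_admin_context=lambda: "admin-context")

def broken(context):
    # The parameter shadows the module: `context` is now the caller's
    # RequestContext (here, a string), so this raises AttributeError.
    return context.get_admin_context()

# Vek's workaround: "from nova import context as nova_context" — the alias
# is bound before any parameter can shadow it.
nova_context = context

def fixed(context):
    # Both the request context and the module are reachable in one def,
    # which is the case Vek mentions above.
    return nova_context.get_admin_context(), context
```

This is why "import the context module as something else" sidesteps the rename debate: callers can keep passing `context=` as a kwarg, and only the import line changes.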
bcwaldonvishy: problem!21:23
bcwaldonvishy: can't build a venv because http://nova.openstack.org/Twisted-10.0.0Nova.tar.gz returns a 40421:23
vishyyeah i'm aware21:24
vishydoes mainline twisted have the fix now?21:24
vishymaybe we can just up the twisted version and grab it from pypi21:24
bcwaldonvishy: I never knew about this, so I'm of no help21:24
vishyhonestly we don't even use twisted anymore so maybe we should just remove it21:24
*** martine has quit IRC21:26
bcwaldonvishy: it's used in nova-instancemonitor21:26
bcwaldonvishy: maybe not21:26
vishyyeah nova-instancemonitor is dead21:27
bcwaldonvishy: yeah, it's definitely still used in the code. Maybe it can be removed at this point21:27
vishylets do a merge that kills it and removes twistd.py and twisted dep21:27
bcwaldonsure21:27
bcwaldonyou want to do it, or me21:28
vishybcwaldon: can you do it? Tracking down other stuff atm21:28
bcwaldonvishy: yep21:28
*** alekibango has quit IRC21:37
*** yogirackspace has left #openstack-dev21:42
bcwaldonvishy: https://code.launchpad.net/~rackspace-titan/nova/remove-twistd/+merge/6986521:43
bcwaldonvishy: building venv and running tests now21:43
s1rpjust for fun: here's a profile of nova's current slowest test: http://paste.openstack.org/show/1980/22:00
vishys1rp: which test is it?22:03
s1rptoo_many_cores22:03
*** mfer has quit IRC22:03
s1rpwondering if a judiciously placed stub will speed it up22:04
*** amccabe has quit IRC22:07
vishystub out the db calls yeah :)22:10
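"Stub out the db calls" might look like the following with `unittest.mock`; the quota-check function and the `db` stand-in are illustrative, not nova's real API (nova's own tests used stubout at the time):

```python
from unittest import mock

class db:
    """Stand-in for nova.db: imagine each call hits a real database."""
    @staticmethod
    def instance_get_all(context):
        raise RuntimeError("slow real DB call")

def cores_in_use(context):
    return sum(inst["vcpus"] for inst in db.instance_get_all(context))

def check_too_many_cores(context, requested, quota=8):
    """Toy version of a too_many_cores quota check."""
    return cores_in_use(context) + requested <= quota

# In the test, patch the db call to return canned rows so nothing slow
# (no real DB round trip) ever runs.
with mock.patch.object(db, "instance_get_all",
                       return_value=[{"vcpus": 4}, {"vcpus": 2}]):
    assert check_too_many_cores(None, 2) is True   # 6 + 2 <= 8
    assert check_too_many_cores(None, 4) is False  # 6 + 4 > 8
```

The judicious placement s1rp mentions matters: patching at the db boundary keeps the quota logic itself under test while removing the slow dependency.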
vishybcwaldon: got an ok from citrix guys on removing objectstore code22:10
vishybcwaldon: you will need to do a trunk merge though, I moved some of that stuff around in the authmanager branch22:10
bcwaldonvishy: will do momentarily22:10
bcwaldonvishy: fighting with venv crap22:11
bcwaldonvishy: apparently we dont need to patch eventlet anymore22:11
vishyno we don't :)22:13
bcwaldonvishy: it'll come out in this twisted branch if thats okay22:13
bcwaldonvishy: and do you want me to delete all of objectstore?22:13
vishyno no just the imageservice stuff in xen22:14
bcwaldonvishy: check22:14
s1rpvishy: strangely, self.context = context.get_admin_context() is the line that seems to slow the test down.... trying to wrap my head around that22:16
s1rpanticipating some eventlet wonkiness22:16
s1rpnm, misread the number, context line is fine22:18
*** kbringard has quit IRC22:21
s1rpheh, just noticed we have two (!) tests called test_too_many_cores22:26
s1rpmust have been a bad merge22:26
Vekvishy: my branch is ready to review again.22:26
bcwaldonvishy: remove_xenapi_image_service_flag should be good to go22:30
*** bsza has quit IRC22:32
vladimir3pGlance/image question: we are trying to register a raw image in glance and it keeps failing with MemoryError. We are using a command like: glance add name="vc-psp4-test" is_public=True container_format=bare image_state=available project_id=nubeblog architecture=x86_64 < /vc-dev-psp4-3.img. Any ideas?22:33
vladimir3psame image works fine on other configs22:33
vladimir3p(with exactly same cmd)22:34
*** bcwaldon has quit IRC22:36
*** dragondm has quit IRC22:42
*** jhtran has quit IRC22:47
s1rpvladimir3p: MemoryError sounds like it's buffering the entire image in memory as it's reading it into glance (thought we had fixed that)...22:47
s1rpvladimir3p: how large is the image?22:47
vladimir3ps1rp: it is more than 2GB and these nodes are just desktops22:50
vladimir3ps1rp: other nodes are normal servers with enough memory22:50
s1rpvladimir3p: which backend are you using, filesystem, swift?22:51
vladimir3pfilesystem22:51
s1rpvladimir3p: are you running a pretty recent version of glance?22:52
vladimir3ps1rp: hmm... need to check, but should be relatively recent22:52
vladimir3ps1rp: glance 2011.3-dev22:52
s1rpvladimir3p: gonna dd out some random data on my machine and try to replicate22:52
vladimir3ps1rp: can send you the backtrace of this raise if you need it22:53
s1rpvladimir3p: yeah that could be helpful, thanks!22:53
vladimir3ps1rp: email?22:53
s1rpyou can drop it in paste.openstack.org22:54
s1rpand drop it in here if you'd like22:54
vladimir3ps1rp: sure22:54
vladimir3ps1rp: http://paste.openstack.org/show/1981/22:56
vladimir3ps1rp: this particular host has 2GB of RAM22:57
s1rpuhoh, looks like when we tried to make_body_seekable, we ended up introducing a copy of the entire image...22:58
s1rpvladimir3p: looks like this a bug that was introduced relatively recently (i think)22:58
vladimir3ps1rp: ok. any workarounds?22:59
s1rpvladimir3p: not sure of a workaround yet; this appears related to https://bugs.launchpad.net/glance/+bug/79471823:03
uvirtbotLaunchpad bug 794718 in glance "S3 requires seekable file. webob versions 0.9.8 through 1.0.7 make_body_seekable() method broken for chunked transfer requests" [Low,Fix committed]23:03
* s1rp wishes webob had a __version__23:03
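The failure mode s1rp suspects — make_body_seekable() pulling the entire request body into memory, so a 2GB image on a 2GB desktop dies with MemoryError — comes down to whole-body buffering versus fixed-size chunking. A self-contained sketch (CHUNKSIZE and the copy helpers are illustrative, not glance's actual code):

```python
import io

CHUNKSIZE = 64 * 1024  # illustrative chunk size

def buffered_copy(src, dst):
    # Whole body in RAM at once: memory use is O(image size), which is
    # the make_body_seekable() behavior being blamed above.
    dst.write(src.read())

def chunked_copy(src, dst):
    # Memory use is O(CHUNKSIZE) no matter how large the image is.
    while True:
        chunk = src.read(CHUNKSIZE)
        if not chunk:
            break
        dst.write(chunk)

src = io.BytesIO(b"x" * (3 * CHUNKSIZE + 5))
dst = io.BytesIO()
chunked_copy(src, dst)
```

Both copies produce identical output; only the peak memory differs, which is why the bug appears on 2GB desktops but not on well-provisioned servers.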
*** dantoni has joined #openstack-dev23:04
*** gaitan has quit IRC23:06
vladimir3ps1rp/uvirtbot: just went over the code of this fix ... seems like it is related to s3 only, isn't it?23:07
s1rpvladimir3p: sort of, that particular issue intersected with the s3 backend, but in general i think webob has some serious issues with chunked transfers, and that's what's causing this...23:07
s1rpvladimir3p: could you go ahead and make a bug for this so it's tracked... gotta run at the moment, but should be able to take a look at this later tonight23:08
vladimir3ps1rp: ok, thanks. should we try to update webob?23:09
s1rpvladimir3p: certainly worth a shot23:10
vladimir3ps1rp: ok, thanks23:11
*** bengrue has joined #openstack-dev23:15
*** jkoelker has quit IRC23:30

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!