*** vladimir3p has quit IRC | 00:01 | |
*** jdg_ has quit IRC | 00:04 | |
*** ohnoimdead_ has joined #openstack-meeting | 00:15 | |
*** Ravikumar_hp has quit IRC | 00:16 | |
*** dragondm has quit IRC | 00:17 | |
*** ohnoimdead has quit IRC | 00:18 | |
*** ohnoimdead_ is now known as ohnoimdead | 00:18 | |
*** HowardRoark has joined #openstack-meeting | 00:24 | |
*** nati2 has quit IRC | 00:33 | |
*** nati2 has joined #openstack-meeting | 00:33 | |
*** tsuzuki_ has joined #openstack-meeting | 00:58 | |
*** ohnoimdead has quit IRC | 01:07 | |
*** ohnoimdead has joined #openstack-meeting | 01:07 | |
*** nati2 has quit IRC | 01:25 | |
*** nati2_ has joined #openstack-meeting | 01:25 | |
*** nati2_ has quit IRC | 01:32 | |
*** bencherian has quit IRC | 01:39 | |
*** bencherian has joined #openstack-meeting | 01:42 | |
*** bencherian has quit IRC | 02:01 | |
*** jdg_ has joined #openstack-meeting | 02:11 | |
*** ohnoimdead has quit IRC | 02:18 | |
*** jdg_ has quit IRC | 02:58 | |
*** jdg_ has joined #openstack-meeting | 03:41 | |
*** HowardRoark has quit IRC | 03:56 | |
*** corrigac_ has joined #openstack-meeting | 04:03 | |
*** _cerberu` has joined #openstack-meeting | 04:04 | |
*** vish1 has joined #openstack-meeting | 04:05 | |
*** annegentle has quit IRC | 04:05 | |
*** vishy has quit IRC | 04:05 | |
*** _cerberus_ has quit IRC | 04:05 | |
*** ogelbukh has quit IRC | 04:05 | |
*** annegentle has joined #openstack-meeting | 04:05 | |
*** jaypipes has quit IRC | 04:05 | |
*** corrigac has quit IRC | 04:05 | |
*** tr3buchet has quit IRC | 04:05 | |
*** tr3buchet has joined #openstack-meeting | 04:05 | |
*** deshantm has quit IRC | 04:05 | |
*** ogelbukh has joined #openstack-meeting | 04:06 | |
*** deshantm has joined #openstack-meeting | 04:06 | |
*** jaypipes has joined #openstack-meeting | 04:18 | |
*** jdg_ has quit IRC | 04:20 | |
*** reed has quit IRC | 04:45 | |
*** bengrue has joined #openstack-meeting | 04:57 | |
*** royh has quit IRC | 05:01 | |
*** royh has joined #openstack-meeting | 05:02 | |
*** _cerberu` is now known as _cerberus_ | 05:28 | |
*** bengrue has quit IRC | 05:54 | |
*** novas0x2a|laptop has quit IRC | 06:02 | |
*** bencherian has joined #openstack-meeting | 07:22 | |
*** bencherian has quit IRC | 07:44 | |
*** tsuzuki_ has quit IRC | 11:10 | |
*** markvoelker has joined #openstack-meeting | 11:45 | |
*** cmagina has quit IRC | 12:54 | |
*** cmagina has joined #openstack-meeting | 12:54 | |
*** scottsanchez has quit IRC | 13:22 | |
*** mdomsch has joined #openstack-meeting | 13:46 | |
*** dprince has joined #openstack-meeting | 14:06 | |
*** scottsanchez has joined #openstack-meeting | 14:19 | |
*** deshantm has quit IRC | 14:21 | |
*** deshantm has joined #openstack-meeting | 14:33 | |
*** dolphm has joined #openstack-meeting | 14:45 | |
*** reed__ has joined #openstack-meeting | 14:56 | |
*** rnirmal has joined #openstack-meeting | 14:56 | |
*** dragondm has joined #openstack-meeting | 15:02 | |
*** vladimir3p has joined #openstack-meeting | 15:12 | |
*** bencherian has joined #openstack-meeting | 15:15 | |
*** HowardRoark has joined #openstack-meeting | 15:23 | |
*** michaelkre has joined #openstack-meeting | 15:23 | |
*** reed__ is now known as reed | 15:27 | |
*** cmagina has quit IRC | 15:33 | |
*** cmagina has joined #openstack-meeting | 15:40 | |
*** dolphm has quit IRC | 15:48 | |
*** dolphm has joined #openstack-meeting | 15:50 | |
*** nati2 has joined #openstack-meeting | 15:51 | |
*** dolphm has quit IRC | 15:59 | |
*** dolphm has joined #openstack-meeting | 15:59 | |
*** dendrobates is now known as dendro-afk | 16:14 | |
*** adjohn has joined #openstack-meeting | 16:16 | |
*** dolphm has joined #openstack-meeting | 16:25 | |
*** dendro-afk is now known as dendrobates | 16:29 | |
*** clayg has joined #openstack-meeting | 16:43 | |
*** DuncanT has joined #openstack-meeting | 16:46 | |
*** dolphm has quit IRC | 16:47 | |
*** darraghb has joined #openstack-meeting | 16:48 | |
*** timr1 has joined #openstack-meeting | 16:49 | |
*** dolphm has joined #openstack-meeting | 16:54 | |
*** mdomsch has quit IRC | 16:59 | |
vladimir3p | Hi | 17:00 |
timr1 | hi | 17:00 |
DuncanT | Hi | 17:00 |
*** vish1 is now known as vishy | 17:00 | |
clayg | ohai | 17:01 |
vladimir3p | ok, so probably waiting for Renuka to start the meeting | 17:01 |
*** bencherian has quit IRC | 17:01 | |
clayg | vladimir3p: blueprint looks great, nice work | 17:01 |
vladimir3p | clayg: thanks | 17:02 |
*** renuka has joined #openstack-meeting | 17:02 | |
renuka | Should we start the nova-volume meeting? | 17:03 |
vladimir3p | hey, yes. go ahead | 17:03 |
renuka | #startmeeting | 17:03 |
openstack | Meeting started Thu Oct 20 17:03:29 2011 UTC. The chair is renuka. Information about MeetBot at http://wiki.debian.org/MeetBot. | 17:03 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic. | 17:03 |
renuka | vladimir3p: would you like to begin with the scheduler discussion | 17:04 |
renuka | #topic volume scheduler | 17:04 |
*** openstack changes topic to "volume scheduler" | 17:04 | |
vladimir3p | no prob. So, there is a blueprint spec you've probably seen | 17:04 |
renuka | yes | 17:04 |
vladimir3p | we have a couple of options for implementing it | 17:04 |
vladimir3p | 1. to create a single "generic" scheduler that will perform some exemplary scheduling | 17:05 |
*** dolphm has quit IRC | 17:05 | |
vladimir3p | 2. or do almost nothing there and just find an appropriate sub-scheduler | 17:05 |
*** hggdh has quit IRC | 17:05 | |
vladimir3p | another change that we may need - providing some extra data back to the Default Scheduler (for sending it to the manager host) | 17:06 |
vladimir3p | let's first of all discuss how we would like the scheduler to look: | 17:06 |
renuka | ok | 17:06 |
vladimir3p | either we will put there any basic logic | 17:06 |
*** dolphm has joined #openstack-meeting | 17:06 | |
vladimir3p | or just rely on every vendor to supply their own | 17:06 |
vladimir3p | I prefer the 1st option | 17:07 |
vladimir3p | any ideas? | 17:07 |
rnirmal | the only problem I see with the sub scheduler is, it's going to become too much too soon | 17:07 |
clayg | I think a generic type scheduler could get you pretty far | 17:07 |
rnirmal | so starting with a generic scheduler would be nice | 17:07 |
DuncanT | Assuming it is generic enough, I can't see a problem with a good generic scheduler | 17:07 |
clayg | I'm not sure how zones and host aggregates play in at the top level scheduler | 17:07 |
timr1 | vladimir3p: am I right in understanding that the generic scheduler will be able to schedule on opaque keys | 17:08 |
renuka | I think the vendors can plug in their logic in report capabilities, correct? So why not let the scheduler decide which backend is appropriate and, based on a mapping of which volume workers can reach which backend, have the scheduler select a node | 17:08 |
vladimir3p | timr1: yes, at the beginning the generic scheduler could just match volume_type keys with reported keys | 17:09 |
vladimir3p | I was thinking that it must have some logic for quantities as well | 17:09 |
DuncanT | Since the HP backend can support many volume types in a single instance, it is also important that required capabilities get passed through to the create. Is that covered in the design already? | 17:09 |
vladimir3p | (instead of going to the DB for all available vols on that host) | 17:09 |
vladimir3p | DuncanT: can you pls clarify? | 17:09 |
vladimir3p | DuncanT: I was thinking that this is the essential requirement for it | 17:10 |
vladimir3p | renuka: on volume-driver level, yes, they will report whatever keys they want | 17:10 |
renuka | DuncanT: isn't the volume type already passed in the create call (with the extensions added) | 17:10 |
*** hggdh has joined #openstack-meeting | 17:11 | |
DuncanT | vladimir3p: A single HP backend could support say (spindle speed == 4800, spindle speed == 7200, spindle speed == 15000), so if the user asked for spindle speed > 7200 we'd need that detail passed through to create | 17:11 |
vladimir3p | DuncanT: yes, volume type is used during volume creation and might be retrieved by the scheduler | 17:11 |
renuka | vladimir3p: correct, so those keys should be sufficient at this point to plug in vendor logic, correct? | 17:11 |
timr1 | that sound ok so | 17:12 |
renuka | DuncanT: I think the user need only specify the volume type. The admin can decide if a certain type means spindle speed = x | 17:12 |
vladimir3p | renuka: yes, but without a special scheduler driver they will be useless (the generic one will only match volume type vs these reported capabilities) | 17:12 |
vladimir3p | DuncanT, renuka: yes, I was thinking that admin will create separate volume types. Every node will report what exactly it sees and scheduler will perform a matching between them | 17:13 |
renuka | vladimir3p: what is the problem with that? | 17:13 |
renuka | vladimir3p: every node should report for every backend (array) that it can reach | 17:13 |
renuka | the duplicates can be filtered later or appropriately at some point | 17:14 |
DuncanT | vladimir3p: That makes sense, thanks | 17:14 |
vladimir3p | renuka: I mean drivers will report whatever data they think is appropriate (some opaque key/value pairs), but the generic scheduler will only look at the ones it recognizes (and those are the ones supplied in volume types) | 17:14 |
vladimir3p | if vendor would like to perform some special logic based on this opaque data - special schedulers will be required | 17:15 |
timr1 | I agree that we want to deal with quantities as well | 17:15 |
rnirmal | can we do something like match volume_type: if type supports sub_criteria match those too (gleaned from the opaque key/value pairs) | 17:15 |
DuncanT | vladimir3p: It should be possible to do ==, !=, <, > generically on any key, shouldn't it? | 17:16 |
renuka | rnirmal: I agree | 17:16 |
timr1 | don't think we need special schedulers to support additional vendor logic in the backend | 17:16 |
vladimir3p | DuncanT: == and != yes, not sure about >, < | 17:16 |
renuka | yea, we should be able to add at least some basic logic via report capabilities | 17:17 |
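Stepping outside the log for a moment: the generic matching being discussed here - a volume type's key/value pairs checked against the capabilities each node reports, with equality plus the simple comparisons DuncanT raises - could be sketched roughly as below. The `extra_specs` shape, key names, and operator-prefix syntax (e.g. `"> 7200"`) are illustrative assumptions, not the actual nova-volume scheduler API.

```python
# Rough sketch of generic volume-type/capability matching. All names and
# the operator syntax are assumptions for illustration only.

OPS = {
    "<": lambda a, b: a < b,
    ">": lambda a, b: a > b,
    "<=": lambda a, b: a <= b,
    ">=": lambda a, b: a >= b,
}

def matches(required, reported):
    """True if every required key/value is satisfied by what a node reported."""
    for key, want in required.items():
        have = reported.get(key)
        if have is None:
            return False
        parts = want.split() if isinstance(want, str) else []
        if len(parts) == 2 and parts[0] in OPS:
            # Numeric comparison, e.g. spindle_speed "> 7200".
            if not OPS[parts[0]](float(have), float(parts[1])):
                return False
        elif have != want:
            # Plain equality on opaque values.
            return False
    return True

def filter_hosts(volume_type, hosts):
    """Keep only hosts whose reported capabilities satisfy the volume type."""
    required = volume_type.get("extra_specs", {})
    return [name for name, caps in hosts.items() if matches(required, caps)]
```

A type asking for `{"spindle_speed": "> 7200"}` would then keep only nodes reporting a higher spindle speed, while plain string values fall back to equality matching.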
vladimir3p | rnirmal: it would be great to have sub_criteria, but how will the scheduler recognize that a particular key/value pair is not a single criterion, but a complex one | 17:17 |
renuka | #idea what if we add a "specify rules" at driver level and keep "report capabilities" at node/backend level | 17:17 |
rnirmal | vladimir3p: it wouldn't know that, without some rules | 17:18 |
vladimir3p | folks, how about we start with something basic and improve it over time (we could discuss these improvements right now, but let's agree on basics) | 17:18 |
vladimir3p | so, meanwhile we agreed that: | 17:18 |
vladimir3p | #agreed drivers will report capabilities in key/value pairs | 17:19 |
renuka | vladimir3p: opaque rules should be simple enough to add, while keeping it generic | 17:19 |
renuka | correction, nodes will report capabilities in key/value pairs per backend | 17:19 |
vladimir3p | #agreed basic scheduler will have a logic to match volume_type's key/value pairs vs reported ones | 17:19 |
vladimir3p | renuka: driver will decide how many types it would like to report | 17:20 |
clayg | I think the abstract_scheduler for compute would be a good base for the generic type/quantity scheduler | 17:20 |
vladimir3p | each type has key/value pairs | 17:20 |
vladimir3p | clayg: yes, I was thinking exactly the same | 17:20 |
vladimir3p | clayg: there is already some logic for matching this stuff | 17:21 |
clayg | least_cost allows you to register cost functions | 17:21 |
clayg | oh right... | 17:21 |
clayg | in the vsa scheduler | 17:21 |
clayg | ? | 17:21 |
vladimir3p | we will need to rework vsa scheduler, but it has some basic logic for that | 17:22 |
vladimir3p | how about quantities: I suppose this is important. do we all agree to add reserved keywords? | 17:22 |
renuka | clayg: so do you think it will be straightforward enough to go with the register cost functions at this point? | 17:23 |
DuncanT | vladimir3p: I think the set needs to be small, since novel backends may not have many of the same concepts as other designs | 17:24 |
timr1 | vladimir3p: yes - IOPS, bandwidths, capacity etc? | 17:24 |
clayg | renuka: I'm not sure... something like that may work, but the cost idea is more about selecting a best match from a group of basically identically capable hosts - but with different loads | 17:24 |
vladimir3p | the cost in our case might be total capacity or amount of drivers, etc... | 17:25 |
*** darraghb has quit IRC | 17:25 | |
renuka | I think there needs to be a way to call some method for a driver specific function. It will keep things generic and simplify a lot of stuff | 17:25 |
vladimir3p | that's why I suppose reporting something that scheduler will look at is important | 17:26 |
clayg | perhaps better would be to just subclass and override "filter_capable_nodes" to look at type... and also some vendor-specific opaque k/v pairs. | 17:26 |
clayg | the quantities stuff would more closely match the costs concepts | 17:26 |
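clayg's point that quantities map onto the cost concepts can be illustrated with a toy version of the least_cost idea from the compute scheduler: rank the already-filtered hosts by a weighted sum of registered cost functions. The specific cost functions and weights below are invented for illustration, not taken from nova.

```python
# Toy version of the least_cost idea: rank filtered hosts by a weighted
# sum of cost functions over their reported quantities. The cost
# functions and weights here are illustrative assumptions.

def free_capacity_cost(caps):
    # More free GB -> lower (better) cost.
    return -caps.get("free_capacity_gb", 0)

def volume_count_cost(caps):
    # Fewer volumes already placed -> lower cost.
    return caps.get("volume_count", 0)

def weighted_cost(caps, weighted_fns):
    """Sum weight * fn(caps) over a list of (fn, weight) pairs."""
    return sum(weight * fn(caps) for fn, weight in weighted_fns)

def cheapest_host(hosts, weighted_fns):
    """Pick the host name with the lowest weighted cost."""
    return min(hosts, key=lambda name: weighted_cost(hosts[name], weighted_fns))
```

The registration-style list of `(function, weight)` pairs is what makes this extensible: a deployment could weight free capacity against volume count, or add a vendor-specific quantity, without touching the selection loop.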
vladimir3p | clayg: yes, subclass will work for single-vendor scenario | 17:26 |
*** novas0x2a|laptop has joined #openstack-meeting | 17:26 | |
clayg | vladimir3p: I mean you could call the super method and just ignore / pass on "other-vendor" stuff | 17:27 |
renuka | vladimir3p: how about letting the scheduler do the basic scheduling as you said, and call a driver method passing the backends it has filtered? | 17:28 |
clayg | meh, I think we start with types and quantities - knowing that we'll have to add something more later | 17:28 |
rnirmal | regarding filtering, is anyone considering locating volume near vm, for scenarios with top-of-rack storage | 17:28 |
clayg | s/we/get out of the way and let Vlad | 17:28 |
renuka | rnirmal: yes | 17:28 |
vladimir3p | :-) | 17:28 |
clayg | ;) | 17:28 |
rnirmal | renuka: ok | 17:29 |
vladimir3p | renuka: not sure if the generic scheduler should call the driver... probably, as clayg mentioned, every provider could declare a subclass that will do whatever is required | 17:29 |
timr1 | timr1: we are including it | 17:29 |
vladimir3p | in this case we will not need to implement volume_type-to-driver translation tables | 17:29 |
vladimir3p | however, there is a disadvantage with this - only a single vendor per scheduler ... | 17:30 |
timr1 | we (HP) are agnostic at this point as our driver is generic and we re-schedule in the back end | 17:30 |
vladimir3p | timr1: do you really want to re-schedule at the volume driver level? | 17:30 |
vladimir3p | timr1: would it be easier if the scheduler had all the logic? | 17:31 |
renuka | vladimir3p: you mean a single driver... SM for example supports multiple vendor backend types | 17:31 |
timr1 | vladimir3p: we already do it this way - legacy | 17:31 |
vladimir3p | timr1: and just use volume node as a gateway | 17:31 |
vladimir3p | ok | 17:31 |
timr1 | that is correct - we just use it as a gateway | 17:31 |
vladimir3p | renuka: yes, single driver | 17:31 |
vladimir3p | so, can we claim that we agreed there will be no volume_type-to-driver translation meanwhile? | 17:32 |
renuka | vladimir3p: I still think matching simply on type is a bit too little. If a driver does report capabilities per backend, there should be a way to distinguish between two backends | 17:33 |
renuka | for example, 2 netapp servers do not make 2 different volume types. They map to the same one, but the scheduler should be able to choose one | 17:33 |
clayg | two backends of the same type? | 17:33 |
vladimir3p | how about we go over renuka's particular scenario and see if we could do it with a basic scheduler | 17:34 |
renuka | similarly, as rnirmal pointed out, top of rack type of restrictions cannot be applied | 17:34 |
clayg | renuka: how do you suggest the scheduler choose between the two if they both have equal "quantities"? | 17:34 |
renuka | that is where i think there should be a volume driver specific call | 17:35 |
renuka | or a similar way | 17:35 |
clayg | I don't really know how to support top-of-rack restrictions... unless you know where the volume is going to be attached when you create it | 17:35 |
*** adjohn has quit IRC | 17:36 | |
renuka | clayg: that will have to be filtered based on some reachability which is reported | 17:36 |
rnirmal | clayg: yeah, it's a little more complicated case, where it would require combining both vm and volume placement, so it might be out of scope for the basic scheduler | 17:36 |
renuka | clayg: we can look at user/project info and make a decision. I expect that is how it is done anyway | 17:37 |
vladimir3p | renuka: I suppose the generic scheduler could not really determine which one should be used, but you could do it in the sub-class. So you will override some methods and pick based on opaque data | 17:37 |
renuka | so by deciding to subclass, we have essentially made it so that at least for a while, we can have only 1 volume driver in the zone | 17:38 |
vladimir3p | in this case the scheduler will call the driver "indirectly", because the driver will override it | 17:38 |
*** bengrue has joined #openstack-meeting | 17:38 | |
renuka | can everyone here live with that for now? | 17:38 |
clayg | isn't there like an "agree" or "vote" option? | 17:38 |
DuncanT | One issue I've just thought of with volume-types: | 17:38 |
clayg | I'm totally +1'ing the simple thing first | 17:39 |
vladimir3p | #agreed simple things first :-) there will be no volume_type-to-driver translation meanwhile (sub-class will override whatever is necessary) | 17:39 |
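The "sub-class will override whatever is necessary" approach agreed here could take roughly the following shape. The class layout and the `filter_capable_nodes` hook (the method name clayg mentions) are assumptions about structure, not actual nova-volume code, and the `myvendor:tier` key is an invented example of opaque vendor data.

```python
# Hypothetical shape of the agreed design: a generic scheduler matches
# volume_type extra_specs, and a vendor subclass overrides the filtering
# hook to also act on its own opaque key/value pairs. All names invented.

class GenericVolumeScheduler:
    def filter_capable_nodes(self, nodes, volume_type):
        # Keep nodes whose reported capabilities match every required
        # key/value pair from the volume type.
        required = volume_type.get("extra_specs", {})
        return [n for n in nodes
                if all(n["capabilities"].get(k) == v
                       for k, v in required.items())]

class MyVendorScheduler(GenericVolumeScheduler):
    def filter_capable_nodes(self, nodes, volume_type):
        # Generic type matching first, then vendor-specific logic based
        # on opaque data the generic scheduler ignores.
        nodes = super().filter_capable_nodes(nodes, volume_type)
        return [n for n in nodes
                if n["capabilities"].get("myvendor:tier") != "degraded"]
```

This keeps the scheduler-to-driver coupling indirect, as vladimir3p describes: the generic code never calls the driver, but a vendor's subclass can encode driver knowledge in the override.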
timr1 | :) | 17:40 |
rnirmal | +1 | 17:40 |
*** jdg_ has joined #openstack-meeting | 17:40 | |
DuncanT | agreed | 17:40 |
vladimir3p | so, renuka, back to your scenario. I suppose it will be possible to register volume types like: SATA volumes, SAS volumes, etc... probably with some RPM, QoS, etc extra data | 17:41 |
renuka | ok | 17:41 |
DuncanT | Is it possible to specify volume affinity or anti-affinity with volume types? | 17:41 |
vladimir3p | on the volume driver side, each driver will go to all its underlying arrays and will collect things like what type of storage each one of them supports. | 17:41 |
DuncanT | I don't think there is a way to express that with volume types? | 17:42 |
vladimir3p | I think after that it could rearrange it in a form like... [{type: "SATA", arrays: [{array1, access_path1}, {array2, access_path2}, ...]}] | 17:42 |
renuka | vladimir3p: as long as we can subclass appropriately, the SM case should be fine, since we use one driver (multiple instances) in a zone | 17:42 |
vladimir3p | yeah, the sub-class on scheduler level will be able to perform all the filtering and will be able to recognize such data | 17:43 |
clayg | DuncanT: no I don't think so, not yet | 17:43 |
timr1 | DuncanT: I don't think volume types can do this - but it is something customers want | 17:43 |
vladimir3p | the only thing - it will need to report back what exactly should be done | 17:43 |
renuka | DuncanT: yes, that is why I am stressing on reachability | 17:43 |
DuncanT | renuka: I see, yes, it is a reachability question too - we were looking at it for performance reasons but the API is the same | 17:45 |
vladimir3p | actually, on volume driver level, we could at least understand the preferred path to the array | 17:45 |
renuka | DuncanT: yes, we could give it a better name ;) | 17:45 |
vladimir3p | renuka: do you think that what I described above will work for you? | 17:46 |
DuncanT | renuka: Generically it is affinity - we don't care about reachability from a scheduler point of view, but obviously other technologies might, and we need a way for it to be passed to the driver... | 17:47 |
renuka | vladimir3p: yes | 17:47 |
vladimir3p | renuka: great | 17:47 |
vladimir3p | DuncanT: yes, today the scheduler returns back only the host name | 17:47 |
vladimir3p | it will probably be required to return a tuple of the host and some capabilities that were used for this decision | 17:48 |
renuka | vladimir3p: could we ensure that when a request is passed to the host, it has enough information about which backend array to use | 17:48 |
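The change vladimir3p and renuka are converging on - returning not just a host name but also the capabilities (i.e. which backend array) the decision was based on - might look like this sketch. The function name and data shapes are assumptions for illustration.

```python
# Sketch of the proposed scheduler change: return a (host, capabilities)
# tuple so the volume manager knows which backend array the placement
# decision referred to. Names and shapes are illustrative assumptions.

def schedule_create_volume(hosts, volume_type):
    """hosts maps host name -> list of per-backend capability dicts."""
    required = volume_type.get("extra_specs", {})
    for host, backends in hosts.items():
        for caps in backends:
            if all(caps.get(k) == v for k, v in required.items()):
                # Tuple instead of a bare host name: the second element
                # carries enough detail to pick the backend array.
                return host, caps
    return None
```

The second element of the tuple is what would travel in the message to the volume manager, answering renuka's concern about the host knowing which array to use.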
DuncanT | vladimir3p: I'm more worried about the user-facing API - we need a user to be able to specify both a volume_type and a volume (or list of volumes) to have affinity or anti-affinity to | 17:48 |
renuka | #topic admin APIs | 17:49 |
*** openstack changes topic to "admin APIs" | 17:49 | |
vladimir3p | DuncanT: affinity of user to type? | 17:49 |
renuka | DuncanT: that was in fact the next thing on the agenda, what are the admin APIs that everyone requires | 17:49 |
DuncanT | vladimir3p: No, affinity of the new volume to an existing volume | 17:50 |
DuncanT | vladimir3p: On a per-create basis | 17:50 |
vladimir3p | DuncanT: yes, we have the same requirement | 17:50 |
timr1 | I don't think DuncanT is talking about an admin API; it is the ability of a user to say create my volume near (or far from) volume-abs | 17:50 |
renuka | DuncanT: could that be restated as ensuring a user/project data is kept together | 17:50 |
timr1 | renuka: also want anti-affinity for availability | 17:51 |
renuka | timr1: isn't that too much power in the hands of the user. Will they even know such details? | 17:51 |
*** adjohn has joined #openstack-meeting | 17:51 | |
timr1 | renuka: we have users who want to do it, I dont see it as power. They say make volume X anywhere. Make Volume Y in a different place from volume X | 17:52 |
renuka | timr1: yes, in general some rules about placing user/project data. Am i correct in assuming that anti-affinity is used while creating redundant volumes? | 17:52 |
rnirmal | renuka: I think it's something similar to the hostId, which is an opaque id to the user, on a per user/project basis and maps on the backend to a host location | 17:52 |
vladimir3p | DuncanT: I guess affinity could be solved at the sub-scheduler level. Actually a specialized scheduler could first retrieve what was already created for the user/project and, based on that, perform filtering | 17:52 |
renuka | vladimir3p: yes but we still have to have a way of saying that | 17:53 |
rnirmal | volumes may need something like the hostId for vms, to be able to tackle affinity first | 17:53 |
timr1 | yes, it is for users who want to create their own tier of availability. agree it could be solved in a sub-scheduler - but the user API needs to be expanded | 17:53 |
clayg | vladimir3p: that would probably work; when they create the volume they can send in meta info, and a custom scheduler could look for affinity and anti-affinity keys | 17:53 |
DuncanT | vladimir3p: What we'd like to make sure of if possible is that the user can specif which specific volume that already exists they want to be affine to (or anti-) - e.g. if they are creating several mirrored pairs, they might want to say that each pair is 'far' from each other to enhance the survivability of that data | 17:54 |
*** bencherian has joined #openstack-meeting | 17:54 | |
DuncanT | clayg: That should work fine, yes | 17:54 |
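clayg's suggestion, which DuncanT accepts here - affinity hints sent as volume metadata at create time and interpreted by a custom scheduler - could be sketched as follows. The metadata key names (`affine_to`, `anti_affine_to`) and the `volume_locations` mapping are invented for illustration; the actual API field is what DuncanT offers to write up.

```python
# Sketch of affinity/anti-affinity via create-time volume metadata, per
# the discussion above. Key names and the volume_locations mapping are
# illustrative assumptions, not an agreed API.

def apply_affinity(candidate_hosts, metadata, volume_locations):
    """volume_locations maps an existing volume id -> its host name."""
    hosts = list(candidate_hosts)
    affine = metadata.get("affine_to")
    if affine in volume_locations:
        # Keep only the host holding the named volume ("near").
        hosts = [h for h in hosts if h == volume_locations[affine]]
    anti = metadata.get("anti_affine_to")
    if anti in volume_locations:
        # Exclude the host holding the named volume ("far"), e.g. for
        # mirrored pairs that must survive a single failure.
        hosts = [h for h in hosts if h != volume_locations[anti]]
    return hosts
```

As DuncanT notes for the basic scheduler, a generic implementation could simply ignore these keys and pass them through to the driver; only a custom scheduler would need to interpret them.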
rnirmal | DuncanT: is this for inclusion in the basic scheduler ? | 17:55 |
vladimir3p | DuncanT: I see... in our case we create volumes and build RAID on top of them. So, it is quite important for us to be able to schedule volumes on different nodes and to know about that | 17:55 |
*** renuka_ has joined #openstack-meeting | 17:55 | |
*** jdg_ has quit IRC | 17:56 | |
*** mattray has joined #openstack-meeting | 17:56 | |
DuncanT | rnirmal: The API field, yes. I'm happy for the scheduler to ignore it, just pass it to the driver | 17:56 |
renuka_ | vladimir3p: I was disconnected, has that disrupted anything? | 17:56 |
vladimir3p | renuka: not really, still discussing options for affinity | 17:56 |
DuncanT | Tim and I can write up a more detailed explanation off-line and email it round if that helps with progress? | 17:57 |
clayg | the scheduler will send the volume_ref; any metadata hanging off of that will be available to the driver | 17:57 |
renuka_ | #action DuncanT will write up a detailed explanation for affinity | 17:57 |
DuncanT | :-) | 17:58 |
vladimir3p | DuncanT: thanks, it will be very helpful. seems like we have some common areas there | 17:58 |
*** renuka has quit IRC | 17:58 | |
renuka_ | vladimir3p: what is the next step? We have 2 minutes | 17:58 |
clayg | OH: someone on ozone is working on adding os-volumes support to novaclient | 17:58 |
vladimir3p | clayg: seems like we will need to change how the message is sent from the scheduler to the volume manager | 17:58 |
rnirmal | I suppose admin api topic is for the next meeting | 17:59 |
vladimir3p | renuka: I guess we will need to find a volunteer | 17:59 |
renuka_ | rnirmal: yes | 17:59 |
vladimir3p | :-) | 17:59 |
clayg | vladimir3p: I don't see that, manager can get whatever it needs from volume_id | 17:59 |
vladimir3p | nope | 17:59 |
DuncanT | OT: I've put in a blueprint for snap/backup API for some discussion | 18:00 |
clayg | you're so mysterious... | 18:00 |
vladimir3p | clayg: the scheduler will need to pass the volume not only to the host, but "for a particular backend array" | 18:00 |
renuka_ | any volunteers for the simple scheduler work in the room? we can have people taking this up in the mailing list, since we are out of time | 18:00 |
clayg | hrmmm.... if the manager/driver is responsible for multiple backends that support the same type... I don't see why it would let the scheduler make that decision | 18:01 |
renuka_ | alright i am going to end the meeting at this point | 18:01 |
renuka_ | is that ok? | 18:01 |
timr1 | okso | 18:01 |
vladimir3p | clayg: do you have some free time to continue ? | 18:01 |
clayg | it's going to aggregate that info in capabilities and send it to the scheduler, but by the time create volume comes down - it'll know better than the scheduler node which array is the best place for the volume | 18:01 |
vladimir3p | renuka: fine with me | 18:01 |
renuka_ | #endmeeting | 18:02 |
renuka_ | oh wow, that didn't work because I got disconnected before | 18:02 |
vladimir3p | clayg: it is not really relevant for my company, but I suppose folks managing multiple arrays would prefer to have the scheduling in one place | 18:03 |
clayg | #endmetting | 18:03 |
*** renuka_ is now known as renuka | 18:03 | |
renuka | #endmeeting | 18:03 |
*** openstack changes topic to "Openstack Meetings: http://wiki.openstack.org/Meetings | Minutes: http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/" | 18:03 | |
openstack | Meeting ended Thu Oct 20 18:03:54 2011 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 18:03 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-10-20-17.03.html | 18:03 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-10-20-17.03.txt | 18:03 |
openstack | Log: http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-10-20-17.03.log.html | 18:03 |
clayg | thanks renuka! | 18:04 |
DuncanT | Yup, thanks Renuka | 18:04 |
vladimir3p | clayg: do you have couple of min to continue? | 18:04 |
vladimir3p | thanks Renuka & all | 18:04 |
clayg | vladimir3p: I'm pming you ;) | 18:05 |
renuka | sure, I do have time to continue, do we know if there is any other meeting on this channel lined up? | 18:05 |
clayg | I thought there was... | 18:06 |
clayg | #openstack-volumes is open | 18:06 |
*** timr1 has quit IRC | 18:06 | |
renuka | clayg: i have joined that | 18:07 |
*** renuka has quit IRC | 18:08 | |
clayg | renuka volumeS? we don't see you? | 18:08 |
vladimir3p | seems like renuka quit from this one | 18:08 |
*** df1 has joined #openstack-meeting | 18:16 | |
*** adjohn has quit IRC | 18:18 | |
*** zul has quit IRC | 18:22 | |
*** zul has joined #openstack-meeting | 18:26 | |
*** HowardRoark has quit IRC | 18:31 | |
*** dolphm has quit IRC | 18:33 | |
*** HowardRoark has joined #openstack-meeting | 18:36 | |
*** rohitk has joined #openstack-meeting | 18:36 | |
*** clayg has left #openstack-meeting | 18:47 | |
*** timr has joined #openstack-meeting | 18:53 | |
*** dendrobates is now known as dendro-afk | 18:54 | |
*** dolphm has joined #openstack-meeting | 18:56 | |
*** dendro-afk is now known as dendrobates | 18:58 | |
*** vladimir3p has quit IRC | 19:08 | |
*** dolphm has quit IRC | 19:23 | |
*** dolphm has joined #openstack-meeting | 19:25 | |
*** dendrobates is now known as dendro-afk | 19:26 | |
*** adjohn has joined #openstack-meeting | 19:39 | |
*** df1 has left #openstack-meeting | 19:40 | |
*** HowardRoark has quit IRC | 19:45 | |
*** timr has quit IRC | 19:52 | |
*** HowardRoark has joined #openstack-meeting | 20:04 | |
*** sandywalsh has quit IRC | 20:13 | |
*** adjohn has quit IRC | 20:15 | |
*** bencherian has quit IRC | 20:16 | |
*** jdag has quit IRC | 20:19 | |
*** sandywalsh has joined #openstack-meeting | 20:19 | |
dprince | exit | 20:23 |
dprince | exit | 20:23 |
*** dprince has quit IRC | 20:23 | |
*** dendro-afk is now known as dendrobates | 20:23 | |
*** dolphm has quit IRC | 20:25 | |
*** dolphm has joined #openstack-meeting | 20:28 | |
*** donaldngo_hp has joined #openstack-meeting | 20:48 | |
*** rnirmal has quit IRC | 20:58 | |
*** bencherian has joined #openstack-meeting | 21:01 | |
*** nati2 has quit IRC | 21:08 | |
*** dolphm has quit IRC | 21:09 | |
*** dendrobates is now known as dendro-afk | 21:11 | |
*** bencherian has quit IRC | 21:12 | |
*** markvoelker has quit IRC | 21:14 | |
*** bencherian has joined #openstack-meeting | 21:15 | |
*** nati2 has joined #openstack-meeting | 21:30 | |
*** reed has quit IRC | 21:37 | |
*** jdg_ has joined #openstack-meeting | 21:37 | |
*** reed has joined #openstack-meeting | 21:38 | |
*** mattray has quit IRC | 21:46 | |
*** mattray has joined #openstack-meeting | 21:50 | |
*** bencherian has quit IRC | 21:53 | |
*** bencherian has joined #openstack-meeting | 21:56 | |
*** jdg_ has quit IRC | 22:04 | |
*** jdag has joined #openstack-meeting | 22:06 | |
*** mattray has quit IRC | 22:09 | |
*** dolphm has joined #openstack-meeting | 22:24 | |
*** dolphm has quit IRC | 22:26 | |
*** bencherian has quit IRC | 22:28 | |
*** nati2 has quit IRC | 22:31 | |
*** dragondm has quit IRC | 22:32 | |
*** HowardRoark has quit IRC | 22:32 | |
*** rohitk has quit IRC | 22:36 | |
*** nati2 has joined #openstack-meeting | 22:44 | |
*** mattray has joined #openstack-meeting | 23:07 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!