*** wanghao has joined #openstack-mogan | 00:48 | |
*** litao__ has joined #openstack-mogan | 00:57 | |
*** wanghao has quit IRC | 00:58 | |
*** wanghao has joined #openstack-mogan | 00:58 | |
zhenguo | morning mogan! | 01:10 |
wanghao | moring | 01:18 |
litao__ | morning | 01:23 |
liusheng | morning | 01:26 |
*** harlowja has quit IRC | 01:32 | |
openstackgerrit | wanghao proposed openstack/mogan master: Support keypair quota in Mogan(part three) https://review.openstack.org/485112 | 01:37 |
zhenguo | liusheng: seems the placement rp query only supports member_of for aggregates, which means retrieving rps associated with at least one of the aggregates | 01:55 |
zhenguo | liusheng: but in fact, when filtering the aggregates ourselves, we may need nodes in all aggregates | 01:56 |
liusheng | zhenguo: let me check | 01:58 |
zhenguo | liusheng: thanks | 01:58 |
zhenguo | liusheng: but in some scenarios, we need just one of them | 01:59 |
zhenguo | liusheng: aggregates may share the same metadata key/value pair | 01:59 |
zhenguo | liusheng: like agg1 has az=bj and GPU, agg2 has az=bj, then when requesting we want to get all of them. | 02:00 |
zhenguo | hi all, the weekly meeting will happen soon, please move to openstack-meeting | 02:00 |
liusheng | zhenguo: for the rps and aggregates, it's an N:N relationship, right? | 02:00 |
zhenguo | liusheng: yes | 02:00 |
liusheng | zhenguo: hah, almost forget that | 02:01 |
zhenguo | liusheng: hah, we can discuss later | 02:01 |
*** harlowja has joined #openstack-mogan | 02:52 | |
wanghao | ha I'm late, internal meeting.... | 03:06 |
zhenguo | wanghao: hah | 03:12 |
zhenguo | liusheng: member_of will get those resource providers that are associated with any of the list of aggregate uuids provided | 03:13 |
zhenguo | liusheng: is that what we expected? | 03:13 |
wanghao | zhenguo: I have a question, I find that keypair doesn't have project_id, I think maybe we need it. | 03:15 |
wanghao | zhenguo: since we can control the quota based on the project_id | 03:15 |
zhenguo | wanghao: seems liusheng copied that from nova, not sure how | 03:16 |
liusheng | zhenguo: I think that can meet our requirement, if the list of aggregates has only 1 item, we can get the rps of a specified aggregate, right? | 03:16 |
wanghao | liusheng: god sheng, do you think we need project_id in keypair? | 03:17 |
liusheng | wanghao: seems in Nova, keypair only has user_id | 03:17 |
zhenguo | liusheng: right, but if it's not just one aggregate, how can we handle it? | 03:17 |
wanghao | liusheng: without project_id, we will have an overall quota across all projects. | 03:17 |
liusheng | zhenguo: do we need that situation ? | 03:17 |
zhenguo | liusheng: like we may have an agg1 with az=1 and another agg2 with GPU=true | 03:18 |
liusheng | wanghao: :( not very sure about the Nova implementation | 03:18 |
zhenguo | liusheng: then we request a server in az1 with GPU | 03:18 |
zhenguo | liusheng: but if az1 and GPU are in one agg as different metadata, it's ok | 03:18 |
wanghao | liusheng: zhenguo: well, I prefer to add the project_id in keypair, that will do no harm to mogan. | 03:19 |
liusheng | zhenguo: if so, how about splitting that process into two steps? | 03:19 |
wanghao | liusheng: zhenguo: I will add it in the quota patch, you can have a look at it. | 03:19 |
zhenguo | liusheng: wdyt about wanghao's proposal? | 03:19 |
liusheng | zhenguo: we can filter with ag1 and then filter with ag2, then select the intersection | 03:20 |
liusheng | zhenguo, wanghao let me check how Nova does it | 03:20 |
zhenguo | liusheng: but the reason we chose placement is to do it in one sql, hah | 03:21 |
liusheng | zhenguo: hah, will confirm the placement api, seems placement api is far from perfect | 03:31 |
zhenguo | liusheng: I just read the code, it can only support listing rps in any of the aggs | 03:32 |
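A hedged sketch of the two-step workaround liusheng suggests above (filter per aggregate, then intersect), since placement's member_of only returns providers in any of the listed aggregates. The endpoint URL, token handling, and microversion value are assumptions for illustration, not Mogan code.

```python
# Minimal sketch: query placement once per aggregate with member_of, then
# intersect the resulting resource-provider sets so only nodes present in
# ALL requested aggregates remain.
import requests

PLACEMENT_URL = 'http://placement.example.com/resource_providers'  # assumed endpoint
HEADERS = {'OpenStack-API-Version': 'placement 1.3'}  # member_of appeared around 1.3


def rps_in_aggregate(agg_uuid, token):
    """Return the set of resource provider uuids associated with one aggregate."""
    resp = requests.get(
        PLACEMENT_URL,
        params={'member_of': agg_uuid},
        headers=dict(HEADERS, **{'X-Auth-Token': token}),
    )
    resp.raise_for_status()
    return {rp['uuid'] for rp in resp.json()['resource_providers']}


def rps_in_all_aggregates(agg_uuids, token):
    """Emulate 'member of ALL aggregates' by intersecting per-aggregate results."""
    sets = [rps_in_aggregate(uuid, token) for uuid in agg_uuids]
    return set.intersection(*sets) if sets else set()
```

As zhenguo notes below, this trades the single-SQL query that placement was chosen for against N API calls plus an intersection done in Mogan.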
liusheng | wanghao: just discussed with my colleagues, not very sure, but it seems Nova can support quotas by user | 03:34 |
liusheng | zhenguo: hmm... | 03:34 |
wanghao | liusheng: I see, but now Mogan just supports quota at the project level | 03:35 |
wanghao | liusheng: so a small change here, support the project level first? | 03:35 |
wanghao | liusheng: zhenguo: does that make sense to you guys? | 03:36 |
zhenguo | wanghao, liusheng: a user can list all keypairs of that project, right? | 03:36 |
zhenguo | wanghao, liusheng: or, when listing keypairs, do we filter by the user id? | 03:37 |
wanghao | zhenguo: only get the keypairs belonging to this user | 03:38 |
zhenguo | wanghao, liusheng: maybe that's the concern, even in the same tenant, users want to keep their private keys secret | 03:38 |
liusheng | zhenguo: no, just the current user's, but an admin can specify a user_id to query other users, I left a TODO for that :( | 03:38 |
wanghao | zhenguo: It's ok now I think, we just limit the quota at the project level, no influence on creation or retrieval. | 03:39 |
zhenguo | wanghao: seems it's ok, wdyt liusheng? | 03:41 |
zhenguo | wanghao, liusheng: so the project id is only for quotas | 03:41 |
liusheng | zhenguo, wanghao I am ok, but just thinking, besides quota, is there any other use for adding a project field? | 03:42 |
liusheng | wanghao: hah, same question | 03:42 |
zhenguo | wanghao, liusheng: seems not | 03:42 |
wanghao | zhenguo: yes, we just record the project_id in keypair object and use it for quota only | 03:42 |
liusheng | wanghao: so please go ahead :) | 03:42 |
zhenguo | hah | 03:42 |
wanghao | zhenguo: or... we could get all keypairs in one project via an admin rule... | 03:43 |
zhenguo | before we introduce user level quotas, that's ok | 03:43 |
wanghao | liusheng: just a little thinking | 03:43 |
zhenguo | wanghao: maybe yes, need to find real requirements for that, hah | 03:43 |
wanghao | zhenguo: yes | 03:44 |
wanghao | okay, so I will add project_id to the keypair object and use it for quota. | 03:44 |
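A hedged illustration of what wanghao describes: record project_id on the keypair purely so the quota check can count per project, while listing and creation stay user-scoped. The class and the count_keypairs_by_project helper are made up for this sketch, not the actual code in the reviews linked below.

```python
# Illustrative sketch only: a keypair carries project_id solely for quota
# accounting; users still only see their own keypairs.
class KeyPair(object):
    def __init__(self, name, user_id, project_id, public_key):
        self.name = name
        self.user_id = user_id        # listing/creation stays scoped to the user
        self.project_id = project_id  # recorded only so quota is project level
        self.public_key = public_key


def check_keypair_quota(db, project_id, limit):
    """Reject creation when the project already holds `limit` keypairs."""
    in_use = db.count_keypairs_by_project(project_id)  # hypothetical DB helper
    if in_use >= limit:
        raise Exception('Keypair quota exceeded for project %s' % project_id)
```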
openstackgerrit | wanghao proposed openstack/mogan master: Support keypair quota in Mogan(part three) https://review.openstack.org/485112 | 03:48 |
zhenguo | liusheng: seems we usually define an aggregate with one metadata key | 03:52 |
liusheng | zhenguo: you mean in Nova ? | 03:53 |
zhenguo | liusheng: but there's no restriction | 03:54 |
zhenguo | liusheng: yes | 03:54 |
liusheng | zhenguo: for now, the restriction is in placement | 03:54 |
zhenguo | liusheng: sigh | 03:54 |
*** harlowja has quit IRC | 04:18 | |
*** harlowja has joined #openstack-mogan | 04:30 | |
zhenguo | https://www.mirantis.com/blog/the-first-and-final-word-on-openstack-availability-zones/ | 05:11 |
zhenguo | hi all, do you find az confusing? I plan to get rid of the notion of availability zone in mogan; instead you can add a specific key in host aggregate metadata | 05:15 |
zhenguo | then when you request a server, you can specify an az, and you can | 05:15 |
zhenguo | list availability zones then | 05:16 |
zhenguo | az will also be a special key for host aggregates, which is used for affinity and anti-affinity | 05:16 |
zhenguo | then you can define a server group which means the servers within it will be in the same availability zone or not | 05:17 |
zhenguo | here an availability zone can be a rack, power source, cluster, or whatever is defined by operators | 05:18 |
zhenguo | zhangyang: please confirm with liudong about the proposal, thanks! | 05:19 |
openstackgerrit | Merged openstack/mogan master: Add filters to server list API https://review.openstack.org/473323 | 05:20 |
zhenguo | liusheng, shaohe_feng, wanghao, litao__: wdyt? | 05:20 |
*** harlowja has quit IRC | 05:33 | |
zhenguo | or maybe for historical reasons we can keep the availability zone, then introduce a new affinity_zone which is used for server group affinity and anti-affinity policies | 05:35 |
openstackgerrit | Zhenguo Niu proposed openstack/mogan master: Add support for DBDeadlock handling https://review.openstack.org/485021 | 05:36 |
wanghao | zhenguo: In fact, I don't get what the AZ is used for | 06:03 |
wanghao | zhenguo: I mean if users want to choose where their server goes, they can use host aggregates or server groups | 06:04 |
wanghao | zhenguo: So is it a bigger logical scope to organize the servers, or something else? | 06:06 |
zhenguo | wanghao: yes, currently az is just metadata of a host aggregate | 06:08 |
zhenguo | wanghao: but the difference is that users can list the azs and request a server in a specific az | 06:08 |
zhenguo | wanghao: but other metadata is not exposed to common users | 06:08 |
zhenguo | maybe some deployments have already used az | 06:09 |
wanghao | zhenguo: well I see, so from user side, they can't see the host aggregate in fact, but they can see the az, right? | 06:15 |
zhenguo | wanghao: yes | 06:16 |
wanghao | zhenguo: they can use az to specify where to put their server, but in fact they just choose a host aggregate. | 06:16 |
zhenguo | wanghao: correct | 06:16 |
zhenguo | wanghao: but by definition, an az can be a rack or power source | 06:16 |
zhenguo | wanghao: but server groups can't use anti-affinity based on az to provide high availability | 06:17 |
zhenguo | wanghao: so I want to introduce a new affinity-zone notion for server groups | 06:17 |
wanghao | zhenguo: I don't see why we need a new affinity-zone | 06:21 |
wanghao | zhenguo: let me check the concept of anti-affinity server groups, | 06:21 |
zhenguo | wanghao: you mean we can base it on availability zone? | 06:21 |
wanghao | zhenguo: if you want an anti-affinity server group, you mean the servers cannot be in the same host aggregate, right? | 06:22 |
wanghao | zhenguo: I mean maybe we don't need az either. | 06:22 |
zhenguo | wanghao: not host aggregate | 06:22 |
zhenguo | wanghao: but a host aggregate with a special key, maybe affinity-zone | 06:23 |
zhenguo | wanghao: a key that indicates what the aggregate is | 06:23 |
wanghao | zhenguo: okay, I see, you mean a key to tell mogan this aggregate is an affinity-zone, like having the same power source or something. | 06:24 |
zhenguo | wanghao: yes | 06:25 |
zhenguo | wanghao: like it's a rack, or shares the same ToR or power source, or whatever | 06:25 |
wanghao | zhenguo: so mogan shouldn't create two servers in this aggregate if they are in an anti-affinity server group? | 06:25 |
wanghao | zhenguo: emm, I see. | 06:25 |
zhenguo | wanghao: yes | 06:26 |
wanghao | zhenguo: well, then affinity-zone sounds like a different level scope. | 06:26 |
zhenguo | wanghao: hah, which is for affinity | 06:26 |
zhenguo | wanghao: I think az could be used to do that, but some may have already used az for other purposes | 06:27 |
wanghao | zhenguo: that's for sure, basically I think AZ is a bigger scope than an affinity-zone | 06:28 |
zhenguo | wanghao: yes, I think so | 06:28 |
wanghao | zhenguo: you can include many host aggregates somewhere in a single AZ, like a beijing AZ | 06:28 |
zhenguo | wanghao: that's true | 06:29 |
wanghao | zhenguo: emm, then I think affinity-zone sounds good, just another special key in host aggregates to indicate some detail information. | 06:30 |
wanghao | zhenguo: but the name..... kind of like az..... | 06:30 |
zhenguo | wanghao: hah | 06:31 |
zhenguo | wanghao: as az can also be used for this purpose | 06:31 |
zhenguo | wanghao: the notion of az is not only for large zones | 06:32 |
zhenguo | wanghao: you can use it for whatever purpose like other aggregate metadata :( | 06:32 |
wanghao | zhenguo: that's true, but that means operators should create the azs according to the affinity rule. | 06:32 |
zhenguo | wanghao: yes | 06:33 |
zhenguo | wanghao: https://www.mirantis.com/blog/the-first-and-final-word-on-openstack-availability-zones/ here explains availability zone | 06:33 |
wanghao | zhenguo: consider this case, a user wants two servers in the beijing az, but those servers should also be highly available. | 06:34 |
wanghao | zhenguo: okay, I will check it. | 06:34 |
zhenguo | wanghao: then there should be an aggregate with metadata az=beijing including all nodes there | 06:34 |
zhenguo | wanghao: and there may be some small aggregates with affinity-zone | 06:35 |
zhenguo | wanghao: az may be for a DC, but affinity zone for a rack in the DC | 06:35 |
wanghao | zhenguo: yes | 06:35 |
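A small data sketch of the layout being described: one large aggregate exposing the DC as an availability zone, plus per-rack aggregates carrying the proposed affinity_zone key. The metadata keys follow the proposal above; the names and node lists are invented for illustration.

```python
# Illustrative aggregate layout (made-up names/nodes); only the metadata keys
# 'availability_zone' and 'affinity_zone' carry meaning, as discussed above.
aggregates = [
    # big aggregate: exposes the DC as an availability zone to users
    {'name': 'beijing-dc', 'metadata': {'availability_zone': 'beijing'},
     'nodes': ['node-1', 'node-2', 'node-3', 'node-4']},
    # small aggregates: one per rack, used only for (anti-)affinity scheduling
    {'name': 'rack-1', 'metadata': {'affinity_zone': 'rack-1'},
     'nodes': ['node-1', 'node-2']},
    {'name': 'rack-2', 'metadata': {'affinity_zone': 'rack-2'},
     'nodes': ['node-3', 'node-4']},
]
```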
wanghao | zhenguo: but there is another question, can't many aggregates have the same az now? | 06:36 |
zhenguo | wanghao: they can | 06:36 |
wanghao | zhenguo: in this case, I think there should be many aggregates with the same az=beijing, not one aggregate including all nodes. | 06:37 |
zhenguo | wanghao: yes | 06:37 |
wanghao | zhenguo: and mogan just needs to pick two aggregates from them with different affinity-zones | 06:37 |
zhenguo | wanghao: only for server groups | 06:38 |
wanghao | zhenguo: of course | 06:38 |
zhenguo | wanghao: when you request a server and specify a server group with the anti-affinity policy, we will check the servers in the group, find the aggregates they are in, then schedule to an aggregate that is not the same as the others | 06:39 |
wanghao | zhenguo: so the important thing is how to find 'not the same as the others', which means a different affinity-zone. | 06:40 |
zhenguo | wanghao: yes | 06:41 |
wanghao | zhenguo: I think there could be many host aggregates within the same affinity-zone | 06:41 |
zhenguo | wanghao: if the policy is affinity, then we will schedule to the same aggregate | 06:41 |
zhenguo | wanghao: yes, they can | 06:42 |
wanghao | zhenguo: for affinity, you mean just same aggregate? or same affinity-zone | 06:42 |
zhenguo | wanghao: but there's a restriction, one node can only be in one affinity_zone aggregate | 06:42 |
wanghao | zhenguo: that should be | 06:43 |
zhenguo | wanghao: affinity_zone is a metadata for an aggregate | 06:43 |
zhenguo | wanghao: same affinity zone | 06:43 |
wanghao | zhenguo: that's much clearer. | 06:43 |
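A minimal sketch of the scheduling rule zhenguo just described: collect the affinity zones already used by servers in the group, then keep or drop candidate nodes depending on the policy. The helper names (affinity_zone_of, group.servers, group.policy) are assumptions for illustration, not existing Mogan interfaces.

```python
# Sketch: enforce server-group (anti-)affinity via the affinity_zone metadata.
def filter_by_server_group(candidate_nodes, group, affinity_zone_of):
    """affinity_zone_of(node) -> zone name, or None if the node is in no zone."""
    used_zones = {affinity_zone_of(s.node) for s in group.servers}
    used_zones.discard(None)

    if group.policy == 'affinity':
        # first server can go anywhere; later ones must share the zone
        return [n for n in candidate_nodes
                if not used_zones or affinity_zone_of(n) in used_zones]
    if group.policy == 'anti-affinity':
        # each server must land in a zone no group member already uses
        return [n for n in candidate_nodes
                if affinity_zone_of(n) not in used_zones]
    return candidate_nodes
```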
zhenguo | wanghao: hah, I will prepare a spec soon | 06:44 |
wanghao | zhenguo: okay, cool | 06:44 |
zhenguo | wanghao: and please help to review the aggregates patches, that must be done first :D | 06:45 |
wanghao | zhenguo: okay, sure | 06:45 |
zhenguo | wanghao: thanks | 06:45 |
openstackgerrit | wanghao proposed openstack/mogan master: Support quota for keypairs in Mogan(part-three) https://review.openstack.org/485461 | 06:48 |
openstackgerrit | wanghao proposed openstack/mogan master: Support quota for keypairs in Mogan(part-three) https://review.openstack.org/485461 | 06:54 |
liusheng | zhenguo: sorry, I didn't get your thoughts very clearly | 06:55 |
liusheng | zhenguo: just scanned the above conversation, seems you just want to merge the az concept and server groups in mogan, right? | 06:56 |
zhenguo | liusheng: no, I want to introduce a new key affinity_zone | 06:57 |
liusheng | zhenguo: yes, I know | 06:58 |
zhenguo | liusheng: which will be used for server group affinity | 06:58 |
zhenguo | liusheng: as az is an old notion in openstack, it may be used for other purposes | 06:58 |
liusheng | zhenguo: so that looks like a concept of az+server group | 06:58 |
liusheng | zhenguo: yes, hah, I mean them in Nova | 06:59 |
zhenguo | liusheng: yes, at first, I just want to use az | 06:59 |
zhenguo | liusheng: yes | 06:59 |
liusheng | zhenguo: I haven't thought it out clearly yet, so what is the node aggregate for? | 07:00 |
openstackgerrit | wanghao proposed openstack/mogan master: Miss the argument in db/driver.py https://review.openstack.org/485471 | 07:00 |
zhenguo | liusheng: both az and affinity zone are metadata of aggregate | 07:00 |
liusheng | zhenguo: yes | 07:00 |
zhenguo | liusheng: an aggregate just means a group, without meaning by itself | 07:01 |
liusheng | zhenguo: an affinity zone may include several aggregates, right? | 07:01 |
zhenguo | liusheng: only the metadata associated with the aggregate has meaning | 07:01 |
wanghao | liusheng: yes, an affinity zone can include several host aggregates. | 07:01 |
zhenguo | liusheng: here if we mix up az or affinity zone with aggregate, we will get confused | 07:02 |
zhenguo | liusheng: we can use nodes for availability zone and affinity zone | 07:02 |
zhenguo | liusheng: as aggregates are just a relationship | 07:02 |
zhenguo | liusheng: it can be az or affinity zone | 07:03 |
liusheng | zhenguo: but in actual usage, an affinity zone may be a rack, right? | 07:03 |
zhenguo | liusheng: yes | 07:03 |
zhenguo | liusheng: or two racks with the same power source | 07:03 |
liusheng | zhenguo: affinity may not only be related to the power source | 07:04 |
liusheng | zhenguo: e.g. network traffic | 07:04 |
zhenguo | liusheng: sure | 07:04 |
zhenguo | liusheng: so we use affinity zone instead of rack or power source | 07:04 |
zhenguo | liusheng: in my understanding, aggregates are just an easy way to add traits to nodes, hah | 07:05 |
liusheng | zhenguo: so there may be a possible problem, users may not know what an affinity zone represents | 07:05 |
liusheng | zhenguo: what affinity means for a specified affinity zone | 07:06 |
zhenguo | liusheng: users can't define aggregate | 07:06 |
zhenguo | liusheng: cloud providers define that for users | 07:06 |
zhenguo | liusheng: aha, I see, you mean users don't know what affinity means | 07:07 |
liusheng | zhenguo: but at least users need to know that | 07:07 |
liusheng | zhenguo: correct! | 07:07 |
liusheng | zhenguo: as we talked about, the affinity may mean different things | 07:07 |
zhenguo | liusheng: that depends on how a deployment defines it | 07:08 |
zhenguo | liusheng: the deployment can provide some description to users, just as users don't really know what an availability zone means either | 07:08 |
zhenguo | liusheng: in different cloud deployments, it may have different definitions | 07:10 |
liusheng | zhenguo: hmm.. seems availability zone is a logical concept, but affinity zone means a real thing | 07:10 |
zhenguo | liusheng: we don't want to tie it to a real thing either | 07:11 |
liusheng | zhenguo: for example, a user may want to create two servers which should be put into an affinity zone with high-speed access to each other, but users may not know whether the affinity zone provided by the cloud operator has that capability | 07:12 |
zhenguo | liusheng: why would users want to do that? we only tell them that we can provide server group affinity to give their servers high availability | 07:13 |
zhenguo | liusheng: we don't want to show the backend details to them | 07:14 |
liusheng | zhenguo: affinity zones are related to high availability? | 07:14 |
zhenguo | liusheng: it's just a logical meaning | 07:15 |
zhenguo | liusheng: you can tell your users what it means in your cloud | 07:15 |
liusheng | zhenguo: for server groups, I think it is a common use case that users want the servers in a group to have high-speed access to each other | 07:16 |
liusheng | zhenguo: ... | 07:16 |
zhenguo | liusheng: not only | 07:16 |
zhenguo | liusheng: that's only for affinity, | 07:17 |
liusheng | zhenguo: yes, just a example | 07:17 |
zhenguo | liusheng: I talked with our product team yesterday, they just want anti-affinity across different racks for high availability | 07:17 |
liusheng | zhenguo: let me think more | 07:18 |
zhenguo | liusheng: ok | 07:18 |
zhenguo | liusheng: we just provide the ability, but what it means is decided by cloud deployers | 07:18 |
liusheng | zhenguo: I am just thinking of another way, just personal thoughts | 07:19 |
liusheng | zhenguo: how about we just provide the ability to tag an aggregate, but don't have the concepts of affinity zones and availability zones | 07:20 |
liusheng | zhenguo: and our scheduler can provide the ability to filter on specific tags of the aggregate? | 07:20 |
liusheng | zhenguo: what do you think | 07:21 |
zhenguo | liusheng: the question is how do users know what tags to specify | 07:23 |
liusheng | zhenguo: as in your proposal, the cloud operator adds the tags | 07:24 |
zhenguo | liusheng: we don't want to show that to users, which would let them know more details of our backend | 07:24 |
zhenguo | liusheng: we don't even show aggregate metadata to users | 07:24 |
zhenguo | liusheng: we don't want to bring more complexity to users | 07:25 |
liusheng | zhenguo: it is more about showing users the capabilities of the aggregates. | 07:25 |
zhenguo | liusheng: why would we want to show that to common users | 07:26 |
liusheng | zhenguo: I am afraid there may be more precise scheduling requirements in the future | 07:26 |
liusheng | zhenguo: why not? users could pass scheduling parameters according to their requirements | 07:27 |
liusheng | zhenguo: but just personal thoughts | 07:27 |
liusheng | zhenguo: please go ahead and propose your spec first, hah | 07:28 |
zhenguo | liusheng: then users will know how many aggregates you have and with what metadata, and maybe how many servers you have, and then that's not cloud | 07:28 |
liusheng | zhenguo: not exactly, the situation is like us providing flavors with resource classes to users | 07:30 |
zhenguo | liusheng: I also plan to not show resource classes to users | 07:31 |
zhenguo | liusheng: and just the description | 07:31 |
zhenguo | liusheng: as users can't read what the resource classes mean and it will confuse users | 07:31 |
openstackgerrit | wanghao proposed openstack/mogan master: Support quota for keypairs in Mogan(part-three) https://review.openstack.org/485461 | 07:31 |
liusheng | zhenguo: yes, that is reasonable, but users need to know the capability of a specified flavor | 07:32 |
zhenguo | liusheng: everything I want to tell users is included in the description, in a human readable form | 07:32 |
liusheng | zhenguo: even if it's not exact info | 07:32 |
liusheng | zhenguo: ok | 07:34 |
zhenguo | liusheng: hah | 07:34 |
zhenguo | liusheng: we can just provide the ability, and if there are real user requirements, we can discuss them then | 07:35 |
zhenguo | liusheng: wdyt? | 07:35 |
liusheng | zhenguo: sure, thanks | 07:35 |
liusheng | zhenguo: but your proposal also make sense to me, hah | 07:36 |
zhenguo | liusheng: lol | 07:36 |
zhenguo | liusheng: for server groups, placement aggregates fit well | 07:36 |
liusheng | zhenguo: I sometimes think most openstack projects are "infrastructure" | 07:36 |
*** yushb has joined #openstack-mogan | 07:36 | |
liusheng | zhenguo: ok | 07:37 |
wanghao | guys, https://review.openstack.org/#/c/485461/ quota for keypairs is ready and tests pass. you can have a look | 07:37 |
zhenguo | liusheng: one sql can satisfy both affinity and anti-affinity requirements | 07:37 |
zhenguo | wanghao: thanks | 07:38 |
wanghao | and I found an issue in db/driver.py with the filter function, we can fix it quickly: https://review.openstack.org/#/c/485471/ | 07:38 |
zhenguo | wanghao: hah, sure. | 07:39 |
liusheng | wanghao: +2ed for the easy one, hah | 07:39 |
zhenguo | hah, will +A when the gate turns green | 07:39 |
zhenguo | but maybe we don't need to wait, as it will surely pass the gate :D | 07:40 |
liusheng | hah | 07:49 |
wanghao | haha | 07:52 |
openstackgerrit | Merged openstack/mogan master: Miss the argument in db/driver.py https://review.openstack.org/485471 | 08:19 |
openstackgerrit | wanghao proposed openstack/mogan master: Support quota for keypairs in Mogan(part-three) https://review.openstack.org/485461 | 08:20 |
zhenguo | liusheng: I have added the first three aggregate patches but without tests, you can help to test if you've got time | 08:41 |
liusheng | zhenguo: ok, will try | 08:42 |
zhenguo | liusheng: thanks, I will add a follow-up patch to make the availability zone list come from aggregate metadata | 08:43 |
liusheng | zhenguo: ok, | 08:44 |
openstackgerrit | liusheng proposed openstack/mogan master: Add support for scheduler_hints https://review.openstack.org/463534 | 09:00 |
openstackgerrit | liusheng proposed openstack/mogan master: Add support for scheduler_hints https://review.openstack.org/463534 | 09:11 |
openstackgerrit | Zhenguo Niu proposed openstack/mogan master: Adds aggregates DB model and API https://review.openstack.org/482786 | 09:17 |
openstackgerrit | Zhenguo Niu proposed openstack/mogan master: Add aggregate API https://review.openstack.org/484690 | 09:17 |
openstackgerrit | Zhenguo Niu proposed openstack/mogan master: Add aggregate object https://review.openstack.org/484630 | 09:17 |
openstackgerrit | Zhenguo Niu proposed openstack/mogan master: Retrieve availability zone from aggregate https://review.openstack.org/485506 | 09:17 |
zhenguo | liusheng: seems nova will get rid of rescheduling | 09:18 |
liusheng | zhenguo: really ? | 09:22 |
zhenguo | liusheng: matt said that during last meeting | 09:22 |
liusheng | zhenguo: just get rid of it, or an alternative way? | 09:22 |
zhenguo | liusheng: seems a scheduling request will get a list of alternative hosts to use | 09:23 |
zhenguo | liusheng: when it fails, pick one from that list | 09:23 |
liusheng | zhenguo: oh, seems like an alternative way | 09:23 |
zhenguo | liusheng: then there will be no rescheduling | 09:24 |
zhenguo | liusheng: I'm not sure how to make that happen for us, as the alternative nodes may get used by other requests | 09:24 |
liusheng | zhenguo: just talked with zhenyu, he said it seems rescheduling only exists in cell0 | 09:25 |
zhenguo | liusheng: yes | 09:26 |
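A hedged sketch of the "alternates instead of rescheduling" flow mentioned above: the scheduler hands back a primary node plus alternates, and the engine walks that list on build failure rather than calling the scheduler again. Function names are illustrative, and the except clause reflects zhenguo's caveat that an alternate may already have been claimed by another request.

```python
# Sketch: try the scheduled node first, then fall back to alternates.
def build_with_alternates(candidate_nodes, try_build):
    """try_build(node) -> True on success; False or an exception on failure."""
    for node in candidate_nodes:
        try:
            if try_build(node):
                return node
        except Exception:
            pass  # the alternate may have been consumed by a concurrent request
    raise RuntimeError('all candidate nodes failed or were already claimed')
```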
zhenguo | liusheng: do you know how much time is left for Pike? | 09:28 |
*** wanghao has quit IRC | 09:35 | |
openstackgerrit | liusheng proposed openstack/mogan master: Add support for scheduler_hints https://review.openstack.org/463534 | 09:35 |
liusheng | zhenguo: let me confirm | 09:35 |
liusheng | zhenguo: https://releases.openstack.org/pike/schedule.html#p-release | 09:41 |
liusheng | zhenguo: sees 24.Jul~28.Jul | 09:42 |
zhenguo | liusheng: thanks, so we still have one month | 09:43 |
liusheng | zhenguo: s/sees/seems/ | 09:43 |
liusheng | zhenguo: no | 09:44 |
zhenguo | liusheng: The Pike coordinated release will happen on 30 August 2017 | 09:44 |
liusheng | zhenguo: I am not sure about the coordinated release | 09:45 |
liusheng | zhenguo: RC1 target week | 09:45 |
liusheng | The week of 7-11 August 2017 is the target date for projects following the release:cycle-with-milestones model to issue their first release candidate, with a deadline of 10 August 2017. | 09:45 |
*** wanghao has joined #openstack-mogan | 09:45 | |
liusheng | zhenguo: ^^, usually, the rc1 is the release date | 09:45 |
zhenguo | liusheng: but we are not official, hah | 09:45 |
zhenguo | liusheng: we can release at anytime | 09:46 |
liusheng | zhenguo: you can download the calendar | 09:46 |
liusheng | zhenguo: https://releases.openstack.org/ | 09:46 |
liusheng | zhenguo: on the first line of this page | 09:46 |
liusheng | zhenguo: hah, yes | 09:47 |
zhenguo | liusheng: hah | 09:48 |
*** wanghao has quit IRC | 09:51 | |
*** yushb has quit IRC | 10:02 | |
*** yushb has joined #openstack-mogan | 10:10 | |
*** yushb has quit IRC | 10:12 | |
openstackgerrit | Zhenguo Niu proposed openstack/mogan master: Adds aggregates DB model and API https://review.openstack.org/482786 | 11:45 |
openstackgerrit | Zhenguo Niu proposed openstack/mogan master: Retrieve availability zone from aggregate https://review.openstack.org/485506 | 11:45 |
openstackgerrit | Zhenguo Niu proposed openstack/mogan master: Add aggregate API https://review.openstack.org/484690 | 11:45 |
openstackgerrit | Zhenguo Niu proposed openstack/mogan master: Add aggregate object https://review.openstack.org/484630 | 11:45 |
zhenguo | liusheng: hi, for aggregate nodes, which value would you like to be returned, name or uuid? | 11:56 |
liusheng | zhenguo: is name unique ? | 11:56 |
zhenguo | liusheng: seems yes | 11:56 |
liusheng | zhenguo: I may prefer name | 11:56 |
zhenguo | liusheng: hah | 11:56 |
zhenguo | liusheng: and we should add a node list API back :( | 11:57 |
liusheng | zhenguo: why ? | 11:57 |
zhenguo | liusheng: as we need to add node to aggregate | 11:57 |
zhenguo | liusheng: but it can be simple | 11:58 |
liusheng | zhenguo: how about just retrieving nodes from placement? | 11:58 |
zhenguo | liusheng: that's resource providers | 11:58 |
liusheng | zhenguo: yes | 11:58 |
zhenguo | liusheng: we can add a simple API, maybe just return a list of node name? | 11:59 |
zhenguo | liusheng: read from our cache | 11:59 |
zhenguo | liusheng: for security reason, in production env, we may not let admins access placement directly | 12:00 |
liusheng | zhenguo: not sure if we can just use ironic node-list | 12:00 |
zhenguo | liusheng: ironic node list can be different from our node list | 12:00 |
zhenguo | liusheng: as we will filter it | 12:00 |
liusheng | zhenguo: yes... I hoped we could get rid of the nodes api :( | 12:01 |
zhenguo | liusheng: but that works like a proxy API, I don't quite like it | 12:01 |
liusheng | zhenguo: yes, I am not sure it is a good idea to retrieve nodes from the cache, it may cause problems if we want to support multiple workers in the future | 12:03 |
zhenguo | liusheng: but if we don't use ironic, but other drivers, the API is needed | 12:03 |
liusheng | zhenguo: how about a proxy api to placement api ? | 12:03 |
liusheng | zhenguo: yes | 12:03 |
zhenguo | liusheng: multiple workers also need to address the current cache problem | 12:03 |
zhenguo | liusheng: not only for us | 12:04 |
*** litao__ has quit IRC | 12:04 | |
liusheng | zhenguo: sigh... a service with in-memory cache is not stateless | 12:05 |
liusheng | zhenguo: we'd better get rid of that | 12:05 |
zhenguo | liusheng: cache will sync with placement, so it's ok | 12:05 |
zhenguo | liusheng: as the placement data is also synced by mogan | 12:06 |
zhenguo | liusheng: so the cache is always the newest | 12:06 |
liusheng | zhenguo: oh | 12:06 |
liusheng | zhenguo: if we only need node name and node id, we could add a proxy api which directly calls the placement api and avoids a call to mogan-engine | 12:07 |
zhenguo | liusheng: why that's better? | 12:08 |
zhenguo | liusheng: placement works like DISK, we are MEMORY, why not read from memory directly? | 12:09 |
liusheng | zhenguo: not better, I just think an api that retrieves info from the cache of a service looks a bit strange. hah | 12:10 |
zhenguo | liusheng: hah | 12:10 |
zhenguo | liusheng: I don't think it's strange, as when changing the placement data, we first update the cache, there's no problem | 12:11 |
liusheng | zhenguo: ok | 12:11 |
zhenguo | liusheng: the cache works like a fast db, as long as nothing else changes the backend db | 12:12 |
liusheng | zhenguo: ok, make sense | 12:13 |
zhenguo | liusheng: ok | 12:13 |
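A hedged sketch of the "simple node list API" option discussed above: the API layer just exposes the node names already held in the engine's node cache (kept in sync with placement by mogan itself) instead of proxying to placement or ironic. The controller shape and the RPC call are assumptions, not the real Mogan interfaces.

```python
# Sketch: a minimal node-list endpoint backed by the engine's in-memory cache.
class NodesController(object):
    def __init__(self, engine_rpcapi):
        self.engine_rpcapi = engine_rpcapi  # hypothetical RPC client to mogan-engine

    def get_all(self, context):
        # the engine owns the cache since it is also the only writer to placement,
        # so reading from it is as fresh as reading placement itself
        node_names = self.engine_rpcapi.list_node_names(context)
        return {'nodes': sorted(node_names)}
```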
zhenguo | liusheng: I have to go, bye | 12:13 |
liusheng | zhenguo: bye, I also have to go soon | 12:14 |
*** shaohe_feng has quit IRC | 17:09 | |
*** shaohe_feng has joined #openstack-mogan | 17:09 | |
*** harlowja has joined #openstack-mogan | 17:23 | |
*** harlowja has quit IRC | 17:27 | |
*** shaohe_feng has quit IRC | 17:28 | |
*** shaohe_feng has joined #openstack-mogan | 17:28 | |
*** harlowja has joined #openstack-mogan | 18:06 | |
*** openstack has joined #openstack-mogan | 20:00 | |
*** openstackgerrit has quit IRC | 20:17 |