*** ysandeep|out is now known as ysandeep | 01:27 | |
*** ysandeep is now known as ysandeep|afk | 02:50 | |
*** mnasiadka_ is now known as mnasiadka | 05:08 | |
*** tkajinam_ is now known as tkajinam | 05:11 | |
*** ysandeep|rover is now known as ysandeep|rover|lunch | 07:35 | |
*** ysandeep|rover|lunch is now known as ysandeep|rover | 09:23 | |
*** ysandeep|rover is now known as ysandeep|rover|break | 11:44 | |
*** ysandeep|rover|break is now known as ysandeep|rover | 12:36 | |
*** mnaser_ is now known as mnaser | 12:41 | |
kevko | Hi Octavia team | 14:13 |
kevko | I really would like to schedule 2 instances in a server group with an anti-affinity policy so that they land not only on different hosts but also in different datacentres (or AZs) | 14:14 |
kevko | something similar to what is described here | 14:15 |
kevko | https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/complex-anti-affinity-policies.html | 14:15 |
kevko | instead of the max_server_per_host key there should be something like max_server_per_az | 14:15 |
kevko | the main idea is to start the amphorae in different locations in case one datacentre has an outage | 14:15 |
johnsom | kevko Nova doesn’t support AZ anti-affinity. | 14:16 |
kevko | yeah i know :) | 14:17 |
kevko | is it possible to somehow reach it with octavia ? | 14:17 |
johnsom | That is one reason we don’t have that feature yet. | 14:17 |
johnsom | Do you have a shared L2 network between the AZs? | 14:18 |
*** ysandeep|rover is now known as ysandeep|dinner | 14:27 | |
*** ysandeep|dinner is now known as ysandeep | 14:50 | |
kevko | johnsom: will it help ? | 15:19 |
johnsom | kevko Well, we are trying to understand the use cases here. If you have shared L2 it makes it easier as the VIP can migrate across the AZs. | 15:20 |
johnsom | kevko If you don't, then we need to bring in some BGP work too. | 15:20 |
johnsom | kevko With shared L2 this experiment was said to work: https://review.opendev.org/c/openstack/octavia/+/558962 | 15:21 |
johnsom | kevko But we have not committed to merging that at this point. | 15:21 |
kevko | johnsom: well, the use case is that if I have ACTIVE_STANDBY, the amphorae are scheduled to some hosts ..anti-affinity works only on a per-host basis ...so in some cases they will be scheduled in the same datacentre on different hosts ... | 15:22 |
kevko | but what if a tornado comes and one datacentre goes down? :D ..both amphorae are down ... can't switch the VIP because both are down | 15:22 |
johnsom | kevko Currently they will only be scheduled on different hosts in the same AZ. (Though there are bugs in nova that it will sometimes still schedule them on the same host) | 15:23 |
kevko | yes ..and that's the point .. | 15:23 |
johnsom | kevko Right, I get that. We are mostly interested in if you have a shared L2 across the DCs or not. I.e. are you going to need the BGP solution as well. | 15:23 |
kevko | they will always be scheduled to one AZ | 15:23 |
kevko | well ..it's our customer ..but i don't think L2 is shared | 15:24 |
johnsom | Right. We really wish nova would support cross-AZ anti-affinity. Otherwise we have to implement our own scheduler, which won't have all of the information nova does (capacity, etc.). | 15:24 |
kevko | we are also using BGP in neutron | 15:24 |
johnsom | kevko Ok, thanks. That helps. | 15:25 |
kevko | johnsom: i asked in the nova channel and it was postponed ..maybe it will never be implemented | 15:25 |
johnsom | kevko Yeah, we have got the same "no" answer | 15:25 |
kevko | ;( | 15:26 |
kevko | :( | 15:26 |
johnsom | kevko We will probably have to go down the path of implementing a scheduler in octavia to work around it. It will need a bunch of retry logic to deal with scheduling issues we can't see in nova. | 15:27 |
johnsom | kevko Currently we don't have any contributors working on that in Octavia. If you are interested we can help you get started. | 15:28 |
kevko | and what about loadbalancer availabilityzone create and its friends in the docs / command line? | 15:29 |
johnsom | You can specify which AZ you want your LB created in. | 15:30 |
kevko | ok, i got it ..same | 15:30 |
kevko | same as in nova | 15:30 |
johnsom | Yeah, for us it's a bit more complicated as we need to know some network information too. But, yeah | 15:30 |
kevko | hmm, theoretically if i split my hosts into N availability zones of 2 hosts each, where one host is from DC A and one from DC B ...it will work ..right ? | 15:30 |
kevko | am i right ? | 15:31 |
johnsom | I don't think so, but I'm not a nova AZ expert | 15:31 |
johnsom | kevko This is what we ask when you setup an AZ in Octavia: | 15:34 |
johnsom | https://www.irccloud.com/pastebin/T3x3TJFX/ | 15:34 |
johnsom | compute_zone is what is passed into nova when we create the VMs | 15:34 |
kevko | well, if I can set an availability zone for the loadbalancer ..I suppose the amphorae get that AZ set ..so the AvailabilityZoneFilter in nova should work ...and if I have anti-affinity set in octavia ..nova's ServerGroupAntiAffinityFilter should also work ...so if I have several AZs of size 2 ..then it should work ..no ? | 15:37 |
johnsom | I think nova will block having hosts in a server group that are in different AZs, but I am not a nova expert. My best answer would be try it.... | 15:39 |
kevko | amphoras with AZ1 -> scheduler -> check AvailabilityZoneFilter [AZ1] -> compute0,compute1 -> ServerGroupAntiAffinityFilter -> amphora0 -> AZ1-compute1 -> amphora1 -> AZ1-compute0 | 15:39 |
johnsom | Oh, yeah, that is how the current feature works. correct. | 15:39 |
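The filter chain kevko sketches above can be mimicked with a toy simulation. All host and AZ names below are made up, and the functions only model the behavior of nova's AvailabilityZoneFilter and ServerGroupAntiAffinityFilter as discussed here; this is not real nova code:

```python
# Toy model of the filter chain: first restrict candidate hosts to the
# requested AZ, then drop hosts already used by the server group.
# Host/AZ names are hypothetical.

HOSTS = {
    "AZ1-compute0": "AZ1",
    "AZ1-compute1": "AZ1",
    "AZ2-compute0": "AZ2",
}

def availability_zone_filter(hosts, requested_az):
    return [h for h in hosts if HOSTS[h] == requested_az]

def anti_affinity_filter(hosts, group_members):
    # nova's ServerGroupAntiAffinityFilter only knows about hosts,
    # not AZs, racks, or datacentres.
    used = set(group_members.values())
    return [h for h in hosts if h not in used]

def schedule(requested_az, group_members):
    candidates = availability_zone_filter(HOSTS, requested_az)
    candidates = anti_affinity_filter(candidates, group_members)
    if not candidates:
        raise RuntimeError("NoValidHost")
    return sorted(candidates)[0]

# With an AZ of exactly two hosts, anti-affinity forces one amphora per host.
group = {}
group["amphora0"] = schedule("AZ1", group)
group["amphora1"] = schedule("AZ1", group)
```

With an AZ of exactly two hosts (one per datacentre, as kevko proposes), the two amphorae necessarily end up in different datacentres; with a third host the anti-affinity filter alone cannot guarantee that.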
kevko | so that is my question ...why does octavia have availability_zone as one string ? | 15:40 |
kevko | i think it could be a list of AZs ...ok it's not ideal ..because for AZs bigger than 2 hosts it won't work, as I mentioned above .. | 15:41 |
kevko | but for AZs of size 2 it will work ..the worker would just take an AZ from the list and send it to nova .. | 15:41 |
johnsom | You can use nova anti-affinity with AZs bigger than two hosts. | 15:42 |
kevko | yes I know .. | 15:42 |
johnsom | We don't support splitting a load balancer across AZs today. We can't pass a list of AZs to nova to have it schedule them like that. | 15:43 |
kevko | but if i have an AZ with 3 hosts ..with 1 host in DC1 and 2 hosts in DC2 ...there can be a situation where both amphorae are scheduled in DC2 | 15:43 |
kevko | johnsom: well, instead of reading the AZ from the config file and sending it to nova ...octavia could take one AZ from a list of AZs in the octavia config ..and send only that one AZ for the created LB ... can't it ? | 15:44 |
johnsom | Right, if you have one AZ spread across two datacenters, yeah, in theory nova would do the right thing, but yes, its anti-affinity options don't understand anything beyond hosts. Not AZs, not racks, etc. So it could put them on two hosts that are right next to each other. | 15:45 |
johnsom | We currently support sending one AZ per LB, meaning both amphora instances go to the same AZ in nova. | 15:46 |
kevko | yeah I know ..but if I define AZ1 - AZN where every AZ is a couple of *2* servers, one from DC1 and one from DC2 ..it will work | 15:46 |
kevko | correct .. | 15:46 |
johnsom | We do not have a scheduler that will take a list of AZs and try each one with nova to split the amphora across multiple AZs today. | 15:46 |
johnsom | Yeah, it might. | 15:47 |
kevko | so, I am asking why octavia here https://github.com/openstack/octavia/blob/6253560d22f255c499c91612ac4286dd0d8329e1/etc/octavia.conf#L577 can't use a list ? | 15:49 |
johnsom | kevko Because that is the "default" AZ if users don't specify one on the command line | 15:49 |
kevko | johnsom: I am not sure how loadbalancers are created in their environment ..but if I am correct ..it's an automatic process where k8s creates the LBs | 15:50 |
johnsom | Yeah, that is common | 15:50 |
kevko | and probably there will be no mechanism to do it ... | 15:51 |
kevko | so if octavia, instead of a default AZ, allowed a "list of allowed AZs" in the config ... the work would be done in octavia ..the worker just takes one random AZ from the list ..and that's it | 15:52 |
johnsom | It would need a scheduler added, yes. | 15:53 |
kevko | is some complicated scheduler needed ? | 15:53 |
johnsom | To some degree yes, since nova fails regularly | 15:54 |
kevko | https://github.com/openstack/octavia/blob/a17c893d6a97305eadd0b7f74d60fe6b540e8f20/octavia/controller/worker/v2/tasks/compute_tasks.py#L88 | 15:54 |
kevko | here, if it is empty ...no AZ is set ..but if not ..it sends the AZ name ... | 15:55 |
kevko | so, if this were a list, just take one and send it to nova ...and yes ..there should be some retry mechanism ..and maybe some other things | 15:55 |
johnsom | Yeah, you have to keep track of where you tried, store the AZ selected for the LB, etc. | 15:57 |
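The retry bookkeeping johnsom describes could look roughly like this sketch. It is a hypothetical scheduler loop, not Octavia code: `boot_amphora`, `NovaBootError`, and the fake nova callback are all invented names to illustrate the idea of walking a configured AZ list, remembering failed AZs, and recording the AZ that finally worked:

```python
# Hypothetical sketch: try AZs from a configured list, track which ones
# failed so far, and return the AZ finally chosen so it can be stored
# against the LB for later (failover, second amphora, etc.).
# None of these names come from the real Octavia codebase.

class NovaBootError(Exception):
    """Stand-in for a nova scheduling failure (e.g. NoValidHost)."""

def boot_amphora(nova_boot, candidate_azs, tried=None):
    tried = set(tried or ())
    for az in candidate_azs:
        if az in tried:
            continue  # already failed for this LB, skip it
        try:
            server_id = nova_boot(az)
            return az, server_id, tried
        except NovaBootError:
            tried.add(az)  # remember the failure for the next pass
    raise RuntimeError("no AZ could schedule the amphora")

# Fake nova: AZ1 is "full" and always fails, so the loop falls through.
def fake_nova_boot(az):
    if az == "AZ1":
        raise NovaBootError(az)
    return f"server-in-{az}"

az, server, tried = boot_amphora(fake_nova_boot, ["AZ1", "AZ2", "AZ3"])
```

As johnsom notes, a real implementation would also have to persist the chosen AZ and the failure history per load balancer, which is where most of the complexity lives.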
johnsom | Really, to add the feature for splitting across AZs you want to do it right and not require hundreds of AZs with 2-3 nodes in them. | 15:58 |
johnsom | There really is no simple hack in Octavia to make this work well. It would be simplest to just fix the anti-affinity in nova to understand more than just hosts IMO. | 15:59 |
kevko | agree | 15:59 |
kevko | but I am afraid that we do not have the capacity to implement it ... or maybe we have ..but you know ..everyone wants a solution right now | 16:00 |
johnsom | Oh I know. lol Everyone wants every feature, in the OpenStack release from four versions ago. grin | 16:00 |
kevko | yeah | 16:04 |
kevko | and if you provide a solution ...it sits in gerrit for a year :D | 16:05 |
kevko | and no one cares | 16:05 |
*** ysandeep is now known as ysandeep|out | 16:06 | |
johnsom | I can tell you the Octavia team does what it can... We welcome additional contributors | 16:07 |
*** dkehn_ is now known as dkehn | 16:08 | |
kevko | I haven't had the opportunity yet | 16:10 |
kevko | but good to know :) | 16:10 |
kevko | johnsom: last question, can u add some metadata to amphora instances to distinguish them from each other ? | 16:12 |
kevko | is it possible ? or, let's say, hard to implement ? | 16:13 |
johnsom | kevko So Octavia supports tags on the load balancers. Is that what you are looking for? | 16:13 |
kevko | hmm | 16:13 |
kevko | can u point me in the right direction ? | 16:13 |
johnsom | https://docs.openstack.org/api-ref/load-balancer/v2/index.html?expanded=create-a-load-balancer-detail#create-a-load-balancer | 16:14 |
johnsom | The tags field. It's a list of strings you can add to the objects in Octavia. | 16:14 |
johnsom | https://docs.openstack.org/api-ref/load-balancer/v2/index.html?expanded=create-a-load-balancer-detail#filtering-by-tags | 16:14 |
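The tag filtering behavior johnsom links can be mimicked in a few lines. The filtering semantics below (`tags` means the object must carry all of the given tags, `tags-any` means at least one) follow the linked Octavia API docs; the load balancer records themselves are made-up sample data:

```python
# Made-up sample LB records; each Octavia object carries a list of
# string tags. Filtering semantics per the Octavia API docs:
#   tags      -> object must have ALL of the given tags
#   tags-any  -> object must have AT LEAST ONE of the given tags
lbs = [
    {"name": "lb1", "tags": ["prod", "web"]},
    {"name": "lb2", "tags": ["prod", "db"]},
    {"name": "lb3", "tags": ["dev"]},
]

def filter_by_tags(objects, tags=(), tags_any=()):
    out = []
    for obj in objects:
        has = set(obj["tags"])
        if tags and not set(tags) <= has:
            continue  # missing at least one required tag
        if tags_any and not set(tags_any) & has:
            continue  # has none of the "any" tags
        out.append(obj)
    return out

prod_lbs = [o["name"] for o in filter_by_tags(lbs, tags=["prod"])]
edge_lbs = [o["name"] for o in filter_by_tags(lbs, tags_any=["web", "dev"])]
```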
kevko | is it accessible in amphoras then ? | 16:15 |
kevko | i mean, as some metadata ? | 16:15 |
johnsom | No, it's not available inside the amphora themselves | 16:15 |
johnsom | If you want to pass something into the amphora via nova / cloud-init, we only have a global setting for that (meaning every amp will get that data). | 16:16 |
kevko | is it possible for you to implement it ? | 16:16 |
kevko | not inside the VM ... i mean metadata | 16:16 |
johnsom | You can't use tags on the amphora APIs. You would have to map the amphora back to the load balancer object and read the tags from there. | 16:18 |
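The mapping johnsom describes (amphora back to its load balancer, then read the tags there) could be sketched like this. The record shapes are loosely modeled on the Octavia amphora admin API, where each amphora record carries its `loadbalancer_id`, but all the data here is invented:

```python
# Invented sample data; shape loosely follows the Octavia amphora admin
# API, where each amphora record includes its loadbalancer_id.
amphorae = [
    {"id": "amp-1", "loadbalancer_id": "lb-42", "role": "MASTER"},
    {"id": "amp-2", "loadbalancer_id": "lb-42", "role": "BACKUP"},
]
loadbalancers = {
    "lb-42": {"name": "web-lb", "tags": ["prod", "customer-x"]},
}

def tags_for_amphora(amp_id):
    """Map an amphora back to its LB and read the tags from there,
    since tags are not available on the amphora objects themselves."""
    amp = next(a for a in amphorae if a["id"] == amp_id)
    return loadbalancers[amp["loadbalancer_id"]]["tags"]
```

Against the real APIs this would be two lookups: fetch the amphora record to get its `loadbalancer_id`, then fetch that load balancer and read its `tags` field.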
kevko | openstack server show ID -> properties | 16:19 |
johnsom | Nope, we don't have a feature for that today. We only identify them by name "amphora-<uuid>" | 16:19 |
kevko | johnsom: because I was wondering about it ...this could be very useful ..when octavia is creating the amphorae ... just give them a property amphora1: X amphora2: Y | 16:20 |
kevko | that's it | 16:20 |
johnsom | I don't see a "properties" field in nova server show: https://docs.openstack.org/api-ref/compute/?expanded=show-server-details-detail#id30 | 16:21 |
johnsom | I'm guessing you are thinking like scheduler hints, etc. | 16:22 |
johnsom | We have talked about extending the flavor support in Octavia for that. | 16:23 |
johnsom | You can accomplish that through the nova flavors however: https://docs.openstack.org/api-ref/compute/?expanded=show-server-details-detail,create-flavor-detail,create-extra-specs-for-a-flavor-detail#create-extra-specs-for-a-flavor | 16:23 |
johnsom | Add your special properties to the nova flavor, then use that new flavor as the compute flavor in Octavia (conf or Octavia flavor definition). | 16:24 |
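The flavor route johnsom outlines could be modeled like this. The flavor names, extra-spec keys, and aggregates are all invented; the matching only mimics, in simplified form, how a nova scheduler filter such as AggregateInstanceExtraSpecsFilter compares flavor extra specs against host-aggregate metadata:

```python
# Invented flavors/aggregates. The idea: bake placement properties into
# the nova flavor's extra specs, then let the nova scheduler match them
# against host-aggregate metadata (simplified sketch, not nova code).
flavors = {
    "amphora-dc1": {"vcpus": 1, "ram": 1024, "extra_specs": {"site": "dc1"}},
    "amphora-dc2": {"vcpus": 1, "ram": 1024, "extra_specs": {"site": "dc2"}},
}
aggregates = {
    "agg-dc1": {"hosts": ["c0", "c1"], "metadata": {"site": "dc1"}},
    "agg-dc2": {"hosts": ["c2", "c3"], "metadata": {"site": "dc2"}},
}

def hosts_for_flavor(flavor_name):
    """Return hosts whose aggregate metadata satisfies every extra spec."""
    specs = flavors[flavor_name]["extra_specs"]
    hosts = []
    for agg in aggregates.values():
        if all(agg["metadata"].get(k) == v for k, v in specs.items()):
            hosts.extend(agg["hosts"])
    return hosts
```

This also shows the limitation discussed next: since Octavia accepts only one compute flavor per load balancer, both amphorae would match the same host set, so the flavor trick alone cannot split a pair across sites.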
kevko | don't know why, but i thought this was also possible per instance .. | 16:24 |
kevko | but it doesn't matter ...i was thinking of this .. | 16:24 |
kevko | https://docs.openstack.org/nova/latest/admin/aggregates.html | 16:24 |
kevko | so, in case ACTIVE_STANDBY is used .. the user will set up flavor1 and flavor2 with different extra specs ...and then just configure the scheduler and that's it | 16:25 |
kevko | johnsom: but I can set up only one flavor for octavia, no ? | 16:26 |
johnsom | Only one compute flavor per load balancer | 16:27 |
kevko | it is not the prettiest solution in the world ..but if octavia accepted a list of flavors (which would be the same in terms of cpu, ram ...etc) with different extra specs ...then nova could be configured via scheduler hints | 16:28 |
johnsom | lol | 16:29 |
kevko | no, it's crap :D | 16:30 |
johnsom | I like your motivation to go down every path though. lol | 16:30 |
kevko | :D | 16:31 |
kevko | hh, yeah ..that's me :D | 16:31 |
kevko | am I blind, or can I just not see that metadata thing .. | 16:34 |
kevko | :D | 16:34 |
johnsom | Which? tags? | 16:34 |
kevko | If you want to pass something into the amphora via nova / cloud-init, we only have a global setting for that (meaning every amp will get that data). | 16:34 |
kevko | ^^ this | 16:34 |
kevko | can I read something about it ? | 16:34 |
johnsom | Oh, one sec | 16:34 |
kevko | johnsom: and metadata I asked before .. i meant "metadata" https://docs.openstack.org/api-ref/compute/?expanded=show-server-details-detail#id30 | 16:35 |
johnsom | Hmm, looks like it was removed, so compute flavor is your only method. | 16:40 |
kevko | hmm, it was once called metadata, now it is tags | 16:40 |
johnsom | Yeah, the terminology over in nova is very confusing | 16:40 |
kevko | this one https://docs.openstack.org/api-ref/compute/?expanded=show-server-details-detail,create-server-detail#create-server | 16:42 |
kevko | create server tags | 16:42 |
kevko | so, what octavia could do when creating instances is give them two tags, LOADBALANCER_ID = id, ID = 1 or 2 | 16:50 |
kevko | firstly, the LB id would be useful just to quickly check which LB a given amphora belongs to | 16:50 |
kevko | and the second ID could be used in nova schedulers .. | 16:51 |
johnsom | You can add an ID for the nova schedulers via the nova flavor: https://docs.openstack.org/api-ref/compute/?expanded=show-server-details-detail,create-flavor-detail,create-extra-specs-for-a-flavor-detail#create-extra-specs-for-a-flavor | 16:52 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!