Thursday, 2022-05-26

01:27 *** ysandeep|out is now known as ysandeep
02:50 *** ysandeep is now known as ysandeep|afk
05:08 *** mnasiadka_ is now known as mnasiadka
05:11 *** tkajinam_ is now known as tkajinam
07:35 *** ysandeep|rover is now known as ysandeep|rover|lunch
09:23 *** ysandeep|rover|lunch is now known as ysandeep|rover
11:44 *** ysandeep|rover is now known as ysandeep|rover|break
12:36 *** ysandeep|rover|break is now known as ysandeep|rover
12:41 *** mnaser_ is now known as mnaser
14:13 <kevko> Hi Octavia team
14:14 <kevko> I would really like to schedule 2 instances in a server group with an anti-affinity policy so that they land not only on different hosts but also in different datacentres (or AZs)
14:15 <kevko> something similar to what is described here:
14:15 <kevko> https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/complex-anti-affinity-policies.html
14:15 <kevko> instead of the max_server_per_host key there should be something like max_server_per_az
14:15 <kevko> the main idea is to start the amphorae in different locations in case one datacentre has an outage
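For context, the Rocky spec linked above added rule arguments to anti-affinity server groups; a minimal sketch of how that is used today, assuming compute API microversion 2.64+ and a recent python-openstackclient. The max_server_per_az rule shown in the comment is only kevko's hypothetical analogue and does not exist in nova:

    # Existing behaviour: limit how many group members share one host
    openstack --os-compute-api-version 2.64 server group create \
        --policy anti-affinity --rule max_server_per_host=1 amphora-group

    # Proposed (hypothetical) analogue for availability zones:
    #   --policy anti-affinity --rule max_server_per_az=1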
14:16 <johnsom> kevko Nova doesn't support AZ anti-affinity.
14:17 <kevko> yeah I know :)
14:17 <kevko> is it possible to somehow achieve it with Octavia?
14:17 <johnsom> That is one reason we don't have that feature yet.
14:18 <johnsom> Do you have a shared L2 network between the AZs?
14:27 *** ysandeep|rover is now known as ysandeep|dinner
14:50 *** ysandeep|dinner is now known as ysandeep
15:19 <kevko> johnsom: would that help?
15:20 <johnsom> kevko Well, we are trying to understand the use cases here. If you have shared L2 it makes it easier, as the VIP can migrate across the AZs.
15:20 <johnsom> kevko If you don't, then we need to bring in some BGP work too.
15:21 <johnsom> kevko With shared L2 this experiment was said to work: https://review.opendev.org/c/openstack/octavia/+/558962
15:21 <johnsom> kevko But we have not committed to merging that at this point.
15:22 <kevko> johnsom: well, the use case is that with ACTIVE_STANDBY the amphorae are scheduled onto some hosts... anti-affinity works only on a per-host basis... so in some cases they will be scheduled in the same datacentre on different hosts...
15:22 <kevko> but what if a tornado comes and one datacentre goes down? :D ... both amphorae are down... you can't switch the VIP because both are down
15:23 <johnsom> kevko Currently they will only be scheduled on different hosts in the same AZ. (Though there are bugs in nova where it will sometimes still schedule them on the same host)
15:23 <kevko> yes... and that's the point...
15:23 <johnsom> kevko Right, I get that. We are mostly interested in whether you have a shared L2 across the DCs or not, i.e. whether you are going to need the BGP solution as well.
15:23 <kevko> it will always be scheduled into one AZ
15:24 <kevko> well... it's our customer... but I don't think the L2 is shared
15:24 <johnsom> Right. We really wish nova would support cross-AZ anti-affinity. Otherwise we have to implement our own scheduler, which won't have all of the information nova does (capacity, etc.).
15:24 <kevko> we are also using BGP in neutron
15:25 <johnsom> kevko Ok, thanks. That helps.
15:25 <kevko> johnsom: I asked in the nova channel and it was postponed... maybe it will never be implemented
15:25 <johnsom> kevko Yeah, we have got the same "no" answer
15:26 <kevko> ;(
15:26 <kevko> :(
15:27 <johnsom> kevko We will probably have to go down the path of implementing a scheduler in Octavia to work around it. It will need a bunch of retry logic to deal with scheduling issues we can't see in nova.
15:28 <johnsom> kevko Currently we don't have any contributors working on that in Octavia. If you are interested we can help you get started.
15:29 <kevko> and what are "loadbalancer availabilityzone create" and friends in the docs / command line?
15:30 <johnsom> You can specify which AZ you want your LB created in.
15:30 <kevko> ok, I got it... same
15:30 <kevko> same as in nova
15:30 <johnsom> Yeah, for us it's a bit more complicated as we need to know some network information too. But, yeah
15:30 <kevko> hmm, theoretically, if I split my hosts into N availability zones of 2 hosts each, where one host is from DC A and one from DC B... it will work, right?
15:31 <kevko> am I right?
15:31 <johnsom> I don't think so, but I'm not a nova AZ expert
15:34 <johnsom> kevko This is what we ask for when you set up an AZ in Octavia:
15:34 <johnsom> https://www.irccloud.com/pastebin/T3x3TJFX/
15:34 <johnsom> compute_zone is what is passed into nova when we create the VMs
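For readers who cannot open the pastebin: an Octavia availability zone is defined through an availability zone profile whose data includes the compute_zone key mentioned above. A minimal sketch (az1, the profile name and lb1 are placeholders):

    openstack loadbalancer availabilityzoneprofile create \
        --name az1-profile --provider amphora \
        --availability-zone-data '{"compute_zone": "az1"}'
    openstack loadbalancer availabilityzone create \
        --name az1 --availabilityzoneprofile az1-profile
    openstack loadbalancer create --name lb1 \
        --vip-subnet-id <subnet-id> --availability-zone az1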
15:37 <kevko> well, if I can set an availability zone for the loadbalancer... I suppose the amphorae get that AZ set... so the AvailabilityZoneFilter in nova should work... and since I have anti-affinity set in Octavia, the nova ServerGroupAntiAffinityFilter should also work... so if I have several AZs of size 2... then it should work, no?
15:39 <johnsom> I think nova will block having hosts in a server group that are in different AZs, but I am not a nova expert. My best answer would be: try it...
15:39 <kevko> amphorae with AZ1 -> scheduler -> check AvailabilityZoneFilter [AZ1] -> compute0,compute1 -> ServerGroupAntiAffinityFilter -> amphora0 -> AZ1-compute1 -> amphora1 -> AZ1-compute0
15:39 <johnsom> Oh, yeah, that is how the current feature works. Correct.
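The flow kevko sketches relies on both filters being enabled in the nova scheduler. A nova.conf sketch, assuming a release where AvailabilityZoneFilter is still in use (newer releases move the AZ check into a placement pre-filter):

    [filter_scheduler]
    enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter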
15:40 <kevko> so that is my question... why does Octavia have availability_zone as a single string?
15:41 <kevko> I think it could be a list of AZs... ok, it's not ideal... because for AZs bigger than 2 hosts it will not work, as I mentioned above...
15:41 <kevko> but for AZs of size 2 it will work... the worker would just take an AZ from the list and send it to nova...
15:42 <johnsom> You can use nova anti-affinity with AZs bigger than two hosts.
15:42 <kevko> yes, I know...
15:43 <johnsom> We don't support splitting a load balancer across AZs today. We can't pass a list of AZs to nova to have it schedule them like that.
15:43 <kevko> but if I have an AZ with 3 hosts... and 1 host is in DC1 and 2 hosts are in DC2... there can be a situation where both amphorae end up in DC2
15:44 <kevko> johnsom: well, instead of reading the AZ from the config file and sending it to nova... Octavia could take one AZ from a list of AZs in the Octavia config... and send just that one AZ for the created LB... couldn't it?
15:45 <johnsom> Right, if you have one AZ spread across two datacenters, then yeah, in theory nova would do the right thing, but yes, its anti-affinity options don't understand anything beyond hosts. Not AZs, not racks, etc. So it could put them on two hosts that are right next to each other.
15:46 <johnsom> We currently support sending one AZ per LB, meaning both amphora instances go to the same AZ in nova.
15:46 <kevko> yeah, I know... but if I define AZ1 - AZN, where every AZ is a pair of *2* servers and each pair has one server from geographic DC1 and one from DC2... it will work
15:46 <kevko> correct...
15:46 <johnsom> We do not have a scheduler today that will take a list of AZs and try each one with nova to split the amphorae across multiple AZs.
15:47 <johnsom> Yeah, it might.
15:49 <kevko> so, I am asking why Octavia here https://github.com/openstack/octavia/blob/6253560d22f255c499c91612ac4286dd0d8329e1/etc/octavia.conf#L577 can't use a list?
15:49 <johnsom> kevko Because that is the "default" AZ, used if users don't specify one on the command line
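The option being discussed is Octavia's default compute availability zone for amphora VMs; a minimal octavia.conf sketch (az1 is a placeholder, and today this accepts a single AZ name, not a list):

    [nova]
    # Default nova AZ used when the load balancer does not specify an
    # Octavia availability zone of its own.
    availability_zone = az1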
15:50 <kevko> johnsom: I am not sure how the loadbalancers are created in their environment... but if I am correct... it's an automatic process where k8s is creating the LBs
15:50 <johnsom> Yeah, that is common
15:51 <kevko> and there will probably be no mechanism to do it...
15:52 <kevko> so if Octavia, instead of a default AZ, allowed a "list of allowed AZs" in the config... the work would be done in Octavia... the worker would just take one random AZ from the list... and that's it
15:53 <johnsom> It would need a scheduler added, yes.
15:53 <kevko> does it need to be a complicated scheduler?
15:54 <johnsom> To some degree yes, since nova fails regularly
15:54 <kevko> https://github.com/openstack/octavia/blob/a17c893d6a97305eadd0b7f74d60fe6b540e8f20/octavia/controller/worker/v2/tasks/compute_tasks.py#L88
15:55 <kevko> here, if it is empty... no AZ is set... but if it is set... it sends the AZ name...
15:55 <kevko> so, if this were a list, just take one and send it to nova... and yes... there would need to be some retry mechanism... and maybe some other things
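A rough sketch of what kevko is proposing, purely hypothetical and not existing Octavia code: the list-valued option and the helper name are made up for illustration, and real code would also need the retry and state tracking johnsom mentions below.

    import random

    def pick_availability_zone(requested_az, allowed_azs, already_tried=()):
        """Choose the AZ to pass to nova for one amphora boot.

        requested_az: AZ explicitly set on the load balancer, or None.
        allowed_azs: hypothetical list-valued config option.
        already_tried: AZs that previously failed for this load balancer.
        """
        if requested_az:
            return requested_az
        candidates = [az for az in allowed_azs if az not in already_tried]
        if not candidates:
            return None  # fall back to letting nova pick
        return random.choice(candidates)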
15:57 <johnsom> Yeah, you have to keep track of where you tried, store the AZ selected for the LB, etc.
15:58 <johnsom> Really, to add a feature for splitting across AZs you want to do it right and not require hundreds of AZs with 2-3 nodes in them.
15:59 <johnsom> There really is no simple hack in Octavia to make this work well. It would be simplest to just fix the anti-affinity in nova to understand more than just hosts, IMO.
15:59 <kevko> agree
16:00 <kevko> but I am afraid we do not have the capacity to implement it... or maybe we do... but you know... everyone wants to have a solution right now
16:00 <johnsom> Oh I know. lol Everyone wants every feature backported to the OpenStack release from four versions ago. grin
16:04 <kevko> yeah
16:05 <kevko> and if you do provide a solution... it sits in gerrit for a year ;D
16:05 <kevko> :D
16:05 <kevko> and no one cares
16:06 *** ysandeep is now known as ysandeep|out
16:07 <johnsom> I can tell you the Octavia team does what it can... We support additional contributors
16:08 *** dkehn_ is now known as dkehn
16:10 <kevko> I haven't had the opportunity yet
16:10 <kevko> but good to know :)
16:12 <kevko> johnsom: last question, can you add some metadata to the amphora instances to distinguish them from each other?
16:13 <kevko> is it possible? or, let's say, hard to implement?
16:13 <johnsom> kevko So Octavia supports tags on the load balancers. Is that what you are looking for?
16:13 <kevko> hmm
16:13 <kevko> can you point me at it a little bit?
16:14 <johnsom> https://docs.openstack.org/api-ref/load-balancer/v2/index.html?expanded=create-a-load-balancer-detail#create-a-load-balancer
16:14 <johnsom> The tags field. It's a list of strings you can add to the objects in Octavia.
16:14 <johnsom> https://docs.openstack.org/api-ref/load-balancer/v2/index.html?expanded=create-a-load-balancer-detail#filtering-by-tags
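A minimal sketch of the tags field from the API reference linked above (the name, subnet and tag values are placeholders):

    POST /v2/lbaas/loadbalancers
    {
      "loadbalancer": {
        "name": "lb1",
        "vip_subnet_id": "<subnet-uuid>",
        "tags": ["customer-a", "critical"]
      }
    }

    # Filtering by tags, per the second link:
    GET /v2/lbaas/loadbalancers?tags=customer-a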
16:15 <kevko> are they accessible in the amphorae then?
16:15 <kevko> I mean, as some metadata?
16:15 <johnsom> No, it's not available inside the amphorae themselves
16:16 <johnsom> If you want to pass something into the amphora via nova / cloud-init, we only have a global setting for that (meaning every amp will get that data).
16:16 <kevko> would it be possible for you to implement it?
16:16 <kevko> not inside the VM... I mean metadata
16:18 <johnsom> You can't use tags on the amphora APIs. You would have to map the amphora back to the load balancer object and read the tags from there.
16:19 <kevko> openstack server show ID -> properties
16:19 <johnsom> Nope, we don't have a feature for that today. We only identify them by name "amphora-<uuid>"
16:20 <kevko> johnsom: the reason I am wondering about it is... this could be very useful... when Octavia is creating the amphorae... just give them the properties amphora1: X, amphora2: Y
16:20 <kevko> that's it
16:21 <johnsom> I don't see a "properties" field in nova server show: https://docs.openstack.org/api-ref/compute/?expanded=show-server-details-detail#id30
16:22 <johnsom> I'm guessing you are thinking of something like scheduler hints, etc.
16:23 <johnsom> We have talked about extending the flavor support in Octavia for that.
16:23 <johnsom> You can accomplish that through the nova flavors, however: https://docs.openstack.org/api-ref/compute/?expanded=show-server-details-detail,create-flavor-detail,create-extra-specs-for-a-flavor-detail#create-extra-specs-for-a-flavor
16:24 <johnsom> Add your special properties to the nova flavor, then use that new flavor as the compute flavor in Octavia (conf or Octavia flavor definition).
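A sketch of that flow from the CLI (the flavor name, the extra-spec key and the Octavia flavor names are placeholders):

    # Create a nova flavor that carries a custom extra spec
    openstack flavor create --vcpus 1 --ram 1024 --disk 2 amphora-dc1
    openstack flavor set --property my:scheduler_hint=dc1 amphora-dc1

    # Use it globally as the amphora compute flavor in octavia.conf:
    #   [controller_worker]
    #   amp_flavor_id = <id of amphora-dc1>

    # ...or per load balancer via an Octavia flavor definition:
    openstack loadbalancer flavorprofile create --name dc1-profile \
        --provider amphora --flavor-data '{"compute_flavor": "<id of amphora-dc1>"}'
    openstack loadbalancer flavor create --name dc1 \
        --flavorprofile dc1-profile --enable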
16:24 <kevko> I don't know why, but I thought this was also possible for an instance...
16:24 <kevko> but it doesn't matter... I was thinking of this...
16:24 <kevko> https://docs.openstack.org/nova/latest/admin/aggregates.html
16:25 <kevko> so, in case ACTIVE_STANDBY is used... the user would set up flavor1 and flavor2... flavor1 and flavor2 would have different extra specs... and then just configure the scheduler and that's it
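For completeness, the aggregates document above pairs flavor extra specs with host aggregate metadata through the AggregateInstanceExtraSpecsFilter; a sketch of the kind of setup kevko describes (aggregate names, hosts and the dc property are placeholders):

    # Tag the hosts of each datacentre with an aggregate property
    openstack aggregate create dc1-agg
    openstack aggregate add host dc1-agg compute0
    openstack aggregate set --property dc=dc1 dc1-agg

    openstack aggregate create dc2-agg
    openstack aggregate add host dc2-agg compute1
    openstack aggregate set --property dc=dc2 dc2-agg

    # Matching extra specs on the two flavors (requires the
    # AggregateInstanceExtraSpecsFilter to be enabled in the nova scheduler)
    openstack flavor set --property aggregate_instance_extra_specs:dc=dc1 amphora-dc1
    openstack flavor set --property aggregate_instance_extra_specs:dc=dc2 amphora-dc2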
16:26 <kevko> johnsom: but I can set only one flavor for Octavia, no?
16:27 <johnsom> Only one compute flavor per load balancer
16:28 <kevko> it's not the prettiest solution in the world... but if Octavia accepted a list of flavors (all the same in terms of cpu, ram, etc.) with different extra specs... then nova could be configured via scheduler hints
16:29 <johnsom> lol
16:30 <kevko> no, it's crap :D
16:30 <johnsom> I like your motivation to go down every path though. lol
16:31 <kevko> :D
16:31 <kevko> heh, yeah... that's me :D
16:34 <kevko> am I blind, or can I not see that metadata thing...
16:34 <kevko> :D
16:34 <johnsom> Which? tags?
16:34 <kevko> "If you want to pass something into the amphora via nova / cloud-init, we only have a global setting for that (meaning every amp will get that data)."
16:34 <kevko> ^^ this
16:34 <kevko> can I read something about it?
16:34 <johnsom> Oh, one sec
16:35 <kevko> johnsom: and the metadata I asked about before... I meant the "metadata" here: https://docs.openstack.org/api-ref/compute/?expanded=show-server-details-detail#id30
16:40 <johnsom> Hmm, looks like it was removed, so the compute flavor is your only method.
16:40 <kevko> hmm, it was once called metadata, now it is tags
16:40 <johnsom> Yeah, the terminology over in nova is very confusing
16:42 <kevko> this one https://docs.openstack.org/api-ref/compute/?expanded=show-server-details-detail,create-server-detail#create-server
16:42 <kevko> create server tags
16:50 <kevko> so, what Octavia could do when creating the instances is give them two tags: LOADBALANCER_ID = id and ID = 1 or 2
16:50 <kevko> first, the LB id would be useful just to quickly check which LB a given amphora belongs to
16:51 <kevko> and the second ID could be used by the nova schedulers...
16:52 <johnsom> You can add an ID for the nova schedulers via the nova flavor: https://docs.openstack.org/api-ref/compute/?expanded=show-server-details-detail,create-flavor-detail,create-extra-specs-for-a-flavor-detail#create-extra-specs-for-a-flavor
