Tuesday, 2019-06-25

01:29 <rm_work> huh, why is today the first time i heard about pycodestyle
01:39 <openstackgerrit> Merged openstack/octavia master: Add bindep.txt for Octavia  https://review.opendev.org/667208
09:52 <openstackgerrit> Noboru Iwamatsu proposed openstack/octavia master: Add failover logging to show the amphora details.  https://review.opendev.org/667316
14:58 <mnaser> is there a way to 'rebuild' a load balancer if it's in `provisioning_status` status?
15:01 <cgoncalves> mnaser, as in a transient state? PENDING_*
15:01 <mnaser> nope in ERROR
15:02 <cgoncalves> you can failover
15:02 <cgoncalves> https://review.opendev.org/#/q/Ic4b4516cd6b2a254ea32939668c906486066da42
15:03 <mnaser> oh let me try
15:04 <mnaser> "Invalid state ERROR of loadbalancer resource 707d743f-1b40-4416-8889-6e72783d8d92 (HTTP 409) (Request-ID: req-db5d99e7-00f2-4e1b-bd14-7bd722f48da5)"
15:06 <cgoncalves> mnaser, check if you are running with that patch
15:24 <mnaser> cgoncalves: sigh, looks like i have to recreate the vrrp ports
15:24 <mnaser> and now neutron is giving 404s when creating a port with secgroups that aren't part of my tenant
15:24 <mnaser> is there any reason why octavia just doesn't recreate the vrrp_port_id if it can't find it?
15:26 <johnsom> It should on failover create the base vrrp_port. It will not re-create the VIP however, as that has the reserved/assigned VIP IP address.
15:26 <mnaser> the vip port exists
15:26 <mnaser> the missing port is the vrrp one here
15:27 <johnsom> That said, working on this failover flow is one of the next things I want to look at and fix some of this stuff.
15:27 <mnaser> any infra issues seem to cause a failing failover and then leave all of octavia in a broken state unfortunately
15:27 <johnsom> So, yeah, that base vrrp port is expendable, so should be recreated
15:28 <johnsom> automatically by the failover flow/process
15:28 <mnaser> yeah.. and now im struggling to recreate it
15:29 <johnsom> Yes, with the defaults, it attempts automatic repairs; if those also fail, we stop and mark it error for the operator. There has been debate about whether we should endlessly loop, etc. I pushed for us to err on the side of "don't do more harm to the cloud", but interested in your feedback
15:31 <mnaser> any infra issue has almost always left me with irrecoverably broken octavia load balancers
15:32 <johnsom> We do have a few moles to whack when nova and neutron fail us, agreed. It's one of the next two things I'm going to look at. I don't like how the failover flow was designed.
15:33 <johnsom> mnaser Which version are you running BTW?
15:33 <mnaser> afaik this might be rocky.. or stein
15:33 <johnsom> 4.0.1?
15:34 <johnsom> If you can't launch a failover on ERROR objects, you are likely behind in the patch releases.
15:36 <mnaser> johnsom: not sure, but uh i'm looking at this https://github.com/openstack/octavia/blob/09020b6bfc06c615f6d01d550188b1e6d7ed9d21/octavia/network/drivers/neutron/allowed_address_pairs.py#L592-L611
15:36 <mnaser> im thinking maybe if i set the amphora state to deleted it might skip that?
15:36 <mnaser> i don't know if it needs that down the line
15:38 <mnaser> https://github.com/openstack/octavia/blob/ff4680eb71cf03e4eae1a58e7a66321ddadcbead/octavia/controller/worker/v2/flows/amphora_flows.py#L291-L324
15:38 <mnaser> 'cause it only plugs the vip
15:41 <johnsom> I don't even know why the failover flow would call that
15:41 <johnsom> I see that it is, but why?
15:44 <colin-> what you're describing mnaser (setting states to deleted/error/etc in db) is what i've had to do to recover from the scenario you've described each time nova or neutron stops giving 200s
15:45 <mnaser> i have no idea how to fix these at this point, i moved the amphoras to deleted state, and then moved the provisioning status to active, and forced a failover
15:45 <mnaser> and uh, nothing is happening afaik
15:45 <colin-> that flow is successful, right?
15:45 <mnaser> if anything, when something goes wrong with the control plane, it almost always propagates to the data plane
15:45 <colin-> what you just described
15:45 <mnaser> colin-: well i don't have debugging on but it isn't stacktracing anymore
15:45 <colin-> gotcha
15:46 <colin-> depending if you're in active/standby or not, i have sometimes had to edit the master/backup roles too
15:46 <mnaser> nope this is just single
15:46 <mnaser> im not seeing anything being fixed tho
15:46 <johnsom> mnaser No, it almost NEVER propagates to the data plane. Were your operating status OFFLINE?
15:47 <mnaser> i can easily make octavia destroy every single load balancer unfortunately
15:47 <mnaser> when things don't go right, it tries to failover, and then breaks things even more
15:47 <johnsom> It is designed to fail safe; it may be in provisioning_status ERROR, but the LBs are still handling traffic.
15:47 <colin-> yeah, that has impacted my data plane in the past
15:47 <colin-> sorry to be contrary
15:48 <mnaser> this is the 3rd case i have where the data plane dies because of a control plane issue
15:48 <mnaser> and not one or two lbs, every single one
15:48 <johnsom> If that is the case, we need a detailed bug, as we have not seen that for over a year since the "both down" bug was fixed where nova nukes both amps in an LB.
15:48 <colin-> https://storyboard.openstack.org/#!/story/2005512 is the closest i've come to describing it
15:48 <colin-> probably does not cover what mnaser is talking about (at least not 100%)
15:49 <johnsom> colin- Yeah, that doesn't match up. Your story shows you had amps. I know the vxlan cut traffic, but you had working amps right?
15:50 <mnaser> vxlan cuts traffic
15:50 <mnaser> health manager freaks out
15:50 <mnaser> starts blowing up amphoras to fix them
15:50 <mnaser> control plane is borked already
15:50 <colin-> mnaser: do you see any control plane logs from octavia that suggest heartbeats weren't able to be written? they're easy to spot because the log info comes out in all caps with something like "THIS IS NOT GOOD!"
15:51 <mnaser> yes, ive had that once when a rabbitmq upgrade pooped out
15:51 <mnaser> like it'd be nice if i just had a "rebuild" button
15:51 <colin-> ok
15:51 <mnaser> "you have everything you need about this lb, just rebuild it."
15:55 <johnsom> Yeah, that was exactly what failover is supposed to be. It just didn't get implemented well.
15:56 <johnsom> THIS IS NOT GOOD doesn't have anything to do with rabbit, BTW. The health manager doesn't use rabbit for anything
15:56 <mnaser> http://paste.openstack.org/show/753372/
15:56 <mnaser> i ended up writing this and giving it to a customer
15:56 <mnaser> (that should not exist)
15:58 <johnsom> please, if you see a scenario that actually takes down the data plane (not provisioning status ERROR), please report it and provide logs, we would like to see that.
15:59 <johnsom> mnaser Also, you have your VXLAN set up to disable ports if it sees duplicate mac addresses? Neutron is doing that and it impacted colin-; it would be good to collect another deployment that has VXLAN configured that way so we can approach the neutron team with the problem.
16:36 <colin-> is it expected when creating a pool with ~75 members that the provisioning status should bounce between ACTIVE and PENDING_UPDATE before all members exist in the pool definition?
16:37 <colin-> can we add multiple members in one call in code? or is it always one at a time?
16:38 <colin-> just saw the batch update in the member section of the api ref, pursuing that for now
16:39 <colin-> oh, it's not additive, it just replaces?
16:43 <cgoncalves> yeah. we don't have batch member create
16:47 <cgoncalves> I think it could be implemented rather easily...?
16:52 <colin-> this is all via gophercloud, might try to leverage this https://github.com/gophercloud/gophercloud/blob/master/openstack/loadbalancer/v2/pools/requests.go#L356
16:52 <colin-> but it's populating the pool one member at a time heh
16:54 <johnsom> Wait, it is totally additive and supports batch member create.
16:55 <johnsom> cgoncalves colin- It is a "desired state" API; you can add, delete, and update as many members as you want all at once.
16:56 <colin-> weird, i have a watch going on pool show -c members | wc -l and it's growing one at a time
16:56 <colin-> from 14 on its way to 75 atm
16:56 <cgoncalves> "This may include creating new members, deleting old members, and updating existing members."
16:56 <cgoncalves> https://developer.openstack.org/api-ref/load-balancer/v2/?expanded=update-a-member-detail,batch-update-members-detail#batch-update-members
16:57 <johnsom> colin- I don't know what gophercloud has, but our API supports batch create.
16:57 <colin-> that's why i linked it :)
16:58 <colin-> understood, appreciate the notes
16:58 <johnsom> Ok, I just saw cgoncalves saying we didn't have batch create and was .... Yes, we do.
16:58 <johnsom> lol. Just getting off a meeting so context switching.
16:59 <cgoncalves> yeah, my bad
17:00 <cgoncalves> I had to expand and read the first two lines. shame
17:01 <johnsom> Different "we" here. lol My "we" is always Octavia. grin
17:09 <xgerman> gophercloud has issues…
17:09 <colin-> without committing to anything do you guys have any strong negative reaction to supporting syntax like `openstack loadbalancer member create <member1> <member2> <member3> ....`?
17:10 <cgoncalves> I bet rm_work does not ;)
17:10 <colin-> in the future
17:10 <cgoncalves> 1 sec
17:10 <cgoncalves> https://review.opendev.org/#/c/634302/
17:11 <johnsom> That is a very different patch
17:12 <johnsom> So, for the CLI we have discussed this with the OpenStack client team. It's a tricky one to implement given each of those <member1> would have to contain the whole member definition.
17:13 <xgerman> and might be confusing for users
17:13 <johnsom> The last plan I remember, to address the batch member functionality via the CLI, was to pass the CLI a JSON document that reflected the API: https://developer.openstack.org/api-ref/load-balancer/v2/index.html?expanded=batch-update-members-detail#id95
17:16 <johnsom> If it were delete calls with a list of IDs, yes, I would love to see that. But for create, it's not clear how to pass all of those params, for each member, and have them all line up
17:17 <colin-> yeah i see what you mean about doing it on the cli that way
17:18 <colin-> any middle ground where the api could still accept that format without the client having to be involved, so that other services contacting octavia could express it that way without having to present the entire finished manifest each time (adding vs overwriting member config on the amps)?
17:20 <johnsom> That is an interesting use case. Where you just want to bulk add, but not specify all of the existing stuff.
17:21 <johnsom> I would probably lobby for extending the member create API to accept a list of members.
17:23 <colin-> ok cool, yeah that would likely help in this case. thanks for the info everyone
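[Editor's note] The batch member call discussed above is a PUT of the full desired member list to the pool's members endpoint, per the api-ref cgoncalves linked. A minimal sketch of building that request body (the helper function and member fields here are illustrative, not part of any Octavia client library):

```python
# Sketch: build the body for Octavia's batch member update
# (PUT /v2/lbaas/pools/{pool_id}/members). The API treats the list as
# the *desired state*: members absent from the list are deleted, new
# ones are created, existing ones are updated. batch_member_body is an
# illustrative helper, not a real Octavia function.

def batch_member_body(members):
    """members: iterable of (name, address, port) tuples."""
    return {
        "members": [
            {"name": name, "address": address, "protocol_port": port}
            for name, address, port in members
        ]
    }

body = batch_member_body([
    ("web1", "192.0.2.16", 80),
    ("web2", "192.0.2.17", 80),
])
```

Because the list replaces the member set, callers wanting "additive" behavior must include the existing members in the body as well, which is exactly the pain point colin- raises.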
18:33 <spatel> johnsom: yt?
18:33 <johnsom> spatel Hi
18:34 <spatel> https://docs.openstack.org/openstack-ansible-os_octavia/latest/configure-octavia.html
18:35 <spatel> this octavia_management_net_subnet_cidr is lb-mgmt, right?
18:35 <johnsom> Yes, I think so.
18:36 <spatel> in openstack-ansible there is a variable octavia_management_net_subnet_allocation_pools:
18:36 <spatel> trying to understand that variable..
18:37 <spatel> does the amphora vm need a lb-mgmt ip address?
18:37 <spatel> or just the compute node?
18:37 <johnsom> I am not 100% up to speed on the openstack ansible role, but I can try to answer.
18:38 <spatel> i have subnet 172.27.8.0/21 for lb-mgmt and i'm trying to understand who will need an IP from that subnet
18:38 <spatel> controller/compute and who else?
18:39 <johnsom> So the lb-mgmt-net is used by the controllers to send command/control messages to the amphora, and for the amphora to send heartbeat messages back to the controllers.  Each amphora gets an IP on that network (it's outside the tenant network namespace however).
18:39 <johnsom> Each controller (worker, health manager, and housekeeping) will need an IP on that network as well, to talk with the amphora.
18:39 <johnsom> The API does not need to be on the lb-mgmt-net at this time
18:50 <spatel> so this option octavia_management_net_subnet_allocation_pools: is for the amphora ip pool?
18:51 <johnsom> Yes, you could DHCP the controllers too if you want, but it's best to use fixed IPs for those
18:52 <spatel> I have fixed IPs for the controller and compute..
18:52 <johnsom> The compute hosts should not need an IP on that network, only the octavia parts.
18:53 <spatel> so i am thinking out of 172.27.8.1-172.27.10.255 i keep it for static and give 172.27.11.1-172.27.15.255 to DHCP for amphora
18:54 <johnsom> Sure, that is a big block for static though. I wouldn't expect you need that many controllers. lol.
18:54 <spatel> controller + compute ( static )
18:55 <spatel> don't we need br-lbaas for controller?
19:04 <spatel> johnsom: how does neutron wire up lb-mgmt to the amphora?
19:04 <johnsom> I had to dig through old notes to see what br-lbaas was.  That should only be on the controller side where the containers are.  lb-mgmt-net is just a neutron network, and neutron/nova should handle the computes for you
19:04 <spatel> do we need to create the subnet in neutron?
19:05 <johnsom> Yes. Doesn't the OSA role do that for you?
19:05 <spatel> trying to understand.. and looking at the playbook
19:06 <johnsom> Yeah, OSA does that for you: https://github.com/openstack/openstack-ansible-os_octavia/blob/master/tasks/octavia_mgmt_network.yml#L27
19:08 <spatel> hmmm
19:08 <spatel> you are saying neutron will take care of lb-mgmt-net
19:09 <spatel> and lb-mgmt-net will be different than br-lbaas right?
19:09 <johnsom> lb-mgmt-net is just a neutron network/subnet that gets attached to the amphora. Nothing special.  The harder part is connecting the controllers to it, which is what the OSA stuff uses those bridges for. OSA seems overly complicated to me, but not my call
19:10 <spatel> oh.. i think i get what you're saying..
19:11 <spatel> br-lbaas is a different subnet pool than lb-mgmt-net
19:11 <spatel> they can't be the same..
19:11 <johnsom> Isn't br-lbaas just a bridge? It shouldn't have a subnet pool
19:12 <johnsom> It should all be the same
19:14 <spatel> yes br-lbaas is just a bridge
19:15 <spatel> johnsom: this is my openstack_user_config.yml file http://paste.openstack.org/show/753393/
19:16 <spatel> if you look at the first block cidr you will see i have defined lbaas: 172.27.8.0/21
19:16 <spatel> later you can see the container_bridge: br-lbaas block
19:17 <spatel> br-lbaas is wired up with three services octavia-worker / octavia-housekeeping / octavia-health-manager using veth
19:19 <spatel> johnsom: https://imgur.com/a/wn1MRNT
19:19 <spatel> based on this diagram lb-mgmt should talk to worker/housekeeping/manager + amphora
19:24 <spatel> johnsom: going with this config and will see if any issue
19:24 -spatel- octavia_ssh_enabled: True
19:24 -spatel- octavia_management_net_subnet_allocation_pools: 172.27.12.1-172.27.15.255
19:24 -spatel- octavia_management_net_subnet_cidr: 172.27.8.0/21
19:24 -spatel- octavia_loadbalancer_topology: ACTIVE_STANDBY
19:24 -spatel- octavia_enable_anti_affinity: True
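[Editor's note] A quick way to sanity-check that an allocation pool like the one spatel pastes above actually falls inside the management CIDR is Python's stdlib ipaddress module (the helper function name is illustrative):

```python
import ipaddress

def pool_within_cidr(pool_start, pool_end, cidr):
    """Return True if the [pool_start, pool_end] range lies inside cidr."""
    net = ipaddress.ip_network(cidr)
    start = ipaddress.ip_address(pool_start)
    end = ipaddress.ip_address(pool_end)
    # Both endpoints must be members of the network, in order.
    return start in net and end in net and start <= end

# The values from the config above: 172.27.8.0/21 spans
# 172.27.8.0-172.27.15.255, so the pool fits.
print(pool_within_cidr("172.27.12.1", "172.27.15.255", "172.27.8.0/21"))
```

This confirms the pool leaves 172.27.8.x-172.27.11.x free for the static controller addresses discussed earlier.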
21:40 <openstackgerrit> Carlos Goncalves proposed openstack/octavia master: WIP: Add VIP access control list  https://review.opendev.org/659626
21:44 <openstackgerrit> Carlos Goncalves proposed openstack/python-octaviaclient master: Add support to VIP access control list  https://review.opendev.org/659627
22:09 <rm_work> mnaser: just read scrollback, I'm super interested in why your failovers are predictably dying
22:10 <johnsom> rm_work this local cert manager is.... annoying
22:10 <rm_work> Yes
22:10 <rm_work> It's not designed to be used for anything except debug
22:10 <rm_work> It should work though, just drop files in place
22:10 <johnsom> I'm trying to use it for a functional test
22:11 <rm_work> Or mock open
22:11 <johnsom> I'm just getting the .pass file at the moment
22:11 <rm_work> What's wrong with it?
22:11 <johnsom> IDK, just venting
22:12 <johnsom> This simple functional test is becoming a nightmare
22:12 <johnsom> I guess it's good as it's going to test the world, but
22:16 <johnsom> Ah, figured it out. cleanup was firing early
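[Editor's note] rm_work's "mock open" suggestion can be done with unittest.mock's mock_open helper. A generic sketch of the idea (load_cert and the path are illustrative stand-ins, not Octavia's actual local cert manager code):

```python
from unittest import mock

# Illustrative stand-in for code that reads a certificate off disk,
# the way a local (filesystem-backed) cert manager would.
def load_cert(path):
    with open(path) as f:
        return f.read()

# In a test, patch builtins.open so no real file is needed; mock_open
# returns a mock whose read() yields the given data.
with mock.patch("builtins.open", mock.mock_open(read_data="PEM DATA")):
    cert = load_cert("/etc/octavia/certs/server.pem")

assert cert == "PEM DATA"
```

Mocking one level higher (the cert manager's get methods, as rm_work suggests later) avoids patching open at all and works the same way for the Barbican driver.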
22:20 <rm_work> You could probably even mock higher up if you wanted and not even have to deal with mocking open and such
22:20 <rm_work> Just mock the whole get functions? lol
22:21 <rm_work> Could do that even with the Barbican driver :D
22:24 <johnsom> I got local working, my mistake which caused the cleanup to fire early
22:29 <rm_work> Yeah
22:29 <rm_work> Just saying that might be less complex to maintain
22:33 <johnsom> Coming from someone recently commenting on the level of mocking going on... lol
22:33 <johnsom> I have all of them working now except the SNI certs, which is likely a bug in the utils.
22:47 <rm_work> well not like anyone is running the local cert manager in production :D
22:50 <johnsom> We definitely have some gaps in SNI testing.
22:52 <johnsom> Not sure how to create a listener with SNI using the repos. Creating SNI records requires a listener, creating a listener with SNI requires SNI objects....
22:54 <johnsom> Ok, listener first, no reference to the SNI, then create SNI records. sqlalchemy magic
22:57 <johnsom> rm_work BTW, colin was inquiring about a member bulk add option this morning. I.e. not re-pushing the existing members.
22:57 <rm_work> i saw
22:57 <rm_work> you cleared that up
22:58 <rm_work> i almost choked when i read "i see update but no batch create" lol
22:58 <rm_work> glad you were around to respond :D
22:58 <johnsom> Does that work for you? Adding a way to pass an array of members to the POST?
22:58 <rm_work> yeah maybe
22:58 <johnsom> Yeah, I was like, what?
22:59 <johnsom> The part I'm not 100% sure about is if we can support both or need a new path for that.
22:59 <rm_work> well you mean, for additive only?
22:59 <rm_work> yeah uhh >_>
22:59 <rm_work> we could add a query flag
22:59 <johnsom> Seems like the wsme types might get unhappy and we might violate the lbaasv2 compat if we just add it.
22:59 <rm_work> "create_only=True" or something
22:59 <rm_work> nah i mean it's the same structure
23:00 <rm_work> just if create_only is set, then skip the delta calculation
23:00 <johnsom> Oh, like on the bulk update endpoint
23:00 <rm_work> easy peasy to code
23:00 <johnsom> I was thinking make POST accept an array.
23:00 <johnsom> Other services have done that.
23:00 <johnsom> Might need to be v3 though
23:00 <rm_work> ahh
23:00 <rm_work> ehhh
23:00 <rm_work> yeah
23:00 <rm_work> v3
23:01 <rm_work> but can do what he wants in v2 on the update
23:01 <johnsom> colin- Are you still around?
23:01 <colin-> indeed
23:01 <colin-> read the buffer, understood
23:01 <johnsom> I probably led you down the wrong path this morning
23:02 <colin-> just out of curiosity why the distinction of v3 for the array on POST?
23:02 <johnsom> Yeah, Adam's idea with the query string would work and be fairly straightforward.
23:02 <colin-> (ignorant of wsme/compat subject)
23:04 <johnsom> Well, the LBaaSv2 specification does not show it as a list, it's just an object. So, we have to maintain compatibility with that on the /v2 Octavia API. We can't just switch that, as anything coded to the existing API would break if we start requiring a list.
23:04 <colin-> oh ok. i was imagining it earlier as an optional alternative to how it works now that you'd only use electively
23:05 <colin-> that makes sense
23:05 <johnsom> It is likely not possible to accept both given the API validation layer, wsme
23:05 <johnsom> It's pretty picky about the content formatting, as it should be really.
23:08 <rm_work> yeah
23:10 <johnsom> Actually, it might be possible... Still not sure it's a good idea
23:13 <rm_work> maaaaaybe
23:13 <rm_work> it's not the best idea
23:13 <rm_work> we can do additive on the batch call and it will not interfere
23:13 <rm_work> want me to write it up really quick?
23:13 <rm_work> colin-: i'll do the core work really quick if you're willing to babysit the patch through merging
23:15 <colin-> sure that sounds good to me, thanks for offering
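[Editor's note] The `create_only` query-flag idea rm_work floats above amounts to skipping the delete half of the batch-update delta. A rough, self-contained sketch of that delta logic (all names and structure here are illustrative, not Octavia's actual member controller code):

```python
# Sketch of a batch-member delta with an additive-only flag, as
# discussed above. member_delta is an illustrative helper; Octavia's
# real implementation differs in detail.

def member_delta(existing, desired, create_only=False):
    """existing/desired: dicts mapping (address, port) -> member attrs."""
    to_create = [m for k, m in desired.items() if k not in existing]
    to_update = [m for k, m in desired.items()
                 if k in existing and existing[k] != m]
    # With create_only set, members missing from the request are left
    # alone instead of being deleted, making the call purely additive.
    to_delete = [] if create_only else [
        m for k, m in existing.items() if k not in desired
    ]
    return to_create, to_update, to_delete
```

This preserves the existing desired-state semantics by default, so lbaasv2 compatibility is untouched unless the caller opts in via the flag.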
23:20 <rm_work> meanwhile will people review my multivip stuff? lol
23:20 <rm_work> https://review.opendev.org/#/c/660239/
23:21 <rm_work> pretty plz
23:21 <rm_work> get your vips here, hot off the vip-line, as many as you want!
23:22 <johnsom> * subject to network limitations and restrictions.
23:23 <rm_work> shhh fine print
23:27 <johnsom> * offer only valid on the Train model year and later.
23:27 <johnsom> * subject to multiple subnet availability
23:28 <johnsom> * only take VIPs as directed

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!