Friday, 2021-07-02

05:16 <opendevreview> Przemyslaw Szczerbik proposed openstack/neutron-lib master: Use os-resource-classes lib to avoid duplication  https://review.opendev.org/c/openstack/neutron-lib/+/799034
05:22 <opendevreview> liuyulong proposed openstack/neutron master: Add devstack local.conf sample for ML2 OVS  https://review.opendev.org/c/openstack/neutron/+/799159
06:32 *** gthiemon1e is now known as gthiemonge
06:34 <opendevreview> liuyulong proposed openstack/neutron master: Add devstack local.conf sample for ML2 OVS  https://review.opendev.org/c/openstack/neutron/+/799159
06:42 <opendevreview> Rabi Mishra proposed openstack/neutron master: Add fake_project_id middleware for noauth  https://review.opendev.org/c/openstack/neutron/+/799162
06:51 <opendevreview> Rabi Mishra proposed openstack/neutron master: Add fake_project_id middleware for noauth  https://review.opendev.org/c/openstack/neutron/+/799162
07:09 <opendevreview> Rabi Mishra proposed openstack/neutron master: Add fake_project_id middleware for noauth  https://review.opendev.org/c/openstack/neutron/+/799162
07:34 <opendevreview> liuyulong proposed openstack/neutron master: Add devstack local.conf sample for ML2 OVS  https://review.opendev.org/c/openstack/neutron/+/799159
07:35 <opendevreview> yangjianfeng proposed openstack/neutron stable/victoria: Keepalived version check  https://review.opendev.org/c/openstack/neutron/+/799164
07:43 <opendevreview> yangjianfeng proposed openstack/neutron stable/ussuri: Keepalived version check  https://review.opendev.org/c/openstack/neutron/+/798335
07:44 <opendevreview> yangjianfeng proposed openstack/neutron stable/ussuri: HA-non-DVR router don't need manually add static route  https://review.opendev.org/c/openstack/neutron/+/792876
07:59 <opendevreview> Akihiro Motoki proposed openstack/networking-odl stable/pike: Fix networking-l2gw location  https://review.opendev.org/c/openstack/networking-odl/+/799007
08:25 <hemanth_n> jlibosva: lucasagomes: can you take a look at this patch when you get some time? https://review.opendev.org/c/openstack/neutron/+/796613, thanks
08:25 <opendevreview> Akihiro Motoki proposed openstack/networking-midonet stable/rocky: Fix networking-l2gw location  https://review.opendev.org/c/openstack/networking-midonet/+/798993
08:30 <jlibosva> hemanth_n: looking
08:35 <opendevreview> Akihiro Motoki proposed openstack/networking-midonet stable/rocky: Fix networking-l2gw location  https://review.opendev.org/c/openstack/networking-midonet/+/798993
08:42 <lucasagomes> hemanth_n, will do
08:42 <hemanth_n> thank you both
08:49 <opendevreview> liuyulong proposed openstack/neutron master: Add devstack local.conf sample for ML2 OVS  https://review.opendev.org/c/openstack/neutron/+/799159
09:47 <opendevreview> Slawek Kaplonski proposed openstack/neutron stable/wallaby: Use "multiprocessing.Queue" for "TestNeutronServer" related tests  https://review.opendev.org/c/openstack/neutron/+/799149
09:49 <opendevreview> XiaoYu Zhu proposed openstack/neutron master: L3 router support ECMP  https://review.opendev.org/c/openstack/neutron/+/743661
10:00 <opendevreview> Slawek Kaplonski proposed openstack/neutron stable/victoria: Use "multiprocessing.Queue" for "TestNeutronServer" related tests  https://review.opendev.org/c/openstack/neutron/+/799150
10:00 <opendevreview> Slawek Kaplonski proposed openstack/neutron stable/ussuri: Use "multiprocessing.Queue" for "TestNeutronServer" related tests  https://review.opendev.org/c/openstack/neutron/+/799151
10:03 <opendevreview> sean mooney proposed openstack/os-vif master: update os-vif ci to account for devstack default changes  https://review.opendev.org/c/openstack/os-vif/+/798038
10:03 <opendevreview> sean mooney proposed openstack/os-vif master: add configurable per port bridges  https://review.opendev.org/c/openstack/os-vif/+/798055
10:05 <opendevreview> Slawek Kaplonski proposed openstack/neutron stable/train: Use "multiprocessing.Queue" for "TestNeutronServer" related tests  https://review.opendev.org/c/openstack/neutron/+/799191
10:06 <opendevreview> sean mooney proposed openstack/os-vif master: update os-vif ci to account for devstack default changes  https://review.opendev.org/c/openstack/os-vif/+/798038
10:06 <opendevreview> sean mooney proposed openstack/os-vif master: add configurable per port bridges  https://review.opendev.org/c/openstack/os-vif/+/798055
10:23 <opendevreview> Slawek Kaplonski proposed openstack/neutron stable/queens: Call install_ingress_direct_goto_flows() when ovs restarts  https://review.opendev.org/c/openstack/neutron/+/783543
10:46 <opendevreview> Rabi Mishra proposed openstack/neutron master: Add fake_project_id middleware for noauth  https://review.opendev.org/c/openstack/neutron/+/799162
11:34 <opendevreview> Hemanth N proposed openstack/neutron master: Update arp entry of snat port on qrouter ns  https://review.opendev.org/c/openstack/neutron/+/799197
11:37 <opendevreview> Hemanth N proposed openstack/neutron master: Update arp entry of snat port on qrouter ns  https://review.opendev.org/c/openstack/neutron/+/799197
11:37 <opendevreview> Rodolfo Alonso proposed openstack/neutron-specs master: [WIP]Create intermediate OVS bridge to improve live-migration in OVN  https://review.opendev.org/c/openstack/neutron-specs/+/799198
11:52 <hemanth_n> slaweq: I didn't see that you had assigned yourself to bug 1933092, and I worked on a patch. Sorry for not checking that before working on the patch. Please take a look at the patch when you get time and see if the change makes sense.
11:54 <slaweq> hemanth_n: I wanted to check it as the next thing but I didn't yet
11:54 <slaweq> so thx for the patch
11:54 <slaweq> I will take a look at it today
11:54 <hemanth_n> ack and thanks
11:54 <slaweq> please assign yourself to that bug
11:54 <slaweq> I'm now working on https://bugs.launchpad.net/neutron/+bug/1933273 btw :)
11:55 <slaweq> so also a DVR-related thing
11:56 <hemanth_n> hack
11:56 <hemanth_n> ack*
13:57 <opendevreview> Rodolfo Alonso proposed openstack/neutron stable/wallaby: [OVN] Do not fail when processing SG rule deletion  https://review.opendev.org/c/openstack/neutron/+/799209
14:00 <slaweq> #startmeeting neutron_drivers
14:00 <opendevmeet> Meeting started Fri Jul  2 14:00:59 2021 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00 <opendevmeet> The meeting name has been set to 'neutron_drivers'
14:01 <mlavalle> o/
14:01 <opendevreview> Rodolfo Alonso proposed openstack/neutron stable/victoria: [OVN] Do not fail when processing SG rule deletion  https://review.opendev.org/c/openstack/neutron/+/799210
14:01 <slaweq> hi
14:01 <ralonsoh> hi
14:01 <obondarev> hi
14:01 <slaweq> let's wait a few more minutes for people to join
14:01 <slaweq> I know that haleyb and njohnston are on PTO today
14:01 <seba> hi
14:02 <slaweq> but amotoki and yamamoto will maybe join
14:02 <amotoki> hi
14:02 <opendevreview> Pedro Henrique Pereira Martins proposed openstack/neutron master: Extend database to support portforwardings with port range  https://review.opendev.org/c/openstack/neutron/+/798961
14:02 <manub> hi
14:03 <slaweq> ok, let's start
14:04 <slaweq> agenda for today is at https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
14:04 <mlavalle> do we have quorum?
14:04 <slaweq> mlavalle: I think so
14:04 <slaweq> there is you, ralonsoh, amotoki and me
14:04 <mlavalle> ok
14:04 <slaweq> so a minimum, but quorum, right?
14:04 <amotoki> yeah, I think so
14:04 <slaweq> #topic RFEs
14:04 <ralonsoh> actually I'm presenting an RFE, so I should not vote
14:05 <slaweq> ralonsoh: sure, so with your RFE we can wait for the next meeting
14:05 <ralonsoh> perfect
14:05 <slaweq> we have then one RFE for today :)
14:05 <slaweq> https://bugs.launchpad.net/neutron/+bug/1930866
14:07 <ralonsoh> who is presenting it?
14:07 <mlavalle> doesn't matter
14:07 <mlavalle> we can discuss it
14:07 <ralonsoh> ok, perfect
14:07 <mlavalle> that's the usual approach
14:07 <slaweq> yeah, personally I think it is a totally valid issue
14:08 <slaweq> I didn't know that nova has something like "lock server"
14:08 <mlavalle> it is. we should worry about the complete end-user experience across all projects, not only Neutron
14:08 <slaweq> mlavalle: yes, exactly :)
14:09 <mlavalle> end users don't use Neutron. They use OpenStack
14:09 <ralonsoh> because this is part of the Nova API, can we ask them to modify the VM ports? as obondarev suggested
14:10 <ralonsoh> or should we be responsible for checking this state?
14:10 <slaweq> ralonsoh: yes, but looking from the Neutron PoV only, we should provide some way to "lock" a port in such a case
14:10 <amotoki> the bug is reported about locked instances. what I am not sure about is whether we need to handle ports used by locked instances specially.
14:10 <mlavalle> similar to what we do with the dns_name attribute when Nova creates a port for an instance
14:10 <slaweq> then nova could "lock" the port as part of the server lock
14:11 <amotoki> potentially end users can hit similar issues even for non-locked instances.
14:11 <mlavalle> yeap
14:12 <ralonsoh> I don't see in what scenario, sorry
14:12 <slaweq> amotoki: IMHO we should; I'm not sure if forbidding deletion of any port attached to an instance would be a good idea, as that would be a pretty big change in the API
14:12 <mlavalle> but in the case of locked instances we really deliver an awful end-user experience, because OpenStack made a promise that gets broken
14:12 <jkulik> looking from the Cinder perspective, if a volume is attached, you cannot just delete it. would the same make sense for a port?
14:12 <obondarev> neutron already forbids deleting ports of certain types, right?
14:13 <slaweq> if we change neutron so that it forbids deletion of all ports attached to VMs, we will break nova, I think
14:13 <slaweq> and nova will need to adjust its own code to first detach the port and then delete it
14:13 <amotoki> yes, we already forbid deleting ports used by router interfaces (and others maybe)
14:13 <slaweq> and that will cause problems during e.g. an upgrade
14:14 <slaweq> or am I missing something?
14:14 <ralonsoh> slaweq, right, we need to mark those ports somehow
14:14 <mlavalle> no
14:14 <amotoki> slaweq: I haven't checked the whole procedure in server delete. It may affect nova procedures in deleting ports attached to instances.
14:15 <mlavalle> no, you are not missing anything
14:15 <mlavalle> maybe we want to discuss this with the Nova folks
14:15 <mlavalle> is gibi around?
14:15 <gibi> mlavalle: hi
14:15 <slaweq> hi gibi :)
14:16 <jkulik> tbh, I always found it confusing that there are 2 interfaces to attach/detach a port/network to/from an instance - Nova and Neutron directly
14:17 <amotoki> jkulik: precisely speaking there are not two ways to attach ports. neutron port deletion is not visible to nova, so it confuses users.
14:17 <sean-k-mooney> for what it's worth, we have suggested blocking port deletion of in-use ports in the past
14:17 <sean-k-mooney> amotoki: actually it is
14:17 <sean-k-mooney> neutron sends a network-vif-delete event to nova when the neutron port is deleted
14:18 <amotoki> sean-k-mooney: ah, good point. i totally forgot it.
14:18 <sean-k-mooney> amotoki: from a nova point of view we have never really supported this use case though; we would really prefer if you detached it first and then deleted it if you needed to
14:19 <ralonsoh> sorry, but I don't think we should go this way, making this change in Neutron/Nova
14:19 <gibi> I agree with sean-k-mooney; while deleting a bound port is possible today and there is some level of support for it in nova, this is something that complicates things
14:19 <sean-k-mooney> regarding https://bugs.launchpad.net/neutron/+bug/1930866 is there an objection to just blocking port delete while it has the device_owner and device_id set
14:20 <slaweq> gibi: sean-k-mooney: but today nova, when e.g. a VM is deleted, will just call neutron once to delete the port, right?
14:20 <slaweq> or will it first detach the port and then delete it?
14:20 <sean-k-mooney> slaweq: that is a good question
14:21 <gibi> slaweq: nova will unbind the port during VM delete, and if the port was actually created by nova during the boot with a network, then nova will delete the port too
14:21 <sean-k-mooney> we probably don't do a port update and then a delete, but we could
14:21 <sean-k-mooney> gibi: oh, we do unbind, i was just going to check that
14:21 <seba> disallowing port delete for ports with device_owner/device_id set would mean that a user could not remove dangling ports anymore without having write access to those fields
14:22 <sean-k-mooney> seba: those fields i believe are writable by the user. and they still could by doing a nova server delete or port detach via nova
14:22 <amotoki> seba: no, what we discuss is just about port deletion.
14:23 <gibi> the above RFE talks about locked instances specifically. Nova could reject the network-vif-delete event if the instance is locked. It could be a solution
14:23 <slaweq> device_owner is writable by NET_OWNER and ADMINs https://github.com/openstack/neutron/blob/master/neutron/conf/policies/port.py#L380
14:23 <ralonsoh> gibi but at this point the port is already gone
14:23 <sean-k-mooney> gibi: i think neutron sends that async from the deletion of the port
14:23 <slaweq> so in the typical use case, the user will be able to clean it
14:23 <mlavalle> I agree with gibi ... let's reduce the scope of this to locked instances
14:23 <gibi> ralonsoh: ahh, then never mind, we cannot do that
14:23 <amotoki> gibi: but neutron still can delete a port though?
14:24 <sean-k-mooney> mlavalle: well, neutron really should not care or know if an instance is locked
14:24 <amotoki> ralonsoh commented the same thing already :)
14:24 <gibi> my suggestion can only be implemented if nova can prevent the port deletion by rejecting the network-vif-delete event
14:25 <sean-k-mooney> gibi: that would require the neutron server to send that event first and check the return before proceeding with the db deletion
14:25 <gibi> sean-k-mooney: yeah, I realize that
14:25 <sean-k-mooney> i don't think neutron currently checks the status code of those notifications
14:25 <slaweq> personally I like the idea of not allowing port deletion if it is attached to a VM, but to avoid problems during e.g. upgrades we could add a temporary config knob to allow the old behaviour
14:26 <slaweq> if we forbid deletion of such ports it would be more consistent with what Cinder does
14:26 <gibi> slaweq: if we go that way then I think the Octavia folks should be involved, I think they also depend on deleting a bound port
14:26 <slaweq> so IMHO more consistent UX in general :)
14:26 <ralonsoh> gibi do Octavia use the Nova or the Neutron API?
14:26 <slaweq> gibi: ouch, I didn't know that
14:27 <slaweq> so I wonder who else we may break :)
14:27 <gibi> I have a faint recollection from a PTG where they approached us with this port delete case as nova had some issue with it
14:27 <jkulik> https://github.com/sapcc/nova/blob/cd084aeeb8a2110759912c1b529917a9d3aac555/nova/network/neutron.py#L1683-L1686 looks like nova unbinds pre-existing ports, but directly deletes those it created without unbind. looks like an easy change though.
14:27 <gibi> I have to dig to see if I can find a recording of it
14:27 <slaweq> jkulik: that's what I thought :)
14:27 <gibi> jkulik: good reference, and I agree we can change that sequence
14:27 <slaweq> so nova would need changes too
14:29 <sean-k-mooney> yes, it looks like it would; we could backport that however
14:29 <sean-k-mooney> maybe you could control this temporarily on the neutron side with a workaround config option
14:29 <sean-k-mooney> i hate config-driven api behavior, but since neutron does not use microversions
14:30 <sean-k-mooney> the only other way to do this would be with a new extension, but that is not backportable
14:30 <ralonsoh> but we have extensions
14:30 <jkulik> if it's not a bugfix, but a change to not allow deletion of ports with device_owner and device_id, can this even be supported by a new API version? or would all old API versions also behave differently, then?
14:30 <slaweq> ralonsoh yes, but imagine the upgrade case:
14:30 <slaweq> older nova, new neutron
14:30 <obondarev_> so what are the downsides of: "nova sets a 'locked' flag in the port_binding dict for locked instances, neutron checks that flag on port delete"?
14:31 <slaweq> old nova wants to delete a bound port and it fails
14:31 <slaweq> and old nova doesn't know about the extension at all :)
14:31 <sean-k-mooney> slaweq: so the extension would have to be configurable
14:31 <ralonsoh> slaweq, right. So let's make this configurable for the next release
14:31 <slaweq> sean-k-mooney: yes, that's what I wrote a few minutes ago also :) I think that we could add a temporary config option for that
14:32 <sean-k-mooney> slaweq: yep, for xena, and then make it mandatory for Y
14:32 <slaweq> I know we did that already with some other things between nova and neutron
14:32 <sean-k-mooney> that would allow nova to always unbind before deleting
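The "unbind first, then delete" ordering being discussed can be sketched roughly as below. This is only an illustration of the sequence, not nova's actual code: `client`, `delete_port_with_unbind`, and the request body shape are assumed stand-ins modeled loosely on a Neutron API client's `update_port`/`delete_port` calls.

```python
# Hypothetical sketch of "unbind before delete" for a nova-created port.
# `client` is an injected Neutron API client exposing update_port()/delete_port();
# the names and payload shape are illustrative assumptions, not nova's real code path.
def delete_port_with_unbind(client, port_id):
    # Clear the binding and device ownership first, so a neutron that
    # forbids deleting bound/in-use ports would still accept the delete.
    client.update_port(port_id, {"port": {
        "binding:host_id": "",
        "device_id": "",
        "device_owner": "",
    }})
    # Only after the port is unbound is it actually deleted.
    client.delete_port(port_id)
```

With this ordering in place on the nova side, a neutron-side block on deleting in-use ports would no longer break VM deletion.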
14:32 <slaweq> sean-k-mooney++ for me that would be ok
14:32 <jkulik> iirc, locked instances in Nova can still be changed by an admin
14:32 <jkulik> would Nova then be able to delete the locked port, too?
14:32 <sean-k-mooney> jkulik: well, instances are locked, not ports, right
14:33 <slaweq> exactly
14:33 <jkulik> sean-k-mooney: if we would go for "nova locks the port in neutron"
14:33 <sean-k-mooney> ok, so a new neutron extension for locked ports
14:33 <sean-k-mooney> if nova detects it, we lock the ports automatically if you lock the vm?
14:34 <mlavalle> that seems reasonable
14:34 <mlavalle> nova already detects other neutron extensions, like extended port binding
14:34 <sean-k-mooney> and then neutron just prevents updating locked ports
14:34 <sean-k-mooney> mlavalle: yep, that should not be hard to add on the nova side
14:34 <gibi> sean-k-mooney: does it mean nova needs an upgrade step where it updates the ports of already locked instances?
14:35 <gibi> like syncing this state for existing instances
14:35 <sean-k-mooney> good question
14:35 <sean-k-mooney> we could do that in init host i guess
14:36 <sean-k-mooney> or we could just not, and document that you will need to lock them again
14:36 <gibi> I think locking / unlocking happens in the API so it would be strange to do the state sync on the compute side
14:36 <gibi> anyhow, this needs a nova spec
14:36 <gibi> I don't want to solve all the open questions on a friday afternoon :D
14:36 <mlavalle> and I suggest a Neutron spec as well
14:37 <sean-k-mooney> well, i was going to say technically it's not an api change on the nova side so it could be a specless blueprint, but ya, for the upgrade question a spec is needed
14:37 <slaweq> so do we want to add a "lock port" extension to neutron or forbid deletion of in-use ports?
14:37 <slaweq> IIUC we have such 2 alternatives now, right?
14:37 <mlavalle> add lock port extension
14:37 <sean-k-mooney> yes
14:37 <amotoki> yes
14:38 <sean-k-mooney> either seems valid, but lock-port is probably a better mapping to the rfe
14:38 <jkulik> if I'm admin, I can delete a locked instance. Neutron needs to take this into account for the "locked" port
14:38 <sean-k-mooney> i would still personally like to forbid deleting in-use ports
14:39 <sean-k-mooney> jkulik: well no, nova can unlock them in that case
14:39 <amotoki> the bug was reported for locked instances, but I personally prefer to "block deletion of ports used by nova".
14:39 <slaweq> but what would "locked port" mean - it can't be deleted only? can't be updated at all?
14:39 <jkulik> sean-k-mooney: yeah, makes sense
14:39 <sean-k-mooney> slaweq: i would assume it can't be updated at all, but we could detail that in the spec
14:40 <slaweq> IMHO blocking deletion of ports in use is the more straightforward solution, but on the other hand it may break more people, so it's more risky :)
14:40 <slaweq> sean-k-mooney: yeah, we could try to align with nova's behaviour for locked instances
14:40 <amotoki> neutron already blocks direct port deletion of router interfaces. in this case we disallow deleting ports used as router interfaces, but we still allow updating device_owner/id of such ports. if users would like to delete such ports explicitly, they first need to clear device_owner/id and then they can delete these ports.
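The kind of check amotoki describes could look roughly like this. Purely an illustrative sketch, not Neutron's actual code: the function name is invented, and the `compute:` device_owner prefix is used here as the conventional marker for nova-attached ports.

```python
# Illustrative sketch only: refuse direct deletion of "in use" ports,
# i.e. ports whose device_owner/device_id indicate an attached device,
# mirroring what Neutron already does for router interface ports.
# The function name and "compute:" prefix check are assumptions for this sketch.
def is_port_deletable(port):
    """Return True if the port may be deleted directly via the Neutron API."""
    device_owner = port.get("device_owner") or ""
    device_id = port.get("device_id") or ""
    # A port attached to a VM has both fields set; the caller would have to
    # clear device_owner/device_id first (i.e. detach) before deleting.
    if device_owner.startswith("compute:") and device_id:
        return False
    return True
```

Under this scheme, clearing device_owner/device_id first (as with router interface ports today) makes the port deletable again, which also addresses seba's dangling-ports concern.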
14:40 <slaweq> and that can be clarified in the spec
14:41 <sean-k-mooney> i know that doing a delete this way used to leak some resources on the nova side in the past, like sriov resources
14:41 <sean-k-mooney> slaweq: unfortunately i don't know offhand what the nova behavior actually is
14:41 <sean-k-mooney> but yes, it would be nice to keep them consistent
14:41 <slaweq> sean-k-mooney: np, it can be discussed in the spec as you said :)
14:44 <slaweq> so, do we want to vote for the preferred option?
14:45 <mlavalle> if we have to vote I lean towards the lock port extension
14:45 <obondarev> if we use the existing port's binding_profile dict - do we need a neutron API extension at all?
14:45 <slaweq> obondarev: I would say yes, to make the new behaviour in neutron discoverable
14:45 <mlavalle> another key-value pair there?
14:45 <slaweq> it's still an API change
14:46 <obondarev> ok, makes sense
14:46 <amotoki> slaweq: +1
14:46 <slaweq> so you need to somehow tell users that neutron supports that
14:46 <ralonsoh> IMO this conversation has been a bit chaotic: we started with the "lock port" idea, then we moved to blocking a bound port deletion, and now we are voting for a "lock port" extension
14:46 <ralonsoh> I really don't understand what happened in the middle
14:46 <slaweq> ralonsoh: :)
14:46 <mlavalle> then let's not vote and decide the entire thing in the spec
14:47 <ralonsoh> we were going to implement this RFE by blocking the deletion of a bound port
14:47 <amotoki> yeah, we discussed two approaches
14:47 <ralonsoh> I know
14:47 <ralonsoh> but mixing both
14:47 <ralonsoh> so the point is to provide a transition knob from neutron to Nova
14:47 <ralonsoh> or an extension
14:48 <ralonsoh> to know if this is actually supported in Neutron
14:48 <ralonsoh> and then implement the port deletion block
14:48 <ralonsoh> (that will also comply with the RFE)
14:48 <sean-k-mooney> obondarev: the content of the binding_profile is owned by nova and is one-way
14:48 <slaweq> so I propose that:
14:48 <sean-k-mooney> it provides info from nova to the neutron backend
14:49 <slaweq> 1. we will approve the RFE and continue work on it in a spec - I think we all agree that this is a valid RFE
14:49 <slaweq> 2. I will summarize this discussion in an LP comment and describe both potential solutions
14:50 <mlavalle> +1, yes it's a valid rfe
14:50 <ralonsoh> +1, the RFE is legit
14:50 <slaweq> if there is anyone who wants to work on it and propose a spec for it, that's great, but if not, I will propose something
14:50 <slaweq> *by something I mean RFE :)
14:50 <slaweq> sorry, spec :)
14:50 <mlavalle> and I can work on it
14:50 <slaweq> mlavalle: great, thx
14:50 <ralonsoh> thanks
14:51 <amotoki> totally agree with what is proposed.
14:51 <slaweq> thx, so I think we have agreement about the next steps with that rfe :)
14:52 <slaweq> regarding the second rfe from ralonsoh, we will discuss it at the next meeting
14:52 <ralonsoh> thanks
14:52 <ralonsoh> I'll update the spec
14:52 <slaweq> #topic On Demand agenda
14:52 <slaweq> seba: you wanted to discuss https://review.opendev.org/c/openstack/neutron/+/788714
14:53 <seba> yes!
14:53 <slaweq> so you have a few minutes now :)
14:53 <seba> okay, so just so you understand where I come from: I maintain a neutron driver using hierarchical port binding (HPB), which allocates second-level VLAN segments. If I end up with a (network, physnet) combination existing with different segmentation_ids, my network breaks.
14:53 <seba> This can happen when using allocate_dynamic_segment(), so my goal would be to either make allocate_dynamic_segment() safe or find another way for safe segment allocation in neutron.
14:54 <seba> We discussed https://bugs.launchpad.net/neutron/+bug/1791233 in another drivers meeting and the idea to solve this was to employ a constraint on the network segments table to make (network_type, network, physical_network) unique.
14:55 <ralonsoh> that could be a solution, but it doesn't work for tunneled networks
14:55 <sean-k-mooney> well, in that case physical network would be None
14:55 <ralonsoh> you'll have (vxlan, net_1, None)
14:56 <ralonsoh> yes, and repeated several times
14:56 <seba> ralonsoh, that should not be a problem, two "None"s are never the same
14:56 <sean-k-mooney> we don't currently support having 2 vxlan segments for one neutron network
14:57 <seba> ralonsoh, jkulik wrote something about the NULL values in the bug report and how they're not the same in most if not all major databases
14:57 <jkulik> so that use-case would not be hindered by the UniqueConstraint
14:57 <sean-k-mooney> there is no valid configuration where we can have 2 (vxlan, net_1, None) segments with different vxlan vids, right
14:58 <seba> sean-k-mooney, I have one top-level vxlan segment and then a vlan segment below it for handoff to the next driver. I don't see, though, what would stop me from having a second-level vxlan segment
14:58 <sean-k-mooney> well, i was thinking about what the segments extension allows
14:58 <sean-k-mooney> when doing hierarchical port binding that is slightly different
14:59 <seba> ah, so you're thinking about multiple vxlan segments without specifying a physnet?
14:59 <sean-k-mooney> seba: yes, since tunnels do not have a physnet
14:59 <slaweq> we need to finish the meeting now, but please continue the discussion in the channel :) I have to leave now because I have another meeting. Have a great weekend!
14:59 <slaweq> #endmeeting
14:59 <opendevmeet> Meeting ended Fri Jul  2 14:59:52 2021 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
14:59 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/neutron_drivers/2021/neutron_drivers.2021-07-02-14.00.html
14:59 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/neutron_drivers/2021/neutron_drivers.2021-07-02-14.00.txt
14:59 <opendevmeet> Log:            https://meetings.opendev.org/meetings/neutron_drivers/2021/neutron_drivers.2021-07-02-14.00.log.html
15:00 <mlavalle> o/
15:00 <seba> bye slaweq
15:00 <seba> so, tunnels don't have a physnet, what implications would that have on the constraint?
15:00 <amotoki> (this is a good point of having our meeting in #-neutron, we can continue the discussion in the same channel :))
15:00 <sean-k-mooney> seba: so in your hierarchical domain, are the vlan/vxlan ids shared between both levels
15:01 <sean-k-mooney> that is where the conflict is, yes
15:02 <sean-k-mooney> seba: the tuple for a network with segmentation id 10 and segmentation type vxlan in your scheme would be (vxlan, net_1, None)
15:02 <seba> yes
15:02 <sean-k-mooney> it would also be the same tuple for segmentation id 42 and segmentation type vxlan
15:03 <jkulik> sean-k-mooney: and you want to prohibit this from happening?
15:03 <sean-k-mooney> no, i think that was the edge case that ralonsoh was raising
15:03 <jkulik> ah. but that should work as mentioned in the bug
15:03 <ralonsoh> exactly
15:03 <ralonsoh> and I've explained that several times
15:03 <sean-k-mooney> by including net_1 in the tuple i think it already is
15:03 <jkulik> https://stackoverflow.com/questions/3712222/does-mysql-ignore-null-values-on-unique-constraints/3712251#3712251 looks like it should work at least
15:03 <ralonsoh> we cannot add this constraint to the DB
15:04 <sean-k-mooney> since a single neutron network cannot have two segmentation ids
15:04 <sean-k-mooney> unless we are talking about l3 routed networks
15:04 <jkulik> as I understood it, the NULL in the tuple makes the tuple unique every time
15:04 <sean-k-mooney> in which case the segments can have their own segmentation type and id
15:05 <sean-k-mooney> jkulik: i don't think it would
15:05 <sean-k-mooney> although that might be db dependent
15:05 <ralonsoh> this is possible
15:05 <ralonsoh> http://paste.openstack.org/show/807144/
15:05 <ralonsoh> and that will be prevented with this patch
15:05 <ralonsoh> that's for routed networks
15:05 <sean-k-mooney> ralonsoh: is that valid today
15:06 <ralonsoh> what?
15:06 <sean-k-mooney> what you posted
15:06 <seba> ralonsoh, what's the physical_network column saying for these two network segments?
15:06 <ralonsoh> copy/paste from my env
15:06 <sean-k-mooney> my understanding was that the only way neutron had to map segments to hosts was the physnet
15:06 <sean-k-mooney> so how do you associate those segments with different hosts in that case
15:07 <jkulik> > Null values are not considered equal.  (https://www.postgresql.org/docs/9.0/indexes-unique.html)
15:07 <jkulik> > A UNIQUE index permits multiple NULL values for columns that can contain NULL. (https://dev.mysql.com/doc/refman/8.0/en/create-index.html)
15:08 <sean-k-mooney> so it would block rodolfo's case http://paste.openstack.org/show/807144/ but i'm not sure how useful that is
15:08 <ralonsoh> sean-k-mooney, no, that's right, you need physical nets
15:09 <sean-k-mooney> ralonsoh: so implicitly then, for tunnels, we assume host-level connectivity across the entire cloud
15:09 <ralonsoh> yes
15:09 <sean-k-mooney> there is no way to map those segments to your underlying hardware
15:09 <ralonsoh> yes
15:09 <ralonsoh> (well, no, there is no way)
15:09 <sean-k-mooney> so does it add any useful benefit to have 2 tunneled segments on one network
15:10 <ralonsoh> let me check
15:10 <sean-k-mooney> it potentially partitions the vxlan mesh into two, i guess
15:10 <sean-k-mooney> and reduces the broadcast domain slightly
15:11 <sean-k-mooney> seba: would including the segmentation id break your fix
15:11 <seba> yes
15:11 <sean-k-mooney> ok, so (vxlan, 42, net_1, None)
15:11 <sean-k-mooney> is not something you can support
15:11 <seba> we can support that, no problem
15:12 <jkulik> (vxlan, net_1, None) and (vxlan, net_1, None) are not equal.
15:12 <sean-k-mooney> that would allow ralonsoh's case to be supported
15:12 <seba> (vxlan, 42, net_1, None), (vxlan, 23, net_1, None) does not break the constraint, as physical_network is None and two None/NULL values are not regarded as being equal
15:12 <ralonsoh> right
15:13 <jkulik> so this constraint should not break anything you're currently relying on, right?
15:13 <ralonsoh> exactly
15:13 <jkulik> then I don't get the point. you want to extend the fix to prevent something else?
15:13 <ralonsoh> (why didn't I think about it before?)
15:14 <sean-k-mooney> so (segmentation_type, segmentation_id, network_name, physnet)
15:14 <ralonsoh> that will prevent what you are describing in the bug
15:14 <seba> sean-k-mooney, that would be the format of the above mock db rows, yes
15:14 <seba> but segmentation_id must not be part of the unique constraint
15:15 <sean-k-mooney> why
15:15 <ralonsoh> no, that's incorrect, we can't include the tag id
15:15 <sean-k-mooney> (vlan, 10, net_1, datacenter) and (vlan, 20, net_1, datacenter)
15:15 <seba> because if we add segmentation_id to the constraint then (vlan, 23, net_1, physnet_X), (vlan, 42, net_1, physnet_X) would be possible, which I want to prevent
15:15 <ralonsoh> exactly
15:16 <sean-k-mooney> would be valid when using segmented networks
15:16 <ralonsoh> sean-k-mooney, I need to check if we can have two tunneled segments per network
15:16 <sean-k-mooney> ok, but that is allowed for l3 routed networks
15:16 <ralonsoh> as you said, those are vlan nets
15:16 <ralonsoh> that's in the spec
15:16 <ralonsoh> for host association
15:17 <seba> sean-k-mooney, how do l3 routed networks play into network segments?
15:17 <ralonsoh> but as I said, we need to confirm that it is not possible to have two tunneled segments (same type) in one net
15:17 <ralonsoh> seba, segments are the base for routed networks
15:17 <ralonsoh> this is how traffic is segregated
15:19 <ralonsoh> sorry, I'm closing now, I have an appointment. I'll review the patch next week
15:19 <jkulik> thank you
15:19 <seba> yeah, thanks ralonsoh
15:19 <ralonsoh> sean-k-mooney, I'll check your comments on the live migration spec
15:19 <ralonsoh> sean-k-mooney++
15:19 <seba> I'm also available for further discussion on irc during normal™ CEST work hours
15:20 <ralonsoh> me too
15:22 <sean-k-mooney> seba: this is the routed network spec by the way https://specs.openstack.org/openstack/neutron-specs/specs/newton/routed-networks.html
15:23 <seba> tnx
16:01 <opendevreview> Slawek Kaplonski proposed openstack/neutron master: [DVR] Fix update of the MTU in the SNAT namespace  https://review.opendev.org/c/openstack/neutron/+/799226
16:14 <slaweq> mlavalle: if you have a few minutes, please check my last comment in https://bugs.launchpad.net/neutron/+bug/1930866 to see if I didn't miss something
16:14 <slaweq> thx in advance :)
16:14 <slaweq> and have a great weekend :)
17:57 <opendevreview> Merged openstack/neutron stable/wallaby: Copy existing IPv6 leases to generated lease file  https://review.opendev.org/c/openstack/neutron/+/799067
18:29 <opendevreview> Merged openstack/neutron stable/victoria: Copy existing IPv6 leases to generated lease file  https://review.opendev.org/c/openstack/neutron/+/799068
20:32 <simondodsley> Hi - newbie question. I have an instance that can ping the compute host it lives on, but I cannot ping beyond that host. What would cause that? This is a devstack environment running tempest.
20:52 <opendevreview> Merged openstack/neutron master: [OVN] neutron-ovn-metadat-agent add retry logic for sb_idl  https://review.opendev.org/c/openstack/neutron/+/796613
21:45 <opendevreview> Merged openstack/neutron stable/ussuri: Copy existing IPv6 leases to generated lease file  https://review.opendev.org/c/openstack/neutron/+/799069

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!