Friday, 2024-11-15

02:12 <opendevreview> Ihar Hrachyshka proposed openstack/neutron master: tests: Remove skip_if_ovs_older_than decorator from a test case  https://review.opendev.org/c/openstack/neutron/+/935250
02:22 <opendevreview> Ihar Hrachyshka proposed openstack/neutron master: functional: Don't assume privsep-helper is installed in virtualenv  https://review.opendev.org/c/openstack/neutron/+/935251
02:40 <opendevreview> Ihar Hrachyshka proposed openstack/neutron master: functional: Allow to override OS_FAIL_ON_MISSING_DEPS  https://review.opendev.org/c/openstack/neutron/+/935252
03:44 <opendevreview> OpenStack Proposal Bot proposed openstack/neutron master: Imported Translations from Zanata  https://review.opendev.org/c/openstack/neutron/+/935254
03:55 <opendevreview> Liushy proposed openstack/neutron-fwaas master: [OVN] Fix the provider error in devstack settings  https://review.opendev.org/c/openstack/neutron-fwaas/+/934103
07:03 <opendevreview> Liushy proposed openstack/neutron-fwaas master: Fix the tests import error  https://review.opendev.org/c/openstack/neutron-fwaas/+/934591
07:48 <ralonsoh> slaweq, lajoskatona, haleyb, hello! Please check https://review.opendev.org/c/openstack/releases/+/935265. That should fix the issue with the tempest ovn job, when the docker image is created in the FFR tests using the os-ken testing FW
07:57 <lajoskatona> thanks ralonsoh
08:53 <opendevreview> Merged openstack/neutron-fwaas unmaintained/2023.1: Update .gitreview for unmaintained/2023.1  https://review.opendev.org/c/openstack/neutron-fwaas/+/935094
10:11 <opendevreview> Rodolfo Alonso proposed openstack/neutron master: Catch when the process does not exist when killing it  https://review.opendev.org/c/openstack/neutron/+/935046
10:23 <opendevreview> Rodolfo Alonso proposed openstack/neutron master: Set a new removal release for ``set_network_type``  https://review.opendev.org/c/openstack/neutron/+/935272
10:26 <opendevreview> Rodolfo Alonso proposed openstack/neutron master: [OVN] QoS max and min rules should be defined in LSP for phynet ports  https://review.opendev.org/c/openstack/neutron/+/934418
10:29 <ralonsoh> slaweq, hi! if you have 1 minute: https://review.opendev.org/c/openstack/neutron/+/934652
10:29 <ralonsoh> thanks!
11:13 <opendevreview> Rodolfo Alonso proposed openstack/neutron master: Filter out the floating IPs when removing a shared RBAC  https://review.opendev.org/c/openstack/neutron/+/935278
11:14 <opendevreview> Rodolfo Alonso proposed openstack/neutron master: Filter out the floating IPs when removing a shared RBAC  https://review.opendev.org/c/openstack/neutron/+/935278
11:32 <opendevreview> Rodolfo Alonso proposed openstack/neutron master: Migrate from tenant_id to project_id in ``test_network.py``  https://review.opendev.org/c/openstack/neutron/+/935280
11:46 <opendevreview> Chris Buggy proposed openstack/ovn-octavia-provider master: Adding Tobiko Test To CI  https://review.opendev.org/c/openstack/ovn-octavia-provider/+/935281
14:03 <slaweq> hi, do we have the drivers meeting today?
14:03 <ralonsoh> I'm waiting for it, yes
14:03 <mlavalle> I thought so
14:03 <lajoskatona> we have an agenda, so something will happen :-)
14:03 <slaweq> I think we need someone to chair it then :)
14:03 <ralonsoh> one sec
14:04 <ralonsoh> #startmeeting neutron_drivers
14:04 <opendevmeet> Meeting started Fri Nov 15 14:04:06 2024 UTC and is due to finish in 60 minutes.  The chair is ralonsoh. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:04 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:04 <opendevmeet> The meeting name has been set to 'neutron_drivers'
14:04 <ralonsoh> Ping list: ykarel, mlavalle, mtomaska, slaweq, obondarev, tobias-urdin, lajoskatona, amotoki, haleyb, ralonsoh
14:04 <mlavalle> \o
14:04 <ralonsoh> hello all
14:04 <lajoskatona> o/
14:04 <obondarev> o/
14:04 <slaweq> o/
14:04 <s3rj1k> hi all
14:04 <ralonsoh> we have many RFEs to discuss today
14:05 <ralonsoh> and we have quorum, so I think we can start
14:05 <amnik> hello all
14:05 <ralonsoh> First one: [RFE] Configurable Agent Termination on OVS Restart
14:05 <ralonsoh> https://bugs.launchpad.net/neutron/+bug/2086776
14:05 <ralonsoh> s3rj1k, please, present the RFE
14:06 <s3rj1k> ralonsoh: Hi, yes, so basically the idea is to automate the manual fixes of the agent that are currently done by a human in k8s envs
14:07 <ralonsoh> s3rj1k, my question, same as Liu's in the bug, is: why don't we fix what is broken in the OVS agent code?
14:07 <ralonsoh> what exact flows are not recovered?
14:07 <s3rj1k> to do that, there is an idea to introduce opt-in functionality to terminate the agent instead of doing flow recovery
14:08 <s3rj1k> fixing is a separate thing, this is about making sure that prod envs keep working without human interaction
14:09 <ralonsoh> the OVS agent is capable of detecting when the vswitch has been restarted and tries to restore the flows
14:09 <opendevreview> Sebastian Lohff proposed openstack/neutron master: docs: Fix typo in openstack create subnet command  https://review.opendev.org/c/openstack/neutron/+/935182
14:09 <s3rj1k> Liu also mentioned that this is only related to containerized envs; for systemd setups similar restarts already happen, as I understand it
14:09 <ralonsoh> if we fix what is broken, that will solve your problem (and for anyone else hitting it)
14:10 <ralonsoh> Liu is stating, and he is right, that restarting the OVS agent has a high cost in the Neutron API
14:10 <s3rj1k> to fix that we need to repro the bug, and there is no time to do that on envs that have client workloads
14:10 <s3rj1k> > restarting the OVS agent has a high cost - agree, hence opt-in
14:11 <slaweq> I personally feel like this proposal is more of a workaround for some bug rather than a fix for it
14:11 <ralonsoh> exactly
14:11 <s3rj1k> agree, workaround
14:11 <ralonsoh> s3rj1k, can you share with the community what is broken when the OVS agent restores the flows?
14:11 <ralonsoh> because this is the critical point here but it is not documented
14:11 <ralonsoh> this is what needs to be fixed
14:12 <s3rj1k> I have no specific repros right now; the details that I have are in the ticket, one of the flows was not recovered
14:12 <ralonsoh> s3rj1k, so please, add this information to the LP bug, and how to reproduce it locally, if possible
14:12 <s3rj1k> when I have a repro I will create a bug for it, but this does not mean that the workaround is invalid
14:13 <ralonsoh> with this information we can try to fix it
14:13 <lajoskatona> Could you please check the suggestion from Liu (https://bugs.launchpad.net/neutron/+bug/2086776/comments/5 ) that seems reasonable and in sync with the current design?
14:13 <ralonsoh> that is a lightweight workaround too
14:13 <ralonsoh> no API involved in this case
14:14 <ralonsoh> but maybe it could be a solution: instead of the current code that restores the flows when the OVS is restarted, restart the OVS agent monitoring mechanism
14:14 <lajoskatona> anyway, if we have a list of missing things as ralonsoh suggested, that can help to find the places to fix
14:15 <lajoskatona> yeah perhaps
14:16 <ralonsoh> s3rj1k, so please, before proceeding to any decision, we need to know what is actually broken. Then we can propose a solution (fix the restore methods, lightweight restart or full restart)
14:16 <s3rj1k> are we sure we want to block a general workaround on a specific bug repro?
14:17 <s3rj1k> things can be done async
14:17 <ralonsoh> sorry, what does that mean?
14:17 <s3rj1k> continue with the terminate workaround and, as a separate bug, try to find a repro for that one specific flow
14:18 <s3rj1k> I am pretty sure that there are more cases of flow recovery failing, but I have no info on that, only a vague report on one of them
14:18 <ralonsoh> if you ask me to implement a workaround for a bug I don't understand, I will say no
14:18 <ralonsoh> I need to know what is broken
14:19 <slaweq> personally I don't like the idea of this kind of workaround because it doesn't seem professional to me - it's a bit like implementing a restart of the service to fix a memory leak :) So I would personally vote for fixing the real bugs and handling the ovs restart properly instead of this rfe
14:19 <s3rj1k> as basically the fix is a manual restart; nobody is triaging bugs in prod
14:20 <ralonsoh> nobody is triaging bugs in prod --> I do this for a living
14:20 <s3rj1k> slaweq: so it's better to ignore that and have line-1 ops just restart all the things?
14:20 <slaweq> also, if you need a workaround like that, you can probably implement some small script which will be doing what you need to do - it doesn't need to be in the agent itself
14:20 <slaweq> s3rj1k no, IMO it's better to try to reproduce the issue on a dev env and try to fix it
14:20 <s3rj1k> sure, it can be done as some script
14:21 <obondarev> no need to reproduce intentionally; just next time when manual intervention is needed (agent restart) - dump the flows before and after the restart, that could be enough for investigation
14:21 <greatgatsby_> was just coming in here to post about an outage we've had during 2 of our last 3 upgrade tests.  Seems directly related to the OVS conversation you're having here now
14:22 <lajoskatona> obondarev: +1, good that you added that here
14:22 <s3rj1k> as I said, when some details come in, I will create a separate bug for that
14:23 <greatgatsby_> we've had a 5 minute outage between when kolla-ansible restarts the OVS containers (which triggers 2 separate flow re-creations) and when we get FIP connectivity back in the neutron role, some 5 minutes later
14:23 <s3rj1k> this is more about whether we want some workaround in place or not
14:24 <greatgatsby_> in our testing, the restarts of both openvswitch_db and openvswitch-vswitchd trigger the neutron agent to recreate the flows; this happens within about 20 seconds of each other
14:25 <slaweq> IMHO it is better to have a script, e.g. in https://opendev.org/openstack/osops, for that rather than implement it in neutron
14:25 <obondarev> +1 to slaweq
14:25 <ralonsoh> s3rj1k, we want to fix this bug, for sure. We don't want to implement a workaround for something we don't understand. In order to push code, we need to know what is happening
14:25 <greatgatsby_> for some reason, but not always, the neutron ovs agent doesn't seem to like this and connectivity is lost until the agent is restarted during the neutron role
14:26 <greatgatsby_> sorry for just injecting my recent experience, just excited that this is being discussed in here, hoping for some kind of mitigation
14:26 <ralonsoh> so, for now, we are not going to vote on this RFE until we have a better description of the current problem
14:26 <ralonsoh> s3rj1k, please, do not open a separate bug, provide this info in the current one
14:27 <ralonsoh> ok, we have more RFEs and just 35 mins left
14:27 <lajoskatona> greatgatsby_: do you think your issue is related to https://bugs.launchpad.net/neutron/+bug/2086776 ?
14:27 <s3rj1k> greatgatsby_: can you please add more info to the RFE if possible
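[editor's note] The external-script workaround slaweq suggests above (restart the agent when ovs-vswitchd restarts, instead of patching the agent itself) could be sketched roughly as below. The systemd unit name, the use of pgrep, and the polling interval are assumptions for illustration, not something agreed in this discussion:

```python
import subprocess
import time

AGENT_UNIT = "neutron-openvswitch-agent"  # assumed unit name; adjust per deployment


def oldest_pid(process_name):
    """Return the PID of the oldest process with this name, or None."""
    try:
        out = subprocess.run(["pgrep", "-o", process_name],
                             capture_output=True, text=True, check=True)
        return int(out.stdout.split()[0])
    except (subprocess.CalledProcessError, IndexError, ValueError):
        return None


def vswitchd_restarted(prev_pid, cur_pid):
    """A PID change (or the process disappearing) means ovs-vswitchd restarted."""
    return prev_pid is not None and cur_pid != prev_pid


def watch(interval=10):
    prev = oldest_pid("ovs-vswitchd")
    while True:
        time.sleep(interval)
        cur = oldest_pid("ovs-vswitchd")
        if vswitchd_restarted(prev, cur):
            # Restart the agent so it does a full resync instead of relying
            # on partial flow recovery (the behavior debated in this RFE).
            subprocess.run(["systemctl", "restart", AGENT_UNIT], check=False)
        prev = cur
```

As noted in the meeting, something like this belongs in an ops tooling repo (e.g. osops), not in the agent.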
14:28 <ralonsoh> ok, next RFE: [RFE] L3 Agent Readiness Status for HA Routers
14:28 <s3rj1k> ralonsoh: ok, if I have more details I'll add them to this RFE
14:28 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2086794
14:28 <ralonsoh> we had a similar bug https://bugs.launchpad.net/neutron/+bug/2011422
14:28 <ralonsoh> that was also discussed in a PTG (I don't remember which one)
14:28 <s3rj1k> it can also be discussed together with 2086799
14:28 <lajoskatona> I have a general comment for this and the next one ([RFE] OVS Agent Synchronization Status for Container Environments )
14:29 <ralonsoh> first let's wait for the RFE proposal
14:29 <ralonsoh> s3rj1k, please, go on
14:29 <lajoskatona> Are these about monitoring our processes / agents etc.? Nova discussed something similar: https://etherpad.opendev.org/p/nova-2025.1-ptg#L255 perhaps worth checking before reinventing it
14:29 <s3rj1k> both RFEs are about extending the data available to k8s probes
14:30 <greatgatsby_> it sounds very similar.  We've only just identified the trigger, and it not being 100% reproducible, it takes me a couple of days to get back to a fresh environment.  In my next test I will log the flows through the whole upgrade.  For now, we've just identified that it's caused by the 2 OVS container restarts; the neutron agent doesn't recover properly somehow, and connectivity is lost to our VMs until minutes later
14:30 <greatgatsby_> when the agent is restarted as part of the neutron role in kolla-ansible
14:30 <s3rj1k> so that pod management can do monitoring and life-cycling of containerized agents
14:31 <greatgatsby_> we're using DVR and VLANs, if that matters
14:31 <ralonsoh> s3rj1k, but what is the information required?
14:31 <ralonsoh> what exactly do you need to monitor?
14:31 <slaweq> I think it is generally a good idea for all the agents to report in some file what they are doing, like "full_sync in progress", "sync finished", etc.
14:31 <s3rj1k> the OVS agent's synchronization status (started/ready/dead states)
14:32 <ralonsoh> slaweq, why a file? we have the Neutron API and heartbeats
14:32 <slaweq> as sometimes such a full sync, e.g. after an agent start, can indeed take a very long time
14:32 <s3rj1k> and for the second one, the HA router status
14:32 <slaweq> ralonsoh IIUC it's about checking it locally
14:32 <slaweq> by e.g. a readiness probe in k8s
14:32 <s3rj1k> I can extend both RFEs with details if this sounds good in general
14:32 <lajoskatona> agree with the neutron API, we already have some healthcheck API, that should be extended
14:33 <greatgatsby_> we were surprised that both OVS container restarts each trigger flow re-creation in rapid succession
14:33 <lajoskatona> Nova is also moving in that direction, see the spec: https://specs.openstack.org/openstack/nova-specs/specs/2024.2/approved/per-process-healthchecks.html
14:33 <greatgatsby_> https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/openvswitch/handlers/main.yml
14:33 <ralonsoh> we have a fantastic Neutron API that can be fed with anything we want
14:33 <slaweq> and for such a purpose, reporting this to the neutron db and checking each agent through the api will not scale at all
14:33 <ralonsoh> lajoskatona, exactly: this is a matter of improving the agent information mechanism
14:34 <s3rj1k> for k8s probes it is best to use something local to the pod
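[editor's note] The local check s3rj1k describes would typically be wired up as an exec readiness probe on the agent container. A minimal sketch, where the state-file path, the "operational" value and the timings are all assumptions, not anything defined in neutron today:

```yaml
# Illustrative k8s pod spec fragment only.
containers:
  - name: neutron-ovs-agent
    readinessProbe:
      exec:
        command:
          - /bin/sh
          - -c
          - grep -q operational /var/lib/neutron/ovs_agent_state
      initialDelaySeconds: 30
      periodSeconds: 10
```

This is exactly why a local file matters here: an exec probe that had to query the Neutron API on every period would have the scaling problem slaweq mentions below.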
14:34 <slaweq> we can send such info in the heartbeat too, but saving it in a file locally shouldn't hurt anyone and can help in some cases
14:35 <slaweq> but we should probably have some standardized list of possible states and reuse them in all agents
14:35 <ralonsoh> I'm against storing local info if we don't push that to the API too
14:35 <s3rj1k> probes in general tend to run frequently, so anything network related can have a perf impact on the cluster; better to have local data
14:35 <slaweq> so that it's not the case that each agent reports something slightly different
14:36 <ralonsoh> slaweq, yes, that was the PTG proposal, having a set of states per agent, that could be configurable
14:36 <ralonsoh> DHCP: network status (provisioning), L3: router status, etc
14:36 <ralonsoh> that could be defined in each agent independently
14:37 <ralonsoh> but again, this info should go to the API. If we want to store it locally as an option, perfect
14:37 <mlavalle> like a state machine
14:37 <ralonsoh> mlavalle, yes, and not only a global one per agent, but also per resource (network, router, etc)
14:37 <slaweq> do we need them per agent? shouldn't just a few standard ones be enough? Like e.g. "sync in progress", "operational" or something like that
14:37 <s3rj1k> ralonsoh: works for my case if both local and API
14:38 <slaweq> what else would be needed there?
14:38 <ralonsoh> slaweq, do you mean having standard states?
14:38 <ralonsoh> regardless of the resource/agent
14:38 <slaweq> ralonsoh yes, IMHO it can work that way
14:38 <slaweq> but maybe I am missing something
14:38 <ralonsoh> I think that makes sense
14:39 <ralonsoh> slaweq, we can always add new states (that don't need to be used by all resources/agents)
14:39 <slaweq> true
14:39 <mlavalle> maybe there is a common set of states and then each agent has a few of its own
14:39 <ralonsoh> but yes, initially I would propose a set of states that could match the state of the common agents
14:39 <slaweq> but that way we can have them defined in neutron-lib so that everyone will be able to rely on them pretty easily
14:39 <ralonsoh> yes to both
14:39 <lajoskatona> so let's have some framework now with basic common information locally and in the API?
14:40 <slaweq> on the api it is very easy, as you can add whatever you want to the dict sent in the heartbeat IIRC
14:40 <ralonsoh> yes, some information that will be provided to the API and, via config, stored locally if needed
14:40 <slaweq> so yes, I would say - have them in both places - api and locally
14:41 <ralonsoh> I'm ok with this proposal, but of course we need a spec for sure
14:41 <slaweq> locally it would be similar to the ha state of the router stored in the file
14:41 <ralonsoh> ^ yes
14:41 <lajoskatona> +1
14:41 <slaweq> yes, a spec would be good. It doesn't need to be long, but we should define there what states we want to have
14:42 <ralonsoh> so +1 with a spec
14:42 <obondarev> +1
14:42 <slaweq> +1 from me
14:42 <mlavalle> +1
14:42 <s3rj1k> should we start proposing states in RFE comments?
14:42 <lajoskatona> +1
14:42 <ralonsoh> s3rj1k, it would be better to describe it in the spec
14:42 <mlavalle> s3rj1k: write a short spec
14:43 <s3rj1k> ok, will do
14:43 <lajoskatona> Like these: https://opendev.org/openstack/neutron-specs/src/branch/master/specs/2024.2 but under the 2025.1 folder
14:43 <mlavalle> s3rj1k: do you know how to propose a spec?
14:43 <ralonsoh> s3rj1k, are you going to combine the RFEs into one single spec?
14:43 <s3rj1k> I'll walk through the docs; in case of questions I'll just ask them here on a regular day, so no prob with that, thanks
14:44 <s3rj1k> > in one single spec? - that can make sense, yes
14:44 <ralonsoh> perfect, I'll comment that in the LP bugs after this meeting
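[editor's note] The locally stored agent state agreed on above (analogous to the L3 HA state file slaweq mentions) would need to be written atomically so a probe never reads a half-written value. A sketch, where the state names and the helper names are assumptions pending the spec:

```python
import os
import tempfile

# Assumed standardized states per the discussion; not an existing neutron-lib list.
STATES = ("starting", "sync_in_progress", "operational", "dead")


def write_state(path, state):
    """Atomically write the agent state so readers never see a partial file."""
    if state not in STATES:
        raise ValueError("unknown state: %s" % state)
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(state + "\n")
        os.replace(tmp, path)  # atomic rename on POSIX
    except Exception:
        os.unlink(tmp)
        raise


def is_ready(path):
    """The check a local readiness probe would perform."""
    try:
        with open(path) as f:
            return f.read().strip() == "operational"
    except OSError:
        return False
```

The same state value could also ride along in the heartbeat dict sent to the Neutron API, which is the "both places" outcome the meeting converged on.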
14:45 <ralonsoh> so we can skip the 3rd one and go for the last one
14:45 <ralonsoh> [RFE] Add OVN Ganesha Agent to setup path for VM connectivity to NFS-Ganesha
14:45 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2087541
14:45 <ralonsoh> Is Amir here now?
14:45 <amnik> Yes, I'm here
14:46 <ralonsoh> perfect, can you explain it a bit?
14:46 <amnik> sure
14:46 <amnik> According to the Manila documentation, there is a need for network connectivity between the NFS-Ganesha service and the client mounting the Manila share.
14:46 <amnik> Currently, no existing solution addresses this requirement. This RFE proposes to solve it using Neutron and OVN.
14:47 <slaweq> first thing from me - instead of a new agent we can probably do this as an extension to the neutron-ovn-agent
14:47 <ralonsoh> I couldn't agree more (note: during this release I need to make the OVN agent the default one...)
14:48 <mlavalle> +1
14:48 <ralonsoh> we already have a generic agent for OVN, with a pluggable interface
14:48 <mlavalle> that was the whole point of it
14:48 <ralonsoh> it's just a matter of implementing the needed plugin and enabling it
14:49 <ralonsoh> amnik, what is actually needed? what does the agent need to do?
14:49 <ralonsoh> what will trigger these actions?
14:49 <amnik> this solution is inspired by the ovn metadata agent
14:50 <amnik> it detects ports for ganesha connectivity and plugs them on the compute nodes
14:50 <amnik> these ports are distributed localports, like the metadata port
14:51 <slaweq> I don't know about Ganesha at all, can you maybe briefly explain what this port is needed for on the compute nodes, how vms will use it, etc.?
14:52 <amnik> ganesha mediates traffic to CephFS. You can think of it like an NFS server between the vms and CephFS in the ceph cluster
14:53 <amnik> after we plug the port on the compute nodes we add some iptables rules to dnat traffic to ganesha
14:54 <amnik> private_ip_port:2049 -> ganesha_ip:2049
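[editor's note] The DNAT step amnik describes could look roughly like the following. The chain, the addresses and the helper names are illustrative assumptions (and a real implementation would go through neutron's own iptables management, inside the port's namespace, rather than shelling out directly):

```python
import subprocess

NFS_PORT = 2049  # NFS over TCP, per the example in the discussion


def build_dnat_rule(private_ip, ganesha_ip, port=NFS_PORT):
    """Build an iptables DNAT rule redirecting NFS traffic addressed to the
    local ganesha port's IP toward the real NFS-Ganesha endpoint."""
    return [
        "iptables", "-t", "nat", "-A", "PREROUTING",
        "-d", private_ip, "-p", "tcp", "--dport", str(port),
        "-j", "DNAT", "--to-destination", "%s:%d" % (ganesha_ip, port),
    ]


def apply_rule(rule):
    # In a deployment this would run in the relevant namespace,
    # e.g. prefixed with "ip netns exec <ns>".
    subprocess.run(rule, check=True)
```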
14:54 <ralonsoh> is this port a VM port?
14:54 <slaweq> and is it one virtual port per network? So all vms connected to that network will use the same port?
14:55 <amnik> It's like the metadata port. Not bound to any chassis.
14:55 <ralonsoh> but who is creating this port?
14:55 <ralonsoh> is it in the OVN database?
14:55 <slaweq> and another question: is ganesha_ip from the same private network or not?
14:57 <ralonsoh> apart from the technical questions, I'm a bit reluctant to add something so specific to the Neutron repository
14:57 <amnik> slaweq: I think it is better to create a port for each share that vms want to connect to. So we can manage it per share (storage on cephFS)
14:58 <amnik> ralonsoh: the port will be created via the API and after that the ganesha agent detects it
14:59 <amnik> ralonsoh: yes, in the OVN database; the agent can detect it with an event.
14:59 <ralonsoh> how? this port won't be bound to a chassis
14:59 <ralonsoh> we use the SB and the local OVS database
14:59 <ralonsoh> same as an ovn-controller
15:00 <slaweq> could the neutron-ovn-agent extension which would be responsible for this actually live in Manila and just be loaded by neutron-ovn-agent if needed? We can accept a new device type if needed, so that you can create a port with this new device type in the neutron api, but the other things may actually be in Manila, as it seems that they are the team of experts on this topic, not us
15:00 <amnik> slaweq: No, ganesha is a service that is deployed, for example, on our controller servers, and it has an IP from the network of that server.
15:02 <amnik> ralonsoh: the agent can detect it by a device_id convention like ovnganesha- and the port type in the Port_Binding table
15:02 <ralonsoh> slaweq, yes, that could work
15:02 <ralonsoh> amnik, you said this port is not bound
15:03 <ralonsoh> who is physically creating this port?
15:03 <slaweq> if that would have to land in the neutron repository, we would probably need a spec, but I agree with ralonsoh that this may be a bit outside our expertise here, so maybe it would be better to keep as much as possible in manila and just add the necessary bits in neutron (neutron-lib) to handle that new type of ports correctly
15:04 <ralonsoh> agree with this
15:04 <amnik> ralonsoh: This is a separate API call; the user creates the port (openstack port create ...)
15:04 <ralonsoh> amnik, no
15:04 <ralonsoh> who is creating the layer one port
15:04 <ralonsoh> ?
15:05 <ralonsoh> anyway, we are running out of time
15:05 <ralonsoh> we can continue after this meeting
15:05 <ralonsoh> what is the recommendation for this RFE?
15:05 <ralonsoh> I agree with slaweq's comment
15:06 <mlavalle> me too
15:06 <slaweq> I would like to see a detailed spec with an exact description of how this is going to work in both the API and the backend
15:06 <amnik> ralonsoh: This is a responsibility of the ganesha agent. The user creates the port; the agent will plug it on the compute node with veth devices, like the metadata port
15:07 <ralonsoh> but for now this RFE is not approved, right?
15:07 <slaweq> Then we can discuss there what has to be in neutron and maybe what can be somewhere else
15:07 <obondarev> I think the RFE needs to be updated at least; a new agent seems like overkill
15:07 <slaweq> ralonsoh IMO not yet, it's too early
15:07 <ralonsoh> ok, I'll comment that in the LP bug
15:07 <ralonsoh> I'm closing this meeting now
15:07 <ralonsoh> #endmeeting
15:07 <opendevmeet> Meeting ended Fri Nov 15 15:07:54 2024 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:07 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/neutron_drivers/2024/neutron_drivers.2024-11-15-14.04.html
15:07 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/neutron_drivers/2024/neutron_drivers.2024-11-15-14.04.txt
15:07 <opendevmeet> Log:            https://meetings.opendev.org/meetings/neutron_drivers/2024/neutron_drivers.2024-11-15-14.04.log.html
15:08 <ralonsoh> thanks for attending
15:08 <lajoskatona1> Bye
15:08 <slaweq> o/
15:08 <s3rj1k> thanks, bye
15:08 <amnik> bye
15:08 <ralonsoh> amnik, what I'm saying is that the agents read from the SB and the local OVS DB
15:08 <ralonsoh> if you don't bind the port, who is adding it? the OVN agent cannot
15:09 <amnik> The agent reads from the OVN DB. The API call that the user makes adds it to the database.
15:10 <ralonsoh> amnik, there are two DBs: NB and SB
15:11 <ralonsoh> so you are expecting the OVN agent to create the port
15:11 <amnik> ralonsoh: yes, from the Port_Binding table in the NB. https://man7.org/linux/man-pages/man5/ovn-nb.5.html
15:12 <ralonsoh> amnik, Port_Binding is in the SB
15:14 <amnik> oh, yes, sorry, you're right. https://man7.org/linux/man-pages/man5/ovn-sb.5.html#Port_Binding_TABLE here
15:14 <ralonsoh> but you won't have an event if the port is not created
15:14 <ralonsoh> again, you are expecting the OVN agent to create the port
15:14 <ralonsoh> Neutron doesn't create ports (there are a few exceptions with trunk ports in OVS)
15:15 <ralonsoh> I mean, Neutron doesn't create L1 ports
15:16 <amnik> ralonsoh: I tested this solution on a staging environment and implemented most of its logic, and it works
15:16 <amnik> openstack port create --network {} --device ovnganesha-d60f1efd-4791-419e-8c0d-93dc209fad49  --device-owner network:distributed manila
15:17 <amnik> after the above command I can get the event and plug the port on the appropriate compute nodes.
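[editor's note] The device_id convention amnik relies on (an ovnganesha- prefix plus the network:distributed device owner, as in the port-create command above) could be matched by the agent with a small helper like this; the constant and function names are only illustrative:

```python
# Naming convention from the RFE discussion, not an official neutron constant.
GANESHA_DEVICE_PREFIX = "ovnganesha-"
DISTRIBUTED_OWNER = "network:distributed"


def is_ganesha_port(device_id, device_owner):
    """Decide whether a port (as seen e.g. in an OVN Port_Binding event)
    follows the ovnganesha- convention for these distributed localports."""
    return ((device_id or "").startswith(GANESHA_DEVICE_PREFIX)
            and device_owner == DISTRIBUTED_OWNER)
```

A filter like this is what would let a generic neutron-ovn-agent plugin react only to its own ports while ignoring every other Port_Binding event.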
15:18 <ralonsoh> amnik, again, the proposal is to use the OVN agent but implement the code in your repository. You need to make a plugin for the OVN agent
15:20 <amnik> ralonsoh: Yes, I agree to implement it as an OVN agent plugin.
15:20 <amnik> I didn't have time to say it :)
15:23 <vprokofev> ralonsoh, could you please advise on this RFE? https://bugs.launchpad.net/neutron/+bug/2083214 it's approved, I've pushed the spec, what would the next step be?
15:28 <ralonsoh> vprokofev, hi, now we need to review it. I'll add it to the Neutron meeting agenda on Tuesdays, to highlight it
15:28 <ralonsoh> I'll also add other Neutron cores to the review
15:29 <vprokofev> Thank you
15:36 <opendevreview> Rodolfo Alonso proposed openstack/neutron master: Bump os-ken version to 2.11.1  https://review.opendev.org/c/openstack/neutron/+/935364
16:12 <ralonsoh> @all: the CI is failing since os-ken 2.11.0
16:12 <ralonsoh> https://bugs.launchpad.net/neutron/+bug/2088285
16:17 <opendevreview> Rodolfo Alonso proposed openstack/neutron-tempest-plugin master: Limit temporarily the os-ken version to <2.11.0  https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/935366
16:26 <opendevreview> Lajos Katona proposed openstack/neutron-tempest-plugin master: Change references from Quagga to FRR  https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/935368
16:26 <ralonsoh> lajoskatona1, hey
16:27 <ralonsoh> I've pushed and fast-approved https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/935366
16:27 <ralonsoh> I'll stop it if you are fixing it
16:27 <lajoskatona1> ralonsoh: not sure if this is enough: https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/935368
16:27 <ralonsoh> if not, I'
16:28 <ralonsoh> tomorrow I'll send the patch capping the version
16:28 <ralonsoh> just to fix the CI temporarily
16:28 <ralonsoh> I need to leave now, I'll catch up later today
16:28 <frickler> ralonsoh: what about 2.11.1, will that work? I just approved https://review.opendev.org/c/openstack/requirements/+/935360
16:29 <ralonsoh> frickler, no
16:29 <lajoskatona1> ralonsoh: ack, I also have to leave soon
16:29 <ralonsoh> that is only a fix for the docker image build
16:30 <frickler> looks like you cannot easily override the reqs in neutron anyway, so a revert of the upper-constraints bump will be needed, then
16:30 <ralonsoh> no no, no need for now
16:30 <ralonsoh> we can cap it in n-t-p and fix it later
16:30 <frickler> you can't, zuul is very -1 on that
16:31 <ralonsoh> ok, because that is incorrect
16:31 <ralonsoh> I'll push a new one or a fix
16:31 <ralonsoh> but not now, in some hours
16:31 <ralonsoh> the CI can wait a bit on a friday afternoon
16:31 <ralonsoh> I need to leave now, see you later
16:32 <frickler> ok then, ping me if you still need anything on the reqs side
16:32 <ralonsoh> thanks!
16:38 <lajoskatona1> frickler: https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/935368 I suspect this will fail at least on the stable jobs as tempest is branchless, so perhaps we really need some magic on the reqs :-)
16:42 <opendevreview> Ihar Hrachyshka proposed openstack/neutron master: tests: Remove skip_if_ovs_older_than decorator from a test case  https://review.opendev.org/c/openstack/neutron/+/935250
16:51 <frickler> lajoskatona1: essentially you would revert https://review.opendev.org/c/openstack/requirements/+/935082 and denylist 2.11.0 and 2.11.1
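[editor's note] What frickler describes (reverting the upper-constraints bump and denylisting the broken releases) would amount to requirement-specifier changes along these lines; the files are real openstack/requirements files, but the pinned-back version is an assumption for illustration:

```
# upper-constraints.txt: pin back to the last release before the breakage
os-ken===2.10.0

# global-requirements.txt: exclude the known-bad releases
# (on top of whatever minimum version already applies)
os-ken!=2.11.0,!=2.11.1
```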
17:07 <frickler> lajoskatona1: (ralonsoh offline): tkajinam also noticed n-d-r failures caused by os-ken, not sure if these have the same root cause or would need yet another fix https://bugs.launchpad.net/neutron/+bug/2088279
17:43 <opendevreview> Ihar Hrachyshka proposed openstack/neutron master: Enable no-else-return pylint check  https://review.opendev.org/c/openstack/neutron/+/934205
17:57 <lajoskatona1> frickler: yes, the same as I see, per https://bugs.launchpad.net/neutron/+bug/2088285
18:03 <opendevreview> Ihar Hrachyshka proposed openstack/neutron master: functional: Don't assume privsep-helper is installed in virtualenv  https://review.opendev.org/c/openstack/neutron/+/935251
20:54 <opendevreview> Gaudenz Steinlin proposed openstack/neutron master: Add conntrackd support to HA routers in L3 agent  https://review.opendev.org/c/openstack/neutron/+/917430

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!