gmann | slaweq: you were right. only nova and placement has the admin role assigned to service user and neutron is all good. I am adding a var in devstack to not assign admin role to nova/placement user also https://review.opendev.org/c/openstack/devstack/+/923013 | 01:59 |
gmann | slaweq: also adding a experimental job to 'know where all we need change' and 'test the policy change' https://review.opendev.org/c/openstack/tempest/+/923014 | 02:00 |
gmann | slaweq: server external events test should fail in that job where neutron should be using 'nova' service user (without admin role) https://github.com/openstack/tempest/blob/master/tempest/api/compute/admin/test_server_external_events.py | 02:06 |
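(A minimal sketch, assuming an openstacksdk environment with a `devstack` entry in clouds.yaml, of how one could verify which roles a service user actually holds; with the devstack change above applied, 'admin' should no longer be listed for the `nova` user.)

```python
# Hedged sketch: list the roles assigned to the 'nova' service user.
# The cloud name 'devstack' is an assumption about the local clouds.yaml.
import openstack

conn = openstack.connect(cloud='devstack')
user = conn.identity.find_user('nova')
for assignment in conn.identity.role_assignments(user_id=user.id):
    role = conn.identity.get_role(assignment.role['id'])
    print(role.name)  # expected after the change: 'service', not 'admin'
```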
opendevreview | Lajos Katona proposed openstack/os-ken unmaintained/xena: Drop lower-constraints.txt and its testing https://review.opendev.org/c/openstack/os-ken/+/923023 | 07:11 |
opendevreview | Lajos Katona proposed openstack/os-ken unmaintained/xena: Raise ValueError in case unpack_from returns zero length https://review.opendev.org/c/openstack/os-ken/+/922814 | 07:12 |
opendevreview | Lajos Katona proposed openstack/os-ken master: Add periodic weekly job for os-ken https://review.opendev.org/c/openstack/os-ken/+/915273 | 07:33 |
opendevreview | Rodolfo Alonso proposed openstack/neutron master: DNM == WIP == Testing patch for LP#2071426 https://review.opendev.org/c/openstack/neutron/+/923026 | 07:43 |
opendevreview | Maximilian Sesterhenn proposed openstack/networking-bgpvpn master: WIP: Add OVN-based Neutron BGPVPN driver https://review.opendev.org/c/openstack/networking-bgpvpn/+/883060 | 08:43 |
opendevreview | Merged openstack/os-ken unmaintained/xena: Drop lower-constraints.txt and its testing https://review.opendev.org/c/openstack/os-ken/+/923023 | 09:31 |
opendevreview | Bence Romsics proposed openstack/neutron master: Remove taps in trunk bridges from the dead vlan https://review.opendev.org/c/openstack/neutron/+/923035 | 09:44 |
opendevreview | Bence Romsics proposed openstack/os-vif master: Handle ports in the dead vlan as neutron does https://review.opendev.org/c/openstack/os-vif/+/923036 | 09:45 |
opendevreview | liuyulong proposed openstack/neutron master: Always get local vlan from port other_config https://review.opendev.org/c/openstack/neutron/+/923040 | 10:09 |
opendevreview | liuyulong proposed openstack/neutron master: Always get local vlan from port other_config https://review.opendev.org/c/openstack/neutron/+/923040 | 10:10 |
opendevreview | Merged openstack/neutron-lib unmaintained/yoga: Update .gitreview for unmaintained/yoga https://review.opendev.org/c/openstack/neutron-lib/+/907859 | 10:15 |
opendevreview | Merged openstack/os-ken master: Add periodic weekly job for os-ken https://review.opendev.org/c/openstack/os-ken/+/915273 | 10:15 |
*** whoami-rajat_ is now known as whoami-rajat | 11:25 |
opendevreview | Lajos Katona proposed openstack/os-ken unmaintained/wallaby: Drop lower-constraints.txt and its testing https://review.opendev.org/c/openstack/os-ken/+/923046 | 12:22 |
opendevreview | Lajos Katona proposed openstack/os-ken unmaintained/victoria: Drop lower-constraints.txt and its testing https://review.opendev.org/c/openstack/os-ken/+/923047 | 12:26 |
opendevreview | Rodolfo Alonso proposed openstack/neutron master: Add the port "fixed_ips" information in the DHCP RPC https://review.opendev.org/c/openstack/neutron/+/923026 | 12:30 |
ralonsoh | lajoskatona, slaweq ykarel ^^ please check this patch. This is solving an issue in the DHCP agent when processing the port events | 12:30 |
ralonsoh | as reported in the LP bug, I found 127,000 occurrences of this error in the last year | 12:30 |
ralonsoh | that could be causing instability in the DHCP agents and port processing | 12:31 |
lajoskatona | ralonsoh: checking | 12:31 |
ralonsoh | in any case, I still need to check the DHCP agent logs after the execution | 12:31 |
ralonsoh | but most probably we won't find this exception anymore | 12:32 |
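(A hedged illustration of the problem being fixed; the exact RPC message shape below is an assumption based on the discussion, not code from the patch.)

```python
# Before the fix, the port event delivered to the DHCP agent could lack
# 'fixed_ips', forcing extra lookups (or hitting the exception the bug
# reports); the patch adds it so the event is self-contained.
port_event_before = {
    'port_id': 'PORT_UUID',
    'network_id': 'NETWORK_UUID',
}

port_event_after = {
    'port_id': 'PORT_UUID',
    'network_id': 'NETWORK_UUID',
    'fixed_ips': [{'subnet_id': 'SUBNET_UUID', 'ip_address': '10.0.0.5'}],
}
```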
opendevreview | Rodolfo Alonso proposed openstack/neutron master: WIP == Reduce the DHCP processing loop to a single thread https://review.opendev.org/c/openstack/neutron/+/922719 | 12:33 |
opendevreview | Elod Illes proposed openstack/neutron-vpnaas stable/2024.1: Remove reference to devstack-gate https://review.opendev.org/c/openstack/neutron-vpnaas/+/923049 | 12:50 |
opendevreview | Elod Illes proposed openstack/neutron-vpnaas stable/2023.2: Remove reference to devstack-gate https://review.opendev.org/c/openstack/neutron-vpnaas/+/923050 | 12:51 |
opendevreview | Elod Illes proposed openstack/neutron-vpnaas stable/2023.1: Remove reference to devstack-gate https://review.opendev.org/c/openstack/neutron-vpnaas/+/923051 | 12:51 |
ykarel | ralonsoh, hmm dropped 3 years back as part of https://github.com/openstack/neutron/commit/4ab699e5cd0c4c552012d694d449b0b2e474013e#diff-98ffe18904b43cb18ee90f2ac4d8705d8c3bf8156028dc3c8c2d72e131441b8dR287 | 12:59 |
ralonsoh | ykarel, yes, correct | 12:59 |
ralonsoh | affecting since Xena, that is a long time ago | 12:59 |
ralonsoh | the patch CI is now running, I'll check after if we have any occurrence | 13:00 |
ralonsoh | btw, all DHCP agent logs in the CI have this exception | 13:00 |
ykarel | yeap logsource records last 14 days of logs | 13:01 |
ykarel | so those occurrences are all from within those days | 13:01 |
slaweq | gmann thx for confirmation, I was testing that locally a few days ago and yes, external events notifications are failing due to that, and this causes failures of all tests which need to spawn a VM, as neutron can't really notify nova about port changes | 13:02 |
ykarel | ok you have those commits in commit message, i should have read that first :) | 13:03 |
ralonsoh | yes hehehe | 13:03 |
ralonsoh | there is an error in py jobs, but trivial (already fixed locally) | 13:04 |
ralonsoh | but I want the tempest jobs to finish | 13:04 |
ykarel | +1 | 13:05 |
slaweq | gmann once it is done on nova's side, we can test it again and open bugs for neutron if needed. For now I consider this as working fine on the neutron side at least. Thx for help | 13:09 |
sean-k-mooney | slaweq: once what is done on the nova side? | 13:46 |
sean-k-mooney | oh probably this https://review.opendev.org/c/openstack/devstack/+/923013 | 13:47 |
sean-k-mooney | and https://review.opendev.org/c/openstack/tempest/+/923014 | 13:47 |
slaweq | sean-k-mooney exactly | 13:57 |
haleyb | i'll start drivers meeting in a second | 14:00 |
haleyb | sorry, other meeting ran late | 14:01 |
haleyb | #startmeeting neutron_drivers | 14:01 |
opendevmeet | Meeting started Fri Jun 28 14:01:29 2024 UTC and is due to finish in 60 minutes. The chair is haleyb. Information about MeetBot at http://wiki.debian.org/MeetBot. | 14:01 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 14:01 |
opendevmeet | The meeting name has been set to 'neutron_drivers' | 14:01 |
ralonsoh | hello | 14:01 |
mlavalle | \o | 14:02 |
slaweq | o/ | 14:02 |
lajoskatona | o/ | 14:02 |
haleyb | i guess that's quorum at 5 | 14:03 |
haleyb | we had three items in the agenda. i did see an RFE bug come in yesterday but have not looked yet, if we have time can do that as well | 14:03 |
haleyb | #link https://bugs.launchpad.net/neutron/+bug/2070376 | 14:04 |
haleyb | ralonsoh: yours is first | 14:04 |
ralonsoh | thanks, in one shot, to speed up | 14:04 |
ralonsoh | Hi, I would like to talk about https://bugs.launchpad.net/neutron/+bug/2070376 | 14:04 |
ralonsoh | This is related to the community goal "eventlet-deprecation" | 14:04 |
ralonsoh | I started the investigation of how DHCP agent works and I found that we have several threads running at the same time. | 14:04 |
ralonsoh | 1 for the periodic resyncs, 1 for reading the RPC messages, 1 to notify the server about the port reservations | 14:04 |
ralonsoh | and multiple threads to process the RPC events and execute the resource updates | 14:04 |
ralonsoh | There we have several problems: if we switch to a preemptive threading module, we will no longer have the "protection" of cooperative threads and we'll need to add execution locks | 14:04 |
ralonsoh | But my main concern with these resource update threads is that we don't really gain any speed by having more than one thread | 14:04 |
ralonsoh | In https://review.opendev.org/c/openstack/neutron/+/626830 we implemented a way to process the ports using a priority queue | 14:04 |
ralonsoh | But along with this patch we introduced the multithreading processing, that doesn't add any speed gain | 14:04 |
ralonsoh | So my proposal is to keep the priority queue (that is of course valuable and needed) but remove the multithreading for the event processing | 14:04 |
ralonsoh | That will (1) not reduce the processing speed and (2) make the event processing more robust when changing to kernel threads | 14:04 |
ralonsoh | (I have a very simple patch testing that: https://review.opendev.org/c/openstack/neutron/+/922719/) | 14:05 |
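(A minimal sketch of the proposed shape, with illustrative names only: keep the priority ordering introduced by the 2018 patch, but drain the queue with a single worker so updates never race each other.)

```python
import itertools
import queue
import threading

events = queue.PriorityQueue()
_seq = itertools.count()  # tie-breaker so same-priority items stay FIFO

def enqueue(priority, update):
    # Illustrative: the agent's real queue assigns priorities per event type.
    events.put((priority, next(_seq), update))

def worker():
    # Single consumer: events are still handled in priority order, but only
    # one resource update runs at a time, so no locks are needed between
    # updates when moving off eventlet's cooperative threads.
    while True:
        _prio, _n, update = events.get()
        try:
            update()
        finally:
            events.task_done()

threading.Thread(target=worker, daemon=True).start()
enqueue(1, lambda: print('process port update'))
events.join()
```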
haleyb | and this removes the need for locking as well? | 14:06 |
haleyb | oh, you said that | 14:06 |
haleyb | big copy/paste | 14:06 |
ralonsoh | more or less, because we have the port reservations notification | 14:06 |
ralonsoh | but at least there will be only a single thread processing the events | 14:07 |
haleyb | i guess i always thought the multiple threads helped when configuring, like creating namespaces, when lots were involved, but you didn't see that? | 14:08 |
ralonsoh | we can use the rally test to validate that | 14:08 |
ralonsoh | but I don't see speed gain with multiple threads | 14:09 |
ralonsoh | rally CI, I mean | 14:09 |
slaweq | this multi threads could be most useful when e.g. agent was restarted | 14:09 |
ralonsoh | but how? | 14:09 |
slaweq | and have to go over many networks and ports | 14:09 |
ralonsoh | there won't be multiple threads working at the same time | 14:09 |
ralonsoh | this is python | 14:09 |
slaweq | right | 14:10 |
slaweq | I was more refering to what haleyb wrote | 14:10 |
ralonsoh | to improve that (and I think that was commented before) we would need a multiprocess DHCP agent | 14:10 |
haleyb | right, i think it was always restart when there were issues, like an hour wait, although the l3-agent was always worse | 14:10 |
lajoskatona | but for that we need locks | 14:10 |
mlavalle | which is ralonsoh's point | 14:11 |
ralonsoh | and, btw, we use ctypes.PyDLL | 14:11 |
ralonsoh | # NOTE(ralonsoh): from https://docs.python.org/3.6/library/ | 14:12 |
ralonsoh | # ctypes.html#ctypes.PyDLL: "Instances of this class behave like CDLL | 14:12 |
ralonsoh | # instances, except that the Python GIL is not released during the | 14:12 |
ralonsoh | # function call, and after the function execution the Python error | 14:12 |
ralonsoh | # flag is checked." | 14:12 |
ralonsoh | so the GIL will be attached to this thread | 14:12 |
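(For reference, the distinction that comment describes, as a runnable sketch; the `libc.so.6` path assumes Linux/glibc.)

```python
import ctypes

# CDLL releases the GIL for the duration of each foreign call, letting
# other Python threads run while the C function executes.
libc = ctypes.CDLL('libc.so.6')

# PyDLL keeps the GIL held for the whole call: the calling thread blocks
# all other Python threads until the C function returns, which is why the
# GIL stays "attached" to the thread making the call.
libc_gil = ctypes.PyDLL('libc.so.6')

print(libc.getpid(), libc_gil.getpid())
```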
obondarev_ | late o/ | 14:12 |
*** obondarev_ is now known as obondarev | 14:13 |
ralonsoh | I can, of course, do some load testing with multiple VMs on a node and restarting the DHCP agent | 14:13 |
ralonsoh | with different DHCP_PROCESS_GREENLET_MIN/DHCP_PROCESS_GREENLET_MAX values | 14:13 |
slaweq | that would be good test IMHO | 14:14 |
mlavalle | agree | 14:14 |
ralonsoh | perfect, I'll spawn a single compute node with tons of RAM and I'll try to spawn as many VMs as possible | 14:14 |
ralonsoh | and then restart the agent with different thread values | 14:14 |
ralonsoh | I'll update the LP bug | 14:14 |
slaweq | I don't think You really need vms for that | 14:14 |
slaweq | probably creating networks and ports would be enough | 14:15 |
ralonsoh | just ports in the network | 14:15 |
ralonsoh | right | 14:15 |
slaweq | maybe https://github.com/slawqo/neutron-heater can help with that too | 14:15 |
ralonsoh | ahh yes | 14:15 |
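(In that spirit, a rough sketch of generating DHCP load with plain openstacksdk, along the lines of what neutron-heater automates; the cloud name, CIDR, and port count are arbitrary assumptions.)

```python
import openstack

conn = openstack.connect(cloud='devstack')  # assumed clouds.yaml entry

# One network with a DHCP-enabled subnet (DHCP is on by default) and many
# ports: each port creation produces an event for the DHCP agent, and
# restarting the agent afterwards exercises the full resync path.
net = conn.network.create_network(name='dhcp-load-test')
conn.network.create_subnet(
    network_id=net.id, ip_version=4, cidr='10.42.0.0/16',
    name='dhcp-load-test-subnet')

for i in range(500):  # arbitrary load figure
    conn.network.create_port(network_id=net.id, name=f'dhcp-load-port-{i}')
```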
ralonsoh | so let's wait for my feedback on this, but please consider the problem we have ahead with the eventlet deprecation | 14:16 |
ralonsoh | (that's all from my side, thanks a lot) | 14:17 |
lajoskatona | thanks ralonsoh | 14:17 |
mlavalle | yes, thanks ralonsoh | 14:18 |
haleyb | ralonsoh: thanks, will keep a lookout on the patches | 14:18 |
haleyb | and i guess we don't need to vote as it's not an rfe, but i agree with doing this work | 14:19 |
seba | we've been having many problems with eventlet, especially when running neutron-api with uwsgi, so I'm looking forward to this | 14:20 |
haleyb | slaweq: yours is next | 14:21 |
haleyb | #link https://bugs.launchpad.net/neutron/+bug/2060916 | 14:21 |
slaweq | thx | 14:22 |
slaweq | I recently wanted to finally start working on this | 14:22 |
slaweq | it came up when we introduced 'service' role policies | 14:22 |
slaweq | as it seems that with those new policies trusted_vif can't be set through the 'binding_profile' attribute | 14:23 |
slaweq | so this is a pretty small and easy RFE where a new api extension would be proposed, adding a new attribute to the port | 14:24 |
slaweq | this field would then be set by neutron in the binding_profile to be sent e.g. to nova | 14:24 |
slaweq | as it is now | 14:24 |
slaweq | so that other components would not require changes | 14:24 |
slaweq | and binding_profile would be used (more) as it should be used, i.e. for machine-to-machine communication | 14:25 |
slaweq | that's all from me | 14:25 |
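(A hedged sketch of the kind of API extension being proposed; the alias, attribute name, and defaults below are placeholders, not the final RFE design.)

```python
# Hypothetical extension definition in the style of neutron-lib's
# api-definitions; the real names will come out of the RFE work.
from neutron_lib.api import converters

ALIAS = 'port-trusted-vif'   # placeholder alias
TRUSTED = 'trusted'          # placeholder attribute name

RESOURCE_ATTRIBUTE_MAP = {
    'ports': {
        TRUSTED: {
            'allow_post': True,
            'allow_put': True,
            'default': None,
            'convert_to': converters.convert_to_boolean_if_not_none,
            'is_visible': True,
            # Restricted by policy; neutron would mirror the value into
            # binding:profile so consumers such as nova need no changes.
            'enforce_policy': True,
        }
    }
}
```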
ralonsoh | +1 to decouple Neutron configurations parameters written in port.binding_profile, as done before with others | 14:25 |
obondarev | +1, sounds reasonable | 14:26 |
lajoskatona | +1 | 14:26 |
mlavalle | +1 | 14:26 |
haleyb | +1 from me | 14:27 |
slaweq | thank You, so I assume that RFE is approved and I can start work on it now, right? | 14:28 |
mlavalle | yes | 14:28 |
mlavalle | fire away | 14:28 |
haleyb | yes, i will mark it approved, don't think you need a spec as the bug is pretty clear | 14:28 |
slaweq | thank You, that's all from me then :) | 14:28 |
slaweq | haleyb exactly, that's why I didn't propose any spec until now as I was hoping it would not be needed :) | 14:29 |
haleyb | the next one was added by me (and mlavalle :) | 14:29 |
haleyb | #link https://bugs.launchpad.net/neutron/+bug/2067183 | 14:29 |
haleyb | #link https://review.opendev.org/c/openstack/neutron/+/920459 | 14:30 |
haleyb | I added because we have broken things when tweaking dns_domain in the past | 14:31 |
slaweq | again DNS :) | 14:31 |
mlavalle | we have gotten this request and even implemented it in the past (Assaf implemented it) and then we reversed it | 14:31 |
slaweq | we broke it so many times that I can't even count them :P | 14:31 |
mlavalle | so I think there is a group of users whose use case we are not properly covering | 14:32 |
ralonsoh | so in https://review.opendev.org/c/openstack/neutron/+/571546 we were directly reading the network dns_domain value in the DHCP agent | 14:33 |
ralonsoh | and in your proposal you are inheriting this value from the network | 14:33 |
mlavalle | while at the same time we are trying to preserve the current behavior, which was specified here https://specs.openstack.org/openstack/neutron-specs/specs/liberty/internal-dns-resolution.html | 14:33 |
ralonsoh | is almost the same, right? | 14:33 |
mlavalle | it's not my proposal | 14:33 |
ralonsoh | Jay Jahns proposal | 14:33 |
mlavalle | I just thought, while looking at the patch, that we are not addressing a use case | 14:34 |
mlavalle | so why don't we do this optional through an extension? The code change mostly is in a ml2 extension: https://review.opendev.org/c/openstack/neutron/+/920459 | 14:35 |
mlavalle | so why not create a new extension which allows users to have this new behavior? | 14:35 |
ralonsoh | that won't break current deployments and will allow this network dns inheritance | 14:35 |
ralonsoh | +1 to this idea | 14:36 |
mlavalle | yeap | 14:36 |
slaweq | my (minor) issue with that is that we already have so many dns integration extensions that it may not be easy for users to know which one they should use | 14:36 |
frickler | like there wouldn't be enough dns extensions already :-/ | 14:36 |
ralonsoh | correct... | 14:36 |
frickler | and you cannot stack them | 14:36 |
slaweq | and they inherit one from the other | 14:36 |
obondarev | maybe just a bit more descriptive name.. | 14:36 |
mlavalle | yes, but we have users who seem to need a new behavior | 14:37 |
haleyb | mlavalle: this change (as is) could break things for users not expecting it i'd guess? | 14:37 |
mlavalle | and it keeps returning at us | 14:37 |
ralonsoh | agree with the number of DNS related extensions, and the problems configuring them (some of them are incompatible) | 14:38 |
mlavalle | haleyb: yes, think so | 14:38 |
ralonsoh | but that could be documented | 14:38 |
slaweq | another problem with having such two different extensions is testing of them in the CI | 14:38 |
lajoskatona | do we have now jobs with DNS without designate? | 14:39 |
slaweq | maybe we should look at it from the other perspective and e.g. propose new API extension which would fit this 'new' use case? | 14:39 |
mlavalle | that's exactly what I'm saying slaweq | 14:39 |
slaweq | mlavalle so you are talking about an api extension now? I thought that You wanted to have another ml2 plugin extension for that | 14:40 |
slaweq | lajoskatona I thought that there are (or were) some tests like that in neutron_tempest_plugin, run in every one of our jobs there | 14:41 |
mlavalle | slaweq: I meant both. A new API extension that is implemented by an ml2 extension | 14:41 |
slaweq | but maybe I'm wrong and we don't have them anymore | 14:41 |
slaweq | I would be fine with new API extension for sure | 14:42 |
slaweq | regarding a new ml2 extension related to dns - ok, but maybe we could refactor what we have now and add this new functionality to the existing one somehow? But this can probably also be done as a separate task | 14:43 |
lajoskatona | slaweq: we have these extensions in zuul for DNS: dns-domain-ports, dns-integration, dns-integration-domain-keywords so it seems we have some tests | 14:43 |
haleyb | so an API extension to add something to the network? | 14:43 |
ralonsoh | that could be an option, to add a new field to the network | 14:43 |
ralonsoh | so this behaviour will apply not globally but per network | 14:44 |
slaweq | ++ | 14:45 |
ralonsoh | and this could be implemented, most probably, in the current dns plugin extensions | 14:45 |
mlavalle | exactly | 14:45 |
slaweq | ralonsoh++ for that | 14:45 |
slaweq | that would be IMO even better if we could do it in existing ml2 extension(s) | 14:45 |
ralonsoh | agree | 14:46 |
lajoskatona | +1 for new field for network | 14:47 |
ralonsoh | +1 to this network API DNS extension | 14:47 |
mlavalle | +1 | 14:48 |
haleyb | +1 | 14:48 |
obondarev | +1 | 14:48 |
slaweq | +1 | 14:49 |
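(A hedged illustration of the per-network flag that was just voted on; the attribute name and default are placeholders for whatever the write-up defines.)

```python
# Hypothetical addition to the existing DNS API definitions; opt-in and
# False by default so current deployments keep today's behaviour.
from neutron_lib.api import converters

DNS_DOMAIN_FOR_PORTS = 'dns_domain_for_ports'  # placeholder name

RESOURCE_ATTRIBUTE_MAP = {
    'networks': {
        DNS_DOMAIN_FOR_PORTS: {
            'allow_post': True,
            'allow_put': True,
            'default': False,
            'convert_to': converters.convert_to_boolean,
            'is_visible': True,
            # When True, ports on this network would inherit the network's
            # dns_domain instead of the global [DEFAULT] dns_domain.
        }
    }
}
```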
haleyb | mlavalle: can you write-up ^^ and put it in the bug? you are better at dns wording than i am :) | 14:49 |
mlavalle | yes, I'll take care of it haleyb | 14:49 |
mlavalle | and I'll help Jan with the implementation | 14:49 |
haleyb | mlavalle: great, thanks | 14:50 |
*** dmellado07553 is now known as dmellado0755 | 14:51 |
haleyb | i'm not sure we have time to talk about rfe liushy filed yesterday as it has not been triaged | 14:52 |
haleyb | #link https://bugs.launchpad.net/neutron/+bug/2071323 | 14:52 |
haleyb | in case anyone is wondering | 14:52 |
haleyb | but now reading that it looks like the metering agent did something like it | 14:53 |
slaweq | ovs can send sflow data to some monitoring tool IIRC | 14:54 |
slaweq | wouldn't that be enough? | 14:54 |
mlavalle | yes, ovs can do that | 14:55 |
mlavalle | I've tested it | 14:55 |
slaweq | for the SG rules accept/deny statistics we have SG logging - maybe that is enough | 14:55 |
slaweq | thx mlavalle for confirmation | 14:55 |
slaweq | I am not sure what data neutron agents should collect according to this rfe | 14:56 |
slaweq | I think this would require more detailed description IMO | 14:56 |
ralonsoh | I think he is thinking about the OVS agent, but I'm just guessing | 14:56 |
slaweq | yes, probably | 14:56 |
haleyb | slaweq: right, there are some pieces in place, and i'm not sure either, but agree it is probably OVS related based on their deployments | 14:57 |
slaweq | but this agent can already be busy | 14:57 |
ralonsoh | can we request more info, or ask them to participate in this meeting? | 14:58 |
haleyb | I will put a comment in there asking, and yes, it would be better if he was in the meeting | 14:59 |
lajoskatona | +1 | 14:59 |
slaweq | ++ | 15:00 |
ralonsoh | +1 | 15:00 |
haleyb | that said, with it being summer, I will be out the next two Fridays (US holiday-ish, vacation), and again a couple weeks after | 15:00 |
mlavalle | I will be off this coming week | 15:01 |
ralonsoh | lucky you! | 15:01 |
lajoskatona | enjoy it :-) | 15:01 |
haleyb | but if liushy can attend on july 12th maybe someone else can lead? assuming quorum | 15:01 |
slaweq | enjoy | 15:01 |
obondarev | have a nice vacation mlavalle! | 15:01 |
ralonsoh | we can lead the meeting, for sure | 15:01 |
mlavalle | thanks | 15:02 |
haleyb | ok, i will ask, i know it's hard for timezone | 15:02 |
mlavalle | ralonsoh can lead the weekly meeting and I can lead the drivers, or viceversa | 15:02 |
mlavalle | whichever he prefers | 15:02 |
ralonsoh | perfect for me, I can lead the weekly meeting next week | 15:02 |
haleyb | i will be here for next week's neutron meeting, just not drivers | 15:03 |
ralonsoh | ah perfect | 15:03 |
haleyb | so if there is an rfe you can run drivers, up to you based on that | 15:03 |
mlavalle | can I get someone to push this over the edge, please: https://review.opendev.org/c/openstack/neutron/+/918151 | 15:04 |
slaweq | we will see if there will be quorum | 15:04 |
mlavalle | ? | 15:04 |
haleyb | we have no way to share schedules with each other really | 15:04 |
haleyb | anyways, i will end this meeting, thanks for attending and discussion! | 15:04 |
ralonsoh | we'll check next Friday | 15:04 |
haleyb | #endmeeting | 15:04 |
opendevmeet | Meeting ended Fri Jun 28 15:04:39 2024 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:04 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/neutron_drivers/2024/neutron_drivers.2024-06-28-14.01.html | 15:04 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/neutron_drivers/2024/neutron_drivers.2024-06-28-14.01.txt | 15:04 |
opendevmeet | Log: https://meetings.opendev.org/meetings/neutron_drivers/2024/neutron_drivers.2024-06-28-14.01.log.html | 15:04 |
ralonsoh | have a nice weekend! | 15:04 |
lajoskatona | Bye | 15:04 |
slaweq | o/ | 15:04 |
obondarev | o/ | 15:05 |
opendevreview | Lajos Katona proposed openstack/os-ken unmaintained/yoga: Raise ValueError in case unpack_from returns zero length https://review.opendev.org/c/openstack/os-ken/+/922810 | 15:17 |
opendevreview | Sebastian Lohff proposed openstack/neutron-lib master: Add start_rpc_listeners() to MechanismDriver https://review.opendev.org/c/openstack/neutron-lib/+/919589 | 15:17 |
opendevreview | Merged openstack/neutron-vpnaas stable/2024.1: Remove reference to devstack-gate https://review.opendev.org/c/openstack/neutron-vpnaas/+/923049 | 15:37 |
opendevreview | Merged openstack/os-ken unmaintained/wallaby: Drop lower-constraints.txt and its testing https://review.opendev.org/c/openstack/os-ken/+/923046 | 16:31 |
gmann | slaweq: sure thing. thanks | 17:12 |
opendevreview | Merged openstack/os-ken unmaintained/yoga: Raise ValueError in case unpack_from returns zero length https://review.opendev.org/c/openstack/os-ken/+/922810 | 17:39 |
opendevreview | Merged openstack/os-ken unmaintained/victoria: Drop lower-constraints.txt and its testing https://review.opendev.org/c/openstack/os-ken/+/923047 | 18:10 |
opendevreview | Miro Tomaska proposed openstack/neutron stable/2023.1: [OVN] Add the bridge name and datapath type to the port VIF details https://review.opendev.org/c/openstack/neutron/+/922431 | 20:27 |
opendevreview | melanie witt proposed openstack/neutron master: Add dynamic lookup for tcpdump binary https://review.opendev.org/c/openstack/neutron/+/923073 | 21:43 |
melwitt | ^ cc frickler who has fixed this in the past https://review.opendev.org/c/openstack/neutron/+/858018 | 21:46 |
mlavalle | haleyb: you still around? | 22:35 |
mlavalle | if yes, would you push this over the edge: https://review.opendev.org/c/openstack/neutron/+/918151? | 22:36 |