Friday, 2024-06-28

01:59 <gmann> slaweq: you were right. only nova and placement have the admin role assigned to the service user, and neutron is all good. I am adding a var in devstack to not assign the admin role to the nova/placement user also https://review.opendev.org/c/openstack/devstack/+/923013
02:00 <gmann> slaweq: also adding an experimental job to 'know where all we need change' and 'test the policy change' https://review.opendev.org/c/openstack/tempest/+/923014
02:06 <gmann> slaweq: the server external events test should fail in that job, where neutron will be using the 'nova' service user (without the admin role) https://github.com/openstack/tempest/blob/master/tempest/api/compute/admin/test_server_external_events.py
07:11 <opendevreview> Lajos Katona proposed openstack/os-ken unmaintained/xena: Drop lower-constraints.txt and its testing  https://review.opendev.org/c/openstack/os-ken/+/923023
07:12 <opendevreview> Lajos Katona proposed openstack/os-ken unmaintained/xena: Raise ValueError in case unpack_from returns zero length  https://review.opendev.org/c/openstack/os-ken/+/922814
07:33 <opendevreview> Lajos Katona proposed openstack/os-ken master: Add periodic weekly job for os-ken  https://review.opendev.org/c/openstack/os-ken/+/915273
07:43 <opendevreview> Rodolfo Alonso proposed openstack/neutron master: DNM == WIP == Testing patch for LP#2071426  https://review.opendev.org/c/openstack/neutron/+/923026
08:43 <opendevreview> Maximilian Sesterhenn proposed openstack/networking-bgpvpn master: WIP: Add OVN-based Neutron BGPVPN driver  https://review.opendev.org/c/openstack/networking-bgpvpn/+/883060
09:31 <opendevreview> Merged openstack/os-ken unmaintained/xena: Drop lower-constraints.txt and its testing  https://review.opendev.org/c/openstack/os-ken/+/923023
09:44 <opendevreview> Bence Romsics proposed openstack/neutron master: Remove taps in trunk bridges from the dead vlan  https://review.opendev.org/c/openstack/neutron/+/923035
09:45 <opendevreview> Bence Romsics proposed openstack/os-vif master: Handle ports in the dead vlan as neutron does  https://review.opendev.org/c/openstack/os-vif/+/923036
10:09 <opendevreview> liuyulong proposed openstack/neutron master: Always get local vlan from port other_config  https://review.opendev.org/c/openstack/neutron/+/923040
10:10 <opendevreview> liuyulong proposed openstack/neutron master: Always get local vlan from port other_config  https://review.opendev.org/c/openstack/neutron/+/923040
10:15 <opendevreview> Merged openstack/neutron-lib unmaintained/yoga: Update .gitreview for unmaintained/yoga  https://review.opendev.org/c/openstack/neutron-lib/+/907859
10:15 <opendevreview> Merged openstack/os-ken master: Add periodic weekly job for os-ken  https://review.opendev.org/c/openstack/os-ken/+/915273
11:25 *** whoami-rajat_ is now known as whoami-rajat
12:22 <opendevreview> Lajos Katona proposed openstack/os-ken unmaintained/wallaby: Drop lower-constraints.txt and its testing  https://review.opendev.org/c/openstack/os-ken/+/923046
12:26 <opendevreview> Lajos Katona proposed openstack/os-ken unmaintained/victoria: Drop lower-constraints.txt and its testing  https://review.opendev.org/c/openstack/os-ken/+/923047
12:30 <opendevreview> Rodolfo Alonso proposed openstack/neutron master: Add the port "fixed_ips" information in the DHCP RPC  https://review.opendev.org/c/openstack/neutron/+/923026
12:30 <ralonsoh> lajoskatona, slaweq, ykarel ^^ please check this patch. This is solving an issue in the DHCP agent when processing the port events
12:30 <ralonsoh> as reported in the LP bug, I found 127,000 occurrences of this error in the last year
12:31 <ralonsoh> that could be causing instability in the DHCP agents and port processing
12:31 <lajoskatona> ralonsoh: checking
12:31 <ralonsoh> in any case, I still need to check the DHCP agent logs after the execution
12:32 <ralonsoh> but most probably we won't find this exception anymore
12:33 <opendevreview> Rodolfo Alonso proposed openstack/neutron master: WIP == Reduce the DHCP processing loop to a single thread  https://review.opendev.org/c/openstack/neutron/+/922719
12:50 <opendevreview> Elod Illes proposed openstack/neutron-vpnaas stable/2024.1: Remove reference to devstack-gate  https://review.opendev.org/c/openstack/neutron-vpnaas/+/923049
12:51 <opendevreview> Elod Illes proposed openstack/neutron-vpnaas stable/2023.2: Remove reference to devstack-gate  https://review.opendev.org/c/openstack/neutron-vpnaas/+/923050
12:51 <opendevreview> Elod Illes proposed openstack/neutron-vpnaas stable/2023.1: Remove reference to devstack-gate  https://review.opendev.org/c/openstack/neutron-vpnaas/+/923051
12:59 <ykarel> ralonsoh, hmm dropped 3 years back as part of https://github.com/openstack/neutron/commit/4ab699e5cd0c4c552012d694d449b0b2e474013e#diff-98ffe18904b43cb18ee90f2ac4d8705d8c3bf8156028dc3c8c2d72e131441b8dR287
12:59 <ralonsoh> ykarel, yes, correct
12:59 <ralonsoh> it has been affecting us since Xena, that is a long time ago
13:00 <ralonsoh> the patch CI is now running, I'll check afterwards if we have any occurrence
13:00 <ralonsoh> btw, all DHCP agent logs in the CI have this exception
13:01 <ykarel> yeap, the log source records the last 14 days of logs
13:01 <ykarel> so that number of occurrences is from those days
13:02 <slaweq> gmann thx for confirmation, I was testing that locally a few days ago and yes, external events notifications are failing due to that, and this causes failures of all tests which need to spawn a vm, as neutron can't really notify nova about port changes
13:03 <ykarel> ok, you have those commits in the commit message, i should have read that first :)
13:03 <ralonsoh> yes hehehe
13:04 <ralonsoh> there is an error in the py jobs, but it's trivial (already fixed locally)
13:04 <ralonsoh> but I want the tempest jobs to finish
13:05 <ykarel> +1
13:09 <slaweq> gmann once it is done on nova's side, we may test it again and open bugs for neutron if needed. For now I consider this as working fine on the neutron side at least. Thx for the help
13:46 <sean-k-mooney> slaweq: once what is done on the nova side?
13:47 <sean-k-mooney> oh, probably this https://review.opendev.org/c/openstack/devstack/+/923013
13:47 <sean-k-mooney> and https://review.opendev.org/c/openstack/tempest/+/923014
13:57 <slaweq> sean-k-mooney exactly
14:00 <haleyb> i'll start the drivers meeting in a second
14:01 <haleyb> sorry, my other meeting ran late
14:01 <haleyb> #startmeeting neutron_drivers
14:01 <opendevmeet> Meeting started Fri Jun 28 14:01:29 2024 UTC and is due to finish in 60 minutes.  The chair is haleyb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01 <opendevmeet> The meeting name has been set to 'neutron_drivers'
14:01 <ralonsoh> hello
14:02 <mlavalle> \o
14:02 <slaweq> o/
14:02 <lajoskatona> o/
14:03 <haleyb> i guess that's quorum at 5
14:03 <haleyb> we had three items in the agenda. i did see an RFE bug come in yesterday but have not looked yet, if we have time we can do that as well
14:04 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2070376
14:04 <haleyb> ralonsoh: yours is first
14:04 <ralonsoh> thanks, in one shot, to speed up
14:04 <ralonsoh> Hi, I would like to talk about https://bugs.launchpad.net/neutron/+bug/2070376
14:04 <ralonsoh> This is related to the community goal "eventlet-deprecation"
14:04 <ralonsoh> I started investigating how the DHCP agent works and I found that we have several threads running at the same time:
14:04 <ralonsoh> 1 for the periodic resyncs, 1 for reading the RPC messages, 1 to notify the server about the port reservations
14:04 <ralonsoh> and multiple threads to process the RPC events and execute the resource updates
14:04 <ralonsoh> There we have several problems: if we switch to a preemptive thread module, we will no longer have the "protection" of the cooperative threads and we'll need to add execution locks
14:04 <ralonsoh> But my main concern with these resource update threads is that we don't really gain any speed by having more than one thread
14:04 <ralonsoh> In https://review.opendev.org/c/openstack/neutron/+/626830 we implemented a way to process the ports using a priority queue
14:04 <ralonsoh> But along with that patch we introduced the multithreaded processing, which doesn't add any speed gain
14:04 <ralonsoh> So my proposal is to keep the priority queue (which is of course valuable and needed) but remove the multithreading for the event processing
14:04 <ralonsoh> That will (1) not reduce the processing speed and (2) make the event processing more robust when changing to kernel threads
14:05 <ralonsoh> (I have a very simple patch testing that: https://review.opendev.org/c/openstack/neutron/+/922719/)
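[A minimal sketch of the single-consumer pattern proposed above — one worker thread draining a priority queue, so events are handled one at a time in priority order and the update logic needs no locks. ResourceEvent and handle_event are hypothetical stand-ins, not neutron's actual classes:]

    import dataclasses
    import itertools
    import queue
    import threading

    @dataclasses.dataclass(order=True)
    class ResourceEvent:
        priority: int  # lower value == more urgent (e.g. a port delete)
        seq: int       # tie-breaker keeping FIFO order within a priority
        payload: dict = dataclasses.field(compare=False)

    events = queue.PriorityQueue()
    _seq = itertools.count()

    def handle_event(payload):
        print("processing", payload)  # placeholder for the real resource update

    def worker():
        # Single consumer: strictly one event at a time, highest priority first,
        # so no execution locks are needed around the update logic.
        while True:
            event = events.get()
            try:
                handle_event(event.payload)
            finally:
                events.task_done()

    threading.Thread(target=worker, daemon=True).start()
    events.put(ResourceEvent(0, next(_seq), {"port": "deleted"}))
    events.put(ResourceEvent(1, next(_seq), {"port": "updated"}))
    events.join()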
14:06 <haleyb> and this removes the need for locking as well?
14:06 <haleyb> oh, you said that
14:06 <haleyb> big copy/paste
14:06 <ralonsoh> more or less, because we have the port reservations notification
14:07 <ralonsoh> but at least the single thread processing the events will be unique
14:08 <haleyb> i guess i always thought the multiple threads helped when configuring, like creating namespaces, when lots were involved, but you didn't see that?
14:08 <ralonsoh> we can use the rally test to validate that
14:09 <ralonsoh> but I don't see a speed gain with multiple threads
14:09 <ralonsoh> rally CI, I mean
14:09 <slaweq> these multiple threads could be most useful when e.g. the agent is restarted
14:09 <ralonsoh> but how?
14:09 <slaweq> and has to go over many networks and ports
14:09 <ralonsoh> there won't be multiple threads working at the same time
14:09 <ralonsoh> this is python
14:10 <slaweq> right
14:10 <slaweq> I was more referring to what haleyb wrote
14:10 <ralonsoh> to improve that (and I think that was commented on before) we would need a multiprocess DHCP agent
14:10 <haleyb> right, i think it was always restarts when there were issues, like an hour wait, although the l3-agent was always worse
14:10 <lajoskatona> but for that we need locks
14:11 <mlavalle> which is ralonsoh's point
14:11 <ralonsoh> and, btw, we use ctypes.PyDLL
14:12 <ralonsoh>         # NOTE(ralonsoh): from https://docs.python.org/3.6/library/
14:12 <ralonsoh>         # ctypes.html#ctypes.PyDLL: "Instances of this class behave like CDLL
14:12 <ralonsoh>         # instances, except that the Python GIL is not released during the
14:12 <ralonsoh>         # function call, and after the function execution the Python error
14:12 <ralonsoh>         # flag is checked."
14:12 <ralonsoh> so the GIL will be attached to this thread
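[A minimal illustration of the CDLL/PyDLL difference quoted above, assuming Linux with glibc available as libc.so.6: the ticker thread keeps printing while the main thread sleeps in C via CDLL (GIL released), but stalls while it sleeps via PyDLL (GIL held):]

    import ctypes
    import threading
    import time

    libs = [("CDLL", ctypes.CDLL("libc.so.6")),    # releases the GIL during calls
            ("PyDLL", ctypes.PyDLL("libc.so.6"))]  # holds the GIL during calls

    def ticker(stop):
        # Only makes progress when the GIL is free.
        while not stop.is_set():
            print("tick")
            time.sleep(0.2)

    for label, lib in libs:
        stop = threading.Event()
        t = threading.Thread(target=ticker, args=(stop,))
        t.start()
        time.sleep(0.3)
        print(label, "sleeping 1s in C...")
        lib.usleep(1_000_000)  # ticks continue under CDLL, pause under PyDLL
        stop.set()
        t.join()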
14:12 <obondarev_> late o/
14:13 *** obondarev_ is now known as obondarev
14:13 <ralonsoh> I can, of course, do some load testing with multiple VMs on a node and restarting the DHCP agent
14:13 <ralonsoh> with different DHCP_PROCESS_GREENLET_MIN/DHCP_PROCESS_GREENLET_MAX values
14:14 <slaweq> that would be a good test IMHO
14:14 <mlavalle> agree
14:14 <ralonsoh> perfect, I'll spawn a single compute node with tons of RAM and I'll try to spawn as many VMs as possible
14:14 <ralonsoh> and then restart the agent with different thread values
14:14 <ralonsoh> I'll update the LP bug
14:14 <slaweq> I don't think You really need vms for that
14:15 <slaweq> probably creating networks and ports would be enough
14:15 <ralonsoh> just ports in the network
14:15 <ralonsoh> right
14:15 <slaweq> maybe https://github.com/slawqo/neutron-heater can help with that too
14:15 <ralonsoh> ahh yes
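[The kind of load being discussed can also be generated with the plain openstack CLI if neutron-heater isn't handy; the names, counts, and CIDRs below are arbitrary examples, not neutron-heater's actual behavior:]

    # Create 10 networks with a subnet and 50 ports each, then restart the
    # DHCP agent and compare processing times with different thread settings.
    for n in $(seq 1 10); do
        openstack network create "load-net-$n"
        openstack subnet create "load-subnet-$n" \
            --network "load-net-$n" --subnet-range "10.$n.0.0/24"
        for p in $(seq 1 50); do
            openstack port create "load-port-$n-$p" --network "load-net-$n"
        done
    done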
14:16 <ralonsoh> so let's wait for my feedback on this, but please consider the problem we have ahead with the eventlet deprecation
14:17 <ralonsoh> (that's all from my side, thanks a lot)
14:17 <lajoskatona> thanks ralonsoh
14:18 <mlavalle> yes, thanks ralonsoh
14:18 <haleyb> ralonsoh: thanks, will keep a lookout on the patches
14:19 <haleyb> and i guess we don't need to vote as it's not an rfe, but i agree with doing this work
14:20 <seba> we've been having many problems with eventlet, especially when running neutron-api with uwsgi, so I'm looking forward to this
14:21 <haleyb> slaweq: yours is next
14:21 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2060916
14:22 <slaweq> thx
14:22 <slaweq> I recently wanted to finally start working on this
14:22 <slaweq> it came up when we introduced the 'service' role policies
14:23 <slaweq> as it seems that with those new policies trusted_vif can't be set through the 'binding_profile' attribute
14:24 <slaweq> so this is a pretty small and easy RFE to do, where a new api extension would be proposed and it would add a new attribute to the port
14:24 <slaweq> this field would then be set by neutron in the binding_profile to be sent e.g. to nova
14:24 <slaweq> as it is now
14:24 <slaweq> so that other components would not require changes
14:25 <slaweq> and binding_profile would be used (more) as it should be used, so for machine-to-machine communication
14:25 <slaweq> that's all from me
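[For context, a sketch of the mechanism under discussion: today a trusted VIF is requested through the machine-oriented binding profile, which the new 'service' role policies restrict; the RFE would expose it as a first-class, policy-controlled port attribute instead. The second command is purely illustrative — no attribute or flag name was decided here:]

    # Current mechanism: write into the binding profile
    openstack port create sriov-port --network sriov-net \
        --vnic-type direct --binding-profile trusted=true

    # Proposed direction (hypothetical flag name): neutron itself would then
    # copy the value into the binding profile it sends to nova
    # openstack port create sriov-port --network sriov-net \
    #     --vnic-type direct --trusted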
14:25 <ralonsoh> +1 to decoupling Neutron configuration parameters written in port.binding_profile, as done before with others
14:26 <obondarev> +1, sounds reasonable
14:26 <lajoskatona> +1
14:26 <mlavalle> +1
14:27 <haleyb> +1 from me
14:28 <slaweq> thank You, so I assume that the RFE is approved and I can start work on it now, right?
14:28 <mlavalle> yes
14:28 <mlavalle> fire away
14:28 <haleyb> yes, i will mark it approved, don't think you need a spec as the bug is pretty clear
14:28 <slaweq> thank You, that's all from me then :)
14:29 <slaweq> haleyb exactly, that's why I didn't propose any spec until now, as I was hoping it would not be needed :)
14:29 <haleyb> the next one was added by me (and mlavalle :)
14:29 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2067183
14:30 <haleyb> #link https://review.opendev.org/c/openstack/neutron/+/920459
14:31 <haleyb> I added it because we have broken things when tweaking dns_domain in the past
14:31 <slaweq> again DNS :)
14:31 <mlavalle> we have gotten this request and even implemented it in the past (Assaf implemented it) and then we reversed it
14:31 <slaweq> we broke it so many times that I can't even count them :P
14:32 <mlavalle> so I think there is a group of users whose use case we are not properly covering
14:33 <ralonsoh> so in https://review.opendev.org/c/openstack/neutron/+/571546 we were directly reading the network dns_domain value in the DHCP agent
14:33 <ralonsoh> and in your proposal you are inheriting this value from the network
14:33 <mlavalle> while at the same time we are trying to preserve the current behavior, which was specified here https://specs.openstack.org/openstack/neutron-specs/specs/liberty/internal-dns-resolution.html
14:33 <ralonsoh> it's almost the same, right?
14:33 <mlavalle> it's not my proposal
14:33 <ralonsoh> Jay Jahns' proposal
14:34 <mlavalle> I just thought, while looking at the patch, that we are not addressing a use case
14:35 <mlavalle> so why don't we make this optional through an extension? The code change is mostly in an ml2 extension: https://review.opendev.org/c/openstack/neutron/+/920459
14:35 <mlavalle> so why not create a new extension which allows users to have this new behavior?
14:35 <ralonsoh> that won't break current deployments and will allow this network dns inheritance
14:36 <ralonsoh> +1 to this idea
14:36 <mlavalle> yeap
14:36 <slaweq> my (minor) issue with that is that we already have so many dns integration extensions that it may not be easy for users to know which one they should use
14:36 <frickler> like there wouldn't be enough dns extensions already :-/
14:36 <ralonsoh> correct...
14:36 <frickler> and you cannot stack them
14:36 <slaweq> and they inherit one from the other
14:36 <obondarev> maybe just a bit more descriptive name..
14:37 <mlavalle> yes, but we have users who seem to need a new behavior
14:37 <haleyb> mlavalle: this change (as is) could break things for users not expecting it, i'd guess?
14:37 <mlavalle> and it keeps coming back at us
14:38 <ralonsoh> agree about the number of DNS related extensions, and the problems configuring them (some of them are incompatible)
14:38 <mlavalle> haleyb: yes, I think so
14:38 <ralonsoh> but that could be documented
14:38 <slaweq> another problem with having two such different extensions is testing them in the CI
14:39 <lajoskatona> do we have jobs now with DNS without designate?
14:39 <slaweq> maybe we should look at it from the other perspective and e.g. propose a new API extension which would fit this 'new' use case?
14:39 <mlavalle> that's exactly what I'm saying slaweq
14:40 <slaweq> mlavalle so you are talking about an api extension now? I thought that You want to have another ml2 plugin extension for that
14:41 <slaweq> lajoskatona I thought that there are (or were) some tests like that in neutron_tempest_plugin and they were run in every one of our jobs there
14:41 <mlavalle> slaweq: I meant both. A new API extension that is implemented by an ml2 extension
14:41 <slaweq> but maybe I'm wrong and we don't have them anymore
14:42 <slaweq> I would be fine with a new API extension for sure
14:43 <slaweq> regarding a new ml2 extension related to dns - ok, but maybe we could somehow refactor what we have now and add this new functionality to the existing one? But this can probably also be done as a separate task
14:43 <lajoskatona> slaweq: we have these extensions in zuul for DNS: dns-domain-ports, dns-integration, dns-integration-domain-keywords, so it seems we have some tests
14:43 <haleyb> so an API extension to add something to the network?
14:43 <ralonsoh> that could be an option, to add a new field to the network
14:44 <ralonsoh> so this behaviour will apply not globally but per network
14:45 <slaweq> ++
14:45 <ralonsoh> and this could be implemented, most probably, in the current dns plugin extensions
14:45 <mlavalle> exactly
14:45 <slaweq> ralonsoh++ for that
14:45 <slaweq> that would IMO be even better, if we could do it in the existing ml2 extension(s)
14:46 <ralonsoh> agree
14:47 <lajoskatona> +1 for a new field for the network
14:47 <ralonsoh> +1 to this network API DNS extension
14:48 <mlavalle> +1
14:48 <haleyb> +1
14:48 <obondarev> +1
14:49 <slaweq> +1
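[A sketch of how the agreed per-network approach could look from the CLI; --dns-domain exists today with the dns-integration extension, while the opt-in flag in the second command is invented here purely for illustration:]

    # Existing: set the domain on the network
    openstack network set private-net --dns-domain example.internal.

    # New (hypothetical flag name): opt this network into inheriting
    # dns_domain for its ports, so the behavior changes per network,
    # not globally
    # openstack network set private-net --inherit-dns-domain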
14:49 <haleyb> mlavalle: can you write up ^^ and put it in the bug? you are better at dns wording than i am :)
14:49 <mlavalle> yes, I'll take care of it haleyb
14:49 <mlavalle> and I'll help Jan with the implementation
14:50 <haleyb> mlavalle: great, thanks
14:51 *** dmellado07553 is now known as dmellado0755
14:52 <haleyb> i'm not sure we have time to talk about the rfe liushy filed yesterday as it has not been triaged
14:52 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2071323
14:52 <haleyb> in case anyone is wondering
14:53 <haleyb> but now reading it, it looks like the metering agent did something like that
14:54 <slaweq> ovs can send sflow data to some monitoring tool IIRC
14:54 <slaweq> wouldn't that be enough?
14:55 <mlavalle> yes, ovs can do that
14:55 <mlavalle> I've tested it
14:55 <slaweq> for the SG rules accept/deny statistics we have SG logging - maybe that is enough
14:55 <slaweq> thx mlavalle for the confirmation
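[The OVS capability referred to here: sFlow export is configured per bridge with ovs-vsctl, roughly as in the Open vSwitch documentation; the agent interface and collector address below are placeholders:]

    # Sample packets on br-int and send sFlow records to an external collector
    ovs-vsctl -- --id=@sflow create sflow agent=eth0 \
        target="\"203.0.113.10:6343\"" header=128 sampling=64 polling=10 \
        -- set bridge br-int sflow=@sflow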
14:56 <slaweq> I am not sure what data neutron agents should collect according to this rfe
14:56 <slaweq> this would require a more detailed description IMO
14:56 <ralonsoh> I think he is thinking about the OVS agent, but I'm just guessing
14:56 <slaweq> yes, probably
14:57 <haleyb> slaweq: right, there are some pieces in place, and i'm not sure either, but agree it is probably OVS related based on their deployments
14:57 <slaweq> but this agent can already be busy
14:58 <ralonsoh> can we request more info, or ask him to participate in this meeting?
14:59 <haleyb> I will put a comment in there asking, and yes, it would be better if he was in the meeting
14:59 <lajoskatona> +1
15:00 <slaweq> ++
15:00 <ralonsoh> +1
15:00 <haleyb> that said, with it being summer, I will be out the next two Fridays (US holiday-ish, vacation), and again a couple weeks after
15:01 <mlavalle> I will be off this coming week
15:01 <ralonsoh> lucky you!
15:01 <lajoskatona> enjoy it :-)
15:01 <haleyb> but if liushy can attend on july 12th maybe someone else can lead? assuming quorum
15:01 <slaweq> enjoy
15:01 <obondarev> have a nice vacation mlavalle!
15:01 <ralonsoh> we can lead the meeting, for sure
15:02 <mlavalle> thanks
15:02 <haleyb> ok, i will ask, i know the timezone makes it hard
15:02 <mlavalle> ralonsoh can lead the weekly meeting and I can lead the drivers, or vice versa
15:02 <mlavalle> whichever he prefers
15:02 <ralonsoh> perfect for me, I can lead the weekly meeting next week
15:03 <haleyb> i will be here for next week's neutron meeting, just not drivers
15:03 <ralonsoh> ah perfect
15:03 <haleyb> so if there is an rfe you can run drivers, up to you based on that
15:04 <mlavalle> can I get someone to push this over the edge, please: https://review.opendev.org/c/openstack/neutron/+/918151
15:04 <slaweq> we will see if there will be quorum
15:04 <mlavalle> ?
15:04 <haleyb> we have no way to share schedules with each other really
15:04 <haleyb> anyways, i will end this meeting, thanks for attending and for the discussion!
15:04 <ralonsoh> we'll check next Friday
15:04 <haleyb> #endmeeting
15:04 <opendevmeet> Meeting ended Fri Jun 28 15:04:39 2024 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:04 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/neutron_drivers/2024/neutron_drivers.2024-06-28-14.01.html
15:04 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/neutron_drivers/2024/neutron_drivers.2024-06-28-14.01.txt
15:04 <opendevmeet> Log:            https://meetings.opendev.org/meetings/neutron_drivers/2024/neutron_drivers.2024-06-28-14.01.log.html
15:04 <ralonsoh> have a nice weekend!
15:04 <lajoskatona> Bye
15:04 <slaweq> o/
15:05 <obondarev> o/
15:17 <opendevreview> Lajos Katona proposed openstack/os-ken unmaintained/yoga: Raise ValueError in case unpack_from returns zero length  https://review.opendev.org/c/openstack/os-ken/+/922810
15:17 <opendevreview> Sebastian Lohff proposed openstack/neutron-lib master: Add start_rpc_listeners() to MechanismDriver  https://review.opendev.org/c/openstack/neutron-lib/+/919589
15:37 <opendevreview> Merged openstack/neutron-vpnaas stable/2024.1: Remove reference to devstack-gate  https://review.opendev.org/c/openstack/neutron-vpnaas/+/923049
16:31 <opendevreview> Merged openstack/os-ken unmaintained/wallaby: Drop lower-constraints.txt and its testing  https://review.opendev.org/c/openstack/os-ken/+/923046
17:12 <gmann> slaweq: sure thing. thanks
17:39 <opendevreview> Merged openstack/os-ken unmaintained/yoga: Raise ValueError in case unpack_from returns zero length  https://review.opendev.org/c/openstack/os-ken/+/922810
18:10 <opendevreview> Merged openstack/os-ken unmaintained/victoria: Drop lower-constraints.txt and its testing  https://review.opendev.org/c/openstack/os-ken/+/923047
20:27 <opendevreview> Miro Tomaska proposed openstack/neutron stable/2023.1: [OVN] Add the bridge name and datapath type to the port VIF details  https://review.opendev.org/c/openstack/neutron/+/922431
21:43 <opendevreview> melanie witt proposed openstack/neutron master: Add dynamic lookup for tcpdump binary  https://review.opendev.org/c/openstack/neutron/+/923073
21:46 <melwitt> ^ cc frickler who has fixed this in the past https://review.opendev.org/c/openstack/neutron/+/858018
22:35 <mlavalle> haleyb: you still around?
22:36 <mlavalle> if yes, would you push this over the edge: https://review.opendev.org/c/openstack/neutron/+/918151 ?
