Tuesday, 2021-12-14

00:54 *** rlandy|ruck is now known as rlandy|out
04:56 <opendevreview> Merged openstack/project-config master: Add openEuler 20.03 LTS SP2 node  https://review.opendev.org/c/openstack/project-config/+/818723
05:27 *** ysandeep|out is now known as ysandeep
06:57 *** sshnaidm|afk is now known as sshnaidm
07:26 *** ysandeep is now known as ysandeep|lunch
08:15 *** amoralej|off is now known as amoralej
08:41 *** ysandeep|lunch is now known as ysandeep
09:35 *** ysandeep is now known as ysandeep|afk
10:32 *** ysandeep|afk is now known as ysandeep
11:01 *** dviroel|rover|afk is now known as dviroel|rover
11:13 *** rlandy is now known as rlandy|ruck
11:42 *** jpena|off is now known as jpena
11:58 *** jcapitao is now known as jcapitao_lunch
12:04 *** soniya29 is now known as soniya29|afk
12:21 *** ysandeep is now known as ysandeep|brb
12:22 *** outbrito_ is now known as outbrito
12:48 *** ysandeep|brb is now known as ysandeep
13:04 *** jcapitao_lunch is now known as jcapitao
13:16 *** amoralej is now known as amoralej|lunch
13:30 <opendevreview> yatin proposed openstack/project-config master: Update Neutron's Grafana as per recent changes  https://review.opendev.org/c/openstack/project-config/+/821706
13:56 <opendevreview> Merged openstack/project-config master: Update Neutron's Grafana as per recent changes  https://review.opendev.org/c/openstack/project-config/+/821706
13:58 *** soniya29|afk is now known as soniya29
14:04 *** amoralej|lunch is now known as amoralej
14:13 *** ysandeep is now known as ysandeep|out
14:59 *** dviroel|rover is now known as dviroel|rover|lunch
15:18 <elodilles> clarkb, fungi: as usual, I'll run the EOL branch cleanup script now; as far as I can see, there is nothing that would block that
15:21 <fungi> elodilles: go for it. how many branches are being deleted, approximately?
15:21 <fungi> i'm curious to see if we encounter the same zuul reconfigure event processing backlog we get from mass branch creations
15:24 <fungi> elodilles: if it's a lot, then it may be a good idea to try some smaller batches at first, just to make sure we're not seeing significant impact
15:29 <elodilles> fungi: it should be like a dozen or so, so not that much
15:29 <dpawlik> fungi, clarkb: hey, just FYI I started logscraper + log-gearman-* on logscraper01.openstack.org
15:30 <fungi> elodilles: a dozen or so deletions is probably fine to try in one go, enough that we'll probably be able to tell if it has the same impact as creation, but not enough to create a severe backlog
15:30 <fungi> dpawlik: thanks for the heads up
15:34 <elodilles> fungi: i was wrong, there are 37 ocata openstack-ansible branches to delete :S
15:34 <elodilles> fungi: but it's actually not doing it in one go, since the script asks for confirmation for every branch before it starts to delete it
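[editor's note] The workflow elodilles describes (per-branch confirmation, small batches, pauses in between) can be sketched in Python against Gerrit's REST branch-deletion endpoint. This is a minimal sketch, not the actual cleanup script: the `session` is assumed to be an authenticated `requests.Session`, and the batch size and pause length are illustrative.

```python
import time


def batches(items, size):
    """Split a list of branch names into fixed-size batches."""
    return [items[i:i + size] for i in range(0, len(items), size)]


def delete_branches(session, gerrit_url, project, branches,
                    batch_size=10, pause=60):
    """Delete branches in small batches, pausing between batches so
    Zuul can work through the reconfiguration event each one triggers.

    Hypothetical helper; assumes `session` is an authenticated
    requests.Session and `project`/`branches` are URL-encoded already.
    """
    for batch in batches(branches, batch_size):
        for branch in batch:
            # Gerrit REST API: DELETE /projects/{project}/branches/{branch}
            url = f"{gerrit_url}/a/projects/{project}/branches/{branch}"
            resp = session.delete(url)
            resp.raise_for_status()
        time.sleep(pause)  # let the management event queue drain
```

The batching helper is the interesting part for the discussion below: it bounds how many deletion events Zuul sees back-to-back.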
15:35 <dpawlik> fungi: no problem :) Just FYI, if Reed says that the service should be stopped for now and I'm not available, please ping tristanC or nhicher - they will stop the services.
15:35 <fungi> dpawlik: noted, thanks
15:36 <fungi> elodilles: that sounds fine, maybe do 10 or so in fairly rapid succession and let's see how zuul is handling it before adding the rest?
15:36 <elodilles> (so altogether ~50 branches to delete, but not in one go)
15:37 <elodilles> fungi: actually i think zuul doesn't take part in the branch deletion, as it's only a gerrit thingy. or am I wrong?
15:39 <elodilles> anyway, i'll be careful, watch what happens, and will leave time between branches
15:40 <fungi> gerrit sends events on its event stream for creation and deletion of branches. since zuul's configuration is branch-aware, zuul needs to update (reconfigure) its view of the projects to know it should drop configuration which was present on those branches now that they're deleted
15:40 <fungi> so zuul reacts to the branch deletion events by adding a reconfigure for each one into its management event queue
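[editor's note] The event-stream behavior fungi describes can be sketched as a small consumer of `gerrit stream-events` output (one JSON object per line): a branch deletion arrives as a `ref-updated` event whose `newRev` is the all-zero SHA. The function below is an illustration of that detection step, not Zuul's code.

```python
import json

ZEROS = "0" * 40  # an all-zero newRev in a ref-updated event marks a deletion


def branch_deletions(event_lines):
    """Yield (project, branch) for each branch deletion seen on the
    gerrit event stream (one JSON event per line, as emitted by
    `gerrit stream-events`)."""
    for line in event_lines:
        event = json.loads(line)
        if event.get("type") != "ref-updated":
            continue
        update = event["refUpdate"]
        ref = update["refName"]
        # refName may be short ("stable/ocata") or fully qualified
        branch = ref[len("refs/heads/"):] if ref.startswith("refs/heads/") else ref
        if update["newRev"] == ZEROS:
            yield (update["project"], branch)
```

Each yielded tuple is what would cause Zuul to enqueue one tenant reconfiguration into its management event queue.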
15:41 <frickler> will be another interesting test for the new two-headed setup I guess
15:41 <fungi> yeah
15:41 <elodilles> oh, i see, sorry. then i'll definitely keep pauses between branch deletions
15:42 <fungi> reconfiguration often takes a few minutes, and if deletion events arrive faster than it can process a reconfiguration we've seen the management event queue count climb and the scheduler mostly stop doing other things until it gets through them
15:47 <elodilles> (2 branches have been deleted so far)
15:50 <fungi> elodilles: you can go a bit faster if you want, right now the management event queue length is 1
15:50 <fungi> elodilles: you can see it listed in the top-left corner of https://zuul.opendev.org/t/openstack/status/
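[editor's note] The counter shown on that status page comes from Zuul's tenant status JSON, so it can also be polled directly. A minimal sketch follows; the key names (`management_event_queue`, `length`) are assumptions about the shape of Zuul's status document and may differ between Zuul versions.

```python
import json
from urllib.request import urlopen


def management_queue_length(status_json):
    """Extract the management event queue length from a Zuul tenant
    status document. Key names are assumed, not verified against a
    specific Zuul release."""
    return status_json["management_event_queue"]["length"]


def poll(url="https://zuul.opendev.org/api/tenant/openstack/status"):
    """Fetch the live tenant status and return the queue length."""
    with urlopen(url) as resp:
        return management_queue_length(json.load(resp))
```

Watching this number between deletion batches is exactly what elodilles does by hand in the rest of the log.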
15:53 <elodilles> well, as soon as I get to the osa branches I'll queue up more branches & let you know (but i'll be in a meeting meanwhile, so it might take awhile o:))
15:53 <fungi> no worries
15:54 <ade_lee> fungi, clarkb hey guys - could I get a node held for https://review.opendev.org/c/openstack/cinder/+/790535 ?
15:55 <ade_lee> we have a failure in either cinder-tempest-plugin-lvm-lio-barbican-fips or cinder-tempest-lvm-multibackend-fips that we're unable to reproduce locally
15:56 <ade_lee> the same test fails in both - so a node for either one would work
15:56 <ade_lee> timburke_, will try to look at your fips failure today
15:59 *** dviroel|rover|lunch is now known as dviroel|rover
16:02 <fungi> ade_lee: i've added an autohold for the cinder-tempest-plugin-lvm-lio-barbican-fips build of that change, recheck at will
16:03 <ade_lee> fungi, thanks -- I think it's already running now from a rebase I did about 10 mins ago .. should I recheck?
16:03 <fungi> nah, that will be good enough. once it fails it will get held
16:04 <ade_lee> fungi, cool - thanks!
16:06 <fungi> yw
16:12 <timburke_> ade_lee, thanks! it's strange to me; seems very hit-or-miss whether the problem will manifest or not
17:08 *** beekneemech is now known as bnemec
17:26 *** amoralej is now known as amoralej|off
17:37 *** jpena is now known as jpena|off
17:54 <elodilles> fungi: fyi, 10 branches have been deleted now within a couple of minutes. I see now "8 management events."
17:55 <elodilles> i've stopped now to see how zuul handles them
17:56 <fungi> thanks for keeping an eye on it!
17:56 <fungi> curious to see how quickly it churns through those
17:57 <elodilles> and now it is "10 management events."
17:59 <elodilles> do you see any slowdown?
18:00 <fungi> well, i mean, zuul won't queue any new builds/changes while it's processing management events
18:00 <fungi> it's not necessarily a "slowdown"
18:01 <fungi> but it does delay other activities
18:01 <fungi> it also won't process build results until it's done with whatever management event it's on
18:01 <elodilles> :S
18:02 <elodilles> so now zuul is kind of blocked until the "management events" number goes down to 0?
18:03 <clarkb> elodilles: ya basically
18:04 <clarkb> jobs that are already running keep running so it's not a full stop
18:04 <clarkb> but new work like merging changes or starting new jobs is paused aiui
18:04 <elodilles> i see that the events counter is increasing, but fortunately the management events counter is decreasing slowly
18:04 <fungi> yes, it'll take a few minutes to process each one
18:05 <frickler> would zuul be able to merge those events?
18:05 <elodilles> ok, so it's definitely better to wait between branch deletions for now, it seems. hmmm.
18:06 <frickler> like they mostly would supersede each other, wouldn't they?
18:07 <fungi> frickler: corvus was saying that he thought it was supposed to collapse them (sort of like how it does with its supercedent pipeline manager) and isn't sure why that's not actually happening
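[editor's note] The collapsing frickler asks about can be sketched as a supercedent-style dedup: if a newer reconfiguration for the same project is pending, the older one is redundant and can be dropped. This is an illustration of the idea under discussion, not Zuul's actual implementation (which, per fungi, was not collapsing as expected at the time).

```python
from collections import OrderedDict


def collapse(events):
    """Collapse a queue of reconfiguration events so that at most one
    pending event per (tenant, project) remains, keeping the newest
    and preserving arrival order of the survivors.

    Each event is assumed to be a dict with "tenant" and "project"
    keys; this sketch's event shape is hypothetical."""
    latest = OrderedDict()
    for event in events:
        key = (event["tenant"], event["project"])
        latest.pop(key, None)  # drop any older event for the same key
        latest[key] = event
    return list(latest.values())
```

With this, the 37 openstack-ansible deletions would still produce one reconfiguration per deleted branch's project, but rapid repeats for the same project would coalesce instead of queuing up.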
18:08 <frickler> would it make sense to activate tracing while this is happening? maybe too late now, but before the next deletions?
18:10 * frickler notices corvus isn't here, will repeat in #opendev
18:57 <fungi> ade_lee: i lost track earlier, but circling back around now it looks like we have a held node for you... what ssh key(s) do you want authorized?
19:27 <ade_lee> fungi, https://github.com/vakwetu.keys
19:29 <fungi> ade_lee: ssh root@173.231.254.233
19:30 <ade_lee> fungi, perfect - thanks!
19:30 <fungi> yw
19:38 <elodilles> so as I wrote in the opendev channel, I've now interrupted the branch cleanup script and will run it again later (depending on the status of the zuul bug fix tomorrow). btw, today these branches were deleted: https://paste.opendev.org/show/811671/
19:38 <clarkb> elodilles: a fix is up now and looks straightforward, I expect this will be able to proceed tomorrow
19:38 <clarkb> But I still need to properly review the fix
20:11 <elodilles> clarkb: oh, nice, that sounds promising! thanks for the info!
20:31 *** dviroel|rover is now known as dviroel|out

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!