Thursday, 2025-02-27

opendevreviewLajos Katona proposed openstack/project-config master: Neutron: add review priority to ovn-bgp-agent and ovsdbapp  https://review.opendev.org/c/openstack/project-config/+/938908 09:17
opendevreviewMerged openstack/project-config master: Neutron: add review priority to ovn-bgp-agent and ovsdbapp  https://review.opendev.org/c/openstack/project-config/+/938908 13:41
opendevreviewMerged openstack/project-config master: Remove Freezer DR from infra  https://review.opendev.org/c/openstack/project-config/+/938184 15:16
opendevreviewClark Boylan proposed opendev/system-config master: Run gitea with memcached cache adapter  https://review.opendev.org/c/opendev/system-config/+/942650 15:58
clarkbthat update simply changes the memcached container image location to the new mirrored image15:59
jamesdenton@fungi No VMs on PUBLICNET in Flex - we implemented service types a while back on those subnets to only allow Floating IPs and router attachments. This allows for tenants to manage their own network space while only consuming IPs for 1:1 NATs when necessary16:12
jamesdentonso, unfortunately, you'll have to do the floating IP dance for each VM if you require ingress16:13
clarkbjamesdenton: thank you for confirming16:14
clarkbso the next step for us is to update our cloud launcher to build the networking for us16:14
jamesdentonsure thing. Apologies for the confusion. We will likely also implement the 'get me a network' function, just aren't there yet16:14
fungimakes sense, thanks for clarifying!16:15
fungiand yeah, support for nova get-me-a-network would probably require a default network/port/router autoprovisioned with each new project when it's created, i don't think there's any magic glue for those steps (but it's possible i missed that it was added at some point)16:16
clarkbfungi: should I push up a change to update cloud launcher or do you intend on doing that?16:41
fungii can pull it together after my next meeting, i expect it'll just be copying entries from the old raxflex-sjc3 config16:42
fungimost of the definitions already exist, so just need to add the references16:42
clarkbyes shouldn't need to define any new resources just apply them to the new cloud regions16:43
clarkbzuul is very busy this morning17:03
fricklerjamesdenton: fungi: it would be nice to make "openstack network auto allocated topology create" work. afaict for that you'd a) need to set the default flag on PUBLICNET and b) create a public/shared subnet pool, see also https://docs.openstack.org/neutron/latest/admin/config-auto-allocation.html 17:30
jamesdentonfrickler 100% agree - it's in our backlog17:30
fricklerif that works, you should be able to simply do "server create --net=auto"17:30
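(A rough sketch of the setup frickler is describing, following the neutron auto-allocation doc linked above; the pool name and prefixes here are illustrative, not Flex's actual values:)

    # admin side: mark the external network as the default and provide a shared default subnet pool
    openstack network set --default PUBLICNET
    openstack subnet pool create --share --default \
        --pool-prefix 10.128.0.0/9 --default-prefix-length 26 shared-default-pool
    # tenant side: have neutron auto-build a network/subnet/router ...
    openstack network auto allocated topology create
    # ... or let nova drive it at server boot
    openstack server create --image IMAGE --flavor FLAVOR --nic auto demo-server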
fricklerjamesdenton: before IPv6 or after it? ;-D17:31
* jamesdenton runs away17:31
jamesdentonbefore17:31
jamesdentonfrickler in the IPv6 implementations you've worked with, do they tend to be direct provider network attach or is it tenant-driven w/ subnet pools and dynamic routing?17:51
clarkbjamesdenton: currently all of the clouds opendev uses use direct attachment to public networks for ipv4 and ipv6. Then its a mixture if some are set up to also allow tenant driven networks behind that public net17:52
clarkbre zuul being busy one thing I notice is the ready node count seems somewhat high. I wonder if that means that zuul isn't able to assign nodes as quickly as nodepool is building them17:53
clarkbthere are short drops in executor counts but in general all executors are available so I don't think we're hitting the executor resource limit checks17:53
jamesdentonthanks clarkb. I assume those are setup as dual-stack, too?17:53
clarkbjamesdenton: yes. We have in the past had some providers that were primarily ipv6 with only NAT'd ipv4 outbound. But we don't currently have any of those17:54
clarkbI think the current set are rax classic, vexxhost, and ovh. rax classic and ovh are what I'd call provider networking. You just get an ip on a network and don't get to customize that. Vexxhost is the one that does the other thing iirc17:55
clarkbwhere you can have easy mode or opt into doing the tenant driven network setup for each stack iirc17:55
opendevreviewJeremy Stanley proposed opendev/system-config master: Add networks and routers in new flex tenants  https://review.opendev.org/c/opendev/system-config/+/942936 17:56
fungiclarkb: ^17:56
jamesdentoncool, thank you very much17:56
fungiand then ovh does something weird with little ipv6 /124 networks, i think?17:57
clarkbI think ovh does /128 and /32 addrs17:58
fungiactually no, looking at a server there, it's provider network with a /5617:58
clarkboh its only ipv4 that does the small subnet then17:58
fungii guess it eventually changed17:58
clarkbpretty sure ipv4 is a /3217:58
clarkband linux knows to talk to your default route even if it isn't on the same subnet17:58
fungiaha, yep ipv4 has a /32 mask17:58
jamesdentonthat's an interesting setup17:59
fungiand yeah, the default v4 route in ovh uses a dev route17:59
fungiinet 158.69.69.81/32 metric 100 scope global dynamic ens3 17:59
fungidefault via 158.69.64.1 dev ens3 proto dhcp src 158.69.69.81 metric 100 17:59
clarkbI think it is a way of helping the instance understand it is in a pvlan-like setup?18:00
fungi158.69.64.1 dev ens3 proto dhcp scope link src 158.69.69.81 metric 100 18:00
fungithat's what adds the connection for the gateway18:00
clarkbthat way you don't try to ARP neighbors that can never respond. But yes it is different18:00
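(For reference, a minimal sketch of what that OVH addressing amounts to if reproduced by hand with iproute2, using the addresses fungi pasted above; in practice DHCP sets this up:)

    ip addr add 158.69.69.81/32 dev ens3
    ip route add 158.69.64.1 dev ens3 scope link src 158.69.69.81   # on-link host route to the gateway
    ip route add default via 158.69.64.1 dev ens3 src 158.69.69.81  # default route via that gateway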
jamesdentonwe had discussed using subnet pools for IPv6 but i'm not super crazy about needing to depend on BGP agent for that to work, unless there's some other method18:00
clarkbfungi: I +2'd your change but didn't approve it. Zuul is sufficiently busy that there is no rush to approve it I don't think. But feel free to do so if no one else manages to review it by the time zuul +1's it18:01
fungiwell, i'm going to pop out for a quick lunch now that my meetings are over, but can look at it when i get back18:01
clarkblooks like a good chunk of the ready/available nodes are in ovh bhs1. I wonder if something about that particular region makes it slower to spin up jobs?18:02
clarkbhttps://grafana.opendev.org/d/2b4dba9e25/nodepool3a-ovh?orgId=1&from=now-6h&to=now&timezone=utc&var-region=$__all the difference between bhs1 and gra1 is unexpected at least18:02
fricklerjamesdenton: I've never built something with a provider network, always tenant networks and n-d-r as documented in https://docs.openstack.org/neutron-dynamic-routing/latest/install/usecase-ipv6.html 18:13
fricklerbut then I was a network engineer in an earlier life, so using BGP is kind of natural for me ;)18:16
clarkbboth zuul schedulers have low system loads so I don't think the schedulers are failing to farm out the work to the nodes18:30
clarkbI half wonder if we could be having zookeeper contention. Load on the three zk cluster nodes is fine too. But zk is io limited iirc18:30
clarkbthat said if it was zk related I wouldn't expect bhs1 and gra1 to have such different graphs18:31
clarkbzk is central to everything and slowness there would in theory impact everything somewhat consistently18:31
clarkblooking at bhs1 node listing there are definitely unlocked ready nodes with several hour long timestamps. I'm picking one of those and will try and trace it back to zuul and see if there is any indicator for why18:35
clarkblooks like node 0040009172 belongs to node request 300-0026459610 for openstack/tacker 942762,1 tacker-ft-v2-st-userdata-ccvp which is a queued job from about 6 hours ago. I'm beginning to wonder if the problem is lots of multinode requests not being able to be fulfilled due to general system contention18:38
clarkbso not slowness of the processing but bucket filling problems with available cloud resources18:38
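(The tracing clarkb describes is roughly a couple of nodepool CLI calls on the launcher; a sketch, assuming the --detail output includes the lock state and "allocated to" columns, with the IDs above used purely as examples:)

    # ready nodes and which request each one is still allocated to
    nodepool list --detail | grep ready
    # the outstanding request itself
    nodepool request-list | grep 300-0026459610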
clarkbhrm that node request eventually got declined by nl04's bhs1 provider because some node(s) failed18:40
clarkbso these ready nodes are going back into the available bucket and we're not using them in subsequent requests for whatever reason.18:40
clarkbmeanwhile that job's node request is being processed elsewhere which I will try to track down next18:40
clarkbrax-ord declined it due to failed node boots an hour and a half ago18:42
clarkbI don't see anyone else picking it up yet (likely due to being at capacity in other regions?)18:42
corvusso primary anomaly at this point is why wasn't 0040009172 reassigned?18:43
clarkbtheory time: as mentioned before tacker is quite demanding in its resource usage. Just a few (even two maybe?) tacker changes are enough to use all of our available quota. If enough of those requests fail in say bhs1 while we are already at quota that region won't be reused by those requests and we end up with large available node counts. Meanwhile every other region is at capacity18:44
clarkbdue to additional demand and not able to service those requests18:44
clarkbI think the thing I'm most confused about is why bhs1 isn't using those nodes now to fulfill other requests to get other changes moving along. But maybe that is due to node request priority (They are handled in fifo order? will that prevent cheaper newer requests from being handled in providers that have already denied the older expensive requests?)18:44
clarkbcorvus: yup I think that is the main question18:44
clarkbI suspect that things will eventually return to normal particularly as the day goes on and overall demand eases. (Some of this must be related to openstack release activity?)18:46
clarkbbut this illustrates a reason why projects like tacker should not do what they are doing18:46
clarkbcorvus: I see this in the nl04 launcher debug log: nodepool.PoolWorker.ovh-bhs1-main: Handler is paused, deferring request18:48
clarkbcorvus: is it possible that the provider poolworker sees we are at quota/capacity and pauses without first checking if it can fulfill the request out of ready unlocked nodes?18:49
clarkbI haven't looked at that code in a while but I want to say we checked ready unlocked nodes before we look to the cloud and its quota so I don't think that is the case but it would explain what we are seeing18:49
corvusthis is a bug that is exposed by this unusual situation18:52
corvuswhen we unlock a multinode request when it fails, we don't deallocate the node from the request (this is the main bug).  that means that node sits there, still allocated but unlocked.  the cleanup method will eventually deallocate it once the request goes away.  but that will be a long time with these big requests retrying.18:54
corvusi think we can deallocate earlier, gimme a sec.18:54
clarkboh interesting. Is this behavior not the same for a single node request?18:57
corvusremote:   https://review.opendev.org/c/zuul/nodepool/+/942939 Clear allocation when unlocking failed node requests [NEW]        18:57
corvusclarkb: if a single node request encounters a failure launching a node, then the only node of interest is a failed one, not a ready one, so no one notices.18:58
corvusand an unlocked failed node can always be deleted even if it's allocated18:59
corvusso basically, the amount this bug is noticeable scales proportionally to the ratio of successful/failed node launches in a single request -- multiplied by the number of providers that can handle that request.18:59
corvusso the current situation is tailor made to bring it to our attention.19:00
clarkbgot it19:01
corvusi don't think there's any (safe) zk surgery we could do to speed this up; pretty much only just yanking node requests or rolling out that fix.19:01
clarkband when we have projects like tacker making many large multinode requests the problem is made worse and probably not noticeable in most other situations19:01
clarkbcorvus: if we clear the node allocation that would fix the underlying bug and allow the ready unlocked nodes to be reused?19:06
clarkbput another way do we expect 942939 to be the solution or merely a mitigation?19:07
corvusbtw, now is a really good time to look at the status page and appreciate the motivation for making the "zoomed" out view.19:07
corvusclarkb: yes and yes19:07
corvusor rather yes and "solution"19:07
clarkbre the status page the total change count does seem to have caused it to slow down a bit19:08
clarkbstill usable but not sure if that is worth looking into as well (do the firefox and/or chrome debugging tools have profilers?)19:08
corvusit is and they do19:08
clarkbok looking at the nodepool source the node.allocated_to field is a request id and we can't assign the node to a different request with that set19:09
clarkbso ya this should fix it up19:09
clarkb+2 on the change from me. Not something we can really force merge as we want the images to build and publish. We could manually update the images if this doesn't slowly resolve itself I guess19:10
corvusi went ahead and put it in gate and promoted it, so we may yet benefit from it today.  :)19:11
corvusi'll go run the test suite locally19:12
fungicatching back up19:16
fungiseems like everyone here was a network engineer at some point19:16
fungiand yeah, bgp prepends and pads still haunt my dreams19:17
clarkbcorvus: would dequeueing changes work as a safe workaround?19:17
corvusclarkb: yes19:18
corvusthat would remove the node request19:18
clarkbI think I'd be ok with that particularly for the check queue if things don't improve19:19
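(If it comes to that, the dequeue would look something like this with zuul-client, using the tacker change from earlier as the example; tenant admin credentials/auth token are assumed:)

    zuul-client --zuul-url https://zuul.opendev.org dequeue \
        --tenant openstack --pipeline check \
        --project openstack/tacker --change 942762,1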
fungi+2 from me on 942939 as well, thanks for the quick rundown and solution!19:20
clarkbsome changes in check are waiting on only a job or two. I'm half tempted to post to those changes a link to the buildset results for what did run then dequeue to try and improve things. However if they only have a job or two then the impact per change is likely small and I would have to do that for a large number of changes19:28
clarkbany thoughts on whether we think that is a worthwhile mitigation to begin nowish?19:29
fungifor tacker specifically, or everything?19:32
fungialso yikes re: the status page, as corvus mentioned. i guess this is the end of openstack feature freeze week in action19:33
fungiquick someone take a screenshot for dansmith ;)19:33
fungi165 items in the check pipeline and 57 in gate is definitely something i appreciate being summarized19:34
dansmithit's getting to look like a minecraft kinda game with all the colored tiles19:34
dansmith(in collapsed view)19:34
fungior minesweeper for us old folks19:34
clarkbfungi: basically everything stuck for hours19:35
* fungi is afraid to click on any tiles, there might be a bomb19:35
clarkbopenstacksdk has a few too19:35
clarkbkolla-ansible19:35
clarkbetc19:35
fungiyeah, i mean this is almost like "old times" if you multiply it by a factor of 10, but 8-hour delays on stuff in check is pretty extreme these days19:36
fungialmost as long in the integrated pipeline in the gate too19:37
clarkbya the problem is they have jobs with many nodes in them that aren't getting their requests fulfilled and those requests are effectively locking out the quota we have19:37
clarkbwhich causes the problem to snowball. If we remove/dequeue those changes then the locked resources can be freed for use again19:37
funginice that resource prioritization has ensured that things in the gate are actually returning results faster than in check, on average19:37
fungigranted not by much at the moment19:38
clarkbof course if I start dequeueing and people immediately recheck we're likely to devolve back to this situation. But if we can get nodepool images updated in the interim that may be fine19:38
clarkbbut also in theory this will slowly recover as the total number of changes decreases19:39
clarkbbut to be determined if that will happen19:39
fungii haven't actually heard any complaints, am inclined to say we can just let things take their course and when we get restarted on corvus's bugfix they'll improve19:39
clarkbwfm. We should monitor that things don't trend in the worse direction though. Current queue size for openstack is check: 164 and gate: 59 19:41
fungithe system isn't particularly misbehaving, so much as certain projects are monopolizing our available quota. though yes we have a bug that's causing some of our quota to get unnecessarily blocked19:41
clarkbthere are also check-arm64 things queued up but they are separate resources so I don't think they matter for this particular issue19:41
fungiright, i was basically ignoring the arm builds, those are best-effort whenever osuosl has room to run them19:41
clarkbyes my main concern is that the rate of new changes is greater than our ability to process changes in which case we will never catch up again. but we can monitor that and make decisions later if that appears to be the case19:41
fungiwe're in the middle of the pst workday, near the end of the busiest week in the openstack development cycle, so all things considered this is not terrible19:42
fungithis is probably our peak utilization until ~6 months from now19:43
fungii suspect that within 4 hours the inbound rate will slow and then we'll start catching up19:43
fungiper https://grafana.opendev.org/d/21a6e53ea4/zuul-status we're clocking 350 concurrent jobs with 450 nodes in use and a 1.5k node backlog, so not terrible19:46
clarkbyup that is possible. It is also possible that we deadlock with all nodes in a ready unlocked state but not useable by any jobs. I think that is unlikely at this point given our ability to continue processing things. I just want us to keep an eye on it19:46
clarkbthe available node count has fallen slightly from its earlier peak too which is a good sign19:47
fungiwe reached nearly 600jph at the start of the pst workday and are now down to about half that19:47
fungii don't see that deadlock likely, no, the 19:47
clarkb147 appears to be the recent peak and we are down to 130 19:47
fungi"available" nodes on the test nodes graph are pretty steady19:48
fungidon't seem to be climbing19:48
clarkbif you look at the 12 hour graph its been steadily climbing over that period but does appear to be plateauing now19:48
fungigood point, i was on the 6-hour view, but yes i agree19:49
fungianyway, it looks to me like we could be using our available quota more efficiently, but nothing's really "broken" just overused in very inefficient ways by some projects19:50
clarkbya the problem is mostly that this is a feedback loop on itself19:52
clarkbdue to the bug19:52
clarkbas long as that loop starts trending in the right direction we're fine19:52
fungii mostly focus on the node requests backlog graph, when that begins to trend downward we're recovering. it doesn't particularly seem to be increasing at this point19:53
fungibut the prior increases were choppy over the past few hours, so it's hard to say just yet19:54
fungiother than that and the nodes stuck in available status that aren't freed up until the related multinode builds complete, things are looking pretty healthy. executors are all accepting, balance of running builds is consistent across all executors, none of the governors look at risk of kicking in...19:56
clarkbya that is a side effect of the bug too. We aren't running as many jobs as we are capable of19:57
clarkbso overall load is relatively low19:57
clarkbwhich is good. Better to have one problem and not two19:57
fungiznode count and zk data size are trending steadily upward, but i suspect those will tip back over when we begin to burn down outstanding node requests19:57
clarkbyup those are directly related to the node requests  I think19:57
clarkbfungi: if you get a second can you look at the openstack helm repo merge proposal? I suspect the listed plan won't work (particularly the git review step as that will try to push hundreds/thousands of commits?)19:59
clarkbthat email just went to openstack-discuss19:59
fungithanks, looking19:59
clarkbessentially gerrit wants merge changes to consist of changes that were already in review (not necessarily merged themselves I don't think) so git reviewing would create those changes or maybe just fail if those changes aren't there already?20:00
clarkbbut I'm not positive of that20:00
fungiyeah, i expect we're better off with the traditional way we've done it, push the new repo state in directly rather than as reviews20:01
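(i.e. roughly the traditional direct-push flow rather than git review; this assumes the pusher has bypass-review ACLs on the target repo, and the remote/repo names are illustrative:)

    git remote add gerrit ssh://USER@review.opendev.org:29418/openstack/openstack-helm.git
    git push gerrit HEAD:refs/heads/master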
clarkbI need to eat lunch now but can follow up if you don't want to20:07
clarkbit will just be after lunch that I get to it20:07
fungialready writing a reply20:07
clarkbthanks20:07
corvusthe nodepool tests blowed up; probably needs the grpcio pin from zuul; if anyone wants to port that over that would help; otherwise i will do it when i finish this sandwich20:12
fungiworking on it20:18
Clark[m]In positive news the upgraded grafana seems to be working20:19
fungivery well, i might add20:20
fungiknock on wood, the node requests backlog does seem to be falling, it's back down to where it was a few hours ago now20:25
fungigate pipeline size is shrinking, albeit slowly20:26
fungitoday i discovered ietf rfc 3676, and turned it on in my mua/composer20:35
fungionly 21 years late20:35
fungihttps://www.rfc-editor.org/rfc/rfc3676 20:36
fungiold dog learns new trick20:36
corvusapproved and promoted20:44
corvusincidentally, in a bit of irony, i keep promoting these changes over the nodepool-in-zuul changes that implement quota handling and intelligent provider selection.  (intelligent may be overstating it, but it's an improvement)20:56
clarkbLooking at queues more closely I think that gate resets are making things worse20:56
clarkbhttps://zuul.opendev.org/t/openstack/build/a9b6856edca946699ec5596bbb43b28a/log/job-output.txt#507 apparently doc jobs build pdfs in pre-run and those can timeout leading to retries as well20:56
clarkbthere were some openstacksdk installation issues in some jobs too20:57
clarkbthat stuff is all more typical for feature freeze / release activity so not necessarily unexpected20:57
clarkbbut I Think it does play into the slow backlog processing20:57
corvusdoc jobs do what now?20:58
clarkbcorvus: in that log I pasted a link to above it seems they build the docs in pre-run not in run so when that timed out building pdfs the job went into retry status20:58
clarkbthe 3rd retry is queued up now20:59
corvusi think that's against the rules20:59
clarkbhttps://zuul.opendev.org/t/openstack/build/360525c8ec954a93a35aee77abfcab97 this is the openstacksdk issue. This one didn't cause restarts but did cause a gate reset20:59
clarkbI'll run down how/when that changed21:01
clarkbthat == docs run in pre-run21:01
clarkboh it isn't doing the actual build it is just trying to install the pdf build deps but since this involves latex and similar the package list must be huge?21:02
corvushrm it says its just prereqs21:03
corvusyeah21:03
corvusi guess it's no_logs though?21:03
clarkbso consistently slow nodes are timing out trying to install a big package list21:03
clarkbcorvus: no it doesn't appear to be but it is a package task not a shell/command task so we don't get streaming logs unless teh task completes21:03
corvusoh yeah i see that21:03
corvushttps://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/roles/prepare-build-pdf-docs/tasks/main.yaml 21:04
clarkbya thats the one21:04
corvusannoying :( but doesn't look like anything wrong in the job.  maybe worth a timeout bump21:04
clarkbagreed. I also wonder if there are leaner toolchains for pdf generation. But I have no idea21:04
corvusi mean, that's not that big of a package list21:04
corvusi know (believe me i know) that will pull in a lot of deps, but they're all tiny21:05
clarkbthe latex stuff explodes in size21:05
clarkbhrm I don't think I have latex installed anymore but I seem to remember it being several gigs of stuff when I did. But probably worth inspecting directly before assuming21:05
corvusoh inkscape is in there :)21:06
clarkbthis ran on jammy. Let me start a container and apt-get install that list and see what happens21:06
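(Something along these lines, presumably; ubuntu:22.04 matches jammy, and the package names are an illustrative subset of the role's list:)

    # prints the "Need to get ... / After this operation ..." summary, then answers no instead of installing
    docker run --rm ubuntu:22.04 bash -c \
        'apt-get update -qq && apt-get install --assume-no texlive-latex-extra texlive-fonts-recommended inkscape'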
corvus0 upgraded, 415 newly installed, 0 to remove and 33 not upgraded.21:06
corvusNeed to get 408 MB of archives.21:06
corvusAfter this operation, 1468 MB of additional disk space will be used.21:06
corvusclarkb: ^that :)21:07
clarkbok so 1.3 ish gigs not terrible but not small either21:07
clarkbslow node networking maybe?21:08
clarkbthere are successful recent runs of that particular job against that project so probably is related to where it ran21:09
corvusyeah, or maybe we're actually pushing our mirrors?21:09
clarkbsystem load on the dfw mirror is currently reasonable. But it could be related to openafs caches too21:10
clarkbdfw is closest to the afs fileservers though so should be least impacted by the rtt openafs problems21:11
clarkbthe available node count fell down to 126ish21:13
clarkbit repeaked at 149 recently before doing so. A good sign that in general the growth isn't happening I think21:13
clarkbspeaking of minesweeper https://danielben.itch.io/dragonsweeper is a fun variant22:20
clarkbit runs right in your browser too22:20
clarkbhttps://zuul.opendev.org/t/zuul/build/f985aba9494c4cbcb7367096d4d64d22 the nodepool fix failed on this error. I've not seen it before has anyone else?22:28
clarkb1.32/stable is the latest stable series and it looks like we tried to install 1.31/stable. I do wonder if this is another case where we just need to use the new thing?22:28
clarkbhttps://forum.snapcraft.io/t/launchpad-builds-failing-with-cannot-install-snap-base-core20/45833 this seems to explain the issue22:29
clarkbapparently you can `sudo snap install core snapd` to update snap inplace?22:30
clarkboh interesting the job runs on debian bookworm. I was about to say that it is weird for ubuntu to have this kind of incompatibility. Maybe the job should run on Ubuntu?22:31
clarkbI'm going to try that since it is an easy fix if it helps22:32
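(For reference, the in-place snap update mentioned above would look roughly like this, assuming the snap being installed by the job is microk8s:)

    sudo snap install core snapd          # per the forum thread above, pulls in a current core/base
    sudo snap refresh
    sudo snap install microk8s --classic --channel=1.32/stable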
clarkbI'll squash it into the existing change since I think both things will be needed22:33
fungisgtm, thx22:38
clarkbcore20 apparently maps to ubuntu focal, which is what had me confused; that is old enough that it should be present on a modern ubuntu22:38
clarkbbut bookworm's snapd must not be as up to date22:39
clarkbI want to see that job pass before we approve and promote it again, just because there is enough stuff in the zuul queue that doing so is disruptive if this job is still a problem22:44
corvusit's great to see snapd has solved all packaging version problems22:49
corvusclarkb: i approved the change but did not promote; that should save time without adding cost22:49
clarkbgood point22:50
clarkbya so I guess the "core" is a base image that other things build on. And instead of just pulling in a newer core if they need it you break if you don't have it22:51
clarkban odd choice if the idea is to effectively containerize everything but I don't use snap or flatpak enough to know why22:51
corvusi recently tried installing plucky on a machine, and the livecd (or usb or whatever you call it now i guess) worked fine, but the installer is a snap and used an older core version which crashed on the hardware i was using.  so... cool.  i installed noble.22:55
clarkbhttps://zuul.opendev.org/t/zuul/build/096fa9c630ca4bc29f6f258936c21b3a the first attempt failed downloading files from the bhs1 mirror. Clicking on links that failed I am able to download those files now23:02
clarkbI wonder if we are missing an apt-get update somewhere23:03
clarkbok the k8s job succeeded. Do we want to promote now or just wait? Things do seem to be moving more quickly at this point but periodics enqueue in about 3 hours23:16
corvusi think no promote for now, there are some changes that might merge at the top of the queue, but if we see an opportunity later, sure23:19
corvusnm, head just failed; i promoted.  win-win.23:21
tonybIs there such a thing as an "OpenDev" slide template?  I don't think I can use presentty (sp?). If no template, is there a std. set of images somewhere?23:25
corvusi think it's been a while since anyone updated it, but there are a lot of presentations in https://opendev.org/opendev/publications 23:31
corvusalso i guess we should merge some changes https://review.opendev.org/q/status:open+-is:wip+project:opendev/publications 23:32
corvuskeep in mind the real action there is on tags and branches, not the master branch23:32
clarkbbut also I'm not sure we had a consistent template?23:38
corvustrue.  a lot of plain white backgrounds if i recall.  :)  but i dunno, maybe there's some stuff to mine there.  :)23:46
clarkbor black backgrounds with presentty :)23:46
corvus++23:46
kevko\o23:53
kevkoFolks, can I ask here what is going on with opendev CI / nodes? Or is it just overloaded because of too many reviews testing at once? 23:54
kevkoor ? 23:54
kevko134 reviews stacked in check pipeline for example ... plenty of them are red ... so I wanted to know if it is overloaded ... or if there is something happening in infra? 23:56
