Wednesday, 2025-03-12

opendevreviewIan Wienand proposed opendev/system-config master: install-root-key : run on localhost  https://review.opendev.org/c/opendev/system-config/+/94408401:38
ianwtonyb: whatever happened with the tzdata thing -> https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_dee/944084/2/check/system-config-run-letsencrypt/deea11d/bridge99.opendev.org/ansible/install-root-key.2025-03-12T01%3A51%3A16.log02:28
tonybThere is one patch that is ready to merge, but getting the rest merged means larger work which I'm hoping to get to this week02:57
*** mrunge_ is now known as mrunge06:39
amorinhey team, corvus, we won't be able to perform the flavor upgrade on March 17, the team is partially off.08:50
amorinwe will most likely be able to perform that in April. I'll keep you posted with a date proposal08:50
opendevreviewKarolina Kula proposed opendev/glean master: WIP: Add support for CentOS 10 keyfiles  https://review.opendev.org/c/opendev/glean/+/94167209:35
karolinkuHello, work on supporting keyfiles is progressing (passing https://zuul.opendev.org/t/openstack/build/35604b7b0b704cd8b0e3a43f1395b4b0) but I'm still missing two functionalities - ipv6 and bonding. I found some sample config which includes bonding, but now I'm looking for a config which would include IPv6 modification (to have something I can base it on). Can you help me find something?11:12
slaweqhi frickler and fungi - qq about zuul jobs, do we have any job that can be used in e.g. "post" queue to trigger quay.io's webhook to build container image there?12:44
tonybslaweq: I don't think we do.  I think we have jobs that build in our CI and then publish to Quay.io12:52
tonybkarolinku: Check out https://opendev.org/openstack/ironic/src/branch/master/ironic/tests/json_samples/network_data.json#L22-L33 and https://opendev.org/openstack/ironic/src/branch/master/ironic/tests/json_samples/network_data.json#L64-L8212:53
slaweqtonyb thx for info, I will check those then13:01
fricklerslaweq: not sure what is actually needed for quay.io, but we do have something to trigger webhooks for readthedocs builds, maybe you could adapt that13:02
slaweqyes, I know that one13:02
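(Illustrative sketch, not an existing opendev job: the core of a webhook-trigger post job would just POST to the build-trigger URL that quay.io's repository settings provide; the variable name here is a hypothetical placeholder and the URL would belong in a Zuul secret.)

    # Hypothetical post-job step: hit a quay.io build-trigger webhook so quay
    # rebuilds the image after a merge. QUAY_BUILD_TRIGGER_URL is a placeholder.
    curl -fsS -X POST "$QUAY_BUILD_TRIGGER_URL"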
karolinku[m]@tonyb this is bonding and I already found it. ipv6 is something I couldn't find13:07
fungithanks for the update amorin!13:07
tonybIsn't the 2nd link IPv6?13:07
karolinku[m]oh, sorry, you are right, I got confused by the "private-ipv4" name13:22
karolinku[m]thank you!13:22
*** elodilles is now known as elodilles_pto13:22
tonybkarolinku[m]: Yeah.  I guess bonus points if you fix that in ironic ;P13:48
* tonyb wonders if it's worth borrowing that network_data.json file and adding it to glean for more test coverage13:49
tonybbut that's a not-right-now suggestion13:49
karolinku[m]I mostly based it on this config: https://github.com/canonical/cloud-init/issues/536613:50
tonybAh true13:51
Clark[m]tonyb: karolinku: glean has ipv6 examples: https://opendev.org/opendev/glean/src/branch/master/glean/tests/fixtures/rax-iad/mnt/config/openstack/latest/network_data.json13:54
tonybOkay.  That figures13:55
tonybCan we +A https://review.opendev.org/c/opendev/system-config/+/923684  ... the rest of the series needs work but that one should be safe14:08
fungidone14:28
fungii also approved the ansible log redirect change14:28
fungihopefully in a bit we can approve 944081 now that we see gitea09 is still fine today14:29
opendevreviewMerged opendev/system-config master: run-production-playbook: redirect via ansible logger  https://review.opendev.org/c/opendev/system-config/+/94399914:49
opendevreviewMerged opendev/system-config master: Also include tzdata when installing ARA  https://review.opendev.org/c/opendev/system-config/+/92368414:49
fungii'll check logs from 943999 when it deploys14:50
fungilooks like it only ran infra-prod-bootstrap-bridge14:57
opendevreviewAmy Marrich proposed opendev/irc-meetings master: Adjust meeting time an hour earlier  https://review.opendev.org/c/opendev/irc-meetings/+/94412515:02
fungi923684 similarly only deployed infra-prod-bootstrap-bridge15:02
fungibut the hourlies are running now so those will be a good indication15:03
clarkbya I think that is expected and agreed hourlies should be good checks15:04
clarkblast night's periodic build for infra-prod-service-gitea passed despite the port 3000 block on gitea09, and the system-config-run job for the gitea change blocking port 3000 passed too. Any objection to me approving the change to block port 3000 on all the giteas now?15:17
clarkbI guess we can wait for fungi to be happy with the logging situation15:17
clarkbfungi: let me know when you're satisfied and I'll approve the gitea port block15:17
fungihttps://zuul.opendev.org/t/openstack/buildset/f130cb4fdf4c4a0d820f658ba8f67308 is the latest hourly buildset that just reported at 15:12:26 utc15:20
clarkbhttps://zuul.opendev.org/t/openstack/build/aefb7604fb4846eab30071f982c85447/console#2/1/3/bridge01.opendev.org this shows the new command ran and we don't get stderr to the stdout field in ansible anymore15:21
clarkbnow we need to check the log on bridge but I need to load my ssh keys so will be a minute before I can do that15:21
fungihttps://zuul.opendev.org/t/openstack/build/8216d8ada27f483c998246035e1e737e/console#2/1/3/bridge01.opendev.org for example includes the 2>&1 not15:21
fungis/not/now/15:21
fungistderr is also empty, as expected15:22
fungi/var/log/ansible/service-eavesdrop.yaml.log on bridge looks fine to me15:22
clarkbas does service-zuul.yaml.log. I do note the expected stderr output isn't in that file, and I think that is because the stderr output from manage-projects was specific to things running against review, as there is some host that isn't in inventory but we use a group for it or something15:23
clarkbso I think that is still fine15:24
fungicontents of /var/log/ansible/ansible.log for that run is very minimal now too, as intended15:24
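(Illustrative sketch of the redirect pattern being verified above, with a hypothetical playbook name; the per-playbook log path follows the /var/log/ansible/ naming seen in these checks.)

    # Send both stdout and stderr of a playbook run to its own log file rather
    # than letting stderr leak into the Zuul console output; the playbook path
    # and log name are placeholders.
    ansible-playbook -v playbooks/service-example.yaml \
        >> /var/log/ansible/service-example.yaml.log 2>&1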
fungiclarkb: please approve 944081 when ready, i was just waiting for you to be around and settled in for the day15:28
fungior i can approve it15:28
clarkbdone15:28
opendevreviewMerged opendev/irc-meetings master: Adjust meeting time an hour earlier  https://review.opendev.org/c/opendev/irc-meetings/+/94412515:29
opendevreviewClark Boylan proposed opendev/system-config master: Update infra-prod limit semaphore to a max of 4  https://review.opendev.org/c/opendev/system-config/+/94412615:32
clarkbI don't think ^ is urgent but I didn't want to forget15:33
clarkbfungi: have a moment for https://review.opendev.org/c/opendev/system-config/+/944063 as well? That is a good one just to avoid future problems making updates to those images15:52
clarkbI suspect we can approve https://review.opendev.org/c/opendev/system-config/+/943992 now as well since infra-prod stuff seems generally happy?15:52
clarkbthanks! once that set of changes flushes out I'm going to look at booting a noble server or two16:20
opendevreviewMerged opendev/system-config master: Drop public port 3000 access for Gitea  https://review.opendev.org/c/opendev/system-config/+/94408117:02
opendevreviewMerged opendev/system-config master: Update infra-prod limit semaphore to a max of 4  https://review.opendev.org/c/opendev/system-config/+/94412617:02
clarkbthose ended up just behind the hourly jobs17:04
fungiand the semaphore change landed barely too late to increase parallelism in this hourly run17:05
clarkbThe iptables update change is deploying now. I've ssh'd into gitea09 and 10 to confirm iptables updates when ansible gets there17:12
opendevreviewMerged opendev/system-config master: Trigger related jobs when statsd images update  https://review.opendev.org/c/opendev/system-config/+/94406317:15
opendevreviewMerged opendev/system-config master: Have puppet depend on letsencrypt  https://review.opendev.org/c/opendev/system-config/+/94399217:15
fungithose should benefit from the parallelism17:16
clarkbinfra-prod-base seems to have updated the iptables rules files on disk but the loaded rules haven't updated yet. That may be an ansible handler though which happens late so I'm being patient17:18
clarkbah yup loaded rules appear updated on gitea09 and gitea10 now17:19
fungiconfirmed, no mention of 3000 in iptables or ip6tables -L now17:19
fungias expected, 944081 is running at 2x parallelism still17:20
clarkbside note: if you open :3000 in firefox then edit the URL to :3081, firefox doesn't seem to recheck connectivity until you hit refresh17:20
fungiinteresting17:20
clarkbbut anyway :3081 on 09 and 10 is available to me and https://opendev.org seems to work17:21
clarkbThe infra-prod-service-gitea job which is going to run soon is the last major check I think17:21
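(Illustrative sketch of the spot checks described above; the host names are the ones discussed, but these commands are just one way to do it.)

    # Confirm port 3000 no longer appears in the loaded rules on a backend:
    ssh gitea09.opendev.org 'sudo iptables -L -n | grep 3000; sudo ip6tables -L -n | grep 3000'
    # Confirm the alternate backend port and the load balanced site still respond:
    nc -zv gitea09.opendev.org 3081
    curl -sI https://opendev.org/ | head -n1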
fungii guess that and manage-projects aren't running because they depend on the letsencrypt job17:22
clarkbI think gitea depends on LE and manage-projects may depend on gitea?17:23
clarkbso ya we can still end up with some serialized job sets.17:23
fungicurious to see how the next several waiting in line do17:34
fungialso this manage-projects run is worth checking the logs for after it finishes since it's the only one we publish in the clear, i think?17:35
fungihttps://zuul.opendev.org/t/openstack/build/084541f37a1342e990b71b5e41c03ac5/log/manage-projects.yaml.log17:35
fungithere it is17:35
fungilgtm17:36
clarkbthe stderr is in there now rather than in the zuul logs too17:36
fungimmm, 944126 is only going to run the bootstrap job17:36
clarkbya it just needs to update the git repos basically17:36
fungii wonder, is there a reason to run that job by itself?17:37
clarkbyes for the git repo updates17:37
fungii mean, do the git repo updates have any benefit if no other jobs are using that new state?17:37
fungi944063 may show us >2x parallelism17:38
fungilooks like it's going to run both statsd jobs at the same time as the bootstrap17:38
fungioh, though they're just promote jobs17:38
clarkbfungi: we interact with the git repos on the host as do cron jobs like zuul upgrades and reboots. 99% of the time it's probably fine to let something else catch it up later but being accurate seems good to me17:38
fungiyeah, cronjobs are a good reason17:38
fungihere we go, gitea-lb, zuul-lb and zookeeper all at the same time now17:39
clarkbzoom zoom17:39
fungiso that's 3x17:39
fungisomebody needs to update the inventory, that'll be a great test17:40
clarkbyup I plan on booting some new mirrors in vexxhost as soon as this set of work is able to be paged out17:40
clarkbbut happy for someone else to find other edits too17:40
clarkbfungi: I would expect the hourly jobs to do 4x too. We should try and catch bridge system load while that happens17:43
fungiyeah, that should be coming up in about 15 minutes17:43
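(Illustrative sketch, assuming shell access to bridge: one quick way to watch load while the parallel deploy jobs run.)

    # Refresh the load average and the count of concurrent ansible-playbook
    # processes every 5 seconds while the hourly buildset runs.
    watch -n 5 'uptime; ps -C ansible-playbook --no-headers | wc -l'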
clarkblooks like each vexxhost mirror is 2vcpu 8GB memory with a 50GB root disk. Then our typical 200GB cache volume is split into two mounts, half for apache2 and half for openafs17:47
fungisounds right17:47
tonybThat's my recollection 17:48
fungithe new rackspace flex mirrors are 4vcpu and 80gb rootfs, but otherwise similar17:49
clarkband looks like both are boot from volume. I might switch away from boot from volume if that is an option17:49
fungiworth a try17:49
clarkb(and if anyone is wondering I'm looking at these two mirrors because review is currently hosted in vexxhost so booting new noble there seems like the next step to get feedback for the review replacement)17:49
fungimaybe also upload a fresh vanilla noble cloud image too17:50
clarkbour launch node system does a full system update and reboot before handing the node over to us17:50
clarkbit should be fine to use the image that tonyb uploaded previously17:50
fungioh, if it's fairly recent then yeah that works. mainly just wanted to be sure it was an official ubuntu image and not a doctored one17:51
clarkbI think the images were uploaded late last year so not super recent but also not doctored17:52
clarkbbut I'll double check. Right now I'm still parsing flavor details17:53
clarkbin sjc1 we booted with flavor v2-standard-2 which has no root disk size hence the boot from volume. v3-standard-2 appears to have a 40gb root disk and otherwise matches v2-standard-2 (2vcpu 8gb memory)17:54
clarkbI'm going to try the v3-standard-2 flavor and not boot from volume17:54
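(Illustrative sketch; the cloud name in --os-cloud is a placeholder for whatever clouds.yaml entry is in use on bridge.)

    # Compare the candidate flavors before booting:
    openstack --os-cloud <vexxhost-cloud> --os-region-name sjc1 \
        flavor show v2-standard-2 -c vcpus -c ram -c disk
    openstack --os-cloud <vexxhost-cloud> --os-region-name sjc1 \
        flavor show v3-standard-2 -c vcpus -c ram -c disk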
clarkboh wait17:56
clarkbtonyb: it looks like you've already added new mirrors running noble in both vexxhost regions. Do we just need to update DNS then remove the old server?17:56
clarkbhrm the sjc1 server doesn't appear to have a volume mounted for caches17:57
clarkband the flavor is v3-standard-8 not v3-standard-2 so maybe bigger than we need.17:58
clarkbtonyb: fungi: thoughts on whether we want to retrofit those servers to add the volumes or create new smaller ones with new volumes?17:58
clarkbbasically we have some half baked mirror02 nodes booted on noble in both vexxhost regions. The base flavor is bigger than necessary and we'd need to attach 200GB volumes to each then migrate content from the existing root fs hosted paths onto the newly mounted paths18:00
clarkbor I can boot new mirror03 nodes and start fresh on a smaller flavor18:00
clarkbthen cleanup mirror02 and mirror01 in each region18:00
clarkbgiven the extra unneeded resource usage I'm leaning towards starting over18:01
fungiare they in the inventory yet?18:02
clarkbfungi: yes18:02
clarkbbut the primary dns records for the mirrors in those regions don't point to them yet18:03
clarkbhourly jobs have started so I'm going to pay attention to that for a minute18:03
clarkbthere are 4 jobs running right now18:04
clarkbload average is almost 218:04
fungii don't think we could move the mirror02 instances to new flavors without recreating them, could we? and we avoid replacing inventory entries with the same name because of ansible fact caches on bridge18:04
clarkbI believe both things to be true18:04
clarkbyou can increase the size of nodes but not shrink them and only in some clouds iirc18:05
clarkbload average up to 2.1418:05
fungioh, right, this would be a shrinking not a growing18:05
clarkband then 2.7518:06
fungiso i think either live with the mirror02 instances being on the wrong larger flavors, or create new mirror03 instances on our preferred flavors18:06
fungii was too late to catch the parallelism, there are only two remaining jobs running now18:07
clarkbyes and if we go with the mirror02 instances we will have to do extra surgery to sort out the volume situation to copy over old data or clear it out to avoid masking it. With new nodes we can ensure the volumes are ready in place before adding to inventory so we only ever write the content to the cache volume. Either way works18:07
clarkbfungi: it ran 4 jobs at once18:07
clarkband I think load peaked at 2.7518:07
fungiawesome18:07
clarkbwe can see if tonyb has thoughts since I think tonyb did the deployments of the 02 mirrors too18:09
clarkbwe just ran hourly jobs in like 7 minutes18:09
fungisnappy18:09
fungithat also means less of a delay when we end up with deploy jobs enqueued just after the top of the hour18:10
fungiso these speed improvements compound on one another18:10
clarkbyup18:10
clarkbI think this is pretty close to minimal too because the nodepool job runs for 6 minutes 15 seconds and the bootstrap job which has to run first took 1 minute 15 ish seconds before pausing18:12
clarkbthat's 7 minutes and 30 seconds ish and the entire buildset took 7 minutes 43 seconds18:12
clarkbany additional parallelism won't make an appreciable difference unless we run bigger buildsets like periodic or some deployments18:13
clarkbI've marked the infra prod parallel execution item on our TODO list as done18:15
clarkbwhile we wait for tonyb feedback on the mirror stuff https://review.opendev.org/c/opendev/system-config/+/943819 is another one that we can probably roll out. The statsd container updates lgtm as both zookeeper and haproxy graphs on grafana have recent content18:17
clarkband with lunch approaching maybe I'll wait until I've been fed; if I don't hear back by then, we can proceed with whichever mirror surgery sounds best to us?18:18
tonybUmmm I don't recall the state of those noble mirrors.   I strongly suspect I chose poor names for testing the images I uploaded.18:21
tonybI'd suggest just deleting them and starting fresh18:21
clarkbtonyb: the image names are fine "Ubuntu Noble OpenDev 20240724"18:22
clarkbI think what happened is you thought the -8 in v3-standard-8 meant 8GB memory but it actually means 8vcpu18:22
clarkband memory is 32GB18:22
tonybAlso those images are as close to vanilla as possible.  I scripted the download/convert/upload to all the clouds18:22
clarkbwhich is much much bigger than we need for the mirror nodes :)18:22
clarkbtonyb: ya I plan to reuse your images. They are stale but our launch system updates them and reboots before we use them for anything important so should be fine18:23
tonybYes, much bigger. That's also partially why I think we should drop them and start from scratch18:23
fungithe weather here has taken an unexpected turn for the pleasant, so in a bit i may take a longer late-lunch/early-dinner break to enjoy some food outdoors and get in a short walk18:23
clarkbit was more a question of whether it is better to fix the in-flight stuff or start over. And ya I think starting over to get the sizing right makes sense to me18:23
tonybIf we can use the same setup as other regions where we have an external volume that'd be cool too18:23
clarkbI'll launch new mirror03 nodes and we can cleanup mirror01 and mirror02 together when the time is right18:23
clarkbtonyb: yup that is the plan18:24
tonybPerfect thanks18:24
tonybSorry I left that dangling 18:24
clarkbit happens. I'm sure you can find things I've left behind in gerrit and elsewhere18:24
tonybHehe18:24
clarkbfungi: enjoy18:24
fungii have an entire list of half-done things i mean to get back to, but the list itself is only half-done and i've been meaning to get back to it18:25
clarkbI'll start booting the new node in sjc1 momentarily. I want to work through one before I worry about the other so that I can fix mistakes once not twice :)18:25
fungiokay, heading out, i'll probably be gone up to a couple of hours, but should be around again by 20:30-ish18:34
clarkbthe sjc1 server is up and I've created a cache volume for it. Next step is to attach that volume, format it, mount it, and then I can push up a change to add it to the inventory. Though maybe I should do one change for both regions to speed stuff up18:42
clarkbbut I need to pause here to get lunch going so will pick it back up in a bit18:42
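(Illustrative sketch of the attach/format/mount sequence described above; the volume name, device path and mount point are placeholders, and the real setup splits the volume between apache2 and openafs cache mounts as mentioned earlier.)

    # Create and attach a 200GB cache volume to the new mirror:
    openstack --os-cloud <vexxhost-cloud> --os-region-name sjc1 \
        volume create --size 200 mirror03-cache
    openstack --os-cloud <vexxhost-cloud> --os-region-name sjc1 \
        server add volume mirror03.sjc1.vexxhost.opendev.org mirror03-cache
    # Then on the server itself, format and mount it (device path may differ):
    sudo mkfs.ext4 /dev/vdb
    sudo mount /dev/vdb /mnt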
opendevreviewClark Boylan proposed opendev/zone-opendev.org master: Add new vexxhost mirrors  https://review.opendev.org/c/opendev/zone-opendev.org/+/94415020:00
opendevreviewClark Boylan proposed opendev/system-config master: Replace mirror02 with mirror03 in vexxhost regions  https://review.opendev.org/c/opendev/system-config/+/94415120:03
clarkbinfra-root ^ please review that carefully as there are a number of moving parts20:03
clarkbinfra-root looking at zuul status we may have an unhappy swift target20:09
fungilooking20:10
fungidecided to check back in before we take a walk20:10
clarkbbuild 367a55e1a95d4aa289223604bccfd3fc ran on ze05 I'm looking at its logs there20:11
clarkbit tried to upload to ovh_bhs20:12
fungii guess see if we get the same errors to gra1 as well20:12
fungisometimes they both go in tandem20:13
clarkbya let me grab a few more example builds and see if I can find them20:13
clarkbhttps://public-cloud.status-ovhcloud.com/ this says keystone may be down. Instead of needle-in-a-haystack checks let me see what grafana says20:13
clarkbhttps://grafana.opendev.org/d/2b4dba9e25/nodepool3a-ovh?orgId=1&from=now-6h&to=now&timezone=utc&var-region=$__all seems to have started just before 18:3020:14
clarkbI'll push up a change to disable both regions20:14
fungistanding by to review20:14
opendevreviewClark Boylan proposed opendev/base-jobs master: Disable ovh job log uploads  https://review.opendev.org/c/opendev/base-jobs/+/94415220:16
funginot urgent, but question on 944150 when you're free to revisit it20:16
fungiapproved 944152 but can also bypass zuul to merge if you like, there's a good chance it won't land on its own20:17
clarkbfungi: responded20:18
clarkbfungi: ya I think we probably should bypass zuul20:18
fungialready halfway there20:18
opendevreviewMerged opendev/base-jobs master: Disable ovh job log uploads  https://review.opendev.org/c/opendev/base-jobs/+/94415220:20
clarkbHow about a status notice: "One of our Zuul job log storage providers is experiencing errors. We have removed that storage target from base jobs. You should be able to safely recheck changes now."20:21
fungilgtm20:21
clarkb#status notice One of our Zuul job log storage providers is experiencing errors. We have removed that storage target from base jobs. You should be able to safely recheck changes now.20:21
opendevstatusclarkb: sending notice20:21
-opendevstatus- NOTICE: One of our Zuul job log storage providers is experiencing errors. We have removed that storage target from base jobs. You should be able to safely recheck changes now.20:22
clarkbhttps://zuul.opendev.org/t/opendev/build/8ca02d7df1ec495fbcfeadcf9eb1254a looks like we also have a regular job failure in that repo20:22
clarkbI don't think it was caused by my change20:22
clarkboh thats promote after you force merged complaining that there was no build to promote20:23
clarkbnevermind that is expected in this situation20:23
fungiright20:23
fungithese are not the droids you're looking for, move along20:23
clarkbroger roger20:24
opendevstatusclarkb: finished sending notice20:25
opendevreviewMerged opendev/zone-opendev.org master: Add new vexxhost mirrors  https://review.opendev.org/c/opendev/zone-opendev.org/+/94415020:27
fungiokay, things seem to be working again so heading out for a walk, should be back in no more than 45 minutes20:27
clarkbenjoy!20:30
clarkbthe inventory update change for the mirrors failed on an rsync ssh key exchange problem. I'll recheck it once it reports21:06
*** benj_1 is now known as benj_21:13
fungicool, back now anyway21:17
clarkband it is back in the gate again21:44
fungishould be merging any moment now, the last running gate job is just starting to upload its logs22:12
clarkbI'm still paying attention. This will be a good exercise of the increased semaphore limit due to the inventory update I think22:13
fungiyep22:13
opendevreviewMerged opendev/system-config master: Replace mirror02 with mirror03 in vexxhost regions  https://review.opendev.org/c/opendev/system-config/+/94415122:14
fungiand hourlies are already well and done thanks to how fast they're running22:14
clarkbservice-mirror is about 2/3 of the way through the list so maybe starting in half an hour?22:15
clarkbthough with the doubled limit I'm just guessing22:15
fungiwith bootstrap paused and just base running, load average on bridge is around 2.5 already22:19
clarkbya base hits every server so likely has higher load costs22:20
fungiup over 3 now22:20
clarkbI guess that is an important thing to keep in mind: the load on bridge is proportional to the number of servers we have ansible interacting with at once22:20
clarkbso it kinda works out that base is a dependency of everything else as it is going to be costly on its own22:20
fungibut since base runs by itself this is fairly okay22:20
fungiyeah, exactly22:21
fungihere comes the load22:24
clarkbcharge!22:24
fungithese 4 jobs aren't even half the system load that base imposed though22:24
fungiso yeah, we could probably increase the parallelism even further if we see this working out for a while, but changes that touch the inventory are probably the only ones it will make any significant additional performance improvement for22:25
fungiit's doing a good job of saturating the semaphore at least, as soon as one of the four completes another spins up to take its place22:26
clarkbya we get diminishing returns as the number goes higher anyway22:27
fungii guess everything else past here has to wait for letsencrypt to finish, sort of an event horizon22:28
clarkbyup22:28
fungiso we start the bootstrap, pause it and run base by itself, then about a third of the jobs at 4x parallelism, then once letsencrypt completes we do the remaining 3/3 at 4x again22:30
fungier, remaining 2/322:30
fungithis is also best case conditions at the moment as we're not really running a node request backlog and have plenty of available quota, so new builds are starting with very little latency22:31
clarkbwhich should still be a massive improvement22:31
fungiabsolutely22:32
fungiit'll be interesting to compare to historical deploy buildset times for other changes that touched the inventory22:33
clarkbI think ovh may be happier now looking at https://grafana.opendev.org/d/2b4dba9e25/nodepool3a-ovh?orgId=1&from=now-6h&to=now&timezone=utc&var-region=$__all and https://public-cloud.status-ovhcloud.com/incidents/9myc4g6tfvlb22:34
clarkblet me see if I can find my base-test test change and if that looks good I can push up a revert22:34
fungiand it's off to the races once more22:34
fungijust had to catch its breath is all22:34
clarkbhttps://review.opendev.org/c/zuul/zuul-jobs/+/680178 is the base-test test change22:35
fungik22:35
fungisoftware factory is still reporting syntax errors on zuul-jobs changes22:36
fungiload average on bridge during parallel jobs is mostly staying in the 0.5-1.5 range22:38
clarkbservice-mirror just started; my estimate was too high by 7 ish minutes22:38
fungikeeping in mind this is also a 6vcpu  server22:38
fungier, 8vcpu22:39
clarkbhttps://zuul.opendev.org/t/zuul/build/b8d6e273124f4230b2eba82ccbc6db58/logs appears to have uploaded to ovh, confirming things appear to work again22:41
opendevreviewClark Boylan proposed opendev/base-jobs master: Revert "Disable ovh job log uploads"  https://review.opendev.org/c/opendev/base-jobs/+/94417122:43
clarkbI don't think the mirror job has gotten to either mirror yet22:44
fungihttps://public-cloud.status-ovhcloud.com/incidents/9myc4g6tfvlb22:44
fungiseems to confirm22:44
clarkboh it's compiling openafs on the mirrors22:45
clarkbwhich is sloooooow22:45
fungieven slower on my workstation unfortunately22:45
opendevreviewMerged opendev/base-jobs master: Revert "Disable ovh job log uploads"  https://review.opendev.org/c/opendev/base-jobs/+/94417122:49
clarkbhttps://mirror03.sjc1.vexxhost.opendev.org/ https://mirror03.ca-ymq-1.vexxhost.opendev.org/ tada22:50
fungiand the full deploy buildset is nearly done22:50
clarkbgive me a sec and I'll get a change up to update DNS and then tomorrow or Friday we can clean up the old servers22:50
fungimight be finished in under 40 minutes from the time it enqueued22:51
funginever mind, zuul's estimate for the remaining job probably puts it a bit over22:51
clarkbso we went from 2 hours to 1 hour to 40 minutes roughly22:52
clarkboh but this one is an exceptionally long one due to compiling openafs22:52
clarkbwouldn't surprise me if periodic is closer to half an hour tonight22:53
opendevreviewClark Boylan proposed opendev/zone-opendev.org master: Switch vexxhost mirrors over to the new Noble mirrors  https://review.opendev.org/c/opendev/zone-opendev.org/+/94417322:54
fungianyway, yeah, comparing system load peaks on bridge to its cpu count, i don't think it's even close to breaking a sweat22:55
clarkb41 minutes 27 seconds22:57
clarkbsays the buildsets page22:57
clarkbpretty good22:57
funginot too shabby at all22:57
opendevreviewMerged opendev/zone-opendev.org master: Switch vexxhost mirrors over to the new Noble mirrors  https://review.opendev.org/c/opendev/zone-opendev.org/+/94417323:09
clarkbthat landed while hourly jobs were running and it's already deploying23:12
funginice23:12
clarkband dns resolves the new records for me23:14
fungisame here23:20
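(Illustrative sketch of the resolution check just mentioned; any resolver works.)

    dig +short mirror03.sjc1.vexxhost.opendev.org A
    dig +short mirror03.sjc1.vexxhost.opendev.org AAAA
    dig +short mirror03.ca-ymq-1.vexxhost.opendev.org A
    dig +short mirror03.ca-ymq-1.vexxhost.opendev.org AAAA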
