Thursday, 2022-03-31

00:05 *** dviroel|out is now known as dviroel
00:20 *** dviroel is now known as dviroel|out
01:17 *** rlandy|bbl is now known as rlandy|out
04:19 *** soniya29 is now known as soniya29|rover
05:05 *** ysandeep|out is now known as ysandeep
06:10 <opendevreview> daniel.pawlik proposed openstack/ci-log-processing master: Add regex for g-api and c-vol timestamp  https://review.opendev.org/c/openstack/ci-log-processing/+/835934
07:11 <opendevreview> daniel.pawlik proposed openstack/ci-log-processing master: Add Opensearch configuration information; change README file; enable md  https://review.opendev.org/c/openstack/ci-log-processing/+/832659
07:23 *** ysandeep is now known as ysandeep|lunch
07:34 <opendevreview> chzhang8 proposed openstack/project-config master: register and bring back tricircle under x namespaces  https://review.opendev.org/c/openstack/project-config/+/800442
07:34 *** jpena|off is now known as jpena
07:55 <opendevreview> chzhang8 proposed openstack/project-config master: register and bring back tricircle under x namespaces  https://review.opendev.org/c/openstack/project-config/+/800442
08:28 <opendevreview> chzhang8 proposed openstack/project-config master: create tricircle under x namespaces  https://review.opendev.org/c/openstack/project-config/+/800442
09:02 <opendevreview> chzhang8 proposed openstack/project-config master: create tricircle under x namespaces  https://review.opendev.org/c/openstack/project-config/+/800442
10:07 *** ysandeep|lunch is now known as ysandeep
10:08 *** soniya29|rover is now known as soniya29|rover|lunch
10:25 *** rlandy|out is now known as rlandy
10:27 *** soniya29|rover|lunch is now known as soniya29|rover
10:30 <opendevreview> Merged openstack/openstack-zuul-jobs master: Add stable/yoga to periodic-stable templates  https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/835230
11:09 *** bhagyashris_ is now known as bhagyashris
11:18 *** soniya29|rover is now known as soniya29|rover|afk
11:22 *** dviroel_ is now known as dviroel
11:49 *** ysandeep is now known as ysandeep|brb
12:05 *** ysandeep|brb is now known as ysandeep|afk
12:07 *** soniya29 is now known as soniya29|rover
12:22 <sean-k-mooney> have there been any infra issues with opendev.org git repos lately? i was seeing very slow clone times, as in single to double digit KiB/s
12:23 <sean-k-mooney> s/times/rates/
12:31 <frickler> sean-k-mooney: we did a gitea upgrade recently, so that might be related. is it reproducible? via v4 or v6? specific repos or anything?
12:33 <sean-k-mooney> it seemed to be all repos, i just exported GIT_BASE=https://github.com for now
12:34 <sean-k-mooney> it was ipv4 i believe, at least i got an ipv4 address when i ping opendev.org
12:34 <sean-k-mooney> i can try doing a verbose clone and see what it's using
12:35 <sean-k-mooney> i noticed one of my devstack builds spent 2600+ seconds in the git_timed section
12:37 <sean-k-mooney> opendev.org.            3017    IN      A       38.108.68.124
12:39 <sean-k-mooney> a verbose clone does not show much extra, but it seems to happen on all repos i clone. i was cloning placement, but os-vif is a nice small repo to test with
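A quick way to reproduce the kind of test being described here is to time a clone of the same small repo from both servers; a minimal sketch, assuming the standard repo URLs (the /tmp paths are arbitrary):

    # compare clone throughput for a small repo from opendev.org vs github
    time git clone --progress https://opendev.org/openstack/os-vif /tmp/os-vif-opendev
    time git clone --progress https://github.com/openstack/os-vif /tmp/os-vif-github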
13:04 <frickler> infra-root: ^^ I can confirm this, at least on some attempts getting rates ~ 100 KB/s, while others run much faster, maybe someone can double-check who is not 200ms RTT away
13:05 <frickler> fwiw according to SSL I'm landing on gitea03, but nothing obvious in cacti
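The "according to SSL" check presumably means reading the certificate served by whichever backend the load balancer picked; assuming the gitea backends present per-host certificates (or per-host names in the SAN), something like this shows which one you reached:

    # inspect the certificate of the backend currently serving opendev.org for this client
    openssl s_client -connect opendev.org:443 -servername opendev.org </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -ext subjectAltName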
13:07 <fungi> sean-k-mooney: frickler: i haven't looked, but my money's on everyone doing openstack yoga upgrades from source
13:07 <fungi> probably the servers are just overwhelmed
13:08 <sean-k-mooney> ya, could be
13:10 <sean-k-mooney> i have updated my ansible role to use github for now. implementing a repo and pip cache is on my todo list, but i wanted to keep it simple first
13:14 <fungi> we saw our git server farm get slammed regularly in the weeks immediately following the past few major openstack releases, so i was sort of bracing for it to happen again
13:14 *** ysandeep|afk is now known as ysandeep
13:15 <fungi> the day after the release is announced is a strong correlation, but i'll try to take a look at the resource consumption graphs in cacti to see if it pans out
13:15 <fungi> i'm away most of today though, so probably not much help past that
13:16 <sean-k-mooney> i assume you can't add cloudflare or one of the other cdn providers that has a free open source hosting policy in front of it
13:16 <sean-k-mooney> fungi: don't worry about it
13:16 <sean-k-mooney> as i said, i just used GIT_BASE to work around it by using github for now, and gerrit seems fine
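For reference, the GIT_BASE workaround mentioned here is just pointing devstack at the github mirrors before running stack.sh; a minimal sketch, assuming a stock devstack checkout (the default value is https://opendev.org):

    # use github's mirrors instead of opendev.org for this devstack run
    export GIT_BASE=https://github.com
    ./stack.sh

    # or persist it in devstack's local.conf:
    # [[local|localrc]]
    # GIT_BASE=https://github.com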
13:18 <fungi> sean-k-mooney: i'm not sure caching git requests would work all that well without some custom protocol support for it, but if that's a possibility then we could probably implement it ourselves with squid or similar
13:20 <sean-k-mooney> ya, i'm not sure either, i looked into it a few years ago for ci reasons but then we just used zuul and it took care of it
13:22 <fungi> right, the load problem has generally been that for clone operations and the like, the entire repository has to be read into memory. do that 4-5 times with the nova repo on the same backend, and it's oom'sville
13:31 <sean-k-mooney> right, so i wonder if there is a way to cache that when it's a full clone at least
13:32 <sean-k-mooney> there might not be
13:32 <sean-k-mooney> but ya, nova is non trivial
13:39 <frickler> really sad that we had to disable shallow cloning with gitea. testing from github, that seems to make a huge impact both in terms of time and data volume. like a factor of 5 for the nova repo, for example
13:43 <sean-k-mooney> is that just not supported by gitea or was there another reason it was disabled
13:44 <sean-k-mooney> although if you really hate yourself you should try cloning nova from gerrit... with all the review data in the repo
13:46 <sean-k-mooney> the gerrit version is over a gig on disk
13:52 <frickler> sean-k-mooney: gitea has a bug that makes older git clients fail. https://github.com/go-gitea/gitea/issues/19118
13:52 <frickler> doing the same clone from github works fine
13:53 <sean-k-mooney> ah, how old is old
13:53 <frickler> and yeah, cloning from gerrit would be even worse
13:53 <frickler> sean-k-mooney: git in ubuntu focal is broken, impish works
13:54 <sean-k-mooney> ack, ya, i thought you were going to say centos 8 or something
13:54 <sean-k-mooney> but 20.04 is not that old
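The shallow clone being compared above is the usual --depth form; a rough sketch of the comparison, assuming the github mirror as in the discussion (the exact savings will vary by repo and history):

    # full clone: transfers the entire history
    time git clone https://github.com/openstack/nova /tmp/nova-full

    # shallow clone: only the tip commit, roughly the factor-of-5 saving mentioned above
    time git clone --depth 1 https://github.com/openstack/nova /tmp/nova-shallow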
13:54 <fungi> okay, so i did not get time to check the cacti graphs before leaving, but memory consumption and swap utilization driving system cpu and load average up are the symptoms to look for
13:55 <fungi> when the server falls over, haproxy will temporarily remove it from the pool and then the problem load will shift to other servers, so you'll see sharp cliffs from that
14:00 *** dasm|off is now known as dasm
15:29 <clarkb> the gitea update was also just a bugfix update and those tend to be very small deltas
15:34 <clarkb> I just got 1.65MBps which is fairly typical for me and the git cluster
15:40 *** ysandeep is now known as ysandeep|out
15:54 *** dviroel_ is now known as dviroel
15:59 <opendevreview> Merged openstack/project-config master: Revert "[OVH/GRA1] Disable nodepool temporarily"  https://review.opendev.org/c/openstack/project-config/+/835422
16:09 *** rlandy is now known as rlandy|rover
16:14 *** dviroel is now known as dviroel|lunch
16:38 *** dasm is now known as dasm|ruck
16:39 *** jpena is now known as jpena|off
17:18 *** dviroel|lunch is now known as dviroel
19:27 <rezabojnordi> Hi guys
19:27 <rezabojnordi> I have a question about my architecture
20:24 *** dviroel is now known as dviroel|afk
20:43 <opendevreview> Merged openstack/ci-log-processing master: Add regex for g-api and c-vol timestamp  https://review.opendev.org/c/openstack/ci-log-processing/+/835934
22:59 *** dasm|ruck is now known as dasm|off
23:08 *** rlandy|rover is now known as rlandy|out
