Friday, 2019-06-07

*** panda has quit IRC00:01
*** mattw4 has quit IRC00:03
*** panda has joined #openstack-infra00:04
*** rlandy|ruck is now known as rlandy|ruck|bbl00:08
openstackgerritIan Wienand proposed opendev/system-config master: Run ansible jobs on bridge.yaml changes  https://review.opendev.org/66380400:08
openstackgerritMerged opendev/system-config master: Evaluate files vhosts after we determine ssl file paths  https://review.opendev.org/66379600:09
*** dpawlik has joined #openstack-infra00:10
*** slaweq has joined #openstack-infra00:11
openstackgerritIan Wienand proposed opendev/system-config master: mirror: rename 80/443 log files  https://review.opendev.org/66235700:14
*** dpawlik has quit IRC00:14
*** mriedem has joined #openstack-infra00:17
*** gyee has quit IRC00:23
openstackgerritIan Wienand proposed opendev/system-config master: Switch mirror Apache logs to ISO8601  https://review.opendev.org/66380600:23
*** slaweq has quit IRC00:24
ianwinfra-root: ^ if we can look at those, i can bring up say rax ord & iad opendev.org mirrors which we can switch in for more testing coverage00:24
*** eernst has joined #openstack-infra00:25
*** eernst has quit IRC00:27
*** diablo_rojo has quit IRC00:27
*** markvoelker has joined #openstack-infra00:29
*** eernst has joined #openstack-infra00:29
*** mriedem has quit IRC00:30
*** eernst has quit IRC00:33
*** eernst has joined #openstack-infra00:33
*** rkukura has quit IRC00:36
*** igordc has joined #openstack-infra00:38
*** eernst has quit IRC00:38
*** aakarsh has quit IRC00:55
clarkbianw +2'd00:57
*** markvoelker has quit IRC01:02
*** aakarsh has joined #openstack-infra01:04
*** slaweq has joined #openstack-infra01:06
*** michael-beaver has quit IRC01:09
*** jamesmcarthur has joined #openstack-infra01:14
*** jcoufal has joined #openstack-infra01:16
*** jamesmcarthur has quit IRC01:18
*** slaweq has quit IRC01:19
*** aakarsh has quit IRC01:25
*** auristor has quit IRC01:41
*** auristor has joined #openstack-infra01:43
*** rosmaita has left #openstack-infra01:52
*** apetrich has quit IRC01:57
*** markvoelker has joined #openstack-infra01:59
*** gregoryo has joined #openstack-infra02:00
*** rkukura has joined #openstack-infra02:05
openstackgerritTristan Cacqueray proposed zuul/zuul master: model: add annotateLogger procedure  https://review.opendev.org/66381902:07
*** tinwood has quit IRC02:10
*** dpawlik has joined #openstack-infra02:11
*** slaweq has joined #openstack-infra02:11
*** tinwood has joined #openstack-infra02:11
*** rlandy|ruck|bbl is now known as rlandy|ruck02:12
*** rlandy|ruck has quit IRC02:15
*** dpawlik has quit IRC02:15
*** jcoufal has quit IRC02:16
*** slaweq has quit IRC02:24
*** rh-jelabarre has quit IRC02:24
*** dpawlik has joined #openstack-infra02:27
*** sean-k-mooney has quit IRC02:28
*** jamesmcarthur has joined #openstack-infra02:29
*** dpawlik has quit IRC02:32
*** markvoelker has quit IRC02:32
*** threestrands has joined #openstack-infra02:54
*** jamesmcarthur has quit IRC02:58
*** sean-k-mooney has joined #openstack-infra03:00
*** whoami-rajat has joined #openstack-infra03:10
*** slaweq has joined #openstack-infra03:11
*** sean-k-mooney has quit IRC03:14
*** sean-k-mooney has joined #openstack-infra03:16
*** slaweq has quit IRC03:25
*** igordc has quit IRC03:25
*** aakarsh has joined #openstack-infra03:29
*** ykarel|away has joined #openstack-infra04:03
*** ykarel|away is now known as ykarel04:03
*** yamamoto has quit IRC04:09
*** threestrands has quit IRC04:12
*** slaweq has joined #openstack-infra04:16
*** sean-k-mooney has quit IRC04:19
*** sean-k-mooney has joined #openstack-infra04:21
*** slaweq has quit IRC04:24
*** kjackal has joined #openstack-infra04:27
*** markvoelker has joined #openstack-infra04:30
*** udesale has joined #openstack-infra04:31
*** hwoarang has quit IRC04:39
*** hwoarang has joined #openstack-infra04:40
*** dpawlik has joined #openstack-infra04:43
*** yamamoto has joined #openstack-infra04:43
*** dpawlik has quit IRC04:48
*** yamamoto has quit IRC04:54
*** raukadah is now known as chandankumar04:55
*** markvoelker has quit IRC05:03
*** yamamoto has joined #openstack-infra05:13
*** slaweq has joined #openstack-infra05:13
*** slaweq has quit IRC05:24
*** dpawlik has joined #openstack-infra05:30
*** slaweq has joined #openstack-infra05:32
*** kaisers has quit IRC05:37
*** kaisers has joined #openstack-infra05:40
*** slaweq has quit IRC05:44
*** dchen has quit IRC05:45
*** markvoelker has joined #openstack-infra06:00
*** dchen has joined #openstack-infra06:03
*** jbadiapa has quit IRC06:13
*** kjackal has quit IRC06:15
*** kjackal has joined #openstack-infra06:15
*** gregoryo has quit IRC06:16
*** lpetrut has joined #openstack-infra06:19
*** evgenyl has quit IRC06:20
*** piotrowskim has joined #openstack-infra06:21
*** hwoarang has quit IRC06:23
*** hwoarang has joined #openstack-infra06:24
*** evgenyl has joined #openstack-infra06:24
*** sparkycollier has quit IRC06:25
*** sparkycollier has joined #openstack-infra06:26
*** markvoelker has quit IRC06:32
*** apetrich has joined #openstack-infra06:34
*** dchen has quit IRC06:34
*** pcaruana has joined #openstack-infra06:35
evrardjpclarkb corvus pabelanger so there was 2 ways to secure against the leakage of information about the zuul executor: http://logs.openstack.org/43/663743/1/promote/openstack-helm-images-promote-elasticsearch-s3/30b200e/ara-report/result/9544aee4-1f89-4dcf-9696-88d5f45b2253/ ... Because I tried this locally, https://opendev.org/zuul/zuul/src/branch/master/zuul/executor/server.py#L409-L410 and it worked to stil gather facts. I06:49
evrardjp suppose we mask the setup module.06:49
*** jtomasek has joined #openstack-infra06:50
*** hwoarang has quit IRC06:55
*** yolanda__ is now known as yolanda06:56
openstackgerritJean-Philippe Evrard proposed zuul/zuul-jobs master: Revert "Explicitly store date facts for promote"  https://review.opendev.org/66384806:56
openstackgerritTristan Cacqueray proposed zuul/zuul master: model: add annotateLogger procedure  https://review.opendev.org/66381906:58
*** hwoarang has joined #openstack-infra06:58
openstackgerritJean-Philippe Evrard proposed zuul/zuul-jobs master: Revert "Explicitly store date facts for promote"  https://review.opendev.org/66384806:58
*** slaweq has joined #openstack-infra06:59
*** ginopc has joined #openstack-infra07:08
*** tesseract has joined #openstack-infra07:09
*** kopecmartin|off is now known as kopecmartin07:12
openstackgerritIan Wienand proposed opendev/zone-opendev.org master: Add RAX IAD/ORD opendev.org mirrors  https://review.opendev.org/66384907:13
*** imacdonn has quit IRC07:16
*** dtantsur|afk is now known as dtantsur07:20
openstackgerritJean-Philippe Evrard proposed zuul/zuul-jobs master: Revert "Explicitly store date facts for promote"  https://review.opendev.org/66384807:21
*** markvoelker has joined #openstack-infra07:30
*** ykarel has quit IRC07:30
openstackgerritIan Wienand proposed opendev/zone-opendev.org master: Add RAX IAD/ORD opendev.org mirrors  https://review.opendev.org/66384907:31
*** ykarel has joined #openstack-infra07:32
openstackgerritIan Wienand proposed opendev/system-config master: Add RAX IAD/ORD opendev.org mirrors  https://review.opendev.org/66385207:38
*** pgaxatte has joined #openstack-infra07:39
openstackgerritIan Wienand proposed openstack/project-config master: Switch RAX IAD/ORD mirrors to new opendev.org mirrors  https://review.opendev.org/66385407:40
*** aedc has quit IRC07:40
*** ccamacho has joined #openstack-infra07:41
evrardjphello folks -- I have broken promote pipeline, and this is the fix: https://review.opendev.org/#/c/663848/07:48
*** hwoarang has quit IRC07:49
*** hwoarang has joined #openstack-infra07:50
*** jaosorior has joined #openstack-infra07:51
*** xek has joined #openstack-infra07:51
*** Emine has joined #openstack-infra07:52
*** jpena|off is now known as jpena07:53
*** rcernin has quit IRC07:55
openstackgerritMerged zuul/zuul-jobs master: Revert "Explicitly store date facts for promote"  https://review.opendev.org/66384807:58
*** ralonsoh has joined #openstack-infra07:59
*** markvoelker has quit IRC08:03
openstackgerritIan Wienand proposed opendev/system-config master: Add RAX IAD/ORD opendev.org mirrors  https://review.opendev.org/66385208:05
ianwclarkb: https://review.opendev.org/#/q/topic:rax-mirrors+(status:open+OR+status:merged) is a little stack to bring online two more rax mirrors ... i think it would be good to run them for a bit and see if we get any weirdness like before08:06
ianwthe hosts are up, have the right volumes attached and cache drives mounted08:07
*** roman_g has joined #openstack-infra08:09
*** pkopec has joined #openstack-infra08:10
*** gfidente has joined #openstack-infra08:16
*** lucasagomes has joined #openstack-infra08:17
*** aedc has joined #openstack-infra08:22
*** e0ne has joined #openstack-infra08:23
*** trident has quit IRC08:35
*** derekh has joined #openstack-infra08:37
*** trident has joined #openstack-infra08:37
*** imacdonn has joined #openstack-infra08:37
mnasiadkakolla jobs have some problems using ovh-gra1 nodepool provider - all our build jobs fail on connection to percona.com:443 - is there a way to temporarily switch all jobs to rax for example?08:45
openstackgerritMatthieu Huin proposed zuul/zuul-jobs master: install-nodejs: add support for RPM-based OSes  https://review.opendev.org/63104908:49
dpawlikI guess percona blacklist us not we percona :P08:50
*** emine__ has joined #openstack-infra08:51
*** Emine has quit IRC08:52
openstackgerritMark Meyer proposed zuul/zuul master: Add Bitbucket Server source functionality  https://review.opendev.org/65783708:54
openstackgerritMark Meyer proposed zuul/zuul master: Create a basic Bitbucket build status reporter  https://review.opendev.org/65833508:54
openstackgerritMark Meyer proposed zuul/zuul master: Create a basic Bitbucket event source  https://review.opendev.org/65883508:54
openstackgerritMark Meyer proposed zuul/zuul master: Upgrade formatting of the patch series.  https://review.opendev.org/66068308:54
openstackgerritMark Meyer proposed zuul/zuul master: Extend event reporting  https://review.opendev.org/66213408:54
*** markvoelker has joined #openstack-infra09:00
mnasiadkadpawlik: either way it doesn’t work :)09:01
*** ykarel is now known as ykarel|lunch09:02
*** priteau has joined #openstack-infra09:25
openstackgerritMerged openstack/hacking master: Dropping the py35 testing  https://review.opendev.org/65428709:25
*** zbr has joined #openstack-infra09:29
*** ykarel|lunch is now known as ykarel09:30
*** jaosorior has quit IRC09:30
*** markvoelker has quit IRC09:32
openstackgerritMark Meyer proposed zuul/zuul master: Extend event reporting  https://review.opendev.org/66213409:35
zbrcan we bring a bit more love to gertty? I use it an I know others doing the same, so maybe it would be a good idea to have more cores? Asking because a linting CR should not wait for 11 months....09:36
*** tkajinam has quit IRC09:37
*** lpetrut has quit IRC09:38
*** jaosorior has joined #openstack-infra09:47
openstackgerritMark Meyer proposed zuul/zuul master: Extend event reporting  https://review.opendev.org/66213409:52
*** hrw has joined #openstack-infra09:54
hrwhello09:54
*** tdasilva_ has joined #openstack-infra09:57
*** tdasilva_ is now known as tdasilva09:57
*** yamamoto has quit IRC10:02
*** lpetrut has joined #openstack-infra10:04
*** lpetrut has quit IRC10:04
*** lpetrut has joined #openstack-infra10:05
*** pkopec is now known as pkopec|brb10:06
*** hwoarang has quit IRC10:08
*** hwoarang has joined #openstack-infra10:10
openstackgerritMark Meyer proposed zuul/zuul master: Extend event reporting  https://review.opendev.org/66213410:13
*** jaosorior has quit IRC10:13
*** piotrowskim has quit IRC10:20
*** ociuhandu has joined #openstack-infra10:25
*** yamamoto has joined #openstack-infra10:29
*** ociuhandu has quit IRC10:30
*** ociuhandu has joined #openstack-infra10:31
openstackgerritMark Meyer proposed zuul/zuul master: Extend event reporting  https://review.opendev.org/66213410:35
*** yamamoto has quit IRC10:35
*** pkopec|brb has quit IRC10:36
*** hrw has left #openstack-infra10:38
*** pkopec has joined #openstack-infra10:39
*** zbr is now known as zbr|rover10:44
*** Lucas_Gray has joined #openstack-infra10:45
*** Lucas_Gray has quit IRC10:53
*** udesale has quit IRC10:53
*** udesale has joined #openstack-infra10:53
*** janki has joined #openstack-infra10:57
lpetruthi, is there anyone around that has the required rights to perform openstack github repo transfers?11:05
*** jaosorior has joined #openstack-infra11:07
openstackgerritAurelio Jargas proposed zuul/zuul master: Break long repo names to make them fit  https://review.opendev.org/66389911:12
*** Lucas_Gray has joined #openstack-infra11:24
*** xek has quit IRC11:29
*** markvoelker has joined #openstack-infra11:30
*** Lucas_Gray has quit IRC11:32
*** bobh has joined #openstack-infra11:33
*** rosmaita has joined #openstack-infra11:35
*** Lucas_Gray has joined #openstack-infra11:35
evrardjpmnasiadka: what's the issue? dns resolution? http problems?11:36
mnasiadkaevrardjp: connection timed out, seems like there’s a firewall somewhere that is blocking it - maybe some blacklist result11:37
evrardjpnot that I can change things, but switching to "where it works" sounds a less good than fixing the problem in the first place11:38
mnasiadkaevrardjp: works from multiple other places11:38
*** jpena is now known as jpena|lunch11:38
evrardjpare you fetching a package there or?11:38
evrardjpcould you make use of infra's reverse proxy cache features instead?11:39
mnasiadkaevrardjp: Yes, a gpg key for yum repo and packages later11:39
evrardjp(if it's always flaky I mean)11:39
evrardjpgpg key can be vendored in I guess11:39
mnasiadkaevrardjp: if you can point me to details I can try it out :)11:39
evrardjpwell i guess the first step is to establish the problem, and if there is maybe some one working on a fix :D11:40
evrardjpI suppose there are ppl here that can help you better than me, I just wanted to raise the fact there are many ways to skin that cat :)11:41
evrardjpmnasiadka: the caching mirrors are in system-config (playbooks/roles/mirror/templates/mirror.vhost.j2)11:44
evrardjpreverse proxy cache I mean11:44
evrardjpone is percona11:44
evrardjpit might help you there.11:44
*** EmilienM is now known as EvilienM11:45
*** yamamoto has joined #openstack-infra11:46
*** xek has joined #openstack-infra11:53
*** yamamoto has quit IRC11:54
*** ykarel is now known as ykarel|mtg11:54
*** yamamoto has joined #openstack-infra11:56
mordredlpetrut: yup, which one do you need?11:56
lpetrutthe cloudbase-init one11:56
lpetrutwe removed the one from /cloudbase11:57
mordredcool - lemme go try11:57
lpetrutgreat, thanks!11:57
mordredlpetrut: so - you either need to now add me as an admin to the cloudbase org - or I need to transfer it to someone who has admin who can then do the final transfer12:00
mordredlpetrut: do you have repo creation rights there?12:01
mordred(this is a quirk with how github transfers work)12:01
lpetrutgot it, I think you may transfer it to ociuhandu and we'll take it from there12:03
*** markvoelker has quit IRC12:03
mordredlpetrut: ociuhandu should now have a transfer request to approve12:04
lpetrutdone, thanks a lot for taking care of this12:04
mordredmy pleasure!12:05
openstackgerritDmitry Tantsur proposed openstack/diskimage-builder master: ironic-agent: install mdadm on the ramdisk  https://review.opendev.org/66391612:06
*** udesale has quit IRC12:10
*** bobh has quit IRC12:10
*** udesale has joined #openstack-infra12:11
*** rh-jelabarre has joined #openstack-infra12:17
*** yamamoto has quit IRC12:18
*** yamamoto has joined #openstack-infra12:19
*** jcoufal has joined #openstack-infra12:20
*** happyhemant has joined #openstack-infra12:20
*** tdasilva has quit IRC12:20
*** ianychoi_ has joined #openstack-infra12:23
*** slaweq_ has joined #openstack-infra12:24
*** ianychoi has quit IRC12:27
*** slaweq has quit IRC12:27
*** udesale has quit IRC12:27
*** paladox has quit IRC12:27
*** udesale has joined #openstack-infra12:27
*** flaper87 has quit IRC12:27
*** lpetrut has quit IRC12:27
*** flaper87 has joined #openstack-infra12:29
*** dansmith has quit IRC12:29
*** lsell has quit IRC12:29
*** lsell has joined #openstack-infra12:30
*** rfarr_ has joined #openstack-infra12:31
*** dansmith has joined #openstack-infra12:32
*** rlandy has joined #openstack-infra12:32
*** rlandy is now known as rlandy|ruck12:33
*** rfarr__ has joined #openstack-infra12:33
*** kjackal has quit IRC12:34
*** rfarr_ has quit IRC12:36
pabelangerevrardjp: left comment on 66282812:38
*** rtjure has joined #openstack-infra12:40
*** aaronsheffield has joined #openstack-infra12:41
*** tdasilva has joined #openstack-infra12:42
openstackgerritMerged opendev/zone-opendev.org master: Add RAX IAD/ORD opendev.org mirrors  https://review.opendev.org/66384912:44
*** Lucas_Gray has quit IRC12:45
fungizbr|rover: gertty is not an infra project. i recommend you take it up with the gertty maintainer directly. that said, lack of whitespace checking the source code hasn't been creating any bugs in it that i've noticed12:47
evrardjppabelanger: and answered... :)12:47
*** jpena|lunch is now known as jpena12:47
evrardjpthanks btw :)12:47
fungimnasiadka: evrardjp: i have a feeling proxying requests through our mirror server in the same region will also break similarly if it's percona blocking connections from ovh as a whole12:47
fungibetter to find out why percona has decided to block connectivity from ovh, if that's what's really going on12:48
evrardjpfungi: that might be true, but I don't really know what the problem is, so I can't judge, this is why I said that mnasiadka needs ppl with more knowledge of what's going on there that I have12:48
pabelangerevrardjp: okay, I'll have to look at docker jobs more12:49
*** markvoelker has joined #openstack-infra12:49
evrardjppabelanger: thank you, your kind sir!12:49
*** ginux has joined #openstack-infra12:50
*** ginopc has quit IRC12:50
*** ginux is now known as ginopc12:50
*** Lucas_Gray has joined #openstack-infra12:52
*** pkopec has quit IRC12:55
fungievrardjp: yeah, we can perform some manual tests from ovh, and also talk to folks at ovh and at percona and try to get it straightened out. i doubt we're the only users in ovh having this problem if it really is breaking the way it sounds like12:55
*** kjackal has joined #openstack-infra12:57
*** pkopec has joined #openstack-infra13:03
*** janki has quit IRC13:04
*** janki has joined #openstack-infra13:04
*** lseki has joined #openstack-infra13:04
*** mriedem has joined #openstack-infra13:10
fungiinfra-root: per conversation with cdent in #openstack-dev, it looks like mailman hasn't processed any new messages to openstack-discuss for some hours. i'm digging into logs now to see if i can find a cause13:12
*** jcoufal has quit IRC13:13
*** yamamoto has quit IRC13:15
*** rkukura has quit IRC13:18
clarkbfungi: could be a stuck log again if the server was rebooted13:19
clarkbs/log/lock/13:19
openstackgerritMerged opendev/system-config master: mirror: rename 80/443 log files  https://review.opendev.org/66235713:21
fungididn't seem to be rebooted13:21
fungithere was a recent-ish lock expiration logged in /srv/mailman/openstack/logs/locks13:22
*** priteau has quit IRC13:23
clarkbthe lock file has a pid iirc and if that pid does not exist the file can be removed13:26
*** ykarel|mtg is now known as ykarel13:27
clarkbThat was the big reason for stopping mail in the past iirc13:27
*** dpawlik has quit IRC13:27
fungiactually, that lock expiration i saw in the logs was from ~30 hours ago13:31
fungiuptime for the server is 55 days13:31
*** rtjure has quit IRC13:32
*** jamesmcarthur has joined #openstack-infra13:32
fungiand yeah, no locks for openstack-discuss in /srv/mailman/openstack/locks/13:34
fungi5347 .pck files in /srv/mailman/openstack/qfiles/in/ currently and rising13:35
*** priteau has joined #openstack-infra13:37
clarkbare the mailman processes running?13:38
clarkbremember different set per vhost13:38
*** rtjure has joined #openstack-infra13:42
*** tdasilva has quit IRC13:45
*** dpawlik has joined #openstack-infra13:46
*** anteaya has quit IRC13:47
*** yamamoto has joined #openstack-infra13:48
*** yamamoto has quit IRC13:49
*** yamamoto has joined #openstack-infra13:50
*** tdasilva has joined #openstack-infra13:51
*** tdasilva has quit IRC13:54
*** tdasilva has joined #openstack-infra13:55
fungiyeah, i'm trying to figure out how to match them up, but maybe i'll just count13:55
*** ricolin has joined #openstack-infra13:55
fungi5 processes matching '^list.*IncomingRunner'13:56
fungiwhich is the number of sites we have on that server13:57
fungistart time for all of them is april 12 when the server was last rebooted13:57
*** slaweq_ is now known as slaweq14:00
*** tdasilva has quit IRC14:01
*** jcoufal has joined #openstack-infra14:01
*** tdasilva has joined #openstack-infra14:02
*** tdasilva has quit IRC14:04
*** tdasilva has joined #openstack-infra14:06
*** ykarel is now known as ykarel|away14:06
*** chandankumar is now known as raukadah14:06
fungimatching the pidfile up to the parent process, the incoming queue runner for the openstack site is 4677 and `strace -p 4677` indicates it's quite busy14:08
*** tdasilva has quit IRC14:08
*** janki has quit IRC14:08
*** tdasilva has joined #openstack-infra14:10
clarkbis it spinning on discarding a bunch of spam?14:10
fungihard to tell since mostly reads of a bunch of 8-bit data14:11
*** liuyulong has joined #openstack-infra14:11
fungii do see a lot of snippets of the bounce message we reject posts with14:12
clarkbiirc you can view the input pickle files to seeif it is spam14:14
*** tdasilva has quit IRC14:14
*** tdasilva has joined #openstack-infra14:15
fungiyeah, was just doing that. though it doesn't really tell me much because most of the posts to openstack-discuss are spam (and we have rules to just reject non-subscriber messages from the worst offender domains like qq.com and 163.com14:16
fungi)14:16
fungithough looking at the queue dir, it's working its way through them so that's probably what's happened14:18
*** tdasilva has quit IRC14:18
fungibut not processing them as fast as they're arriving, since the number of files in that directory continues to increase14:19
ttxIt's been slow recently (~5-10min processing), but haven't seen 20-30 minutes yet14:19
fungithe last message delivered to openstack-discuss was from 08:48:07z14:19
fungithough the queue has caught up to messages from 10:38z now14:20
fungiso i think nothing's broken, it's just really slowed by trying to handle all the spam14:20
fungii'll see if there are options to just drop the messages which match the rejection pattern rather than bouncing them14:21
*** iurygregory has quit IRC14:21
openstackgerritScott Little proposed openstack/project-config master: Add ansible-playbook repo to starlingx  https://review.opendev.org/66395414:23
*** roman_g has quit IRC14:23
*** igordc has joined #openstack-infra14:23
fungihrm, no, the patterns i was thinking of are already set to discard not reject14:23
fungi(presumably so as to avoid creating backscatter)14:24
*** cdent has joined #openstack-infra14:24
fungihrm, i bet the ones i see it rejecting in the strace are probably for a different ml14:25
fungiopenstack-discuss messages only account for ~50% of the backlog14:26
*** iurygregory has joined #openstack-infra14:26
aakarshHi seems to be an issue as post job isn't getting triggered for one of the project ( browbeat ). The commit to add the post job had landed recently https://review.opendev.org/#/c/663451/4. And a commit was just merged https://review.opendev.org/#/c/626908/ but I can't see it in the queue https://zuul.openstack.org/status14:26
aakarshcan anyone please help me find out what's wrong ^14:26
*** dpawlik has quit IRC14:27
*** Goneri has quit IRC14:29
openstackgerritMark Meyer proposed zuul/zuul master: Extend event reporting  https://review.opendev.org/66213414:29
fungithis is the current breakdown: http://paste.openstack.org/show/752632/14:30
fungiaakarsh: take a look at https://zuul.openstack.org/builds?project=x%2Fbrowbeat14:32
*** Goneri has joined #openstack-infra14:33
aakarshaah thanks fungi exactly what I was looking for14:33
fungiaakarsh: http://logs.openstack.org/5c/5caaa650ff78aba018c8eadd3cbb32e5e9c8a519/post/browbeat-upload-git-mirror/4335148/ara-report/14:33
fungirht-perf-ci@github.com: Permission denied (publickey).14:33
fungiaakarsh: you may want to double-check that you generated the secret for that correctly (as in using the x/browbeat project's public key to encode it)14:34
corvus#status log removed files02 from emergency file14:38
openstackstatuscorvus: finished logging14:38
*** eharney has joined #openstack-infra14:40
cdentfungi: any theories on the ml situation?14:41
fungicdent: yeah, seems it's working just backlogged by neary 4 hours at the moment14:41
fungihandling a larger than usual amount of spam from nonsubscribers14:41
cdentblargh. thanks for looking into it. I don't mind a bit of a mail gap, but wanted to be sure nothing was wrong14:42
fungior at least i think it's a larger than usual volume... it always gets a lot of spam so it's possible something else has slowed its ability to process the incoming queue14:43
fungiit's currently processing messages which arrived at 10:58z14:43
clarkbany idea which way it is trending?14:44
clarkb(is queue growing or shrinking)14:44
fungithe backlog is currently growing, not shrinking, so i suspect we need to do something14:45
fungii'm still noodling on what14:45
fungisudo grep -r --files-with-match 'From [0-9]\+@qq.com' /srv/mailman/openstack/qfiles/in/|wc -l14:46
fungi585514:46
fungithat's 99% of the messages in the backlog14:46
fungicorvus: is it safe to delete files out of there?14:47
corvusfungi: should be14:47
corvus(except the one it's processing)14:47
fungianybody have an opinion on me deleting the messages from all-numeric qq.com addresses out of the incoming queue?14:48
clarkbare they all all-numeric?14:48
clarkb(I seem to recall that being a thing)14:48
clarkbin other words that may not exclude valid emails14:48
fungino, [0-9]\+@qq.com addresses can be valid addresses, but the few legitimate messages i've from qq.com users aren't from the numeric addresses14:49
fungier, messages i've seen from14:50
clarkbgotcha14:50
*** ykarel|away has quit IRC14:50
aakarshhey fungi can you expand a bit on using x/browbeat project's public key? how do i get it.  sorry I'm a bit confused. when i'd generated the key initially i used the ssh key that was added to the user (rht-perf-ci ) and then generated it as --tenant openstack https://zuul.openstack.org openstack/browbeat . which probably was the issue, so I re-generated with --tenant openstack https://zuul.openstack.org14:50
aakarshx/browbeat14:50
mordredfungi: I think it seems like a good potential tradeoff - should we add a more permanent filter to block all-numeric qq addresses earlier?14:51
*** paladox_ has joined #openstack-infra14:51
fungiclarkb: right now we already tell mailman to silently discard messages from [0-9]\+@qq.com if they're not subscribed, though other lists may not and so are probably trying to send moderation notifications or rejection notices to them14:51
clarkbaakarsh: did you add the key to the github account?14:51
aakarshyes I did clarkb14:52
fungier, i should say we already tell mailman to silently discard messages from [0-9]\+@qq.com to openstack-discuss if they're not subscribed14:52
clarkbaakarsh: I would double check that you can ssh to github with that key locally outside of zuul14:52
clarkbaakarsh: if that works maybe the encryption failed or you encrypted the wrong value? It should be the private key that gets encrypted in zuul14:52
clarkbfungi: gotcha, then ya that seems mostly safe?14:52
aakarshack checking clarkb, i did encrypt the private key.14:53
*** paladox_ is now known as paladox14:53
*** paladox is now known as paladox__14:53
*** paladox__ is now known as paladox14:53
corvusfungi, clarkb: we could enable spf checking in exim.  qq.com has a -all record.14:54
clarkbcorvus: that would ensure the origin of the smtp connection is a valid qq.com server? that seems reasonable14:54
corvusyep.  the mail we are getting currently is not -- it's from botnets14:55
fungiyeah, that would help14:55
fungiwould allow dropping the discard pattern from openstack-discuss too14:55
fungialso would reduce the amount of spam i sift through in the moderation queue every day14:55
mordredyeah - and dropping at the exim layer would reduce the load on mailman14:56
fungisince i manually inspect the messages (or at least the subjects) from non-numeric @qq.com non-subscribers14:56
fungivery much so, yes14:56
fungiwell, dropping or rejecting at rcpt time, the latter is preferable14:56
clarkbunrelated but for booting a gitea06 replacement. Do we have a preference for that being called "gitea06" and do replacement with same name or should I make a gitea09 and delete gitea06 and retire that name for now?14:58
corvusi would keep 0614:59
fungiyeah, that seems fine14:59
fungijust delete the old 06 first14:59
fungithe reason to do 09 is if we needed to keep 06 around temporarily15:00
clarkbwell we don't need to delete the old one first either15:01
clarkbour inventory should handle duplicates iirc15:01
clarkbnope I'm wrong15:01
clarkbthat changed when we switched to the static inventory15:01
clarkbI guess I remove the old one from inventory then can delete it after. I'll probably do that15:02
*** pkopec has quit IRC15:02
openstackgerritMarcin Juszkiewicz proposed opendev/system-config master: epel: mirror also aarch64  https://review.opendev.org/66397315:04
fungi#status log deleted ~6k messages matching 'From [0-9]\+@qq.com' in /srv/mailman/openstack/qfiles/in/ on lists.o.o15:04
openstackstatusfungi: finished logging15:04
openstackgerritJames E. Blair proposed opendev/system-config master: Enable SPF checking on all incoming mail  https://review.opendev.org/66397415:04
corvusfungi, clarkb, mordred: ^ that's copypasta from the exim manual, i think that would do it.15:04
fungi6 new messages have been delivered to openstack-discuss now that mm has caught up on the processing backlog for openstack lists15:06
fungicdent: yours was among them15:06
clarkbcorvus: sounds great to me.15:06
cdentcool15:06
fungicorvus: thanks, reviewing now as the backlog is already growing again15:06
corvuspuppet just ran on files02 and apache is happy15:07
fungiwe're already up to almost 100 new messages waiting in the incoming queue15:08
fungii'm going to step away for just a moment but will brb15:08
*** cdent has quit IRC15:08
corvusdocs.opendev.org, tarballs.opendev.org, zuul-ci.org all seem to work15:08
clarkbTaking another look at vexxhost flavors for new gitea06. I think we need 4vcpu and 6GB of RAM. THe current flavor seems to be the only one that minimally captures that need. (we can do 2vcpu + 8GB RAM or 8vcpu 8GB of RAM)15:08
clarkbI guess i can ask mnaser if using 16GB of ram with 4vcpu is preferable to 8vcpu + 8GB ram15:09
clarkbmnaser: ^ do you have an opinion on that?15:09
clarkbcorvus: yay15:10
mnaserI'm equally okay with both. I would imagine that 4/16 will give a better experience for the users because more file cache15:10
mnaserWhatever works best for you!15:11
clarkbmnaser: thanks15:11
*** aedc has quit IRC15:12
mnaserNo problem. Thanks for asking!15:12
clarkb/var/backups/gitea-mariadb/gitea-mariadb.sql.gz exists on gitea01 so that backup cron worked \o/15:12
clarkbThe last bit we'll have to sort out is how to recover gitea06's db from say gitea0115:13
clarkbbut that can happen after we have the server up and running I think15:13
openstackgerritMark Meyer proposed zuul/zuul master: Extend event reporting  https://review.opendev.org/66213415:16
*** e0ne has quit IRC15:22
clarkboh right I was gonna look for leaked images in other clouds. I should do that before gitea06 so I don't forget15:23
*** bnemec is now known as beekneemech15:29
clarkbmost of the images we leak appear to be "queued" which I think means we fail to upload to them15:30
*** smarcet has joined #openstack-infra15:32
clarkbI'm going to clean out 2k ish leaked images in inap now15:35
clarkbthe vast majority are in a 'queued' state15:36
clarkbmordred: Shrews ^ I expect that this is due to a flaw in the error catching code in shade/sdk15:37
*** aakarsh has quit IRC15:37
mordredclarkb: I don't think users have the ability to delete queued images15:38
*** cmurphy is now known as cmorpheus15:38
clarkbmordred: they seem to be deleting just fine now15:38
mordredclarkb: oh - neat15:38
clarkb(at least openstackclient isn't reutrning errors, I'll do a new listing after my for loop completes15:38
mordredclarkb: still - it's a weird state for nodepool to be able to know what to do about - if an upload sticks in queued, when would we clean it out? or maybe we're not recording it properly yet or something?15:39
clarkbmordred: nodepool doesn't know about the images. I think as far as it is concerned the upload failed and it "cleaned it up"15:39
mordredI suppose at some point in life we move on to upload another image - so maybe that's a good time to clean out old queued images?15:39
mordredAH15:39
mordredwith you now15:40
clarkbmy process is do a cloud image list, remove all images that show up in nodepool image-list run for loop over those uuids deleting them all15:40
*** pgaxatte has quit IRC15:40
*** roman_g has joined #openstack-infra15:42
*** smarcet has quit IRC15:44
*** lucasagomes has quit IRC15:47
*** smarcet has joined #openstack-infra15:47
*** markvoelker has quit IRC15:49
openstackgerritMark Meyer proposed zuul/zuul master: Extend event reporting  https://review.opendev.org/66213415:50
clarkbcorvus: http://logs.openstack.org/74/663974/1/check/system-config-run-base/39de554/job-output.txt.gz#_2019-06-07_15_33_47_106670 yay for testing15:50
clarkbcorvus: I guess something isn't quite right in the spf cbhange15:50
*** gyee has joined #openstack-infra15:51
corvusexim4-daemon-light could be the culprit15:53
*** iurygregory has quit IRC15:55
openstackgerritJames E. Blair proposed opendev/system-config master: Enable SPF checking on all incoming mail  https://review.opendev.org/66397415:56
corvusclarkb: let's see if that's any different15:56
*** ramishra has quit IRC15:56
*** michael-beaver has joined #openstack-infra15:57
clarkbis that like heavy water? more deuterium?15:57
corvusclarkb: yes, you don't want to drink exim4-daemon-heavy15:57
corvusthe light package has zero calories.15:58
*** dtantsur is now known as dtantsur|afk15:58
*** aedc has joined #openstack-infra15:58
openstackgerritBen Nemec proposed openstack/pbr master: Make WSGI tests listen on localhost  https://review.opendev.org/66375816:00
openstackgerritBen Nemec proposed openstack/pbr master: Use sphinxcontrib-apidoc for api docs  https://review.opendev.org/66398516:00
clarkbdeleting images via api is not very quick16:00
*** ginopc has quit IRC16:02
fungidebian has finally done away with the heavy/light split for exim packages16:04
*** Lucas_Gray has quit IRC16:04
openstackgerritScott Little proposed openstack/project-config master: Add ansible-playbook repo to starlingx  https://review.opendev.org/66395416:04
*** stephenfin is now known as finucannot16:05
*** jtomasek has quit IRC16:05
*** aedc has quit IRC16:05
openstackgerritBen Nemec proposed openstack/pbr master: Switch to release.o.o for constraints  https://review.opendev.org/66398816:05
*** aakarsh has joined #openstack-infra16:06
fungioh, i thought they had, but apparently not. i guess it's still under discussion16:07
*** mattw4 has joined #openstack-infra16:07
*** xek has quit IRC16:11
fungispeaking of backlogs, looks like zuul's been pegged on available quota since ~08:00z and the waiting node request count is still rising16:11
clarkbthere are a lot of tripleo and nova changes in the queue16:11
clarkbtripleo gate is 45 changes deep16:12
clarkband second change caused a reset16:12
clarkber 45 is gate total not just tripleo16:12
*** panda is now known as panda|off16:13
clarkbin any case it is busy16:13
fungiwhat has the utilization breakdown been looking like? done an analysis run for may yet?16:14
clarkbI haven't16:14
clarkbrunning one now16:16
clarkbalso I think we merged the statsd tracking change but unsure if scheduler has been restarted since16:16
clarkbso we can put up a grafana dashboard soon probably16:16
*** smarcet has quit IRC16:16
openstackgerritBen Nemec proposed openstack/pbr master: Make WSGI tests listen on localhost  https://review.opendev.org/66375816:17
openstackgerritBen Nemec proposed openstack/pbr master: Switch to release.o.o for constraints  https://review.opendev.org/66398816:17
clarkbhttp://paste.openstack.org/show/752634/ neutron now takes top honors16:17
clarkbtempest-slow-py3 is our biggest resource hog now16:18
clarkbabout twice as much as tempest-full-py3 (I find that interesting)16:18
logan-scheduler restart soon might be a good idea also because memory usage is starting to creep up again: http://cacti.openstack.org/cacti/graph_image.php?action=view&local_graph_id=64792&rra_id=216:18
clarkblogan-: good call16:19
clarkbcorvus: ^ is that something you have debugging stuff in place for?16:19
*** emine__ has quit IRC16:19
corvusclarkb: nope, i'm not prepared to debug it at all16:20
logan-seems like its on a similar trajectory as the issue 2 weeks ago where ZK resets started happening http://cacti.openstack.org/cacti/graph_image.php?action=view&local_graph_id=64792&rra_id=316:20
clarkbk, should we make any changes before restarting to make debugging easier next time? I'm thinking of the repl maybe?16:21
clarkbhttp://logs.openstack.org/74/663974/2/check/system-config-run-base/f411e24/job-output.txt.gz#_2019-06-07_16_13_06_270278 exim still failing16:21
corvusclarkb: i'll rebase the repl change16:21
aakarshhi fungi clarkb i was able to clone and update https://github.com/cloud-bulldozer/browbeat/tree/test with the ssh key. not sure what i'm doing wrong. I took the private key from the same host I was able to pull and update and encrypted by running with options --infile ~/.ssh/id_rsa --tenant openstack https://zuul.openstack.org x/browbeat16:23
aakarsh16:23
*** yamamoto has quit IRC16:23
clarkbdoes white space around = matter in exim config?16:23
clarkbseems like no based on other content in the file16:24
openstackgerritJames E. Blair proposed zuul/zuul master: WIP add repl  https://review.opendev.org/57996216:24
corvusclarkb: we can do a cherry-pick and manual install of that16:24
clarkbcorvus: k16:24
*** priteau has quit IRC16:25
aakarshthe .zuul.yaml file is here https://opendev.org/x/browbeat/src/branch/master/.zuul.yaml16:25
clarkblooking at my local github ssh remotes the username is actually git16:27
clarkbis it possible that the username value there isn't needed?16:27
fungiaakarsh: and the rht-perf-ci user has push permissions for the corresponding branch of that repository?16:27
clarkbI have to pop out in a minute but let me check ara's setup really quick16:27
fungiclarkb: aakarsh: oh, yep that is likely the cause16:28
aakarshfungi, yes rht-perf-ci user has push permission.16:28
aakarshooh so user should be git16:28
clarkbhttps://opendev.org/recordsansible/ara-infra/src/branch/master/.zuul.yaml#L16-L62 I think that is the issue16:28
clarkbuser should be git16:28
corvusaakarsh: what instructions were you following?16:28
*** mriedem has quit IRC16:28
aakarshah makes sense i looked at airshipit/armada they'd user as git https://github.com/airshipit/armada/blob/master/.zuul.yaml#L22416:28
aakarshhttp://lists.openstack.org/pipermail/openstack-discuss/2019-April/005007.html16:28
corvusso that we can make sure they are updated16:28
aakarshis what I was following ^ corvus16:28
clarkbok I'm popping out for a bit. My image deletes will keep running16:29
*** mriedem has joined #openstack-infra16:29
corvusaakarsh: thanks16:29
clarkbtechnically the docs are correct because the remote user is git16:29
aakarshi'll try again with git and get back to you.16:29
corvusthat email is technicall correct, but misleading for github users16:29
clarkbits just not obvious that it is git16:29
clarkbya16:29
fungiaakarsh: https://opendev.org/recordsansible/ara-web/src/branch/master/.zuul.yaml#L3616:29
fungiso, yes, should be git@16:29
clarkbI guess when I get back we can plan to do a zuul scheduler restart and I'll keep deleting iamges in clouds16:30
*** smarcet has joined #openstack-infra16:30
aakarshyep thanks corvus fungi http://paste.openstack.org/show/752637/ i should've directly tried to ssh. sorry for the chaos.16:33
fungino worries, glad you got it sorted16:34
openstackgerritJeremy Stanley proposed zuul/zuul-jobs master: [DNM] Test unittests and multinode with base-test  https://review.opendev.org/66399516:39
*** tesseract has quit IRC16:40
*** owalsh has quit IRC16:40
openstackgerritJeremy Stanley proposed openstack/openstack-zuul-jobs master: [DNM] Test some jobs on top of base-test  https://review.opendev.org/66399616:41
*** jaosorior has quit IRC16:42
corvusokay, that SPF option is available starting in 4.91; we're at 4.86 on lists.o.o, so we'll need to do it the old debian way16:43
openstackgerritMichael Johnson proposed openstack/diskimage-builder master: Remove the rhel 8 check for xfs  https://review.opendev.org/66399816:45
*** markvoelker has joined #openstack-infra16:50
*** efried is now known as fried_rolls16:51
*** weifan has joined #openstack-infra16:51
*** owalsh has joined #openstack-infra16:52
fungioh, ouch, even the exim4 package on bionic is slightly too old to get that16:53
openstackgerritJames E. Blair proposed opendev/system-config master: Enable SPF checking on all incoming mail  https://review.opendev.org/66397416:53
corvusperl to the rescue16:54
*** yamamoto has joined #openstack-infra16:54
fungii'm guessing the volume of dns queries will be mitigated by our use of unbound?16:55
fungisince most of the queries will be cache hits16:55
openstackgerritJames E. Blair proposed zuul/zuul master: Proposed spec: tenant-scoped admin web API  https://review.opendev.org/56232116:56
*** jpena is now known as jpena|off16:56
fungiin the interim, i'm clearing out more messages from [0-9]\+@qq.com since we've reached a 30-minute backlog16:58
fungithat removed >99% of the ~850 messages in the inbound queue16:59
*** bobh has joined #openstack-infra16:59
*** yamamoto has quit IRC16:59
zbr|roverplease let me know if the proposed openstack-tox-mol template looks ok now: https://review.opendev.org/#/c/663599/ --- I added links to two jobs that depends on it and it seems to work correctly.16:59
*** kopecmartin is now known as kopecmartin|off17:03
*** derekh has quit IRC17:04
*** weifan has quit IRC17:04
*** weifan has joined #openstack-infra17:05
*** weifan has quit IRC17:05
*** weifan has joined #openstack-infra17:05
*** weifan has quit IRC17:06
*** liuyulong has quit IRC17:06
*** weifan has joined #openstack-infra17:06
*** weifan has quit IRC17:07
*** weifan has joined #openstack-infra17:07
*** weifan has quit IRC17:08
*** weifan has joined #openstack-infra17:08
*** weifan has quit IRC17:08
*** ralonsoh has quit IRC17:09
*** bobh has quit IRC17:10
openstackgerritJames E. Blair proposed zuul/zuul master: Add opendev tarball jobs  https://review.opendev.org/66400617:12
*** diablo_rojo has joined #openstack-infra17:13
clarkbok mostly back now17:13
clarkbfungi: ya we should cache all those lookups locally17:14
*** factor has joined #openstack-infra17:14
clarkbzbr|rover: I was actually thinking abouit that a bit and wondered if a molecule job would make sense in zuul-jobs17:14
*** ociuhandu_ has joined #openstack-infra17:15
zbr|roverclarkb: if you want, I can add it there. my only concern was related to success/failure-urls.17:15
clarkbzbr|rover: well I was thinking that because zuul uses ansible roles, better testing support for ansible roles in zuul itself would be good17:15
clarkbfor the reports url I think it is fine since it is relative path and specific to the tool17:16
fungiheh, i had asked the same question when i looked at it17:16
zbr|roverclarkb: i would love to make a poc that applies to zuul roles too. getting almost instant feedback is very useful when making changes.17:16
*** ociuhandu has quit IRC17:18
*** ociuhandu_ has quit IRC17:20
corvusfungi, clarkb: 663974 passes, however, we don't have a lists.openstack.org host in our testing (because it's still "puppet"), so that change isn't really tested17:24
*** ricolin has quit IRC17:24
corvusmaybe i should add a new job in that change17:25
*** markvoelker has quit IRC17:25
fungiwe could test it turned on globally, but not merge it that way, as a compromise17:25
corvusyeah, that would work too17:26
corvuslemme see how big a deal a new job is first17:26
corvuscause i think we'd want that eventually anyway17:26
fungior throw a child change on which enables it17:26
fungibut yeah, an actual job would be awesome17:26
clarkbas a double check /etc/resolv.conf points to localhost and localhost:53 is unbound17:29
clarkbso we should be doing what we can to minimize those dns lookups17:29
openstackgerritJames E. Blair proposed opendev/system-config master: Enable SPF checking on lists  https://review.opendev.org/66397417:29
funginode request backlog looks like it may have peaked for today, so hopefully zuul will begin gaining ground on its backlog now17:30
corvusfungi, clarkb: does that job look good?  i didn't add any testinfra stuff, but that should exercise the whole deployment on the list server17:30
clarkblooking (and at least applying the ansible is a start)17:31
*** weifan has joined #openstack-infra17:31
corvusoh i think i can add a testinfra thing17:31
clarkbcorvus: you should be able to check that port 25 is listening17:32
clarkband exim is running?17:32
fungiyeah, even just making sure exim doesn't crash/fail to restart would be helpful17:32
corvusoh, actually, we'll run the existing testinfra tests17:32
corvustest_base.py runs on the 'all' group17:32
corvusso that's going to run "exim -bt root", which would crash if the config is invalid17:32
corvusi believe that's what caught the earlier error17:33
*** smarcet has quit IRC17:33
fungiperfect17:33
corvusoh, actually, it was ansible that caught it before that17:33
corvusanyway, i think we're good :)17:33
clarkbk17:33
clarkbdo we need to add that package to a package install list somewhere?17:34
corvus(in the future, maybe we can have testinfra send mail to a mailing list, but that's out of scope today i think)17:34
clarkbspf-tools-perl specifically17:34
corvusdid i....forget to git add?17:34
corvusyep17:34
openstackgerritJames E. Blair proposed opendev/system-config master: Enable SPF checking on lists  https://review.opendev.org/66397417:34
clarkbinap image clean up finally finished. I'm going to do the next cloud region17:35
*** weifan has quit IRC17:35
*** aedc has joined #openstack-infra17:36
openstackgerritStephen Finucane proposed openstack/pbr master: Stop using pbr sphinx integration  https://review.opendev.org/65556517:38
*** smarcet has joined #openstack-infra17:39
*** aedc has quit IRC17:41
*** roman_g has quit IRC17:45
clarkbcorvus: most recent ps lgtm17:50
clarkbwe'll need to rebase ianws changes assuming that gets in. I can do that so he has up to date ci results17:51
*** weifan has joined #openstack-infra17:51
fungiwhat incorporates the roles/exim/tasks/Debian.yaml role?17:54
fungier, tasklist, whatever it's called17:54
*** jamesmcarthur has quit IRC17:54
fungisomething preexisting in the exim role/17:55
fungi?17:55
clarkbfungi: yes there is a default.yaml and a RedHat.yaml that get loaded17:55
clarkband now we'll have a Debian.yaml17:55
clarkb(I checked that when doing the review)17:55
fungiahh, so it's some sort of magic fact matching which notices it's there and adds it in?17:56
fungibased on operating system family or something like that?17:57
clarkbyes its a specific include tasks task17:57
clarkblet me get a link17:57
clarkbhttps://opendev.org/opendev/system-config/src/branch/master/roles/exim/tasks/main.yaml#L9-L1517:57
fungiaha, so it will discover it if a match exists. awesome17:59
zbr|roverclarkb: fungi: have a look at this https://review.opendev.org/#/c/661994/8/tasks/main.yml -- about layered loading of distro specific configs.18:02
*** finucannot is now known as stephenfin18:02
*** gfidente has quit IRC18:02
zbr|roveri found it suggested on an ansible bug, used it already in a couple of places.18:02
clarkbya we do similar though simpler because fewer cases to worry about18:03
zbr|roversure, just wanted to share the idea, no need to go so deep this the loop18:03
clarkbcorvus: you updated .zuul.yaml so gitea ran and it failed http://logs.openstack.org/74/663974/5/check/system-config-run-gitea/05bbf86/job-output.txt.gz18:05
clarkbmaybe if the lists test passes we remove the test, merge it, then add the test back?18:05
fungiwfm18:06
aakarshcorvus, fungi just wanted to let you know guys know that updating user to git fixed the problem, and the remote repository is now in sync :) thanks again18:06
fungiaakarsh: great! glad it worked out18:06
fungiclarkb: so 3000/tcp on the gitea server ends up rejecting connections?18:10
*** smarcet has quit IRC18:12
clarkbfungi: seems like it18:13
fungiis that new? do you know? i guess we're maybe testing newer gitea and there are regressions or something?18:14
clarkbI think we should only be testing what is in corvus' fork so if that updated then that is possible. Otherwise should be the same version18:14
clarkbno idea if that is new18:14
fungiahh, i thought we had switched off the fork a couple weeks back18:15
clarkbI thought there was one remaining change? Maybe I'm mistaken in which case ya could be a gitea update breaking us18:15
*** smarcet has joined #openstack-infra18:16
fungiwell, anyway, so when you say "remove the test" and "add the test back" you're referring to the gitea test i guess?18:16
fungiotherwise we can't land the exim test until we work out why the gitea test is broken (short of bypassing zuul)18:17
clarkbno remove the lists.o.o test so that we don't have changes to .zuul.yaml18:17
clarkbthe gitea job won't run if we remove the .zuul.yaml change18:17
*** aedc has joined #openstack-infra18:18
*** ccamacho has quit IRC18:18
fungisure, but then we don't have a working exim test merged18:19
fungieither way we seem to not have a working gitea test merged at the moment18:19
corvusmaybe we should drop the .zuul.yaml file matchers18:19
fungithat could be another option, though would allow us to merge broken changes to those jobs, i suppose18:20
*** bobh has joined #openstack-infra18:20
corvusyeah, we'll have to be careful.  neither thing (run no jobs on .zuul.yaml changes / run all jobs on .zuul.yaml changes) is what we want.  someday i'll have time to add 'run this job if its config changes' support to zuul, which is what we really want)18:21
clarkbif anyone is wondering: "Image transition from deleted to deleted is not allowed"18:21
fungidigging into the gitea node log from that failure now to see if i can spot the problem18:21
fungiheh, so you can't delete a deleted image, huh?18:21
clarkbnope18:21
*** markvoelker has joined #openstack-infra18:22
*** aedc has quit IRC18:23
corvusfungi: based on http://logs.openstack.org/74/663974/5/check/system-config-run-gitea/05bbf86/gitea01.opendev.org/docker/giteadocker_gitea-web_1.txt it looks like it could not connect to the db18:23
fungiahh18:23
fungi2019-06-07 17:57:37 0 [Note] mysqld: ready for connections.18:24
fungiso not for lack of a running db server at least18:24
corvusyeah, i'm puzzled.18:24
corvusfun fact: we're in limestone here18:24
corvusand attempting to connect to [::1]:330618:25
clarkbmaybe mariadb isn't listening on v6?18:25
corvusin production we have: tcp6       0      0 [::]:mysql              [::]:*                  LISTEN18:27
corvuswhile we're pondering, i'm going to recheck that and see if we get a different answer18:31
clarkbk18:31
fungigood idea18:31
fungicould be this hasn't bitrotted, merely only worked on certain providers18:31
corvuswe've got the gitea image and mariadb images pinned18:32
*** jcoufal has quit IRC18:32
corvusthough, the mariadb image is not pinned very specifically -- it's 10.4, which was updated 3 days ago18:33
*** bobh has quit IRC18:34
corvuswe can drop back to 10.4.4 or 10.4.3 if we want to backtrack a bit to 23 days or 3 months ago, respectively18:34
clarkbwe have a debian-jessie image in ovh bhs1. Not for much longer18:34
clarkbcorvus: possible that the binding behavior changed between those releases and the one 3 days ago I suppose18:34
clarkbcorvus: might be worth pinning more specifically if recheck reproduces18:34
corvusya18:35
openstackgerritJames E. Blair proposed opendev/system-config master: DNM: try mariadb 10.4.4  https://review.opendev.org/66402718:36
openstackgerritJames E. Blair proposed opendev/system-config master: DNM: try mariadb 10.4.3  https://review.opendev.org/66402818:36
corvusor, you know, maybe just throw some computers at the problem and see what they come up with18:36
corvusmaybe we'll have 3 new data points after lunch :)18:37
openstackgerritGaëtan Trellu proposed openstack/project-config master: Add api-ref for Qinling  https://review.opendev.org/66403018:37
*** diablo_rojo has quit IRC18:43
*** smarcet has quit IRC18:44
*** rascasoft has quit IRC18:52
*** markvoelker has quit IRC18:55
*** rascasoft has joined #openstack-infra18:55
*** e0ne has joined #openstack-infra18:59
openstackgerritJames E. Blair proposed opendev/base-jobs master: Fix js content tarball job name  https://review.opendev.org/66403219:00
openstackgerritJames E. Blair proposed zuul/zuul master: Add opendev tarball jobs  https://review.opendev.org/66400619:02
corvusshould i change opendev-promote-javascript-content to opendev-promote-javascript-content-tarball for consistency?19:03
clarkbmight help reduce further confusion, but I don't think it is necessary19:05
fungino opinion19:07
fungiif we're not going to promote other javascript content besides tarballs, then i don't find it confusing19:08
clarkbthe gitea job succeeded on recheck19:11
clarkbon the exim change19:11
fungiharumph19:12
clarkbovh, vexxhost, inap leaked images are cleared out. Limestone is in progress. I'll finish up with linaro-london and rax19:12
clarkbbut I need to pop out now for lunch activities19:12
openstackgerritMerged opendev/base-jobs master: Fix js content tarball job name  https://review.opendev.org/66403219:12
clarkbthe deletes take time anyway19:12
openstackgerritDavid Shrewsbury proposed zuul/zuul master: Store autohold requests in zookeeper  https://review.opendev.org/66111419:15
openstackgerritDavid Shrewsbury proposed zuul/zuul master: Add autohold-info CLI command  https://review.opendev.org/66248719:15
openstackgerritDavid Shrewsbury proposed zuul/zuul master: Record held node IDs with autohold request  https://review.opendev.org/66249819:15
openstackgerritDavid Shrewsbury proposed zuul/zuul master: WIP: Auto-delete expired autohold requests  https://review.opendev.org/66376219:15
*** fried_rolls is now known as efried19:15
openstackgerritMerged zuul/zuul master: Break long repo names to make them fit  https://review.opendev.org/66389919:16
openstackgerritDavid Shrewsbury proposed zuul/zuul master: Add caching of autohold requests  https://review.opendev.org/66341219:18
mordredcorvus: back from what turned out to be an extra long sandwich - anything you want me to look at or poke at?19:19
*** e0ne has quit IRC19:24
corvusmordred: wow, you had one of these?  https://www.subway.com/~/media/Base_English/Images/SubwayCatering/giant_sub_notext.jpg19:28
*** emine__ has joined #openstack-infra19:28
mordredcorvus: I think mine was longer than that19:28
corvusmordred: i think we're just waiting right now, with like 3 irons in the fire19:29
mordredcorvus: I thnik it would be nice if english differentiated between length and duration ...19:29
mordredcorvus: awesome19:29
corvusfungi, clarkb, mordred: ah, since the gitea job succeeded on recheck, we should probably entertain the idea that there's a weird docker v6 thing happening with that job in limestone.19:30
corvusi don't really want to open that can of worms today, but we probably should poke at that soon.19:31
corvusi'll just go ahead and abandon my mariadb changes19:31
mordredcorvus: I can't even19:31
mordredcorvus: the whole docker + v6 story is just a gift that keeps on giving19:32
*** weifan has quit IRC19:32
mordredassuming that's actually the issue of course19:32
corvusit seems very likely that *something* about docker is the cause19:32
*** bhavikdbavishi has joined #openstack-infra19:33
*** smarcet has joined #openstack-infra19:33
*** e0ne has joined #openstack-infra19:36
fungione possibillity is it's a race... mariadb logged reaching a running state at 17:57:37 but the 17:57:38 db ping is the point at which gitea-web seems to have given up trying to connect to it19:41
*** udesale has quit IRC19:41
fungior did it maybe successfully connect at that point and not log anything afterward?19:41
*** bobh has joined #openstack-infra19:43
corvusfungi: well, that's 5m after it started; maybe that's our timeout19:44
openstackgerritDavid Shrewsbury proposed zuul/zuul master: WIP: Auto-delete expired autohold requests  https://review.opendev.org/66376219:46
fungiahh, so could be we just got unlucky with a slow db container setup/start and lost the race19:46
fungiassuming a 5m timeout19:47
fungialso, the openstack ml backlog surpassed an hour so i've removed another ~1600 [0-9]\+@qq.com messages from the inbound processing queue19:48
*** markvoelker has joined #openstack-infra19:52
*** guimaluf has joined #openstack-infra19:52
clarkbnow down to just rax leaked images. Note I'm not sure how to check for leakages in swift there19:54
clarkbso that will have to be a different pass19:54
*** emine has joined #openstack-infra19:56
clarkband actually rax will have to wait because there are uploading images and I don't want to delete one unexpectedly19:57
clarkbNow to context switch to zuul restarts19:57
*** smarcet has left #openstack-infra19:58
clarkbthe exim fix is almost done in the gate so won't restart until that is in19:58
*** emine__ has quit IRC19:59
clarkbMy plan is to manually install zuul so that we can get the repl change in, then I'll use the playbook to restart the whole thing19:59
clarkbcorvus: are the python unittests expected to fail on the repl change? and if so is it safe to install it?20:01
clarkbmaybe it is the thread list checker in the unittests that is failing20:02
corvusclarkb: that's what i'm thinking -- that it doesn't shut down cleanly.  should be ok.20:02
clarkbaddress alreasy in use errors so much be related to binding the socket20:02
mordredclarkb: for swift, once we're not uploading actively, we can delete any and all objects in the images container20:04
*** emine has quit IRC20:04
mordredonce the images are imported the swift objects are no longer needed20:04
clarkbmordred: oh right glance copies it internally20:04
mordredyah- so the glance process you've been doing should just work on rax as well, but I agree, an additional check for leaked objects is a good idea - when it's not uploading :)20:04
*** emine has joined #openstack-infra20:05
*** smarcet has joined #openstack-infra20:05
clarkbya I'll hold off until it quiets down20:05
*** jamesmcarthur has joined #openstack-infra20:07
*** mriedem has quit IRC20:08
clarkbzuul==3.8.1.dev154  # git sha ce30029 is now installed on zuul01. That is the wip repl change. The previous commit merged is e0c975a98086882127f2c1b2c30a28876d84ebb720:09
clarkband now I wait for exim fix20:09
*** emine__ has joined #openstack-infra20:11
*** mriedem has joined #openstack-infra20:11
fungi#status log filed a removal request from the spamhaus pbl for the ip address of the new ask.openstack.org server20:11
*** weifan has joined #openstack-infra20:12
openstackstatusfungi: finished logging20:12
*** jamesmcarthur has quit IRC20:12
*** raissa has joined #openstack-infra20:13
*** emine has quit IRC20:13
*** emine has joined #openstack-infra20:15
openstackgerritDavid Shrewsbury proposed zuul/zuul master: Auto-delete expired autohold requests  https://review.opendev.org/66376220:16
*** emine__ has quit IRC20:16
*** weifan has quit IRC20:16
*** raissa has quit IRC20:18
fungiinfra-root: a reminder, when we build/replace servers in rax which send e-mail to a variety of users we have to request exclusions from the spamhaus pbl for them20:18
*** raissa has joined #openstack-infra20:18
*** bhavikdbavishi has quit IRC20:19
clarkbfungi: might be worth adding that to the meeting agenda as a reminder (I think people read the agenda and meeting notes even if they don't attend)20:19
*** markvoelker has quit IRC20:20
openstackgerritDavid Shrewsbury proposed zuul/zuul master: Add autohold-info CLI command  https://review.opendev.org/66248720:22
openstackgerritDavid Shrewsbury proposed zuul/zuul master: Record held node IDs with autohold request  https://review.opendev.org/66249820:22
openstackgerritDavid Shrewsbury proposed zuul/zuul master: Auto-delete expired autohold requests  https://review.opendev.org/66376220:22
fungiclarkb: god idea, thanks!20:22
fungigood idea too20:23
*** raissa has quit IRC20:23
fungiadded20:24
*** igordc has quit IRC20:28
*** raissa has joined #openstack-infra20:32
*** raissa has quit IRC20:34
*** e0ne has quit IRC20:37
*** diablo_rojo has joined #openstack-infra20:41
openstackgerritDavid Shrewsbury proposed zuul/zuul master: WIP: Mark nodes as USED when deleting autohold  https://review.opendev.org/66406020:41
clarkblast job for the exim fix is finally running20:44
clarkbzuul says 10 minutes to merging then we can restart zuul20:44
*** weifan has joined #openstack-infra20:45
clarkbhow does `ansible-playbook -f 10 /opt/system-config/playbooks/zuul_restart.yaml` look?20:51
clarkbI guess I should check the mergers and executors for their installed version20:52
clarkbzm01 and ze01 are on 00d0abbc709148527a3b57cf8733541f4ba817d820:53
clarkbwhich lgtm20:53
openstackgerritMerged opendev/system-config master: Enable SPF checking on lists  https://review.opendev.org/66397420:53
*** emine has quit IRC20:53
clarkbcorvus: fungi mordred ^ ok that merged. Anything else I should do before zuul restart?20:54
clarkbI'll save queues and run the playbook if not20:54
*** aakarsh has quit IRC20:55
corvuspuppet was running when that merged, so we're expecting that to take effect at 21:24 i guess20:55
corvusclarkb: i think we're gtg if you've done the manual install on the sched20:56
clarkbpbr freeze still shows zuul==3.8.1.dev154  # git sha ce30029 on the scheduler20:56
clarkbI'll save queues now20:56
clarkbqueues saved. Running playbook next20:56
corvusclarkb: remember you will need to rm the web pid file manually20:57
clarkbwe had started to swap quite a bit so this was timely20:57
clarkbcorvus: yup20:57
clarkbthats curious zuul scheduler is still running but playbook thinks it isn't20:58
fungiclarkb: nothing i'm aware of which is urgent20:58
clarkbI won't clear the web pid file until scheduler has fully stopped20:58
fungiand i'll keep an eye on lists.o.o in the next half-hour to see if things break when the spf filer is applied20:58
clarkbcorvus: is it possible that is a rogue pair of processes?20:59
fungiif this works and allows us to go back to accepting ${listname}-owner@ messages, i'll be ecstatic20:59
clarkboh they are gone now20:59
clarkbproceeding with web pid removal20:59
corvus(could be internal pid removal happened before kernel finished swapping in the procs)20:59
clarkbscheduler and mergers have been restarted. Playbook is waiting on executors to stop so it can start them again21:00
*** bobh has quit IRC21:02
clarkbabout half the executors have been restarted at this point21:03
clarkbstill waiting on configs to load before I can requeue21:03
clarkber I guess they've only stopped. It waits for the full set to stop before restarting the executors21:05
clarkbscheduler is up. Loading queues21:06
*** mriedem has quit IRC21:06
clarkbwaiting on ze10 and ze0721:08
clarkbplaybook is done. Restart is complete other than reloading check queue21:10
clarkbpabelanger: you should be able to try ansible 2.8 now21:10
corvusrepl looks good21:10
clarkbmemory usage and swap activity back to normal21:15
clarkblogan-: thank you for pointing that out21:15
*** markvoelker has joined #openstack-infra21:16
clarkbcheck is loaded now21:18
clarkbI think that concludes the restart21:18
*** smarcet has quit IRC21:19
clarkb#status log Performed a full zuul service restart. This reset memory usage (we were swapping), installed the debugging repl, and gives us access to ansible 2.8. Scheduler is running ce30029 on top of e0c975a and mergers + executors are running 00d0abb21:19
openstackstatusclarkb: finished logging21:19
logan-np clarkb21:20
*** markvoelker has quit IRC21:21
corvus2019-06-07 21:29:17 H=(tjtianhe.com.cn) [175.174.81.197] F=<635538059@qq.com> rejected RCPT <OpenStack-operators@lists.openstack.org>: SPF check failed.21:29
corvusthat's promising21:29
fungiyup, just saw it showed yo21:29
fungier, showed up21:30
corvusfungi: you see we have a lot of stuff queued for you, yeah?21:30
fungiwe're at a ~40-minute backlog in the openstack ml queue again21:30
fungiyep, shall i clean out the likely spam there one last time and then keep an eye on it?21:31
corvusfungi: (it's not important; just logspam)21:31
*** e0ne has joined #openstack-infra21:31
fungior i can let it go on its own... it's burning down the backlog fairly quickly now that nothing's being heaped on21:32
corvusfungi: to be clear, i'm talking about exim deferred deliveries for fungi@yuggoth21:32
fungiahh, yeah i hadn't spotted those but will take a look now21:33
fungii thought i had whitelisted the listserv so it wouldn't grelist those21:33
clarkbwe should probably #status log the spf change so that if people have trouble with email they have that breadcrumb?21:33
fungiclarkb: good call21:34
corvusdoes anyone have a message they need to send to a list?21:34
corvusit'd be nice to see a legit message go through :)21:34
clarkbI don't but won't be offended if we throw a test email at -infra21:34
*** whoami-rajat has quit IRC21:34
*** kjackal has quit IRC21:34
fungianyway we seem to be blowing through the backlog at a rate of ~1 message every 2 seconds, so will be caught up in another 25 minutes if all goes well21:35
clarkbhow about #status log Exim on lists.openstack.org/lists.opendev.org/lists.starlingx.io/lists.airshipit.org is now enforcing spf failures (not soft failures). This means if you send email from a host that isn't allowed to by the spf record that email will be rejected.21:35
fungii can e-mail the infra ml to mention the spf change21:35
*** rfarr__ has quit IRC21:35
fungikill two birds with one stone21:35
corvusclarkb, fungi: ++21:35
clarkb#status log Exim on lists.openstack.org/lists.opendev.org/lists.starlingx.io/lists.airshipit.org is now enforcing spf failures (not soft failures). This means if you send email from a host that isn't allowed to by the spf record that email will be rejected.21:36
openstackstatusclarkb: finished logging21:36
*** rlandy|ruck has quit IRC21:37
corvusfungi: it might be worth triggering the openstack release test to make sure those emails are ok21:38
fungiagreed. we may need a different address for them21:38
corvuswe can also whitelist that address to bypass spf checking, if it's a problem21:38
fungithough openstack.org just publishes a ?all policy so hopefully not?21:38
corvusjust, you know, don't tell anyone :)21:38
*** smarcet has joined #openstack-infra21:39
fungito confirm, we're blocking failures for -all policies but not ~all or ?all right?21:39
clarkbfungi: correct softfails won't block and the others are soft right?21:39
fungiyes, should be21:40
fungihttps://tools.ietf.org/html/rfc720821:40
*** e0ne has quit IRC21:40
clarkbmy spf record is ?all if you want me to send email and test21:40
fungithe domain i'm sending from has no spf record at all21:41
corvuswe should really make that test list :)21:43
corvuscause maybe emails from both of you would be a good idea21:43
clarkbI can respond to fungi's email21:44
fungiokay, bombs away21:44
corvusfungi's mail was delivered to mm21:45
fungiyep, /srv/mailman/openstack/qfiles/in/1559943888.530389+ca0ad2bcfe55cb0d3bd4269962b3ff33ac3d927d.pck21:46
fungi~610 messages ahead of it in the backlog21:46
clarkbI guess I can't reply to it until mm processes it. Should I wait or just send an email?21:46
fungiif we want i can clean up the input queue (hopefully one last time)21:47
fungiand then it should go straight out21:47
fungior we can wait another ~20 minutes21:47
fungibased on the current queue size and processing rate21:47
clarkbfungi: I'd be ok with that21:50
*** pcaruana has quit IRC21:51
fungicleaned up ~570 suspected spams from [0-9]\+@qq.com in the input queue21:51
fungii hope that's the last of them21:51
fungigood news is the queue size is no longer growing21:51
clarkbresponse sent21:52
fungii received both21:57
corvus\o/21:57
fungii've intentionally kept myself as the openstack-discuss owner since the migration so that i could filter spam for the -owner address locally. going to see if that dries up now, and if so we can talk about dropping the various blackhole aliases for other lists21:58
clarkbI think image uploads to rax may be failing. Possibly as a result of sdk updates and my restart of the builders?21:59
clarkbI don't have time to dig into that today, but can start looking monday if nothing uploads between now and then21:59
fungiin other news, i've been checking afs02.dfw and we're down to only 3 remaining stale volume replicas22:00
corvusfungi: which?22:01
*** slaweq has quit IRC22:01
fungimirror.fedora, mirror.ubuntu and mirror.ubuntu-ports all of which i suspect are substantial in size22:01
fungiso not entirely surprising22:02
fungithe count has been steadily falling all day22:02
fungihopefully those will finish up at least by the end of the weekend22:02
corvusi doubt they will22:02
corvusCould not lock the VLDB entry for the volume 536871006.22:02
corvusthey probably exceeded the timeout on mirror-update22:03
fungioh :/22:03
fungido we need to manually acquire the lock and then start them again in a screen session?22:03
fungi(after deleting the old lock?)22:03
corvusyes, but that's an afs volume lock, so we have to verify that it's really okay to unlock and then override it22:04
corvusyep -- the afs creds timed out during the release: rxk: authentication expired22:06
corvusthat happened sometime between 2019-06-06T22:45:49,877443816+00:00 and 2019-06-07T18:44:01,757147237+00:0022:06
corvusfungi: i think we should grab the lock in a screen session to prevent further releases from mirror-update01, remove the afs volume lock, then perform a vos release in screet from afsdb01 so it runs with localauth22:07
*** slaweq has joined #openstack-infra22:08
corvuswe should make sure there are no currently running transactions before removing the volume lock22:08
fungitook me a moment to figure out why we should vos release in secret ;)22:08
fungiand yeah, i'll take a look22:08
corvusi think that's "vos status" and it doesn't show anything22:10
fungioh, cool, i was checking the process list on mirror-update22:10
corvuseven if the 'vos release' command terminates, the underlying transaction (which is from the fileserver on afs01.dfw to afs02.dfw) could still be going22:11
fungiahh, okay22:11
corvusbasically, 'vos release' locks the vldb, tells 01 to replicate to 02, waits for it to finish, then unlocks the vldb22:11
fungijudging from the crontab the script locks for these three are /var/run/fedora-mirror.lock /var/run/reprepro/ubuntu.lock /var/run/reprepro/ubuntu-ports.lock22:11
corvusso vos release was effectively killed (due to cred timeout) sometime during that process, but it could have been anytime after the lock and before the unlock.22:12
corvusfungi: yeah, i think we're ready to grab those files.  want to do it?22:12
fungidoing it now22:12
fungiokay, i have three screen windows in the root session each with a bash flock'd on one of those lockfiles22:13
corvusi've verified there are still no transactions running22:13
corvusi'll unlock the vldb now22:13
fungiahh, thanks22:14
fungii've done it before, but would have to look up the commands i used22:14
corvusi'm running "vos unlock mirror.fedora"22:14
corvusand similar22:14
fungigot it22:14
*** slaweq has quit IRC22:15
corvusokay, we should be able to start releases on afs01db now22:15
corvusdo we want to do them sequentially or in parallel?22:15
fungii guess they'll fight for bandwidth, right?22:16
fungiwill the be more efficient sequenced? or just reduce the chances of all of them running afoul of a random network problem22:16
corvusi... think so?  i've forgotten all the kernel udp trivia i need to know to answer that for certain22:16
fungiyeah, i can do them one by one. i'll queue them all up in one command22:16
fungier, one command line22:17
corvusfungi: i think "vos release mirror.fedora -localauth" is what wants to run on afsdb01.openstack.org22:17
corvusfungi: you want to take care of that too?22:18
fungiahh, yeah, i suppose i don't need to rerun the mirror scripts themselves22:18
corvusnope, the rw volume is in good shape; rsyncs finished and all.22:19
fungiroot screen session started on afsdb0122:19
corvusfungi: maybe && instead of ; ?22:19
corvus(just in case something goes wrong but takes 12 hours to do it)22:19
fungigood idea22:20
corvusfungi: lgtm22:20
fungiand there it goes22:20
corvus"vos examine mirror.fedora" shows it locked for release22:20
fungii'll check in on it off and on it while i'm awake at least22:21
fungionce these complete successfully i'll exit the bash processes holding the mirror update locks for these22:21
corvusinfra-root: ^ long running vos release processes in screen on afsdb01 if you want to check on progress over the weekend22:21
clarkbrgr22:22
corvus#status log fedora, ubuntu, ubuntu-ports mirrors are currently resyncing to afs02.dfw and won't update again until that is finished22:23
openstackstatuscorvus: finished logging22:23
openstackgerritMerged opendev/storyboard-webclient master: Add a subcontroller for Team projects  https://review.opendev.org/64196322:25
openstackgerritMerged opendev/storyboard-webclient master: Add UI for making security teams related to projects  https://review.opendev.org/64196422:25
*** weifan has quit IRC22:28
*** weifan has joined #openstack-infra22:29
*** hamzy_ has joined #openstack-infra22:30
*** hamzy has quit IRC22:32
*** weifan has quit IRC22:33
*** weifan has joined #openstack-infra22:36
*** EvilienM is now known as EmilienM22:42
*** weifan has quit IRC22:45
*** weifan has joined #openstack-infra22:46
*** weifan has quit IRC22:50
*** weifan has joined #openstack-infra22:52
openstackgerritMerged openstack/pbr master: Make WSGI tests listen on localhost  https://review.opendev.org/66375822:53
*** smarcet has quit IRC23:01
*** diablo_rojo has quit IRC23:01
*** hwoarang has quit IRC23:03
*** hwoarang has joined #openstack-infra23:06
openstackgerritMerged openstack/pbr master: Switch to release.o.o for constraints  https://review.opendev.org/66398823:06
openstackgerritMerged opendev/storyboard-webclient master: Add support for Story permission endpoints  https://review.opendev.org/64207023:10
openstackgerritMerged opendev/storyboard-webclient master: Allow marking stories as security-related  https://review.opendev.org/64207123:10
*** aaronsheffield has quit IRC23:10
*** slaweq has joined #openstack-infra23:11
*** gyee has quit IRC23:12
*** slaweq has quit IRC23:15
*** markvoelker has joined #openstack-infra23:17
*** michael-beaver has quit IRC23:24
*** tjgresha has joined #openstack-infra23:29
*** diablo_rojo has joined #openstack-infra23:37
*** markvoelker has quit IRC23:38
pabelangerclarkb: ack, thanks23:53
*** weifan has quit IRC23:57

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!