Wednesday, 2021-03-31

*** xarlos has quit IRC00:06
*** jistr has quit IRC00:18
*** jistr has joined #openstack-infra00:19
*** hamalq_ has quit IRC00:49
*** gyee has quit IRC01:01
*** jamesdenton has quit IRC01:04
*** jamesden_ has joined #openstack-infra01:04
*** iurygregory has quit IRC01:16
*** iurygregory has joined #openstack-infra01:17
*** __ministry has joined #openstack-infra01:17
*** iurygregory has quit IRC01:18
*** iurygregory has joined #openstack-infra01:18
*** osmanlicilegi has joined #openstack-infra01:20
*** gshippey has quit IRC01:35
*** openstackgerrit has joined #openstack-infra01:44
<openstackgerrit> Jeremy Stanley proposed openstack/project-config master: Clean up OpenEdge configuration  https://review.opendev.org/c/openstack/project-config/+/783990  01:44
*** carloss has quit IRC01:57
*** iurygregory has quit IRC02:08
*** iurygregory has joined #openstack-infra02:09
*** rcernin has quit IRC02:31
*** rcernin has joined #openstack-infra02:38
*** armax has quit IRC02:55
*** armax has joined #openstack-infra02:59
*** rcernin has quit IRC03:07
*** rcernin has joined #openstack-infra03:07
*** rcernin has quit IRC03:07
*** akahat has quit IRC03:08
*** rcernin has joined #openstack-infra03:09
*** kopecmartin has quit IRC03:09
*** dulek has quit IRC03:09
*** nhicher has quit IRC03:09
*** rcernin has quit IRC03:11
*** rcernin has joined #openstack-infra03:12
*** nhicher has joined #openstack-infra03:13
*** kopecmartin has joined #openstack-infra03:13
*** rcernin has quit IRC03:14
*** rcernin has joined #openstack-infra03:14
*** rcernin has quit IRC03:16
*** rcernin has joined #openstack-infra03:16
*** rcernin has quit IRC03:18
*** rcernin has joined #openstack-infra03:19
*** akahat has joined #openstack-infra03:22
*** zer0c00l|afk is now known as zer0c00l03:38
*** psachin has joined #openstack-infra03:41
*** armax has quit IRC03:57
*** ykarel|away has joined #openstack-infra04:20
*** vishalmanchanda has joined #openstack-infra04:30
*** ykarel|away is now known as ykarel04:39
*** zbr|rover4 has joined #openstack-infra05:04
*** zbr|rover has quit IRC05:06
*** zbr|rover4 is now known as zbr|rover05:06
*** whoami-rajat has joined #openstack-infra05:17
*** auristor has quit IRC05:27
*** adriant has quit IRC05:43
*** ralonsoh has joined #openstack-infra06:10
*** slaweq has joined #openstack-infra06:10
*** ajitha has joined #openstack-infra06:11
*** sboyron has joined #openstack-infra06:21
*** gfidente|afk is now known as gfidente06:21
*** jcapitao has joined #openstack-infra06:27
*** eolivare has joined #openstack-infra06:30
*** hashar has joined #openstack-infra06:45
*** dklyle has quit IRC06:51
<openstackgerrit> Hervé Beraud proposed openstack/project-config master: Use publish-to-pypi on barbican ansible roles  https://review.opendev.org/c/openstack/project-config/+/784011  06:52
*** dulek has joined #openstack-infra07:04
*** jamesden_ has quit IRC07:08
*** jamesdenton has joined #openstack-infra07:09
*** rcernin has quit IRC07:25
*** tosky has joined #openstack-infra07:33
<hberaud> Hello infra team, please can you have a look ASAP at this patch => https://review.opendev.org/c/openstack/project-config/+/784011 This patch is a bit urgent and it will allow us to land 2 new deliverables within Wallaby. The deadline is close and the gates are blocked by this patch. Thank you for your understanding.  07:38
*** xarlos has joined #openstack-infra07:41
*** ykarel has quit IRC08:01
<frickler> hberaud: done  08:07
*** ociuhandu has joined #openstack-infra08:07
*** dpawlik0 is now known as dpawlik08:08
*** lucasagomes has joined #openstack-infra08:09
<openstackgerrit> Merged openstack/project-config master: Use publish-to-pypi on barbican ansible roles  https://review.opendev.org/c/openstack/project-config/+/784011  08:16
<hberaud> frickler: thank you very much  08:28
*** psachin has quit IRC08:30
*** derekh has joined #openstack-infra08:30
*** ociuhandu has quit IRC08:36
*** vishalmanchanda has quit IRC08:39
*** ykarel has joined #openstack-infra08:43
*** ykarel is now known as ykarel|lunch08:43
*** jcapitao has quit IRC08:51
*** ociuhandu has joined #openstack-infra08:52
*** ociuhandu has quit IRC08:53
*** ociuhandu has joined #openstack-infra08:55
*** vishalmanchanda has joined #openstack-infra09:11
*** derekh has quit IRC09:21
*** derekh has joined #openstack-infra09:21
*** jcapitao has joined #openstack-infra09:24
<hberaud> frickler: sorry to disturb you again, do you have an idea why this error happens? https://zuul.opendev.org/t/openstack/build/3c94d9fe7dbc41fb82030a7e5adbf88a/log/job-output.txt#4257  09:38
<hberaud> it doesn't seem related to the deliverable itself  09:39
<hberaud> it looks more like an environment issue (a network issue or something like that) but I'm not sure  09:39
<hberaud> however, another release patch passed successfully at more or less the same time https://review.opendev.org/c/openstack/releases/+/784014  09:41
<hberaud> so I wonder if some other elements aren't missing somewhere  09:41
<hberaud> (for these ansible roles)  09:42
*** hashar has quit IRC09:52
*** jcapitao has quit IRC09:56
*** jcapitao has joined #openstack-infra09:56
*** yamamoto has quit IRC10:00
*** ykarel|lunch is now known as ykarel10:04
*** rcernin has joined #openstack-infra10:06
*** rcernin has quit IRC10:08
*** rcernin has joined #openstack-infra10:08
*** jamesdenton has quit IRC10:20
*** jamesden_ has joined #openstack-infra10:20
*** derekh has quit IRC10:26
*** derekh has joined #openstack-infra10:26
*** yamamoto has joined #openstack-infra10:33
*** jcapitao has quit IRC10:36
*** hjensas has joined #openstack-infra10:36
*** jcapitao has joined #openstack-infra10:36
*** jcapitao is now known as jcapitao_lunch10:39
<frickler> hberaud: this looks like a rate limit on the pypi side to me, maybe too many things merged at once? maybe some other infra-root can dig deeper  10:43
<hberaud> frickler: yeah, we're currently discussing this on #openstack-release and ttx just proposed https://review.opendev.org/c/openstack/releases/+/784068  10:44
*** yamamoto has quit IRC10:46
*** jcapitao_lunch has quit IRC10:46
*** jcapitao_lunch has joined #openstack-infra10:47
<zbr|rover> i observed that a particular job is reaching RETRY_LIMIT very often, but i am not able to get any feedback regarding why; zuul provides no logs.  10:47
<zbr|rover> https://zuul.opendev.org/t/openstack/builds?job_name=tripleo-ansible-centos-8-molecule-tripleo-modules&project=openstack/tripleo-ansible  10:47
*** jcapitao_lunch has quit IRC10:51
*** jcapitao_lunch has joined #openstack-infra10:51
*** carloss has joined #openstack-infra10:55
*** mugsie__ is now known as mugsie11:01
*** jcapitao_lunch has quit IRC11:04
*** ociuhandu has quit IRC11:13
<fungi> zbr|rover: we had what looked like a memory leak in the zuul scheduler yesterday which started causing zookeeper disconnects once the server began lightly utilizing swap, and that seemed to be resulting in jobs spontaneously getting retried. we restarted the scheduler to relieve some of the memory pressure  11:29
*** ociuhandu has joined #openstack-infra11:29
<zbr|rover> fungi: i wonder if there is something particular about this job; i got one running and stuck at https://zuul.opendev.org/t/openstack/stream/b3f23e90b8c448e39dd9aa17830f76e5?logfile=console.log  11:30
<zbr|rover> oops, interesting, just got something out of it: 2021-03-31 11:29:49.414003 | centos-8 |   "msg": "Data could not be sent to remote host \"104.130.239.177\". Make sure this host can be reached over ssh: ssh: connect to host 104.130.239.177 port 22: Connection timed out\r\n",  11:30
<zbr|rover> so that task finally failed after being stuck for 17mins  11:31
<zbr|rover> this is how it looks: http://paste.openstack.org/show/804072/  11:31
<fungi> looks like a node somewhere in rackspace  11:31
<fungi> and yeah, i'm not seeing the memory pressure reappearing in the scheduler yet, so this is likely unrelated  11:32
<zbr|rover> were you able to spot anything else with such a high failure rate? https://zuul.opendev.org/t/openstack/builds?job_name=tripleo-ansible-centos-8-molecule-tripleo-modules&project=openstack/tripleo-ansible  11:34
*** ociuhandu has quit IRC11:34
*** dtantsur|afk is now known as dtantsur11:34
<zbr|rover> i failed to spot one, which is why i suspect it may be something particular to this job, or at least to the nodeset.  11:34
<fungi> so maybe the tripleo-ansible-centos-8-molecule-tripleo-modules job is crashing nodes, or the centos-8-stream nodes could be unstable?  11:36
<fungi> and no, i'm still waking up, trying to broadly survey what's going on with everything before i know where to focus my efforts  11:36
<fungi> need to follow up on some release failures too  11:36
<fungi> see if they're related or not  11:36
<zbr|rover> sure, i also need to go and grab lunch or i will end up starving  11:36
*** lpetrut has joined #openstack-infra11:37
<fungi> we can set an autohold for one of the failing changes if it's repeatable, and then try to see if the node recovers after the build completes, or try to reboot it from the api to get it back online and then investigate  11:38
<fungi> i can also try to snag a vm console log from one if we can catch it failing  11:38
*** ociuhandu has joined #openstack-infra11:45
*** jcapitao_lunch has joined #openstack-infra11:46
*** ociuhandu has quit IRC11:50
*** sshnaidm|off is now known as sshnaidm11:57
*** ociuhandu has joined #openstack-infra12:01
*** jcapitao_lunch is now known as jcapitao12:02
*** auristor has joined #openstack-infra12:05
*** ociuhandu has quit IRC12:10
*** nweinber has joined #openstack-infra12:28
*** yamamoto has joined #openstack-infra12:30
*** rcernin has quit IRC12:31
*** yamamoto has quit IRC12:39
*** smcginnis has quit IRC12:53
*** dchen has quit IRC13:03
*** yamamoto has joined #openstack-infra13:10
*** yamamoto has quit IRC13:15
*** yamamoto has joined #openstack-infra13:15
*** ociuhandu has joined #openstack-infra13:20
*** yamamoto has quit IRC13:55
*** rpioso is now known as rpioso|afk13:55
*** rcernin has joined #openstack-infra14:12
*** rcernin has quit IRC14:17
*** armax has joined #openstack-infra14:18
*** ociuhandu has quit IRC14:19
*** ociuhandu has joined #openstack-infra14:20
*** jamesden_ has quit IRC14:24
*** jamesdenton has joined #openstack-infra14:24
*** ociuhandu has quit IRC14:29
*** ociuhandu has joined #openstack-infra14:29
*** rcernin has joined #openstack-infra14:31
*** xarlos has quit IRC14:34
*** rcernin has quit IRC14:35
*** yamamoto has joined #openstack-infra14:35
*** xarlos has joined #openstack-infra14:37
*** ykarel is now known as ykarel|away14:40
*** yamamoto has quit IRC14:42
<clarkb> fungi: frickler: hberaud: I agree that looks like rate limiting from the pypi side. Looks like that job doesn't run particularly slowly, maybe we can get away with some self-induced sleep between requests to pypi?  14:56
<hberaud> clarkb: already done with https://review.opendev.org/c/openstack/releases/+/784068  14:56
<hberaud> And I confirm that fixed the problem  14:56
<clarkb> great  14:57
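The self-induced sleep between pypi requests discussed above can be sketched as a small client-side throttle. This is illustrative only: the class and method names are assumptions, not code from the actual openstack/releases job.

```python
import time


class Throttle:
    """Enforce a minimum delay between successive calls (e.g. pypi requests)."""

    def __init__(self, min_interval):
        self.min_interval = min_interval  # seconds between calls
        self._last = None

    def wait(self):
        # Sleep just long enough that calls are at least min_interval apart.
        now = time.monotonic()
        if self._last is not None:
            remaining = self.min_interval - (now - self._last)
            if remaining > 0:
                time.sleep(remaining)
        self._last = time.monotonic()
```

Each call site would invoke `wait()` before talking to pypi, spacing requests out enough to stay under the server's rate limit.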
*** dklyle has joined #openstack-infra14:57
<hberaud> clarkb: however ykarel|away just notified us of some problem with tarballs  14:57
*** dklyle has quit IRC14:57
<hberaud> Especially with this one => https://tarballs.opendev.org/openstack/barbican-tempest-plugin/?C=M;O=D  14:58
<fungi> well, there's no guarantee it fixed the problem (we might have just not retriggered the issue on the next attempt), but it seems like a good measure to help mitigate  14:58
<hberaud> fungi: I think I'll rewrite it with tenacity  14:58
<hberaud> to better handle this case  14:59
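A retry-with-backoff wrapper along the lines of what tenacity provides could look like the stdlib sketch below; the function name and defaults are assumptions. Tenacity itself would express the same policy declaratively with its `@retry` decorator, `wait_exponential` and `stop_after_attempt`.

```python
import time


def retry_with_backoff(func, attempts=5, base_delay=1.0, retry_on=(OSError,)):
    """Call func(), retrying on the given exceptions with exponential backoff."""
    for attempt in range(attempts):
        try:
            return func()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts, propagate the last error
            # 1s, 2s, 4s, ... gives a rate-limited endpoint time to recover
            time.sleep(base_delay * (2 ** attempt))
```

The backoff matters here: retrying a rate-limited request immediately tends to keep tripping the same limit, while exponentially growing delays let the quota window reset.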
<clarkb> possibly related: pypi is going to remove the XML-RPC search api  14:59
<hberaud> and as elod suggested, we should move away from xmlrpc, which is deprecated  14:59
<hberaud> clarkb: yes  14:59
<clarkb> yup, well, not just deprecated; it sounds like it will be removed entirely soon  15:00
<hberaud> ok  15:00
<hberaud> So we need to deal with it quickly  15:01
<clarkb> I'm trying to remember where I last saw updates, I think on a github issue  15:02
<clarkb> the pypi-announce list is still quiet though, so maybe not as soon as I thought  15:02
*** ajitha has quit IRC15:03
*** dklyle has joined #openstack-infra15:04
<fungi> hberaud: clarkb: looks like the missing additions on the tarballs site are probably delayed vos release for the project.tarballs volume. looking into it now  15:04
<fungi> not sure if ianw started doing the ord replica sync for that, but if so maybe he temporarily held a lock or something  15:05
<clarkb> https://github.com/pypa/warehouse/issues/8769 implies you can do similar with the rest api at least  15:05
*** ociuhandu has quit IRC15:05
*** ociuhandu has joined #openstack-infra15:06
<clarkb> hberaud: https://github.com/pypa/warehouse/issues/4321 has details on the rate limit itself  15:06
<hberaud> clarkb: excellent, thanks for the link  15:06
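Moving off the deprecated XML-RPC API as discussed above usually means querying PyPI's JSON API (`https://pypi.org/pypi/<project>/json`). A minimal sketch, with the version-listing helper being an illustrative assumption rather than the actual release tooling:

```python
import json
from urllib.request import urlopen


def released_versions(payload):
    """Return the version strings present in a PyPI JSON API payload.

    The payload's "releases" mapping is keyed by version string.
    """
    return sorted(payload.get("releases", {}))


def fetch_versions(package):
    """Query PyPI's JSON API for a package's released versions."""
    url = "https://pypi.org/pypi/%s/json" % package
    with urlopen(url) as resp:
        return released_versions(json.load(resp))
```

Unlike the XML-RPC search endpoint, this per-project route is a plain HTTP GET, so the throttling and retry logic discussed earlier applies to it directly.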
<fungi> there is a "vos release -v -localauth project.tarballs" running on afs01.dfw since 00:48 utc  15:07
<fungi> i have a feeling that's in progress adding the afs01.ord replica  15:07
<fungi> so the tarballs site will be stale until that's done  15:07
<clarkb> or we serve from the RW replica temporarily  15:08
*** __ministry1 has joined #openstack-infra15:10
*** ociuhandu has quit IRC15:12
*** mfixtex has joined #openstack-infra15:14
*** lpetrut has quit IRC15:22
*** rcernin has joined #openstack-infra15:26
*** noonedeadpunk has quit IRC15:27
*** noonedeadpunk has joined #openstack-infra15:28
<fungi> yeah, if this takes much longer i may push that change up  15:29
<fungi> need to take a look at the traffic graph and try to estimate how much time is remaining  15:30
*** rcernin has quit IRC15:31
*** ykarel|away has quit IRC15:37
*** ociuhandu has joined #openstack-infra15:38
*** hashar has joined #openstack-infra15:41
*** ociuhandu has quit IRC15:43
<mtreinish> I had a weird reno question (not sure if this is the best channel to ask).  15:47
<mtreinish> I've been using reno in some projects on github and we're facing a problem with stable branches and point releases  15:47
<mtreinish> basically, when we backport fixes with notes to stable branches, they only show up under point releases when we run `reno report` from that stable branch  15:47
<mtreinish> When we run it from master, the note from the commit which was backported from the master branch will be shown under the pending release section  15:48
<mtreinish> I was thinking maybe it was the branch scanning config settings, but we use a 'stable/0.x' branch name which should match the default patterns  15:48
<mtreinish> does anyone have any thoughts?  15:49
*** spotz has joined #openstack-infra15:50
<fungi> mtreinish: we were discussing this in #openstack-release yesterday  15:51
* fungi gets a link  15:51
<mtreinish> ah, a channel I probably should idle in. (I lost my znc config a while back and forgot to rejoin several channels)  15:53
<fungi> http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-03-30.log.html#t2021-03-30T19:49:18  15:53
<mtreinish> fungi: hmm, it's a bit more than that in my case I think. Like point releases tagged on the stable branches don't show up in the output at all unless your current checkout is the stable branch  15:56
<fungi> mtreinish: yes, that's always been true i think  15:56
<fungi> branches are handled independently, you can create a release notes document per branch  15:57
<mtreinish> has it? I could have sworn it used to do the right thing when running from master and would show all stable point releases tagged on the stable branches  15:58
<mtreinish> but I could just be misremembering  15:58
*** ociuhandu has joined #openstack-infra15:59
<fungi> the way openstack works around that is by having one release notes document per release: https://docs.openstack.org/releasenotes/nova/victoria.html  15:59
<fungi> so the stable/victoria release notes continue to update that victoria release notes document  16:00
<fungi> and things which happened on master after victoria branched go into a wallaby release notes document  16:00
*** rpioso|afk is now known as rpioso16:00
*** lucasagomes has quit IRC16:01
*** __ministry1 has quit IRC16:01
<fungi> projects with all their releases in one document have struggled with it as a result. e.g. when zuul wanted to backport a fix recently we created a temporary branch and put a fix on it with a release note, but then that wouldn't show up in the release notes for the master branch at all. what we ended up having to do was revert all release notes in master since the point where we branched, merge the  16:02
<fungi> fix branch back into master with the tag that was on it, then readd all the master branch release notes with new reno ids after the merge point  16:02
<fungi> and i think that only worked because we hadn't tagged anything new on master after the branch point  16:02
<mtreinish> ah ok, yeah I just ran some tests locally and understand my confused memory now. On tempest it shows the stable point releases, but tempest is branchless :P  16:03
<mtreinish> ok, well, it's good to know I'm not doing something wrong, and that it's a headache for everyone  16:04
<fungi> right, and if tempest ever needs to "backport" a fix to an old release, it will probably have the same struggle zuul does  16:04
<fungi> as zuul is similarly branchless  16:04
<mtreinish> I would have assumed, because the documentation says it supports multiple branch scanning, that it would be able to handle this case, but I guess not  16:04
<mtreinish> I'll have to take a look at the reno code and see if there is a way to handle doing this (maybe optionally, because I assume it won't be perfect)  16:05
<mtreinish> thanks for the help  16:05
<fungi> the reno maintainers (openstack release management team) would probably be fine accepting a patch which made that better, but i expect the trick will be working out the reintegration logic for multiple branches  16:06
<fungi> the problem is not so much that nobody's thought to add support for it, but that the actual logistics are unclear  16:06
<mtreinish> yeah, that's why I was thinking it would be an optional thing; I was thinking of doing something like a BFS traversal approach, but then you have to match tags to stable branches, which could get hairy  16:09
*** ociuhandu has quit IRC16:12
*** hamalq has joined #openstack-infra16:18
<zbr|rover> what would be the best channel to chat about pbr?  16:18
*** hamalq_ has joined #openstack-infra16:19
*** hamalq has quit IRC16:22
*** jamesdenton has quit IRC16:24
*** jamesdenton has joined #openstack-infra16:25
<fungi> zbr|rover: maybe #openstack-oslo, since it's an official deliverable of the openstack oslo team. i don't do a lot with pbr maintenance, but i patch or review things in it from time to time and am happy to join discussion there  16:38
*** jcapitao has quit IRC16:40
*** hamalq_ has quit IRC16:41
*** hamalq has joined #openstack-infra16:41
<zbr|rover> thanks, i did not know that aspect, and the irc channel was not mentioned in the readme.  16:42
*** eolivare has quit IRC16:43
<openstackgerrit> Sorin Sbârnea proposed openstack/pbr master: Make default envlist generic in tox.ini  https://review.opendev.org/c/openstack/pbr/+/757445  16:56
*** derekh has quit IRC17:00
*** dtantsur is now known as dtantsur|afk17:04
<openstackgerrit> Stephen Finucane proposed openstack/pbr master: Add test for cfg -> py transformation  https://review.opendev.org/c/openstack/pbr/+/780658  17:09
<openstackgerrit> Stephen Finucane proposed openstack/pbr master: Reverse ordering of 'D1_D2_SETUP_ARGS'  https://review.opendev.org/c/openstack/pbr/+/780659  17:09
*** TViernion has quit IRC17:10
*** TViernion has joined #openstack-infra17:16
*** rcernin has joined #openstack-infra17:27
*** rcernin has quit IRC17:35
*** vishalmanchanda has quit IRC17:51
*** timburke has joined #openstack-infra17:52
*** gfidente is now known as gfidente|afk17:59
*** yamamoto has joined #openstack-infra18:50
*** yamamoto has quit IRC18:54
*** jamesdenton has quit IRC18:56
*** jamesden_ has joined #openstack-infra18:56
*** hashar has quit IRC19:08
*** rcernin has joined #openstack-infra19:31
*** rcernin has quit IRC19:36
*** hjensas has quit IRC20:10
*** sboyron has quit IRC20:12
*** nweinber has quit IRC20:14
*** rcernin has joined #openstack-infra20:30
*** ralonsoh has quit IRC20:31
*** d34dh0r53 has quit IRC20:35
<ianw> fungi: urgh, yeah i added the ORD sites as mentioned in the meeting and started a vos release ... tarballs is still going  20:37
<ianw> it's in a screen on afs01  20:37
<fungi> yep, that seemed to be what it turned out to be, no worries  20:37
<ianw> i dunno. it's going at 10Mbps (1.3MB/s). yesterday i was copying things at 45MiB/s rax->vexxhost. i don't know why it's so slow  20:38
*** jamesden_ has quit IRC20:38
<clarkb> afs and latency :(  20:39
*** jamesdenton has joined #openstack-infra20:39
<ianw> is the latency between dfw & ord that big?  20:39
<ianw> it seems to be very suspiciously 10Mbps, i would say  20:40
<clarkb> I would suspect it's in the 30ms range?  20:40
<clarkb> which is probably high enough if afs is doing its weird windowing thing  20:40
*** d34dh0r53 has joined #openstack-infra20:40
<clarkb> though we could also do a transfer from mirror to mirror in both directions and baseline tcp windowing too  20:40
<ianw> time=41.3  20:40
*** whoami-rajat has quit IRC20:47
<fungi> dallas and chicago aren't exactly next door, even as the jet flies  20:48
<ianw> i should probably kill this and try: https://lists.openafs.org/pipermail/openafs-info/2018-August/042502.html (and document it ...)  20:50
<fungi> or we can temporarily update the tarballs vhost on static.o.o to serve from the read-write volume, we've done that before more than once  20:51
*** rcernin has quit IRC21:05
*** rcernin has joined #openstack-infra21:05
*** rcernin has quit IRC21:30
*** dansmith has quit IRC21:33
*** dansmith has joined #openstack-infra21:37
*** rcernin has joined #openstack-infra21:55
*** rcernin has quit IRC22:00
*** rcernin has joined #openstack-infra22:13
*** rcernin has quit IRC22:18
*** yamamoto has joined #openstack-infra22:30
*** rcernin has joined #openstack-infra22:32
*** rcernin has quit IRC22:32
*** rcernin has joined #openstack-infra22:33
*** jamesdenton has quit IRC22:57
*** jamesden_ has joined #openstack-infra22:58
<openstackgerrit> Merged openstack/project-config master: Clean up OpenEdge configuration  https://review.opendev.org/c/openstack/project-config/+/783990  23:22
*** tosky has quit IRC23:30
*** yamamoto has quit IRC23:34
*** lamt has joined #openstack-infra23:59

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!