Tuesday, 2020-01-21

00:12  *** jamesmcarthur has quit IRC
00:12  *** jamesmcarthur has joined #zuul
00:13  *** zxiiro has quit IRC
00:18  *** jamesmcarthur has quit IRC
00:24  *** jamesmcarthur has joined #zuul
00:25  *** igordc has quit IRC
00:25  *** igordc has joined #zuul
00:28  <openstackgerrit> Tristan Cacqueray proposed zuul/zuul master: docs: remove generated toc from the main index  https://review.opendev.org/703468
00:49  *** igordc has quit IRC
00:55  *** rlandy is now known as rlandy|bbl
01:04  *** jamesmcarthur has quit IRC
01:04  *** jamesmcarthur has joined #zuul
01:06  *** jamesmcarthur has quit IRC
01:06  *** jamesmcarthur has joined #zuul
01:41  *** rfolco has quit IRC
01:56  *** jamesmcarthur has quit IRC
01:57  *** jamesmcarthur has joined #zuul
02:02  *** jamesmcarthur has quit IRC
02:24  *** jamesmcarthur has joined #zuul
02:56  *** bhavikdbavishi has joined #zuul
03:04  *** jamesmcarthur has quit IRC
03:05  *** jamesmcarthur has joined #zuul
03:08  *** bhavikdbavishi1 has joined #zuul
03:10  *** bhavikdbavishi has quit IRC
03:10  *** bhavikdbavishi1 is now known as bhavikdbavishi
03:11  *** jamesmcarthur has quit IRC
03:30  *** jamesmcarthur has joined #zuul
03:33  *** jamesmcarthur has quit IRC
03:39  *** rlandy|bbl has quit IRC
04:42  *** jamesmcarthur has joined #zuul
04:55  *** saneax has quit IRC
05:34  *** evrardjp has quit IRC
05:34  *** evrardjp has joined #zuul
05:35  *** jamesmcarthur has quit IRC
05:37  *** jamesmcarthur has joined #zuul
05:43  *** jamesmcarthur has quit IRC
05:45  *** raukadah is now known as chandankumar
06:01  *** saneax has joined #zuul
06:04  *** yolanda has quit IRC
06:06  *** saneax has quit IRC
06:06  *** jamesmcarthur has joined #zuul
06:13  *** jamesmcarthur has quit IRC
06:23  <openstackgerrit> Simon Westphahl proposed zuul/nodepool master: Handle event id in node requests  https://review.opendev.org/703406
06:23  <openstackgerrit> Simon Westphahl proposed zuul/nodepool master: Centralize logging adapters  https://review.opendev.org/703407
06:53  *** saneax has joined #zuul
07:09  *** themroc has joined #zuul
07:09  *** jamesmcarthur has joined #zuul
07:14  *** jamesmcarthur has quit IRC
07:25  *** yolanda has joined #zuul
07:33  *** yolanda has quit IRC
07:34  *** yolanda has joined #zuul
07:39  <openstackgerrit> Andreas Jaeger proposed zuul/zuul-jobs master: fetch-sphinx: Exclude doctrees directory  https://review.opendev.org/703547
07:41  *** bhavikdbavishi has quit IRC
07:48  *** yolanda has quit IRC
07:50  <openstackgerrit> Simon Westphahl proposed zuul/nodepool master: Pass node request handler to launcher base class  https://review.opendev.org/703549
08:10  *** jamesmcarthur has joined #zuul
08:14  *** jamesmcarthur has quit IRC
08:15  *** bhavikdbavishi has joined #zuul
08:18  *** bhavikdbavishi1 has joined #zuul
08:20  *** bhavikdbavishi has quit IRC
08:20  *** bhavikdbavishi1 is now known as bhavikdbavishi
08:21  *** yolanda has joined #zuul
08:29  *** armstrongs has joined #zuul
08:39  *** armstrongs has quit IRC
08:40  *** hashar has joined #zuul
08:52  *** jpena|off is now known as jpena
08:57  <openstackgerrit> Simon Westphahl proposed zuul/nodepool master: Annotate logs in launcher  https://review.opendev.org/703558
08:57  <openstackgerrit> Simon Westphahl proposed zuul/nodepool master: Annotate logs in node request handler  https://review.opendev.org/703559
08:57  <openstackgerrit> Simon Westphahl proposed zuul/nodepool master: Include event id in node request listings  https://review.opendev.org/703560
09:07  *** tosky has joined #zuul
09:10  <openstackgerrit> Simon Westphahl proposed zuul/nodepool master: Annotate logs in zk module  https://review.opendev.org/703561
09:11  <openstackgerrit> Jan Kubovy proposed zuul/zuul master: Add spec for scale out scheduler  https://review.opendev.org/621479
09:15  <openstackgerrit> Jan Kubovy proposed zuul/zuul master: Add spec for scale out scheduler  https://review.opendev.org/621479
09:21  <reiterative> clarkb: Thanks! That helped me to find the problem: my base job was including roles from zuul-jobs, but that should be zuul/zuul-jobs!
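For context, the fix reiterative describes lives in the job's roles stanza: referencing zuul-jobs by its full canonical name. A minimal sketch under that assumption (the job name is illustrative):

    - job:
        name: my-base
        parent: null
        roles:
          - zuul: zuul/zuul-jobs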
09:25  *** themr0c has joined #zuul
09:27  *** yolanda has quit IRC
09:28  *** themroc has quit IRC
09:33  *** yolanda has joined #zuul
09:40  <openstackgerrit> Antoine Musso proposed zuul/zuul master: Docs: fix stestr run example  https://review.opendev.org/703566
09:41  *** bhavikdbavishi has quit IRC
09:49  *** yolanda has quit IRC
09:58  <openstackgerrit> Antoine Musso proposed zuul/zuul master: tox: pass --slowest to stestr  https://review.opendev.org/703571
10:08  <openstackgerrit> Antoine Musso proposed zuul/zuul master: Divide concurrent tests by classes  https://review.opendev.org/703575
10:12  *** openstackgerrit has quit IRC
10:39  *** bhavikdbavishi has joined #zuul
10:42  *** bhavikdbavishi1 has joined #zuul
10:43  *** bhavikdbavishi has quit IRC
10:43  *** bhavikdbavishi1 is now known as bhavikdbavishi
11:08  <hashar> fun, running tests.unit.test_v3.TestAnsible28.test_plugins_1 suite, it seems Ansible detects python and ends up using /usr/bin/python
11:08  <hashar> which turns out to be python2.7 on my machine ;)
11:08  <hashar> "discovered_interpreter_python": "/usr/bin/python",'
11:24  <reiterative> Is it possible to use the 'Depends-On' clause to specify a dependency on a change in zuul/zuul-jobs, which adds a role that the Zuul test job for your change uses?
11:25  <reiterative> e.g. I have a proposed change (https://review.opendev.org/r/693513/) in zuul-jobs, and I want a test job (which uses Bazel) on my Zuul setup to use the install-bazel role that this change adds
11:40  *** ironfoot has joined #zuul
12:00  *** rfolco has joined #zuul
12:04  <AJaeger> reiterative: yes, that should work.
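Depends-On goes in the commit message footer of the change whose jobs should use the proposed role, roughly like this (the subject line and Change-Id are placeholders):

    Add a Bazel-based test job

    Depends-On: https://review.opendev.org/693513
    Change-Id: I0000000000000000000000000000000000000000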
12:09  *** bhavikdbavishi has quit IRC
12:10  *** yolanda has joined #zuul
12:10  *** yolanda has quit IRC
12:11  <sugaar> Hi, the line "[gearman] server=127.0.0.1" in zuul.conf specifies where gearman is going to launch? So if I specify server=scheduler, will it be launched "inside" the scheduler?
12:11  *** yolanda has joined #zuul
12:21  *** jpena is now known as jpena|lunch
12:21  <ironfoot> sugaar: looks like zuul-scheduler provides the gearman daemon.
12:22  <ironfoot> "You may supply your own gearman server, but the Zuul scheduler includes a built-in server which is recommended. " in https://zuul-ci.org/docs/zuul/howtos/installation.html?highlight=gearman#zuul-components
12:25  <ironfoot> so, to clarify, you don't decide where gearman is going to be launched.
12:41  <sugaar> I understand that, however for some reason gearman does not get launched. I am trying to discover why. I have defined a gearman port (2181) but it seems like it can not be reached
12:42  <sugaar> https://paste.gnome.org/p5yjt1rvp
12:42  <ironfoot> have you set the "[gearman_server]    start=true" option?
12:46  <sugaar> yes, it is included in the conf file: [gearman] --> server=scheduler ; [gearman_server] --> start=true
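A minimal sketch of the zuul.conf being described. Note that [gearman] server is the address the other components use to reach the scheduler's built-in gearman (it does not choose where gearman runs), and the built-in server listens on gearman's default port 4730, not 2181, which is ZooKeeper's port:

    [gearman]
    server=scheduler

    [gearman_server]
    start=true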
12:47  <pabelanger> heads up, pip 20.0 broke bindep jobs for us
12:48  <AJaeger> there was just a report in #openstack-infra as well...
12:49  <pabelanger> https://github.com/pypa/pip/issues/7217
13:00  <pabelanger> pip 20.0.1 released
13:07  *** rlandy has joined #zuul
13:07  <pabelanger> confirmed to fix our bindep errors
13:18  *** jamesmcarthur has joined #zuul
13:18  *** zbr|drover has quit IRC
13:19  *** zbr has joined #zuul
13:23  *** jpena|lunch is now known as jpena
13:24  <tobiash> confirmed, 20.0 also broke us completely, 20.0.1 fixed it again
13:33  *** AJaeger has quit IRC
13:33  *** jamesmcarthur has quit IRC
13:34  *** AJaeger has joined #zuul
13:36  *** avass has joined #zuul
13:43  <avass> does a nodeset request need to be fulfilled by a single pool in nodepool?
13:44  <avass> I'm getting 'NODE_FAILURE' when trying to request a windows node and a linux node in the same job and we've separated those into two different pools.
13:44  <Shrews> avass: yes
13:44  *** saneax has quit IRC
13:45  *** jamesmcarthur has joined #zuul
13:45  <avass> Shrews: alright, we'll have to fix that then
13:48  <avass> So that actually means there's no way to mix static nodes and dynamic nodes from a cloud provider?
14:04  <Shrews> avass: that's correct. that is sort of implied in that you may specify only a single driver per provider (https://zuul-ci.org/docs/nodepool/configuration.html#attr-providers.driver)
14:07  <fungi> that's mostly to work around locality challenges in multi-node jobs where connectivity between nodes in different providers isn't guaranteed
14:07  <fungi> but the one-driver-per-provider situation does combine with that to make things somewhat less flexible
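To make the constraint concrete: a nodeset like the one avass describes becomes a single node request, and that request must be satisfiable by one nodepool pool. If the two labels are only served by separate pools (for example a static pool and a cloud pool), the request ends in NODE_FAILURE. Label and node names below are illustrative:

    - nodeset:
        name: windows-plus-linux
        nodes:
          - name: linux
            label: ubuntu-bionic     # served by a cloud provider pool
          - name: windows
            label: windows-static    # served only by a static provider pool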
14:25  *** openstackgerrit has joined #zuul
14:25  <openstackgerrit> Tristan Cacqueray proposed zuul/zuul-operator master: Handle service restart when connections are changed  https://review.opendev.org/703624
14:32  <reiterative> AJaeger Adding the Depends-On param to my change in Gerrit doesn't seem to have any effect when I trigger a recheck. Perhaps my Zuul instance is not correctly configured? I specified the dependency like this in the commit message (before the Change-Id): Depends-On: https://review.opendev.org/693513. Looking at the debug output for my zuul-executor, there's no sign of this dependent change being used when preparing the environment - it just uses the
14:32  <reiterative> master branch to checkout opendev.org/zuul/zuul-jobs.
14:34  <Shrews> swest: I left you some comments on your nodepool logging changes. I'm not a fan of passing entire objects around just to modify logging data. Let's try to keep these things loosely coupled.
14:34  <fungi> reiterative: oh, you were asking about a depends-on between different code review systems?
14:35  <fungi> reiterative: what connector are you using for opendev.org in your configuration?
14:35  <fungi> reiterative: it needs to be the gerrit connector rather than the git connector for cross-platform dependencies to work
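A hedged sketch of the connection fungi is pointing at, assuming the deployment currently defines opendev.org with the git driver; switching it to the gerrit driver looks roughly like this (connection name, user and key path are illustrative):

    [connection opendev]
    driver=gerrit
    server=review.opendev.org
    user=my-zuul-user
    sshkey=/var/lib/zuul/.ssh/id_rsa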
14:38  *** dtroyer has joined #zuul
14:39  <swest> swest: It just looks like the NodeLauncher (at least all of the concrete impls.) is anyways tightly coupled to the NodeLauncher so I figured it might just pass it up to the base class. I'm all for decoupling, but this just looks like some artificial split since all the concrete implementations are already tightly coupled to the handler.
14:39  <swest> Shrews: ^
14:41  <Shrews> swest: subclassing will be tightly coupled to the base class, yes. but we don't need to tie ourselves needlessly to other objects
14:43  <Shrews> swest: can you show me where the NodeLauncher is already coupled to the handler?
14:43  <swest> Shrews: what I meant to say was: all subclasses are tightly coupled to the handler already. So it seems like there is no use for a NodeLauncher that doesn't have a ref to a handler
14:44  <swest> Shrews: e.g. here https://opendev.org/zuul/nodepool/src/branch/master/nodepool/driver/openstack/handler.py#L49 and in all other subclasses of the NodeLauncher
14:44  <swest> basically all the instances that I modified in https://review.opendev.org/#/c/703549/
14:45  <Shrews> swest: ah, in the actual implementation. thx, been a while since i've looked at this code. let me look again....
14:46  <openstackgerrit> Tristan Cacqueray proposed zuul/zuul-operator master: Add tenant reconfiguration when main.yaml changed  https://review.opendev.org/703631
14:51  <Shrews> swest: Ok, you changed my mind on the handler bit now that I've refreshed myself on the code. I still believe we shouldn't pass the NodeRequest object to the get_annotated_logger though.
14:53  <swest> Shrews: k, you convinced me on that part :) There is no need to couple the event id to the node request, since that could also be used as a generic event id e.g. for related log messages from image builds
14:53  <swest> I'll change that
14:54  <Shrews> \
14:54  <Shrews> \o/
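The decoupling agreed on above is roughly: hand the logging helper only the event id rather than the whole NodeRequest. A minimal Python sketch under that assumption (names follow the discussion; the exact helpers in nodepool may differ):

    import logging

    class EventIdLogAdapter(logging.LoggerAdapter):
        """Prefix messages with an event id when one is known."""
        def process(self, msg, kwargs):
            event_id = self.extra.get('event_id')
            if event_id:
                msg = '[e: %s] %s' % (event_id, msg)
            return msg, kwargs

    def get_annotated_logger(logger, event_id):
        # Couple the adapter to a plain event id string, not to a
        # NodeRequest, so the same helper can also annotate unrelated
        # messages such as image build logs.
        return EventIdLogAdapter(logger, {'event_id': event_id})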
14:59  *** armstrongs has joined #zuul
14:59  *** avass has quit IRC
15:04  *** avass has joined #zuul
15:05  <openstackgerrit> Antoine Musso proposed zuul/zuul master: tox: reduce deps used for pep8 env  https://review.opendev.org/703634
15:13  <openstackgerrit> Antoine Musso proposed zuul/zuul master: tox: rename pep8 to linters  https://review.opendev.org/703635
15:13  <openstackgerrit> Antoine Musso proposed zuul/zuul master: tox: do not install bindep for linters  https://review.opendev.org/703636
15:16  *** electrofelix has joined #zuul
15:17  <openstackgerrit> Tristan Cacqueray proposed zuul/zuul-operator master: Handle service restart when connections are changed  https://review.opendev.org/703624
15:17  <openstackgerrit> Tristan Cacqueray proposed zuul/zuul-operator master: Add networking.k8s.io apiGroups rbac for service account  https://review.opendev.org/703637
15:25  *** jamesmcarthur has quit IRC
15:28  *** themr0c has quit IRC
15:31  *** electrofelix has quit IRC
15:35  *** jamesmcarthur has joined #zuul
15:36  *** electrofelix has joined #zuul
15:36  *** electrofelix has quit IRC
15:36  *** electrofelix has joined #zuul
15:38  <openstackgerrit> Tobias Henkel proposed zuul/zuul master: Add spec for enhanced regional executor distribution  https://review.opendev.org/663413
15:39  <openstackgerrit> Tobias Henkel proposed zuul/zuul master: Optionally allow zoned executors to process unzoned jobs  https://review.opendev.org/673840
15:48  <corvus> Shrews: can you take a look at this tiny docs change? https://review.opendev.org/703471
15:48  *** chandankumar is now known as raukadah
15:49  <Shrews> corvus: +A'd
15:50  <tristanC> corvus: mnaser: https://review.opendev.org/703624 should implement the scheduler queues reload on restart
15:53  <tristanC> though i think we need a ready/stop probe for the scheduler as the task enqueues happen too fast and some of the changes are enqueued on the dying scheduler :)
15:57  *** zxiiro has joined #zuul
15:58  <openstackgerrit> Tristan Cacqueray proposed zuul/zuul-operator master: Add OpenShift SCC and functional test  https://review.opendev.org/702758
15:58  <openstackgerrit> Tristan Cacqueray proposed zuul/zuul-operator master: Handle service restart when connections are changed  https://review.opendev.org/703624
16:00  <openstackgerrit> David Shrewsbury proposed zuul/nodepool master: Enable E741 flake8 check  https://review.opendev.org/703650
16:02  *** hashar has quit IRC
16:03  <sugaar> Hi, I am trying to understand better how gearman is launched. Reading https://opendev.org/zuul/zuul/src/branch/master/zuul/cmd/scheduler.py makes me think that the scheduler will automatically launch it, but I don't get when it is going to happen. I have a pod running a zuul-scheduler container and for some reason the bl**dy gearman won't start.
16:03  <sugaar> Obviously the container image must be bug free since it is used in the docker-compose example and in the helm charts. So there is something in my configuration/architecture that is stopping gearman from being launched or from zuul-scheduler detecting that gearman has been launched.
16:04  <tobiash> sugaar: have you configured start=true in the gearman_server section of zuul.conf?
16:04  <sugaar> yes
16:05  <sugaar> I am using the same zuul.conf as in the docker-compose example
16:05  <tobiash> sugaar: it's started here, even before logging is set up: https://opendev.org/zuul/zuul/src/branch/master/zuul/cmd/scheduler.py#L136
16:05  <ironfoot> does 127.0.0.1 work inside a k8s pod?
16:05  <tobiash> sugaar: it's started as a separate process so you should see two python processes in the scheduler container
16:06  <tobiash> ironfoot: should work, but I'd rather suggest to add a service pointing to k8s and use this throughout the deployment
16:06  <tobiash> (to have a common zuul.conf for scheduler, executor, web,..)
16:06  <ironfoot> yeah
16:07  <tobiash> *service pointing to gearman I meant ;)
16:07  <ironfoot> like in the helm chart I've seen around
16:08  <sugaar> tobiash but how can I check which processes are running? the container doesn't have ps or top on it
16:09  <tobiash> sugaar: I thought ps is a builtin shell command?
16:09  <clarkb> you can ps from the host side
16:10  <tobiash> if you have access to the k8s host that's also an option
16:11  <fungi> or check that the listening port is bound?
16:12  <openstackgerrit> Merged zuul/zuul-jobs master: fetch-sphinx: Exclude doctrees directory  https://review.opendev.org/703547
16:13  <sugaar> clark how do you do that? sending the command via ssh?
16:13  *** openstackgerrit has quit IRC
16:13  <sugaar> clarkb ^^
16:13  <clarkb> sugaar: log into the k8s host and ps there (ssh is one way)
16:14  <sugaar> I will try
16:14  <sugaar> thanks
16:14  *** openstackgerrit has joined #zuul
16:14  <openstackgerrit> Clément Mondion proposed zuul/nodepool master: add tags support for aws provider  https://review.opendev.org/703651
16:33  *** tosky has quit IRC
16:35  <openstackgerrit> Clément Mondion proposed zuul/nodepool master: add tags support for aws provider  https://review.opendev.org/703651
16:41  *** mattw4 has joined #zuul
16:46  *** jpena is now known as jpena|brb
16:48  *** armstrongs has quit IRC
16:50  <pabelanger> any objections on updating the zuul UI to sort job names per-buildset?
16:52  <clarkb> pabelanger: by what criteria?
16:53  <pabelanger> alphabetical order mostly
16:53  <corvus> pabelanger: the job list is supposed to be ordered.
16:53  <pabelanger> https://dashboard.zuul.ansible.com/t/ansible/status
16:53  <pabelanger> ansible-network/ansible_collections.ansible.netcommon jobs
16:53  <pabelanger> for example
16:54  <pabelanger> I think project-templates are what they might be ordered by?
16:54  <corvus> the templates are in order, then the jobs
16:55  <corvus> so you can change the order to whatever you want in the project-pipeline config (as long as it isn't inserting a new job between 2 templates)
16:55  <pabelanger> k, that explains it. Was looking to order by just job name
16:56  <pabelanger> I can also just live with this
16:56  <corvus> pabelanger: are you unable to put them in the order you want?
16:57  <pabelanger> in this case, no. github-workflow is a top-level project-template for all projects. So, I'm not able to move between the other 2 templates at the project level, if that makes sense
16:58  <pabelanger> but again, I can accept this layout too
16:58  <corvus> pabelanger: gotcha
16:59  <pabelanger> knowing the order is project-templates, then jobs, is helpful
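A sketch of what corvus describes: the status page reflects configuration order, so jobs pulled in by project-templates appear in template order and explicitly listed jobs follow them; reordering happens in the project stanza. Template and job names here are illustrative:

    - project:
        templates:
          - github-workflow     # jobs from templates come first, in this order
          - collection-tests
        check:
          jobs:
            - extra-lint-job    # explicitly listed jobs appear after the templates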
17:16  *** rfolco is now known as rfolco|brb
17:23  <openstackgerrit> Clément Mondion proposed zuul/nodepool master: add tags support for aws provider  https://review.opendev.org/703651
17:26  *** jpena|brb is now known as jpena
17:34  *** evrardjp has quit IRC
17:34  *** evrardjp has joined #zuul
17:36  <openstackgerrit> Antoine Musso proposed zuul/zuul master: tox: do not install bindep for linters  https://review.opendev.org/703636
17:41  <openstackgerrit> Merged zuul/zuul master: Docs: fix stestr run example  https://review.opendev.org/703566
17:52  *** yolanda has quit IRC
17:54  *** yolanda has joined #zuul
18:00  *** jamesmcarthur has quit IRC
18:34  *** electrofelix has quit IRC
18:34  *** jpena is now known as jpena|off
18:37  <Shrews> I think my wifi router is on its last legs.  Getting random disconnects and strange log entries. :(
18:52  <fungi> sounds a lot like my mind
18:52  <openstackgerrit> James E. Blair proposed zuul/zuul master: Revert "Extract an abstract base Parser class"  https://review.opendev.org/703669
18:52  <corvus> we should make sure to merge that before the next release
18:53  <pabelanger> +2
18:55  <corvus> my power seems to be very unstable...
19:13  *** dustinc|PTO is now known as dustinc
19:20  *** jamesmcarthur has joined #zuul
19:25  *** tosky has joined #zuul
19:29  <zbr> i observed that zuul unittests fail often due to timeout, being very close to 1h. i suggest a +20% bump of the timeout for zuul tox jobs. example: https://review.opendev.org/#/c/703372/
19:29  <zbr> if it sounds ok, i will propose it.
19:30  <AJaeger> zbr: there's a fix up for it already...
19:31  <AJaeger> zbr: https://review.opendev.org/#/c/702473/
19:32  <fungi> which is even better than increasing the timeout
19:32  <fungi> since we'll usually get results ~twice as fast after splitting
19:32  <zbr> i personally do not like sharding, i still have PTSD from molecule where it is still used.
19:32  <AJaeger> fungi: still takes a long time to prepare tests
19:33  <zbr> pytest-xdist is to be preferred, when possible.
19:33  <AJaeger> zbr: check the implementation, for local testing it's not sharded, only in CI
19:33  <clarkb> fwiw I think there is a bit of room to optimize the zuul tests since they spend a lot of time waiting iirc
19:34  <clarkb> as another approach once the bleeding is stopped
19:35  <zbr> as i do not have knowledge of how the tests run, I suppose there were reasons that prevented parallelization inside the same job.
19:35  <clarkb> zbr: it is already using every available cpu to run a separate test runner within the job
19:36  <zbr> bigger box?
19:36  <tobiash> clarkb: actually the zuul tests are quite cpu bound
19:46  *** jamesmcarthur has quit IRC
19:50  *** jamesmcarthur has joined #zuul
20:00  <openstackgerrit> David Shrewsbury proposed zuul/zuul-jobs master: ensure-tox: Output tox version  https://review.opendev.org/701236
20:01  *** jamesmcarthur has quit IRC
20:01  *** jamesmcarthur has joined #zuul
20:07  *** hashar has joined #zuul
20:08  *** jamesmcarthur has quit IRC
20:11  <clarkb> oh we halved the cpus so we use available_cpus/2
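For reference, the knob being discussed is the test runner's worker count. A minimal sketch of deriving it from the available CPUs, assuming stestr's --concurrency option is the mechanism (the exact wiring in Zuul's tox setup may differ):

    import multiprocessing
    import subprocess

    # One stestr worker per two CPUs, mirroring the halving mentioned above.
    concurrency = max(1, multiprocessing.cpu_count() // 2)
    subprocess.check_call(["stestr", "run", "--concurrency", str(concurrency)])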
20:11  <openstackgerrit> Merged zuul/zuul-website master: Remove some redirects  https://review.opendev.org/703457
20:14  <openstackgerrit> Clark Boylan proposed zuul/zuul-website master: Fix releasenotes redirects  https://review.opendev.org/703687
20:14  *** jamesmcarthur has joined #zuul
20:15  <clarkb> a user reported ^ was broken so I've fixed that specific redirect. Not sure if there is a class of redirects needed there though
20:19  <openstackgerrit> Clark Boylan proposed zuul/zuul master: Speed up ansible plugin tests  https://review.opendev.org/703688
20:19  <clarkb> tobiash: ^ that's a first pass at speeding tests up based on the slow test list
20:19  <clarkb> seems to have a measurable impact on my laptop
20:24  <clarkb> always with the .keep file
20:24  <hashar> I found a few oddities today when looking at those
20:25  <clarkb> hashar: the redirects, tests, or .keep files :)
20:25  <hashar> for tests/unit/test_v3.py I noticed that ansible has python set to 'auto' which triggers some heuristic to find the python command
20:25  <clarkb> yes it is supposed to use the platform default then fall back to any python aiui
20:26  <hashar> potentially slows it down, then the ansible < 2.8 tests ended up failing for me locally when pointing python_path to my local python3.7 .. ;D
20:26  <hashar> I tried in the FakeNodepool to use python_path = sys.executable (or something like that)
20:27  <hashar> maybe the fake nodepool could be changed from 'auto' to just '/usr/bin/python', not sure whether that would speed it up anyway
20:27  <clarkb> I think we actually want to test the auto code path though
20:27  <clarkb> since users are relying on it
20:27  <hashar> so that the ansible tests can act as some kind of integration tests against the different OS ?
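The 'auto' in question is Ansible's interpreter discovery: with python_path left at 'auto', Ansible probes the remote node and on older platforms lands on /usr/bin/python. Pinning it is normally done per label in nodepool; a hedged sketch, assuming the label attribute is python-path as in current nodepool (label name and path are illustrative):

    labels:
      - name: ubuntu-bionic
        # default is 'auto', which triggers interpreter discovery on the node
        python-path: /usr/bin/python3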
20:28  <tobiash> clarkb: keep file hit again ;)
20:29  <clarkb> tobiash: ya once we have data on the current run I'll update the change and add the .keep file back
20:29  <clarkb> unless I need that for the tests anyway?
20:29  <clarkb> I probably do now that I think of it
20:29  *** yolanda has quit IRC
20:29  <openstackgerrit> Clark Boylan proposed zuul/zuul master: Speed up ansible plugin tests  https://review.opendev.org/703688
20:29  <tobiash> clarkb: with this we don't have to mess again with the keep file: https://review.opendev.org/663108
20:30  <clarkb> thanks I'll review that after lunch
20:30  *** yolanda has joined #zuul
20:33  <tobiash> mordred, corvus: you also reviewed (and reverted) the first version of this ^ this version should work now
20:33  <openstackgerrit> Merged zuul/zuul master: tox: pass --slowest to stestr  https://review.opendev.org/703571
20:34  <hashar> \O/
20:36  <corvus> clarkb: maybe recheck 688 a few times?  in the past we've worried about cpu contention killing zk, etc....
20:36  <openstackgerrit> Ian Wienand proposed zuul/zuul-jobs master: ensure-tox: fix pipe race  https://review.opendev.org/703689
20:37  <clarkb> corvus: ya, I was worried about that too, but locally (and this is human observation) once the test starts my cpu contention falls way off
20:37  <clarkb> it's the software install that makes my laptop slow
20:38  <clarkb> we'll have to see what the ci data says and rechecking will get us more
20:38  <openstackgerrit> Merged zuul/zuul master: doc: add links to components documentation  https://review.opendev.org/703105
20:41  <tobiash> fyi, the timer jitter change also brought a test race with it. Benjamin is working on a fix (replace sleep by iterate_timeout). He'll provide it tomorrow.
20:46  *** yolanda has quit IRC
20:46  <openstackgerrit> Ian Wienand proposed zuul/zuul-jobs master: ensure-tox: use pip3 in preference to pip  https://review.opendev.org/703694
20:50  *** yolanda has joined #zuul
20:50  <openstackgerrit> Ian Wienand proposed zuul/zuul-jobs master: ensure-tox: fix pipe race  https://review.opendev.org/703689
20:50  <openstackgerrit> Ian Wienand proposed zuul/zuul-jobs master: ensure-tox: use pip3 in preference to pip  https://review.opendev.org/703694
20:55  <clarkb> corvus: I see https://review.opendev.org/#/c/703456/2 is the proper fix for the redirects. I'll abandon mine but yours needs a small fix
20:56  <openstackgerrit> James E. Blair proposed zuul/zuul-website master: Update redirects  https://review.opendev.org/703456
20:56  <corvus> clarkb, fungi: ^
20:57  <clarkb> +2 thanks
20:58  *** jamesmcarthur has quit IRC
21:04  <openstackgerrit> Merged zuul/zuul master: Limit parallelity when installing ansible  https://review.opendev.org/703126
21:07  <openstackgerrit> Merged zuul/zuul-website master: Update redirects  https://review.opendev.org/703456
21:20  <openstackgerrit> Merged zuul/zuul-jobs master: ensure-tox: fix pipe race  https://review.opendev.org/703689
21:23  <hashar> is anyone aware of a way to skip setUp() entirely for a specific test method? :)
21:23  <hashar> or I guess I can just move that method to a standalone class
21:24  <clarkb> ya I think ^ is the way you are supposed to do it
21:24  <tristanC> hashar: perhaps adding the `@unittest.skip('noop')` would work?
21:25  <hashar> I found a zuul test that loads the whole stack (due to ZuulTestCase) when it really just: assert('foo', repr(someobject)) :D
21:25  <hashar> tristanC: that would skip it entirely ;]
21:26  <hashar> I guess I will just create a new class that does not inherit from ZuulTestCase
21:26  <corvus> hashar: switch the parent to BaseTestCase ?
21:26  <corvus> yeah
21:26  <hashar> there are probably a lot of those tests in tests/unit/ files that could benefit from that
21:27  <hashar> corvus: BaseTestCase looks like a good fit. Thx !
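A minimal sketch of the refactor corvus suggests, assuming BaseTestCase (in tests/base.py) does not set up the full scheduler/executor stack the way ZuulTestCase does; the class, method and assertion here are purely illustrative, and the test in question was later removed instead:

    from tests.base import BaseTestCase

    class TestFormatting(BaseTestCase):
        """Cheap string-format assertions that don't need the whole stack."""

        def test_repr_like_check(self):
            # No ZuulTestCase setUp (scheduler, executor, fake gerrit)
            # is required for a plain repr() check.
            self.assertIn('foo', repr({'foo': 1}))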
21:28  <clarkb> tobiash: my change hit the jitter race
21:29  <openstackgerrit> Merged zuul/zuul master: Docs: change "config" title  https://review.opendev.org/703471
21:32  <openstackgerrit> Merged zuul/zuul-jobs master: ensure-tox: Output tox version  https://review.opendev.org/701236
21:32  <clarkb> tobiash: can you check the comment on https://review.opendev.org/#/c/663108/2
21:35  *** jamesmcarthur has joined #zuul
21:36  <tobiash> clarkb: if the symlink exists then normally the index.html is there as well and we don't enter this branch
21:36  <clarkb> tobiash: I think we should still guard against it
21:36  <tobiash> but we could make it more safe and check both
21:37  <clarkb> something like try: os.readlink() except: os.symlink()
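Spelled out, that guard looks roughly like the following; the paths are placeholders and the actual change in 663108 may structure it differently:

    import errno
    import os

    def ensure_symlink(target, link_path):
        """Create link_path -> target, tolerating an already-existing link."""
        try:
            os.symlink(target, link_path)
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
            # Already there: only replace it if it points somewhere else.
            if os.readlink(link_path) != target:
                os.remove(link_path)
                os.symlink(target, link_path)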
21:37  <tobiash> clarkb: regarding the jitter race, I'll help Benjamin tomorrow with the fix if that's soon enough
21:38  <clarkb> ya no rush
21:38  <clarkb> I think the timing data for the test_plugins_* is still mostly valid
21:46  <clarkb> corvus: initial data at https://review.opendev.org/#/c/703688/2
21:51  <openstackgerrit> Merged zuul/zuul master: docs: improve job.role documentation  https://review.opendev.org/703372
21:59  <corvus> clarkb: cool.  i think the main thing to watch out for is zk death.  did the overall runtime decrease?
22:00  <clarkb> the job still timed out, but it also hit the jitter test failure (which is a sit and wait for timeout I think)
22:00  <corvus> oh, so it's unclear.
22:00  <clarkb> will need more data from successful jobs to see if it affects total runtime much (in theory it should since we are running 8 jobs and only 4 runners)
22:01  <clarkb> I would expect to save about a minute based on those numbers and assuming a decent test distribution across runners
22:01  <clarkb> fwiw test_playbook runs in this same manner, it doesn't serially run jobs
22:01  <clarkb> test_playbook becomes the slowest test with my change in place though
22:02  <openstackgerrit> Antoine Musso proposed zuul/zuul master: test_repo_repr does not need to clone  https://review.opendev.org/703698
22:02  <hashar> finally, I just went with some mocking instead of adding yet another class :]
22:02  <clarkb> looking at test_playbook I'm not sure there is a good way to speed it up easily. We could split it into multiple tests but then you incur more startup time cost
22:02  <clarkb> then test_job_pause*
22:03  <clarkb> I think test_job_pause might be slightly quicker if we reduce the total number of jobs being tested (not sure why test-good and test-fail are there, but I also recognize they are likely there to catch an interaction and I don't want to mess with that)
22:06  *** mattw4 has quit IRC
22:06  *** mattw4 has joined #zuul
22:07  <corvus> hashar: perhaps we should remove that test rather than mock that
22:08  <corvus> I6675f4f3a65ba975456687de91827694273862e1 was the change that introduced it
22:09  <corvus> looks like it is really a test of the string format.
22:09  <hashar> yup
22:10  <hashar> though it explicitly invokes __repr__
22:10  <hashar> that does not have that much value probably
22:10  <corvus> normally i would not advocate adding a test like that.  i guess since it's already there, we could add the mock.  or remove it.  either way.  :)
22:10  <hashar> I can amend my change and just drop it
22:14  <openstackgerrit> Antoine Musso proposed zuul/zuul master: tests: remove test_repo_repr  https://review.opendev.org/703698
22:14  <hashar> test removed :]
22:14  *** mattw4 has quit IRC
22:15  *** mattw4 has joined #zuul
22:16  <clarkb> yappi says that git operations (like repo updates, change merges) are a significant chunk of per-test time
22:17  <clarkb> I'm not sure there are good ways to make that run faster, but if we could that would likely have a large global impact on the test suite runtime
22:18  <clarkb> git repos on tmpfs maybe?
22:41  *** avass has quit IRC
22:41  <openstackgerrit> Tristan Cacqueray proposed zuul/zuul-operator master: Add OpenShift SCC and functional test  https://review.opendev.org/702758
22:42  *** jamesmcarthur has quit IRC
22:46  <openstackgerrit> Tristan Cacqueray proposed zuul/zuul-operator master: Handle service restart when connections are changed  https://review.opendev.org/703624
22:46  <hashar> maybe the slow tests from tests.unit.test_v3 can be moved to their own jobs
22:47  <hashar> or at least profile those ;)
22:48  <openstackgerrit> Tristan Cacqueray proposed zuul/zuul-operator master: Add tenant reconfiguration when main.yaml changed  https://review.opendev.org/703631
22:51  <tristanC> corvus: with 703624 and 703631, the zuul-operator is now properly managing zuul.conf and main.yaml modification, using pure ansible tasks
23:01  <openstackgerrit> Merged zuul/zuul master: Revert "Extract an abstract base Parser class"  https://review.opendev.org/703669
23:16  *** hashar has quit IRC
23:22  <clarkb> that's weird, tox-py37 ran for 703571 twice in the gate, but only one is recorded in logstash and the other is recorded in zuul's db?
23:22  <clarkb> https://04a9f9fdd9afdf12de4e-f889a65b4dfb1f628c8309e9eb44b225.ssl.cf2.rackcdn.com/703571/1/gate/tox-py37/bb9aedd/job-output.txt not in db and https://de3c109f548194542df7-51174786df661fdc707186ccf04b5df9.ssl.cf1.rackcdn.com/703571/1/gate/tox-py37/9c9a684/job-output.txt is in the db
23:23  <clarkb> attempts shows 1 for both
23:23  <clarkb> maybe the result of a gate reset?
23:23  <clarkb> in any case one of those has a runtime about the same as 703688 and the other is about 9 minutes faster
23:23  <clarkb> all three ran in rax-dfw
23:25  <clarkb> the slower ones ran on the same cpu model according to the host info and the faster one was a different model
23:26  <fungi> that sounds a likely culprit then (maybe not the cpu specifically, but the vintage of the host)
23:27  <clarkb> the faster job ran on a slightly older model with more clock spins
23:27  <fungi> neat
23:27  <fungi> so maybe entirely the cpu then
23:41  *** rfolco|brb is now known as rfolco
23:47  *** tosky has quit IRC
23:48  <clarkb> comparing just the tox runtimes, old: Ran: 881 tests in 2353.5932 sec. new: Ran: 881 tests in 2280.6632 sec.
23:48  <clarkb> so my change is worth just over a minute in tox runtime I think
23:48  <clarkb> though that may depend on test sharding
