Tuesday, 2020-06-30

00:17 *** jamesmcarthur has joined #zuul
00:26 *** jamesmcarthur has quit IRC
00:36 *** hamalq has quit IRC
00:50 *** rfolco has quit IRC
01:02 *** jamesmcarthur has joined #zuul
01:03 *** wuchunyang has joined #zuul
01:04 *** jamesmcarthur has quit IRC
01:15 *** Goneri has quit IRC
01:17 *** jamesmcarthur has joined #zuul
01:21 *** wuchunyang has quit IRC
01:22 *** jamesmcarthur has quit IRC
01:26 *** jamesmcarthur has joined #zuul
01:39 *** jamesmcarthur has quit IRC
01:47 *** jamesmcarthur has joined #zuul
01:58 *** swest has quit IRC
02:01 *** jamesmcarthur has quit IRC
02:02 *** jamesmcarthur has joined #zuul
02:06 *** jamesmcarthur has quit IRC
02:12 *** swest has joined #zuul
02:24 *** tdasilva_ has joined #zuul
02:25 *** tdasilva has quit IRC
02:25 *** tdasilva_ is now known as tdasilva
02:30 *** jamesmcarthur has joined #zuul
02:36 *** vblando has quit IRC
02:36 *** mnasiadka has quit IRC
02:36 *** samccann has quit IRC
02:37 *** Open10K8S has quit IRC
02:37 *** Open10K8S has joined #zuul
02:38 *** samccann has joined #zuul
02:38 *** mnasiadka has joined #zuul
02:38 *** ChrisShort has quit IRC
02:40 *** vblando has joined #zuul
02:45 *** jamesmcarthur has quit IRC
02:46 *** ChrisShort has joined #zuul
02:46 *** jamesmcarthur has joined #zuul
02:49 *** bhavikdbavishi has joined #zuul
02:50 *** jamesmcarthur has quit IRC
02:54 *** bhavikdbavishi1 has joined #zuul
02:55 *** bhavikdbavishi has quit IRC
02:55 *** bhavikdbavishi1 is now known as bhavikdbavishi
02:56 *** jamesmcarthur has joined #zuul
03:00 *** wuchunyang has joined #zuul
03:39 *** jamesmcarthur has quit IRC
04:01 *** ysandeep|away is now known as ysandeep
04:04 *** wuchunyang has quit IRC
04:33 *** evrardjp has quit IRC
04:33 *** evrardjp has joined #zuul
04:41 *** vishalmanchanda has joined #zuul
05:02 *** bhavikdbavishi has quit IRC
05:12 *** dmellado has quit IRC
05:22 *** sgw has quit IRC
05:23 *** bhavikdbavishi has joined #zuul
05:34 *** bhavikdbavishi has quit IRC
05:40 *** bhavikdbavishi has joined #zuul
05:41 *** ysandeep is now known as ysandeep|brb
05:54 *** marios has joined #zuul
06:11 *** ysandeep|brb is now known as ysandeep
06:28 *** y2kenny has quit IRC
06:48 <openstackgerrit> Simon Westphahl proposed zuul/zuul master: Ensure refs for recent branches are not GCed  https://review.opendev.org/738454
07:13 *** jcapitao has joined #zuul
07:16 *** bhagyashris|pto is now known as bhagyashris
07:18 *** bhavikdbavishi has quit IRC
07:19 <openstackgerrit> Jan Kubovy proposed zuul/zuul master: Connect merger to Zookeeper  https://review.opendev.org/716221
07:19 <openstackgerrit> Jan Kubovy proposed zuul/zuul master: Connect executor to Zookeeper  https://review.opendev.org/716262
07:19 <openstackgerrit> Jan Kubovy proposed zuul/zuul master: Connect fingergw to Zookeeper  https://review.opendev.org/716875
07:24 *** hashar has joined #zuul
07:26 *** yolanda has joined #zuul
07:28 *** sshnaidm|afk is now known as sshnaidm|ruck
07:28 *** tosky has joined #zuul
07:29 *** bhagyashris is now known as bhagyashris|lunc
07:42 *** iurygregory has quit IRC
07:50 *** jpena|off is now known as jpena
07:54 *** wuchunyang has joined #zuul
07:55 *** bhavikdbavishi has joined #zuul
08:01 *** iurygregory has joined #zuul
08:07 *** nils has joined #zuul
08:37 *** bhagyashris|lunc is now known as bhagyashris
08:38 *** masterpe has quit IRC
08:51 *** masterpe has joined #zuul
08:53 *** bhavikdbavishi has quit IRC
08:53 *** bhavikdbavishi has joined #zuul
08:55 <zbr> does zuul have a limit on job description length? would using long, even multiline, descriptions create UI problems?
08:56 *** bhavikdbavishi1 has joined #zuul
08:58 *** bhavikdbavishi has quit IRC
08:58 *** bhavikdbavishi1 is now known as bhavikdbavishi
09:04 <openstackgerrit> Simon Westphahl proposed zuul/zuul master: Ensure refs for recent branches are not GCed  https://review.opendev.org/738454
09:14 *** sshnaidm|ruck has quit IRC
09:23 *** sshnaidm has joined #zuul
09:42 *** sshnaidm has quit IRC
09:55 *** sshnaidm has joined #zuul
10:01 *** wuchunyang has quit IRC
10:07 *** jpena has quit IRC
10:07 *** mugsie has quit IRC
10:10 *** mugsie has joined #zuul
10:40 *** marios has quit IRC
10:46 *** wuchunyang has joined #zuul
10:47 *** hashar has quit IRC
10:53 *** ysandeep is now known as ysandeep|afk
11:03 *** jcapitao is now known as jcapitao_lunch
11:18 *** wuchunyang has quit IRC
11:27 *** marios has joined #zuul
11:32 *** bhagyashris is now known as bhagyashris|brb
11:32 *** jpena has joined #zuul
11:37 <tobiash> zbr: afaik there is no real limit on that
11:38 <tobiash> many jobs use multiline descriptions
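
A minimal sketch of what such a multiline description can look like in a job definition (the job name and wording here are illustrative):

    - job:
        name: example-integration-test
        description: |
          Run the integration test suite against a deployed environment.

          Multiple paragraphs are fine; the text is shown on the job's
          page in the web UI.
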
11:39 *** ysandeep|afk is now known as ysandeep
11:46 *** bhavikdbavishi has quit IRC
11:46 *** jpena is now known as jpena|lunch
11:55 <openstackgerrit> Simon Westphahl proposed zuul/zuul master: Ensure refs for recent branches are not GCed  https://review.opendev.org/738454
11:57 *** bhavikdbavishi has joined #zuul
12:02 <openstackgerrit> Benjamin Schanzel proposed zuul/zuul master: GitHub Reporter: Fix User Email in Merge Commit Message  https://review.opendev.org/738590
12:02 *** rfolco has joined #zuul
12:04 *** mordred has quit IRC
12:06 *** bschanzel has joined #zuul
12:06 *** mordred has joined #zuul
12:07 *** jamesmcarthur has joined #zuul
12:08 *** rlandy has joined #zuul
12:10 *** bhagyashris|brb is now known as bhagyashris
12:17 *** jcapitao_lunch is now known as jcapitao
12:22 *** hashar has joined #zuul
12:24 <zbr> tobiash: thanks. i asked because I've seen that in some places in the UI the description is appended after the job name. Good to know we can properly document them.
12:26 <openstackgerrit> Simon Westphahl proposed zuul/nodepool master: Ignore unparsable/empty image upload ZNode data  https://review.opendev.org/738013
12:27 <openstackgerrit> Tobias Henkel proposed zuul/zuul master: Support per branch change queues  https://review.opendev.org/718531
12:31 <openstackgerrit> Simon Westphahl proposed zuul/nodepool master: Ignore unparsable/empty image upload ZNode data  https://review.opendev.org/738013
12:34 *** jpena|lunch is now known as jpena
12:36 <openstackgerrit> Tobias Henkel proposed zuul/zuul master: Move queue from pipeline to project  https://review.opendev.org/720182
12:36 <openstackgerrit> Tobias Henkel proposed zuul/zuul master: Add optional support for circular dependencies  https://review.opendev.org/685354
12:38 *** bhavikdbavishi has quit IRC
12:39 *** jamesmcarthur has quit IRC
12:41 *** wuchunyang has joined #zuul
12:46 *** wuchunyang has quit IRC
12:47 *** ysandeep is now known as ysandeep|afk
12:55 *** rlandy is now known as rlandy|training
12:55 *** ysandeep|afk is now known as ysandeep
12:56 <openstackgerrit> Tobias Henkel proposed zuul/zuul master: WIP: Make repo state buildset global  https://review.opendev.org/738603
13:00 *** sgw has joined #zuul
13:14 <openstackgerrit> Merged zuul/zuul-jobs master: upload-git-mirror: use retries to avoid races  https://review.opendev.org/738187
13:20 *** bschanzel has quit IRC
13:21 <zbr> has anyone considered moving test_result_table under the change metadata (first column) instead of the narrow middle column with code-review?
13:22 <openstackgerrit> Tobias Henkel proposed zuul/zuul master: Refactor github auth handling into its own class  https://review.opendev.org/710034
13:23 <tobiash> clarkb: I've addressed one of your comments there and responded on the other ^
13:35 <openstackgerrit> Benjamin Schanzel proposed zuul/zuul master: [TEST DNM] Help Debugging Slow kubectl Connections  https://review.opendev.org/738619
13:37 <openstackgerrit> Tobias Henkel proposed zuul/zuul master: Remove some unused variables  https://review.opendev.org/738620
13:47 <zbr> clarkb: is there any way to run ansible nested in zuul but still display tasks using the same UI?
13:47 <zbr> maybe not now, but is this doable in the future?
13:48 <clarkb> zbr: I don't know
13:48 <zbr> asking this because we need full freedom regarding what we run: collections, modules, plugins, stuff that zuul does not allow for security reasons
13:53 <AJaeger> zuul-jobs-maint, https://review.opendev.org/737352 updates a README, please review
13:56 <openstackgerrit> Benjamin Schanzel proposed zuul/zuul master: [TEST DNM] Help Debugging Slow kubectl Connections  https://review.opendev.org/738619
13:56 *** Goneri has joined #zuul
14:10 <corvus> zbr: it's not possible now, but we have thought about it and i think it would be possible in the future; it's mostly a matter of adding the zuul json callback to the nested ansible. my guess is putting that callback in a collection is going to be the first step.
14:11 <zbr> corvus: thanks. that was my guess but needed confirmation.
14:11 <zbr> i guess I do not need to explain more why i see it useful
14:12 <corvus> nope i think it's a great idea :)
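
No such collection exists at this point, so the following is only a rough sketch of the idea being discussed: the nested Ansible run would need Zuul's json/stream callbacks enabled, for example via an environment variable on the task that launches it. The collection and callback name below are hypothetical.

    # Hypothetical: enable a Zuul-provided callback in the nested run so its
    # task results could be fed back into the same UI.
    - name: Run the nested playbook
      command: ansible-playbook nested-playbook.yaml
      environment:
        # Ansible 2.9 setting for enabling extra callbacks; the plugin name
        # is a placeholder for a not-yet-existing collection.
        ANSIBLE_CALLBACK_WHITELIST: "zuul.callbacks.zuul_json"
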
14:17 *** ysandeep is now known as ysandeep|away
14:19 <openstackgerrit> Matthieu Huin proposed zuul/zuul master: List builds, buildsets results in constants  https://review.opendev.org/738632
14:24 <AJaeger> zuul-jobs-maint, a small change for review, please: https://review.opendev.org/733419
14:26 <mhu> Hey there, can someone explain to me again the difference between CANCELED and ABORTED as build results?
14:27 <mhu> nevermind, found it in my history
14:27 <mhu> mhu: canceled (note the spelling) is by request of the scheduler, aborted is an unexpected error on the executor
14:35 <mhu> with that said, I need help with descriptions of the various possible build results: https://review.opendev.org/#/c/738632/1/doc/source/reference/jobs.rst
14:45 <openstackgerrit> Merged zuul/zuul-jobs master: prepare-workspace: Add Role Variable in README.rst  https://review.opendev.org/737352
14:50 *** sgw1 has quit IRC
14:51 *** bschanzel has joined #zuul
14:54 <bschanzel> Hi there, some weeks ago I proposed https://review.opendev.org/#/c/734580/, which fixes the `Reviewed-by` clauses in GitHub merge commit messages. Unfortunately, it contained a bug where the email addresses of reviewers were always "None". Here's a fix: https://review.opendev.org/#/c/738590/. The downside of this fix is that we're doing additional GitHub API requests per reviewer and PR on GitHub merges (unless a user is cached).
14:55 *** sgw1 has joined #zuul
15:01 <openstackgerrit> Merged zuul/zuul-jobs master: Return upload_results in upload-logs-swift role  https://review.opendev.org/733564
15:03 <tobiash> bschanzel: we added a pretty decent user cache a few months ago (https://review.opendev.org/710985) so I don't think that will become a problem
15:04 <bschanzel> tobiash: that's what I meant with "unless a user is cached" ;-)
15:08 *** bschanzel has quit IRC
15:15 <tristanC> it seems like the latest centos-8 image in opendev's zuul is having some sort of random network issue: jobs are being retried a lot, and when it happens three times in a row the log_url is a finger url... any idea what we can do to debug and understand what is going on in such a situation?
15:16 <clarkb> tristanC: yes, we've already spent a bit of time debugging this...
15:16 <clarkb> from the opendev side we've confirmed there is no kernel panic
15:16 <tristanC> we also suspect a kernel bug, would it be possible to grab the console log before node deletion, e.g. by adding a post-console-log toggle in the nodepool configuration?
15:16 <clarkb> but the host networking has crashed.
15:16 <clarkb> I've offered to configure zuul holds if I can be told what to hold
15:17 <clarkb> then we can hope that a reboot of the host makes it debuggable
15:17 <clarkb> I've also offered to set up test nodes in our clouds so that people can try to reproduce, or they can try to reproduce locally using our images
15:17 <clarkb> from the zuul side I'm not sure there is much that can be done. The test node is going away. Maybe we can improve the hold process to hold instances under more specific circumstances so that it is easier to catch a host in that situation
15:17 <tristanC> clarkb: thank you for all the help, i meant to ask whether there is something that zuul/nodepool/dib could do to fix that kind of issue, not this one in particular
15:18 <clarkb> other than that I don't know
15:18 <clarkb> yes, I wanted to give background to the channel on what sort of debugging had been done
15:18 <clarkb> ultimately if zuul can't reach the host its choices are limited
15:18 <clarkb> not deleting the host may be useful though, and doing that reliably would need updates to the hold behavior
15:20 <tristanC> about the finger url, is there a reason why the executor could not ignore the node log collection failure and at least publish the local job-output file?
15:20 <clarkb> there are two specific issues with holds today. The first is we have to wait for retry failure, which is three attempts (though we can override that to a single attempt on a per job basis). The other issue is tripleo has many other failures happening too, so being able to filter for "ansible says the host is unreachable" would be useful
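
The per-job override mentioned here is the attempts job attribute; a minimal sketch (the job name is illustrative):

    - job:
        name: tripleo-example-job
        # Run the job only once instead of the default three attempts, so an
        # autohold can catch the node after the first unreachable failure.
        attempts: 1
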
15:20 <tristanC> assuming the logs are not published because the node is not reachable
15:21 <fungi> tristanC: also, about capturing server console logs in particular, for this case we did and there was nothing of interest, just some unrelated iptables drops and selinux enforcement messages, no errors or kernel panics
15:21 <fungi> but having zuul capture that would not be easy. our executors don't have the api access credentials necessary to get the server consoles
15:22 <clarkb> tristanC: the log streaming is on the remote host not the executor right?
15:23 <clarkb> the information available to the executor is limited iirc. But we could probably publish something that said network connectivity was lost, and the ansible return code?
15:24 <tristanC> fungi: right, i was thinking nodepool would have to get the console log, though it's probably not easy to configure when it should do that, e.g. nodepool may not know the job result of a nodeset it is deleting
15:24 *** hashar is now known as hasharAway
15:24 <tristanC> clarkb: iirc the job-output.txt is written by the zuul_stream callback and it is local to the executor build work directory
15:25 <clarkb> tristanC: ah right, it relies on the remote port 19885 daemon to stream that host's data but then it aggregates. That's how we get the interleaved content for job output
15:26 <clarkb> tristanC: that would all be in the post playbook itself. You'd need to run the remote collection, then always run the upload of job-output separately
15:26 <clarkb> I think it's possible, it just needs some changes to how post.yaml processes tasks
15:27 <clarkb> (also, as a side note, it wouldn't help debug the tripleo problem any more than we've already done; the log doesn't have any data other than "this script is running" and 20 minutes later the network is gone)
15:27 <tristanC> that seems to already be split across two playbooks, https://opendev.org/opendev/base-jobs/src/branch/master/playbooks/base/ post and post-logs
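
Roughly, that split looks like the following in the base job (a sketch based on the layout linked above, trimmed to the relevant attribute):

    - job:
        name: base
        post-run:
          # Gathers logs from the remote nodes; can fail when they are
          # unreachable.
          - playbooks/base/post.yaml
          # Runs against localhost only, uploads job-output.* from the
          # executor's build directory and returns the log URL via
          # zuul_return.
          - playbooks/base/post-logs.yaml
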
15:27 <clarkb> making it easier to get to that point would be good, but it won't solve this for us aiui
15:28 <tristanC> clarkb: having the logs might help discover if the pre-failure happens after a specific task
15:28 <clarkb> tristanC: yes, I'm saying we already have that info and it doesn't help much
15:28 <clarkb> toci quickstart script execution is the task where it fails
15:28 *** mgoddard has quit IRC
15:30 *** vishalmanchanda has quit IRC
15:30 <tristanC> darn, alright, thank you for all the information. I guess that specific failure is going to need more investigation, we'll try to reproduce it in rdo zuul
15:31 <tristanC> perhaps having the node console log and trying to rescue the job-output in similar situations might be useful for other retry-limit investigations
15:32 <clarkb> ya I think it could be useful in a more general case
15:33 <clarkb> tristanC: looking at how the executor runs playbooks, we run all post playbooks even if unreachable is true
15:33 <clarkb> that makes me wonder if post does curation of the output for post-logs even in the local build dir
15:33 <clarkb> (and that fails because it can't talk to the remote)
15:34 <tristanC> the post-logs.yaml playbook seems to be running with hosts: localhost though
15:36 *** mgoddard has joined #zuul
15:37 <clarkb> tristanC: right but maybe that file isn't where post-logs.yaml will copy from if post.yaml failed?
15:37 <clarkb> basically it could be that post-logs.yaml is running and having no work to do?
15:39 <tristanC> yeah i understand, though it should at least zuul_return a log_url
15:42 <clarkb> tristanC: maybe that loop in zuul/executor/server.py is short circuiting because an abort is also happening?
15:43 <tristanC> clarkb: i'm looking at the exact piece of code indeed, but in that case, wouldn't the job result be ABORTED instead of RETRY_LIMIT?
15:46 <clarkb> retry limit is handled in the client based on those results but ya it appears we only get retry limit if we haven't aborted
15:46 <fungi> it's result_unreachable on the executor side
15:46 <clarkb> because we'll always retry the aborted case regardless of attempts
15:46 <fungi> that either gets retried or reinterpreted as retry_limit by the scheduler
15:50 *** hamalq has joined #zuul
15:51 *** sshnaidm has quit IRC
15:52 *** hamalq_ has joined #zuul
15:53 *** sshnaidm has joined #zuul
15:55 *** hamalq has quit IRC
15:57 <corvus> tristanC, clarkb: i'm trying to digest that conversation -- it seems that the first issue is that we got finger urls instead of logs for retry_limit jobs, yeah?
15:58 *** hasharAway is now known as hashar
15:58 <clarkb> corvus: yes, and while we don't expect any logs from the unreachable remote node we do expect a job-output file or two?
15:58 <corvus> clarkb: right
15:59 <corvus> even in the case of an unreachable host, i would expect the post-logs.yaml playbook to run and upload job-output* and return a log url
15:59 <clarkb> I'm wondering now if generate-zuul-manifest may depend on something post.yaml does when it runs in post-logs.yaml
15:59 <clarkb> fungi is trying to pull up some examples in our executor logs so that we can check if we get more than one error
15:59 <corvus> clarkb: do you have a failed job link handy so i can go look?
15:59 <corvus> fungi: ^?
15:59 <corvus> oh i'll switch to #openstack-infra
16:03 *** jpena is now known as jpena|off
16:04 *** marios has quit IRC
16:06 *** jpena|off is now known as jpena
16:07 <corvus> tristanC: i've been digging, and i can't find any retry_limit jobs with no logs. there are certainly some without logs from the remote host, but that's expected if the host is unreachable. but they all at least have the job-output.txt and zuul manifest.
16:08 <tristanC> corvus: the buildset currently at the top of the gate https://zuul.opendev.org/t/openstack/status has one
16:10 <clarkb> ah, is it that if they get in the db we're selecting the ones without the issue?
16:10 <tristanC> i also thought that the executor would only perform post playbooks for a successful pre, and if the pre failure happens on the base job, then that would explain the empty log url. But it seems like zuul tries to perform every post, without looking for a matching successful pre
16:13 <clarkb> I see it
16:13 <clarkb> getting a link to share
16:13 <clarkb> we're failing in the run phase with host unreachable so we hit https://opendev.org/zuul/zuul/src/branch/master/zuul/executor/server.py#L1413-L1416
16:14 <clarkb> we could put post-logs.yaml in the cleanup phase to address this
16:14 <clarkb> (I'd need to think a bit about what the ramifications of that are though)
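
A sketch of that suggestion using Zuul's cleanup-run phase, which runs even when earlier phases fail (this is the idea being weighed here, not the fix that was eventually proposed):

    - job:
        name: base
        # cleanup-run playbooks are executed even after errors, so the log
        # upload would still happen when the run phase hits an unreachable
        # host.
        cleanup-run: playbooks/base/post-logs.yaml
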
16:15 *** holser has quit IRC
16:16 *** holser has joined #zuul
16:17 <tristanC> clarkb: oh nice, great finding
16:18 *** sshnaidm is now known as sshnaidm|ruck
16:18 <tristanC> clarkb: though how can this result in a job with a retry_limit result and a finger url?
16:19 <clarkb> we never run post in that case and so the log url isn't set? something like that
16:19 <corvus> so the issue is that in this case the finger url isn't updated in the status page
16:19 <clarkb> post always runs
16:19 <avass> corvus: unless it's dequeued
16:19 <avass> :)
16:20 <corvus> avass: this is not happening in this case
16:20 <tristanC> clarkb: the block you highlight is nested in an `if not pre_failed:` block
16:20 <corvus> clarkb: that's why we couldn't find any in the db where post didn't run.
16:20 <tristanC> oh my bad, could retry_limit happen without pre_failed?
16:20 <corvus> when that change reports, the logs will be available at the usual place, just like the other retry limits
16:21 <clarkb> corvus: I grepped the logs for that specific job on ze03 and it did not run post
16:21 <clarkb> it went from run to cleanup
16:21 <corvus> however, it seems that the issue here is that we expect the logs to have been uploaded to object storage, and we don't put that url in the status page for retry_limits
16:22 <tristanC> corvus: looking at https://zuul.opendev.org/t/openstack/builds?project=openstack%2Ftripleo-heat-templates&result=RETRY_LIMIT , the top few results don't have any log_url, e.g.: https://zuul.opendev.org/t/openstack/build/6ec193d4d5c04e569cad7444c7be0b66
16:22 <corvus> tristanC, clarkb: okay, that's more like it, thanks :)
16:23 <clarkb> and my read of the code I linked is that we short-circuit if an unexpected ansible error occurs in run
16:23 <clarkb> in that case post does not run
16:24 <corvus> clarkb: the "return None" at the bottom of the "if not pre_failed" block?
16:24 <fungi> corvus: i posted the early examples of retry_limit jobs with no logs in #openstack-infra
16:24 <fungi> just now
16:24 <fungi> sorry, i had an emergency distraction come up
16:25 <clarkb> corvus: yes
16:25 <corvus> clarkb: yeah, i agree we should be able to change that to do what the pre-runs do
16:25 <corvus> i can do that
16:26 <corvus> clarkb: unless you're already in progress?
16:27 <clarkb> corvus: I'm not. trying to make breakfast
16:27 <corvus> k. on it.
16:36 <tristanC> iiuc, if the node connection is lost during the run phase, then the job is retried without post execution, and on the last attempt the job result will be RETRY_LIMIT? even though the failure happened during the run phase
16:37 <clarkb> yes
16:39 *** jcapitao has quit IRC
16:40 <openstackgerrit> James E. Blair proposed zuul/zuul master: Run post-run playbooks on unreachable  https://review.opendev.org/738668
16:41 <corvus> tristanC: yes -- unreachable nodes ar
16:41 <corvus> grr
16:41 <corvus> tristanC: yes -- we retry jobs with unreachable nodes because we presume some external network issue caused the problem (that assumption is wrong in this case, but it's a conservative assumption)
16:44 <tristanC> clarkb: corvus: that makes sense, thanks a lot
16:46 <corvus> tristanC: thank you :)
16:52 *** jpena is now known as jpena|off
17:00 <openstackgerrit> Simon Westphahl proposed zuul/zuul master: Ensure refs for recent branches are not GCed  https://review.opendev.org/738454
17:06 <clarkb> corvus: looks like tristanC left a comment on that fix. I don't know what the answer is off the top of my head
17:06 <clarkb> but the fix looks good to me otherwise
17:11 <openstackgerrit> Simon Westphahl proposed zuul/zuul master: Ensure refs for recent branches are not GCed  https://review.opendev.org/738454
17:34 <corvus> clarkb, tristanC: replied
17:40 *** hashar is now known as hasharAway
17:51 <openstackgerrit> James E. Blair proposed zuul/zuul-jobs master: Use a temporary registry with buildx  https://review.opendev.org/738517
17:53 *** nils has quit IRC
18:24 -openstackstatus- NOTICE: Due to a flood of connections from random prefixes, we have temporarily blocked all AS4837 (China Unicom) source addresses from access to the Git service at opendev.org while we investigate further options.
18:49 <openstackgerrit> Simon Westphahl proposed zuul/zuul master: Ensure refs for recent branches are not GCed  https://review.opendev.org/738454
19:01 *** fbo has quit IRC
19:01 *** hasharAway is now known as hashar
19:02 *** sshnaidm|ruck is now known as sshnaidm|bbl
19:04 *** ianw_pto is now known as ianw
19:05 *** yolanda has quit IRC
19:36 *** rlandy|training is now known as rlandy
19:45 *** armstrongs has joined #zuul
19:54 *** armstrongs has quit IRC
19:55 *** hashar is now known as hasharAway
20:03 *** noonedeadpunk has quit IRC
20:03 *** noonedeadpunk has joined #zuul
20:06 *** tobiash has quit IRC
20:07 *** tobiash has joined #zuul
20:11 *** yolanda has joined #zuul
20:28 *** hasharAway has quit IRC
20:29 *** hashar has joined #zuul
20:34 *** sshnaidm|bbl has quit IRC
20:35 *** sshnaidm|bbl has joined #zuul
20:46 *** sshnaidm|bbl is now known as sshnaidm|afk
21:14 *** stevthedev has quit IRC
21:14 *** stevthedev has joined #zuul
21:15 *** jbryce has quit IRC
21:15 *** _erlon_ has quit IRC
21:15 *** jbryce has joined #zuul
21:15 *** _erlon_ has joined #zuul
22:04 *** tobiash has quit IRC
22:06 *** tobiash has joined #zuul
22:07 *** hashar has quit IRC
22:15 *** rlandy is now known as rlandy|afk
22:17 *** rfolco has quit IRC
22:22 *** piotrowskim has quit IRC
22:33 *** rfolco has joined #zuul
22:57 *** tosky has quit IRC
23:15 <openstackgerrit> James E. Blair proposed zuul/zuul-jobs master: Use a temporary registry with buildx  https://review.opendev.org/738517
23:16 <corvus> roles/write-inventory/tasks/main.yaml:1: [E106] Role name write-inventory does not match ``^[a-z][a-z0-9_]+$`` pattern
23:16 <corvus> is this a new ansible-lint thing?
23:18 <corvus> that's apparently true for collections, but i don't think that applies here
23:19 <openstackgerrit> James E. Blair proposed zuul/zuul-jobs master: Ignore ansible lint E106  https://review.opendev.org/738716
23:20 <corvus> zuul-jobs-maint: we may want to merge that asap ^
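
A sketch of how such a rule is typically skipped, assuming an .ansible-lint config file is used (the repository may instead carry this in tox.ini or another location):

    # .ansible-lint
    skip_list:
      # E106 assumes roles are packaged as collections, which does not apply
      # to zuul-jobs roles with hyphens in their names.
      - '106'
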
23:20 *** wuchunyang has joined #zuul
23:22 <clarkb> looking
23:24 *** wuchunyang has quit IRC
23:27 <fungi> yeah, this came up before, it was hotly debated in the ansible community apparently, but the implementation plan for collections prevents being able to use hyphens in their names
23:28 <corvus> yep. i won't try to stop the locomotive, just point out that this car isn't hooked up to it :)
23:28 <fungi> i guess ansible-lint only just got around to adding a rule to check for it, and assumes all roles are collections now
23:28 <fungi> and all restaurants are now taco bell
23:33 <openstackgerrit> Merged zuul/zuul-jobs master: Ignore ansible lint E106  https://review.opendev.org/738716
23:35 <corvus> avass: looks like https://review.opendev.org/738517 should be ready now -- if you have a minute tomorrow to take a look at that, we can decide if we want to squash that into your change
23:38 *** hamalq_ has quit IRC

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!