*** marios is now known as marios|ruck | 05:41 | |
opendevreview | Felix Edel proposed zuul/zuul master: Fix test_gerrit.TestPolling.test_config_update https://review.opendev.org/c/zuul/zuul/+/773023 | 05:50 |
opendevreview | Felix Edel proposed zuul/zuul master: Execute merge jobs via ZooKeeper https://review.opendev.org/c/zuul/zuul/+/802299 | 05:50 |
*** rpittau|afk is now known as rpittau | 07:10 | |
opendevreview | Matthieu Huin proposed zuul/zuul master: [api][cors] Add CORS configuration https://review.opendev.org/c/zuul/zuul/+/767691 | 08:20 |
opendevreview | Felix Edel proposed zuul/zuul master: TEST Increase timeout of zuul-tox-py36 job https://review.opendev.org/c/zuul/zuul/+/802836 | 08:22 |
abitrolly[m] | So what is the official Matrix bridge then? The current public bridge that Matrix shows in search points to Freenode. | 09:50 |
felixedel | corvus: Regarding the test failures with the merger API changes: It looks like they are fixed by https://review.opendev.org/c/zuul/zuul/+/773023 but the py36 tests went in a timeout (the test output looks fine, so no failed tests until the timeout). I added a test patch which increases the timeout to see if that helps https://review.opendev.org/c/zuul/zuul/+/802836 and this was successful (although the py36 job didn't take as long as before and | 10:10 |
felixedel | would have also been successful without the increase). | 10:10 |
opendevreview | Tobias Henkel proposed zuul/zuul master: Initialize github client manager if needed https://review.opendev.org/c/zuul/zuul/+/802857 | 10:25 |
*** jcapitao is now known as jcapitao_lunch | 10:29 | |
*** jcapitao_lunch is now known as jcapitao | 12:18 | |
pabelanger[m] | We are testing out the github review reporter in https://github.com/ansible-network/sandbox/pull/78 and noticed zuul will reapply a review regardless of whether it has already done so. While the github API allows that, I'm trying to come up with a way to prevent zuul from doing it | 12:50 |
pabelanger[m] | I am unsure whether we should first check to see if a review is already applied and, if so, skip the action, or maybe expose a new setting somehow to only do it if missing | 12:51 |
pabelanger[m] | I might be able to use https://zuul-ci.org/docs/zuul/reference/drivers/github.html#attr-pipeline.reject.%3Cgithub%20source%3E.review if I am reading correctly | 13:11 |
fungi | abitrolly[m]: oftc (which you've found) but the oftc matrix bridge doesn't index channels with +s set (which we've done to make it less generally discoverable for spammers). soon the plan is to not have any official #zuul irc channel and just do native matrix: https://zuul-ci.org/docs/zuul/reference/developer/specs/community-matrix.html | 13:27 |
abitrolly[m] | <fungi "abitrolly: oftc (which you've fo"> Nice. Will it be #zuul:matrix.org or some own server? | 13:31 |
fungi | abitrolly[m]: the specification i linked speculates that it will be #zuul:opendev.org; work is underway in opendev to rewrite some relevant irc bots as a prerequisite | 13:33 |
abitrolly[m] | Another question about Zuul. I've got a `.zuul.yaml` and want to run the tests described in its jobs locally in a container. What is the easiest way to do this? | 13:56 |
*** Guest1365 is now known as hashar | 14:04 | |
*** hashar is now known as Guest2704 | 14:05 | |
*** Guest2704 is now known as hashar | 14:06 | |
hashar | fungi: hi, any chance we could get a new release of opendev/gear ? :] | 14:10 |
hashar | not sure what is needed there, but I will be happy to help | 14:11 |
fungi | abitrolly[m]: it depends on how your job is designed, but generally the idea is that you write ansible playbooks as wrappers around the commands you would normally run to test your software, so that the job definition is there mainly to provide context about that to the ci/cd system. there is a spec in progress to provide a more direct way to run a job locally, but because of zuul's ability to | 14:11 |
fungi | piece jobs together from components in lots of different repositories it knows about, there's a need for a zuul running somewhere to resolve the various pieces: https://zuul-ci.org/docs/zuul/reference/developer/specs/zuul-runner.html | 14:11 |
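A minimal sketch of the wrapper pattern fungi describes, with hypothetical job, file, and project names (nothing here is taken from a real repo): the job definition just points Zuul at a playbook, and the playbook runs the same command a developer would run by hand, so running the tests locally stays as simple as running that command.

```yaml
# .zuul.yaml in a hypothetical project
- job:
    name: my-project-unit-tests
    description: Thin wrapper around the project's normal test command.
    run: playbooks/unit-tests.yaml

- project:
    check:
      jobs:
        - my-project-unit-tests

# playbooks/unit-tests.yaml in the same repo
- hosts: all
  tasks:
    - name: Run the same command a developer would run locally
      command: tox -e py3
      args:
        chdir: "{{ zuul.project.src_dir }}"
```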
pabelanger[m] | okay, I've managed to solve my issue | 14:13 |
pabelanger[m] | 2021-07-29 14:10:58,733 DEBUG zuul.Pipeline.ansible.lgtm: [e: caa9b9a0-f076-11eb-9b04-3ff866b7486f] Change <Change 0x7f31bca04588 ansible-network/sandbox 81,3317583540ce45d205101f03e03843209c3a4172> does not match pipeline requirement <GithubRefFilter connection_name: github.com reject-reviews: [{'type': 'approved', 'username': re.compile('ansible-zuul\\[bot\\]')}]> because Reject reviews [{'type': 'approved', 'username': | 14:13 |
pabelanger[m] | re.compile('ansible-zuul\\[bot\\]')}] matches dict_values([{'by': {'username': 'ansible-zuul[bot]', 'email': None, 'name': None}, 'grantedOn': 1627567842, 'type': 'approved', 'submitted_at': '2021-07-29T14:10:42Z', 'permission': 'read'}]) | 14:13 |
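For reference, a hedged sketch of the kind of pipeline requirement that debug output corresponds to, using the pipeline.reject.<github source>.review attribute pabelanger linked above; the pipeline name and trigger are illustrative assumptions, and only the reject stanza mirrors the reject-reviews filter visible in the log.

```yaml
# Hypothetical pipeline definition; connection name "github.com" as in the log.
- pipeline:
    name: lgtm
    manager: independent
    trigger:
      github.com:
        - event: pull_request_review
          action: submitted
    reject:
      github.com:
        review:
          # Skip the change if the bot has already left an "approved" review.
          - username: 'ansible-zuul\[bot\]'
            type: approved
```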
fungi | hashar: we'll need to merge https://review.opendev.org/800325 and https://review.opendev.org/796704 before we consider tagging a new release, but corvus also expressed concerns about making another gear release before the recent zuul 4.6.0 security release, so i would want to get his input on whether he's okay with the idea now | 14:13 |
hashar | sure thing, the reason I ask is that one of my coworkers got a patch merged and we still rely on a fork. It is not a big deal, just a slight annoyance ;) | 14:15 |
hashar | good to see the metadata get polished! | 14:16 |
fungi | hashar: once corvus is around today i'll see if he's still got concerns with possible changes in a new gear release complicating the work of moving off gearman protocol in zuul | 14:16 |
fungi | it was a particular concern when we were trying to knock out a large tangle of embargoed security fixes, but maybe the concern is less now we've got that behind us | 14:17 |
hashar | fungi: worst case, Zuul gets pinned to the latest 0.15.1 release, which is known to be stable? | 14:18 |
fungi | hashar: that was an option we discussed too, yes | 14:18 |
hashar | cool. And I have an irc bouncer nowadays, so I can catch up on any discussion that might happen while I am sleeping | 14:19 |
abitrolly[m] | <fungi "piece jobs together from compone"> From what I see all jobs are probably local https://github.com/fedora-infra/anitya/blob/master/.zuul.yaml but if the `zuul-runner` could inspect and see if it can run the yaml and find all the jobs defined, that would be cool. Reading about the runner. Looks good so far. | 14:27 |
fungi | abitrolly[m]: a typical job in opendev's zuul (which i help manage) draws ansible roles from and/or inherits from job definitions in at least half a dozen different git repositories, and some jobs pull components from dozens of repositories | 14:29 |
abitrolly[m] | <fungi "abitrolly: a typical job in open"> Does it calculate a dependency graph before doing so? | 14:30 |
fungi | even in a basic zuul deployment, your simple locally-defined job at least inherits from the base job, so to faithfully reproduce exactly what zuul would run you need to run any pre and post phase playbooks defined in that base job as well | 14:30 |
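To make that implicit inheritance concrete, here is a hedged sketch of what a base job might look like in a deployment's trusted config project; the playbook paths, comments, and node label are assumptions rather than any particular Zuul's actual base job.

```yaml
# Hypothetical base job in a trusted config project.
- job:
    name: base
    parent: null
    pre-run: playbooks/base/pre.yaml    # e.g. push the prepared git repos onto the node
    post-run: playbooks/base/post.yaml  # e.g. collect and upload logs
    nodeset:
      nodes:
        - name: primary
          label: ubuntu-focal
```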
fungi | abitrolly[m]: yes, also job definitions can cross branch boundaries and need to collect bits from different branches of projects, or may attempt to match branch names between repositories to find the correct versions of jobs | 14:31 |
abitrolly[m] | I don't see this as a problem. Projects still need to fetch a lot of libraries from the net to build. | 14:31 |
fungi | the zuul scheduler indexes the configuration from every branch of every repository it's told about, and then identifies the relationships between them | 14:31 |
fungi | so any job in any project can reference job configuration from any other project within the same tenant | 14:32 |
fungi | the jobs themselves use a global namespace | 14:32 |
fungi | tenant-global i mean | 14:32 |
fungi | so you don't have to tell your configuration where to find a job when it inherits from it, as job names within a tenant are unique across all projects | 14:33 |
fungi | thus you would at least need to get the zuul scheduler's tenant configuration, parse it, and then scan all the projects listed therein to find the jobs you might be inheriting from in your local job | 14:34 |
fungi | at which point you're essentially running a zuul scheduler | 14:34 |
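A hedged sketch of the tenant configuration fungi is referring to (the scheduler's main.yaml); the connection, tenant, and project names are made up for illustration.

```yaml
# Hypothetical scheduler tenant configuration (main.yaml).
- tenant:
    name: example
    source:
      gerrit:
        config-projects:
          - example/zuul-config        # defines the base job and pipelines
        untrusted-projects:
          - zuul/zuul-jobs             # shared job library ("tox" and friends)
          - example/my-project         # the repo carrying its own .zuul.yaml
```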
abitrolly[m] | Given my `.zuul.yaml` from Anitya project above, can Zuul Runner analyze it to tell me about job names with missing Ansible files? Or what program do I need to run? | 14:35 |
fungi | abitrolly[m]: from the spec, "The Zuul Runner shall query an existing Zuul API to get the list of projects required to run a job. [...] The CLI would then perform the executor service task to clone and merge the required project locally." | 14:37 |
fungi | i take it from that you'd need to invoke it for a specific job, so to find that for all jobs you care about you'd probably need to invoke it separately for each job you're interested in running | 14:38 |
abitrolly[m] | Well, for the quick-start it would be great if I could just run everything in `.zuul.yaml` in a local container. | 14:39 |
fungi | abitrolly[m]: for the quick-start deployment at https://zuul-ci.org/docs/zuul/tutorials/quick-start.html you mean? | 14:40 |
abitrolly[m] | <fungi "abitrolly: for the quick-start d"> No, actually I though about something like `podman run -v ".:/src" -w /src zuul-runner localrun`. Which will try to do as much as possible without API access. | 14:44 |
fungi | abitrolly[m]: so just to be clear, the general case of trying to run a zuul job outside of the context of a zuul service is complex enough that you essentially need a running zuul somewhere to piece everything together. for the simple case of running the same things a job would run, you're generally better off making the jobs lightweight wrappers around what the developer would normally run | 14:47 |
fungi | (whether that's a shell command or some set of ansible playbooks) but not actually try to execute the job definition locally | 14:47 |
fungi | the job definition provides context to zuul, it isn't something which is directly runnable. the zuul-runner work is attempting to bridge that gap by turning your local system into a sort of substitute zuul executor and node inventory, but it still needs to offload the work of figuring out what to run to a scheduler which has all the relevant projects configured in it already | 14:48 |
abitrolly[m] | What in the context of this `.zuul.yaml` https://github.com/fedora-infra/anitya/blob/master/.zuul.yaml prevents it from running locally in a container? | 14:52 |
abitrolly[m] | From what I can see all playbooks that need to be run are already in the repo. | 14:52 |
fungi | abitrolly[m]: absolutely, you're referring to a special case; if we create a tool to reproduce zuul jobs outside a zuul environment, though, we need it to support more complex relationships than just "it's all defined in my repo already" (also, what you don't see in that job definition is the implicit inheritance from a base job, which has its own playbooks) | 14:55 |
fungi | a typical zuul job inherits from some other job definition in zuul's standard job library (the zuul-jobs project), which usually has its own role definitions, and then implicitly from a base job | 14:56 |
abitrolly[m] | But does `zuul-runner` know how to resolve these base jobs and download them? Or does their location need to be explicitly configured for each project? | 14:58 |
fungi | abitrolly[m]: even in your example, you're not noticing the "parent: tox" which says each of those jobs inherits from a definition for a job named "tox" which is not defined in that file | 14:58 |
abitrolly[m] | <fungi "abitrolly: even in your example,"> Right. So where is this `tox` playbook is defined? | 14:59 |
fungi | yes, zuul-runner would ask your zuul service api to collect all the playbooks associated with, say, your tox-bandit job, and then it would see that tox-bandit defines that it builds on the tox job, and would find what repository and branch the tox job is defined in and add any playbooks associated with that, and then also know that the job inherits from the base job and add any playbooks for | 14:59 |
fungi | phases defined in that | 14:59 |
fungi | where that "tox" parent job is defined would depend on the zuul where you're running/querying | 15:00 |
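As a concrete illustration of that resolution chain, a hedged sketch (simplified, not the actual Anitya or zuul-jobs definitions) of how a project-local job names a parent that lives in another repo:

```yaml
# In the project's own .zuul.yaml: "tox" is referenced here, not defined here.
- job:
    name: tox-bandit
    parent: tox              # resolved from another repo in the tenant, e.g. zuul-jobs
    vars:
      tox_envlist: bandit

# In the zuul-jobs repo (simplified): the "tox" job brings its own playbooks
# and in turn inherits, implicitly or explicitly, from the tenant's base job.
- job:
    name: tox
    pre-run: playbooks/tox/pre.yaml
    run: playbooks/tox/run.yaml
```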
mhuin | abitrolly[m], check the "jobs" page on the Zuul UI | 15:00 |
fungi | somewhere in your zuul's global job namespace is a job named "tox" and your zuul knows which project and branch it found that job in | 15:00 |
fungi | tenant-global job namespace i mean | 15:01 |
abitrolly[m] | <fungi "tenant-global job namespace i me"> Is that a default catalog to fetch jobs? Like ansible galaxy? | 15:02 |
mhuin | if you click on the job, you can see which repo(s) define it | 15:02 |
abitrolly[m] | <mhuin "abitrolly, check the "jobs" page"> I don't have UI. I just want to run tests locally before submitting PR. | 15:03 |
mhuin | abitrolly[m], see https://fedora.softwarefactory-project.io/zuul/job/tox | 15:04 |
abitrolly[m] | <fungi "yes, zuul-runner would ask your "> So `zuul-runner` doesn't have the dependency resolver functionality? It just sends `.zuul.yaml` to `zuul service`? An SD diagram could greatly enrich the docs. | 15:04 |
mhuin | that's the zuul instance running the builds for the anitya project | 15:04 |
mhuin | the tox job comes from the zuul-jobs repo hosted at softwarefactory-project.io | 15:05 |
abitrolly[m] | <mhuin "abitrolly, see https://fedora.so"> Oh. I see. So its like implicit private storage for GitHub Actions. | 15:06 |
abitrolly[m] | <mhuin "the tox job comes from the zuul-"> But it is not the same, right? It is copied from there and given the same name, but it could be completely different? | 15:06 |
fungi | abitrolly[m]: it's the softwarefactory-project.io/zuul-jobs project (which is probably a fork of https://opendev.org/zuul/zuul-jobs) | 15:08 |
abitrolly[m] | I need to take a break, but all this has been very interesting so far. I am going to take a look at `zuul-runner` to see if it can be patched to fill in the missing parts for my scenario. | 15:09 |
fungi | i don't know what you mean by implicit private storage, and also i'm not familiar with "github actions" so not able to parse your question | 15:09 |
fungi | abitrolly[m]: what i linked to earlier was a specification (a plan) for creating a zuul-runner tool. i'm not sure how far along that even is yet, but the people who have been working on it may appreciate your feedback if you determine it's unsuitable for your use case without further amendment | 15:11 |
abitrolly[m] | <fungi "i don't know what you mean by im"> Yep, sorry. I mean that GitHub Actions can not be in private repositories, so all the build toolchain is always open source - https://github.com/actions/checkout/issues/95#issuecomment-853927627 | 15:12 |
abitrolly[m] | Actions are basically jobs defined elsewhere. | 15:12 |
fungi | i don't see what's private about the storage of anything we've talked about so far | 15:14 |
corvus | abitrolly: i have 2 questions, mostly out of curiosity: 1) why do you want to run the jobs locally instead of letting zuul run them for you? 2) if you do want to run them locally, why not just run "tox" ? | 15:14 |
abitrolly[m] | <fungi "i don't see what's private about"> For me as external contributor there is now way to see where the `tox` job coming from without asking admins. | 15:15 |
fungi | it looks like your project is using software factory for its ci, and as far as i know all of software factory's job configuration is public | 15:15 |
fungi | oh, now i get it. so you're a contributor to a project which hasn't actually provided you with any contributor documentation explaining where and how their ci runs | 15:15 |
fungi | i can certainly see how that would get very confusing | 15:16 |
mhuin | abitrolly[m], if you haven't read the wiki, this might answer some of your questions: https://fedoraproject.org/wiki/Zuul-based-ci | 15:16 |
abitrolly[m] | <corvus "abitrolly: i have 2 questions, m"> 1) Because I am not privileged enough to use Zuul cluster configured for `release-monitoring` project. | 15:18 |
mhuin | abitrolly[m], but the jobs should run automatically if you're allowed to send a PR to the project | 15:18 |
abitrolly[m] | <fungi "abitrolly: what i linked to earl"> That's very good. Looks like `zuul-runner` doesn't have a separate repo https://review.opendev.org/#/q/topic:freeze_job+(status:open+OR+status:merged - is the Gerrit an official "forge" for it? How to track issues and feature requests there? | 15:20 |
corvus | abitrolly: i thought we were talking about a project (anitya) that is using zuul; i don't understand how zuul-runner would help with a project that isn't using zuul. | 15:20 |
mhuin | abitrolly[m], is release-monitoring a project hosted on github? | 15:23 |
fungi | abitrolly[m]: the spec indicates this is where you can find the work in progress implementing zuul-runner: https://review.opendev.org/q/topic:freeze_job | 15:23 |
corvus | <abitrolly[m] "For me as external contributor t"> Yes, mhuin linked to the job page in the zuul ui -- that's available to anyone, and tells you about the job, and what job it inherits from, and so forth, so you use that to learn that "tox-lint" comes from zuul-jobs, and inherits from "tox" in zuul-jobs, etc. | 15:24 |
abitrolly[m] | <mhuin "abitrolly, but the jobs should r"> And it runs, yes. Just wanted to run tests locally to see that I haven't broke anything without sending a PR. | 15:24 |
mhuin | abitrolly[m], okay, then as suggested if there are tox jobs in the zuul builds, you can most likely run tox locally to check it first. I wouldn't worry too much about broken stuff in PRs; as long as it isn't merged (and it shouldn't be if the PR doesn't pass zuul's CI), you won't break anything | 15:27 |
mhuin | It's actually better to catch the failures there | 15:27 |
abitrolly[m] | <corvus "abitrolly: i thought we were tal"> Yes. The project is using Zuul for CI. It is one of the dozens of project I occasionally commit to, and I can't remember how to run tests for each of them. If I could just run `zuul-runner` and it will pick up `.zuul.yaml` and safely run it in containers, that would save my day, or maybe even several weeks. | 15:27 |
mhuin | ah, fair enough ... | 15:28 |
mhuin | but then I'd argue the onus of documenting how to contribute and test code is on the project, not the CI tool | 15:29 |
abitrolly[m] | <fungi "abitrolly: the spec indicates th"> So where to provide a feedback for it? I followed the link from https://zuul-ci.org/docs/zuul/reference/developer/specs/zuul-runner.html | 15:29 |
fungi | abitrolly[m]: each of those changes implements part of zuul-runner or the necessary support infrastructure for it within the scheduler/api, and is work in progress. you can review the proposed patches there and comment. you can also provide feedback in here or on the zuul-discuss mailing list | 15:32 |
abitrolly[m] | Nice. I will try to see how to run `zuul-runner` from source locally. | 15:36 |
clarkb | abitrolly[m]: out of curiosity, for projects that use Jenkins or github actions do you expect to be able to run the jenkins job specification xml or the github actions yaml somehow? Maybe those tools exist and I'm not aware of them? I ask because I ran into something similar with a project that runs jenkins jobs and I ended up learning how to use bazel (similar to tox in this | 15:41 |
clarkb | case) in order to run the checks locally. | 15:41 |
clarkb | wondering if there is a general expectation that CI reproduction shifts up a level to emulating the CI system itself rather than the build tool executed by the CI system, and if so, whether there are other examples of that | 15:42 |
abitrolly[m] | <clarkb "abitrolly: out of curiousity for"> I haven't seen Jenkins much - used Bitten/Trac and Buildbot for inhouse projects. :D For Github Actions there is https://github.com/nektos/act | 15:44 |
clarkb | thanks | 15:45 |
abitrolly[m] | On the good news side, I've just discovered that it is now possible to run containers inside containers https://www.redhat.com/sysadmin/podman-inside-container and maybe even unprivileged ones inside unprivileged ones, which opens the possibility of directly using a full CI setup locally. Hopefully CI systems will still evolve to be more lightweight. | 15:46 |
opendevreview | James E. Blair proposed zuul/zuul master: DNM: massive race test https://review.opendev.org/c/zuul/zuul/+/802908 | 15:47 |
fungi | abitrolly[m]: i know there are zuul users running their executors (which rely on bubblewrap containers) inside openshift containers, been the case for years | 15:47 |
fungi | software factory may itself be deployed that way, i'm not certain | 15:48 |
avass[m] | fungi: abitrolly yeah and we run all components in k8s | 15:49 |
clarkb | re the original question about the official matrix bridge: I think the freenode and oftc bridges themselves are both run by matrix.org and are official in that capacity. I also don't know that we have any more rights to anything on the freenode side, as they recreated services and didn't port stuff over, aiui | 15:52 |
clarkb | I wonder if we can hide the freenode bridge channels though | 15:52 |
clarkb | on the matrix side I mean | 15:52 |
fungi | probably not without control of the channel on freenode, and the new freenode management are likely to see using a channel purely in order to hide it as a violation of their terms of service | 15:53 |
*** rpittau is now known as rpittau|afk | 16:03 | |
*** marios|ruck is now known as marios|out | 16:18 | |
corvus | felixedel: https://review.opendev.org/802908 ran 10 times without the NoNodeError/timeout; i think that's pretty good evidence that https://review.opendev.org/773023 addresses the problem | 17:18 |
corvus | i'm going to collect some local test timings | 17:19 |
*** sshnaidm is now known as sshnaidm|afk | 18:30 | |
corvus | okay, the times look reasonable to me, i don't think we need to increase the timeouts | 19:08 |
fungi | corvus: not sure if you saw the conversation with hashar earlier, but if there's interest in tagging gear 0.16.0 (most significant functional changes in 0.15.1..origin/master are tls 1.3 disablement and connection scanning randomization) would you want to preemptively pin gear<0.16 in zuul first? | 19:14 |
corvus | fungi: yeah; should we put that in the zuul i'm about to release, or the one after? | 19:19 |
corvus | let's do after | 19:40 |
fungi | corvus: it didn't sound like hashar or his folks were in any hurry there, so i wouldn't necessarily try to cram more changes in when we've already got an effective release candidate | 19:40 |
corvus | zuul-maint: how does this look for a zuul release? commit 8e4af0ce5e708ec6a8a2bf3a421b299f94704a7e (HEAD, tag: 4.7.0, origin/master, gerrit/master, master) | 19:40 |
fungi | effective release candidate of zuul i meant | 19:40 |
fungi | there are still a couple of packaging-related changes which need to merge to gear before we would tag a release of it anyway | 19:41 |
clarkb | that commit is the tip of master, and 4.7.0 would be appropriate given the last tag was 4.6.0 and the changes we've landed | 19:41 |
clarkb | corvus: ^ is that what opendev is running? if so lgtm | 19:42 |
fungi | yes, that commit is the tip of master but also older than opendev's last restart, and there definitely seem to be some changes warranting a minor version bump | 19:44 |
hashar | fungi: corvus: for the gear release we are not in a rush. We just have pip pointed at a git fork we made, using whatever branch we have. It would be more convenient to just reuse the pypi-published package instead, but it is not blocking us in any way :] | 19:44 |
fungi | corvus: 8e4af0c as 4.7.0 seems good to me | 19:45 |
corvus | clarkb: yes: https://wiki.openstack.org/wiki/Infrastructure_Status | 19:45 |
corvus | i'll go ahead and push it then | 19:46 |
fungi | hashar: thanks, sounds like after zuul 4.7.0 we should merge a gear<0.16 pin in zuul's requirements, then after the next zuul release after 4.7.0 we can tag gear 0.16.0 | 19:46 |
hashar | fungi: up to you :] I guess as long as a new release gets tagged "soonish" it is fine by me :) | 19:47 |
hashar | thank you! | 19:47 |
fungi | i'll push that pin change now while i'm thinking about it | 19:47 |
opendevreview | James E. Blair proposed zuul/zuul master: Pin gear < 0.16.0 https://review.opendev.org/c/zuul/zuul/+/802953 | 19:48 |
corvus | fungi: too late | 19:48 |
opendevreview | Jeremy Stanley proposed zuul/zuul master: Pin gear to avoid installing 0.16 https://review.opendev.org/c/zuul/zuul/+/802954 | 19:52 |
fungi | d'oh, i'll abandon mine | 19:52 |
fungi | that's what i get for hacking from a small screen where i've only got one terminal up at a time | 19:52 |
clarkb | why do we need to pin gear? | 20:03 |
clarkb | if we don't think the new release is going to work why make it a release? | 20:03 |
clarkb | I guess even if we are cleaning up gear, in my mind it doesn't make sense to release it if we aren't comfortable using it without a pin | 20:03 |
fungi | clarkb: there are non-zuul users of gear it sounds like who would like to see recent fixes in the version on pypi, but with zuul working to get rid of its use of gear it's just safer not to unnecessarily introduce new unknowns there | 20:07 |
clarkb | fungi: right, that makes me wonder about making a gear release if we think it isn't safe enough for zuul | 20:07 |
clarkb | It kind of feels like what we are saying is that gear can play fast and loose now because zuul isn't using it anymore, and I'm not sure I agree with that approach | 20:08 |
fungi | i thought it was more zuul maintainers not wanting to have to spend time thinking about what might be changing in gear when we're in the middle of work to stop using it anyway | 20:09 |
clarkb | as a zuul maintainer I don't worry about it if we are confident in the gear release :) | 20:11 |
fungi | i don't think gear's playing fast and loose at all, it's quite tested (even more tested than it was leading up to the last release), but the tls gear implementation in particular was finicky for zuul and there is a change to tls policy in gear's master branch to make testing on python 3.9 work | 20:12 |
clarkb | anyway it's not super important. I find myself allergic to package pins. In this case it's not a major concern because the dep will go away entirely, but it makes me think about our confidence in releasing gear at all if we aren't confident zuul will run with it | 20:12 |
fungi | there were some dnm changes to confirm zuul's tests continued working with the tls policy change in gear before we merged that, but i get that there's still hesitancy | 20:15 |
opendevreview | Merged zuul/zuul master: Pin gear < 0.16.0 https://review.opendev.org/c/zuul/zuul/+/802953 | 21:09 |
opendevreview | James E. Blair proposed zuul/zuul master: Address comments in parent change https://review.opendev.org/c/zuul/zuul/+/802971 | 23:29 |
corvus | felixedel: ^ can you take a look at that and if you like it, squash it into the parent? | 23:30 |
corvus | 4.7.0 release announcement sent | 23:38 |
* fungi puts on party hat and gets out balloons | 23:38 | |
opendevreview | James E. Blair proposed zuul/zuul-jobs master: Remove success-url https://review.opendev.org/c/zuul/zuul-jobs/+/802974 | 23:43 |
opendevreview | James E. Blair proposed zuul/zuul master: Remove success-url and failure-url https://review.opendev.org/c/zuul/zuul/+/802976 | 23:46 |
opendevreview | James E. Blair proposed zuul/zuul-website master: Remove success-url https://review.opendev.org/c/zuul/zuul-website/+/802977 | 23:46 |