corvus | dmsimard: that's lovely :) | 00:03 |
---|---|---|
*** holser has quit IRC | 00:03 | |
*** holser has joined #zuul | 00:06 | |
dmsimard | thanks! I've found it much easier to write than web frontend stuff :) | 00:06 |
smcginnis | corvus: I didn't read all scrollback yet, but that would be expected behavior. | 00:07 |
smcginnis | corvus: That is partly why we have separate landing pages for stable branches in most openstack projects. | 00:07 |
smcginnis | corvus: We only want stable releases to show up on the stable branch pages. Otherwise they would be mixed in with releases directly off of the master branch line, and that could get really confusing. | 00:08 |
smcginnis | So keeping it separate between branches allows that branch of releases to be shown in a contiguous view that makes sense from a stable branch perspective. | 00:09 |
smcginnis | corvus: Looks like your proposed steps should work. The reason it works is that even for something in an earlier tag on the branch, if it gets updated during the current cycle (or in this case between tags), it gets picked up as new for that group. | 00:11 |
*** hamalq has quit IRC | 00:27 | |
tristanC | corvus: note that we will probably maintain a 3.x version of zuul for the software-factory 3.4 repository for at least 6 months. If the stable/3.x branch can stay in opendev, we could use it to backport fixes there when needed. | 00:44 |
*** rlandy|ruck|bbl is now known as rlandy|ruck | 01:09 | |
*** hamalq has joined #zuul | 01:30 | |
*** hamalq_ has joined #zuul | 01:31 | |
*** decimuscorvinus has quit IRC | 01:33 | |
*** decimuscorvinus has joined #zuul | 01:33 | |
*** decimuscorvinus has joined #zuul | 01:34 | |
*** hamalq has quit IRC | 01:36 | |
*** hamalq_ has quit IRC | 01:38 | |
*** rlandy|ruck has quit IRC | 01:52 | |
*** hamalq has joined #zuul | 02:04 | |
*** hamalq has quit IRC | 02:10 | |
*** holser has quit IRC | 02:15 | |
dmsimard | oops, did someone else see errors for ara-report role ? https://i.imgur.com/pY0EPDE.png | 02:31 |
dmsimard | I remember seeing something about local execution recently :D | 02:31 |
dmsimard | problem is here: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ara-report/tasks/main.yaml#L2-L5 | 02:33 |
dmsimard | "Executing local code is prohibited" so can't generate the ara report :( | 02:34 |
dmsimard | I'll take the role off for now, makes the ara integration jobs POST_FAILUREs | 02:36 |
dmsimard | It's late, can discuss the fix later :p | 02:36 |
dmsimard | I guess it's https://review.opendev.org/#/c/742229/ | 02:39 |
*** decimuscorvinus has quit IRC | 02:45 | |
*** bhavikdbavishi has joined #zuul | 02:45 | |
ianw | dmsimard: we were just discussing the same thing for one of the kata roles in #opendev | 02:46 |
dmsimard | oh, let me go there o/ | 02:46 |
ianw | oh, no conclusion, just same thing ... i'm guessing this is somehow related to switching to containers for the executor? | 02:46 |
dmsimard | I am very out of the loop but the change I linked ^ seems to be it. I guess the prohibition comes because it's not a trusted job ? | 02:49 |
ianw | yeah | 02:49 |
clarkb | yes, it was an inadvertent relaxation of the rules | 02:50 |
clarkb | once found it got fixed | 02:50 |
dmsimard | security is hard :) | 02:50 |
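For context, the restriction being discussed applies to any `command`/`shell` (or other local-code) task that targets the executor itself from an untrusted playbook or role. A minimal sketch of the kind of task that now fails with "Executing local code is prohibited" (the task and command here are hypothetical, not the actual ara-report role content):

```yaml
# Hypothetical task in an untrusted repo such as zuul/zuul-jobs.
# Because the play targets localhost, the command would run on the Zuul
# executor itself, which untrusted playbooks are not allowed to do.
- hosts: localhost
  tasks:
    - name: Generate a report on the executor   # prohibited in an untrusted context
      command: generate-report --output "{{ zuul.executor.log_root }}/report"
```

The same restriction applies when a task running against a node uses `delegate_to: localhost`.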
*** bhavikdbavishi1 has joined #zuul | 02:53 | |
*** bhavikdbavishi has quit IRC | 02:54 | |
*** bhavikdbavishi1 is now known as bhavikdbavishi | 02:54 | |
*** decimuscorvinus has joined #zuul | 02:56 | |
*** rfolco has quit IRC | 03:00 | |
*** decimuscorvinus has quit IRC | 03:24 | |
*** decimuscorvinus has joined #zuul | 03:28 | |
*** decimuscorvinus has quit IRC | 03:31 | |
*** decimuscorvinus has joined #zuul | 03:39 | |
*** bhavikdbavishi has quit IRC | 03:39 | |
*** bhavikdbavishi has joined #zuul | 03:54 | |
*** bhavikdbavishi1 has joined #zuul | 03:59 | |
*** bhavikdbavishi has quit IRC | 04:00 | |
*** bhavikdbavishi1 is now known as bhavikdbavishi | 04:00 | |
*** bhavikdbavishi has quit IRC | 04:13 | |
*** bhavikdbavishi has joined #zuul | 04:14 | |
*** sugaar has quit IRC | 04:17 | |
*** hamalq has joined #zuul | 04:43 | |
*** hamalq_ has joined #zuul | 04:44 | |
*** hamalq has quit IRC | 04:48 | |
*** hamalq_ has quit IRC | 04:55 | |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: Revert "Enable tls-proxy in ensure-devstack" https://review.opendev.org/742344 | 05:23 |
*** marios has joined #zuul | 05:39 | |
*** vishalmanchanda has joined #zuul | 05:55 | |
*** bhavikdbavishi has quit IRC | 06:07 | |
*** bhavikdbavishi has joined #zuul | 06:22 | |
*** iurygregory has joined #zuul | 06:25 | |
*** bhavikdbavishi1 has joined #zuul | 06:28 | |
*** bhavikdbavishi has quit IRC | 06:29 | |
*** bhavikdbavishi1 is now known as bhavikdbavishi | 06:29 | |
*** holser has joined #zuul | 06:29 | |
*** bhavikdbavishi has quit IRC | 06:51 | |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Temporarily remove pending release notes https://review.opendev.org/742353 | 06:52 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Re-add temporarily removed pending release notes https://review.opendev.org/742354 | 06:52 |
tobiash | corvus: preparation of steps 4 and 6 ^ | 06:53 |
openstackgerrit | Merged zuul/zuul master: PF4: Update build result page layout https://review.opendev.org/739972 | 07:13 |
openstackgerrit | Merged zuul/zuul master: Fix fetching function on build result page https://review.opendev.org/741103 | 07:15 |
*** bhavikdbavishi has joined #zuul | 07:21 | |
tobiash | avass: do you already use https://review.opendev.org/741809 in production? | 07:24 |
tobiash | we're thinking about trying it out because we occasionally experience stability issues with our swift api backed by ceph in our clouds (and we see with other services that the s3 api might be more stable) | 07:26 |
*** jcapitao has joined #zuul | 07:35 | |
*** tosky has joined #zuul | 07:39 | |
*** bhavikdbavishi has quit IRC | 07:44 | |
*** marios has quit IRC | 07:48 | |
openstackgerrit | Merged zuul/zuul-jobs master: Add upload-logs-s3 https://review.opendev.org/741809 | 07:50 |
*** jpena|off is now known as jpena | 07:51 | |
*** bhavikdbavishi has joined #zuul | 07:53 | |
*** sugaar has joined #zuul | 07:56 | |
*** bhavikdbavishi has quit IRC | 08:02 | |
*** tosky has quit IRC | 08:04 | |
*** tosky_ has joined #zuul | 08:04 | |
*** bhavikdbavishi has joined #zuul | 08:08 | |
zbr|rover | tobiash: avass: can you please have a look at https://review.opendev.org/#/c/723603/ ? | 08:18 |
zbr|rover | this is how it looks https://sbarnea.com/ss/Screen-Shot-2020-07-22-09-18-16.69.png | 08:18 |
zbr|rover | and the effect is instant on save, visible in any console log with longer lines. | 08:19 |
*** tosky_ is now known as tosky | 08:19 | |
*** nils has joined #zuul | 08:20 | |
*** marios has joined #zuul | 08:50 | |
*** sgw1 has quit IRC | 08:54 | |
*** sgw has joined #zuul | 09:17 | |
*** bhavikdbavishi has quit IRC | 09:27 | |
*** bhavikdbavishi has joined #zuul | 09:28 | |
*** bhavikdbavishi has quit IRC | 10:27 | |
*** bhavikdbavishi has joined #zuul | 10:43 | |
*** jcapitao is now known as jcapitao_lunch | 11:08 | |
*** sshnaidm is now known as sshnaidm|afk | 11:14 | |
*** jpena is now known as jpena|lunch | 11:34 | |
*** rfolco has joined #zuul | 11:51 | |
*** rlandy has joined #zuul | 11:55 | |
*** holser has quit IRC | 11:55 | |
*** rlandy is now known as rlandy|ruck | 11:55 | |
*** holser has joined #zuul | 11:56 | |
*** bolg has joined #zuul | 12:02 | |
*** jcapitao_lunch is now known as jcapitao | 12:23 | |
*** Goneri has joined #zuul | 12:25 | |
*** bhavikdbavishi has quit IRC | 12:28 | |
*** jpena|lunch is now known as jpena | 12:37 | |
*** webknjaz has quit IRC | 12:38 | |
*** ChrisShort has quit IRC | 12:38 | |
*** mordred has quit IRC | 12:40 | |
*** masterpe has quit IRC | 12:40 | |
*** webknjaz has joined #zuul | 12:42 | |
*** ChrisShort has joined #zuul | 12:42 | |
*** bhavikdbavishi has joined #zuul | 12:42 | |
*** holser has quit IRC | 12:43 | |
*** holser has joined #zuul | 12:44 | |
*** masterpe has joined #zuul | 12:48 | |
zbr|rover | fungi: clarkb: when back, recheck ^ -- soon it will be a 3mo old attempt to fix the log wrapping in UI | 12:54 |
*** sgw1 has joined #zuul | 12:58 | |
openstackgerrit | Clark Boylan proposed zuul/nodepool master: DNM Use constraints file on arm64 build https://review.opendev.org/741973 | 13:07 |
tristanC | getting up to speed with 3.19.1, it is apparently breaking the dco-license and promote-docker-image jobs. Are we supposed to move these roles to a trusted context before doing the upgrade to 3.19.1? | 13:13 |
*** mordred has joined #zuul | 13:13 | |
clarkb | tristanC: dco-license doesn't need to run on the executor and could run on a node. promote-docker-image probably wants to run in a privileged context (I believe we run it that way) | 13:14 |
clarkb | however, yes, generally anything that can't be rewritten to not use localhost commands will either need to be trusted or run on a remote node | 13:14 |
fungi | using a node for dco-license would be crazy overkill though | 13:14 |
fungi | it's just parsing a commit message | 13:14 |
clarkb | ya it's just not inherent to the design that it run on the executor | 13:15 |
fungi | probably it needs to be rewritten to not use the command module if it can't be safely run in a privileged context | 13:15 |
fungi | some of these jobs have been in use for a very long time though... how long was that security hole present? | 13:16 |
tristanC | clarkb: it seems like zuul image promotion is happening in untrusted context according to https://zuul.opendev.org/t/zuul/builds?job_name=zuul-promote-image , | 13:16 |
fungi | i can't help but wonder if we regressed something in fixing it | 13:16 |
clarkb | tristanC: ya probably because of the bug :) | 13:17 |
clarkb | looking at the role its a python -c command to calculate a timerange | 13:17 |
*** bhavikdbavishi has quit IRC | 13:18 | |
clarkb | I imagine we can figure that out with jinja instead | 13:18 |
tristanC | i worry that abuse of that bug is quite common and 3.19.1 fixing it is going to cause false positive failures. | 13:18 |
clarkb | fungi: I don't think so | 13:18 |
clarkb | tristanC: I mean yes probably, but should we just allow the security gap instead? | 13:19 |
clarkb | I think we have to roll forward | 13:19 |
tristanC | we have to fix this for sure, i'm just looking for instructions how to avoid job failures | 13:20 |
clarkb | fungi: the rule was pretty clear before. untrusted contexts can't run command/shell tasks on localhost. We broke that rule. Roles took advantage. Now we need to clean up | 13:20 |
clarkb | tristanC: I don't think there is an easy answer to that. The jobs need to be assessed for their trustability and could be run in a trusted context or rewritten to avoid localhost commands or not be run | 13:20 |
fungi | the validate-dco-license role hasn't been altered since it was added nearly two years ago... i wonder if we switched to not using a node for it more recently than that | 13:21 |
clarkb | fungi: we also started using it well after pabelanger added it iirc | 13:21 |
fungi | well, 1.5 years ago | 13:21 |
clarkb | fungi: its also possible this was broken prior to the change tobiash identified | 13:22 |
tristanC | clarkb: i guess we could look at all the zuul-jobs default jobs and provide upgrade instructions? | 13:22 |
tristanC | e.g., dco-license needs a nodeset or to be added through a trusted project | 13:23 |
fungi | yeah, we were running validate-dco-license in opendev as a third-party check against kata-containers repos in github with https://review.openstack.org/630998 in february 2019 | 13:23 |
clarkb | fungi: via a trusted context | 13:24 |
fungi | i'm trying to find where we set that | 13:24 |
clarkb | fungi: openstack/project-config is trusted | 13:24 |
clarkb | its specifying that the dco-license job run against kata puts the execution of that job into a trusted context | 13:25 |
clarkb | via the zuul tenant config | 13:25 |
fungi | https://review.openstack.org/634941 moved it out of project-config that same month once the third-party runs worked | 13:26 |
clarkb | fungi: kata-containers/zuul-config is apparently a config project | 13:27 |
clarkb | which is itself possibly a bug | 13:27 |
fungi | ahh, then i wonder why it started breaking | 13:27 |
clarkb | we do limit it to job secret and nodeset though | 13:27 |
clarkb | the inheritance path of the jobs may help? its in the inventory file | 13:29 |
*** sshnaidm|afk is now known as sshnaidm | 13:29 | |
fungi | oh, well it looks like the dco-license job has used an empty nodeset since it was added to zuul-jobs in https://review.openstack.org/630302 1.5 years ago, so it's been working via kata-containers/zuul-config for nearly that long | 13:29 |
tristanC | fungi: it does look overkill to use a nodeset for dco-license, but should we remove the empty-nodeset for now so that the job at least runs successfully? | 13:31 |
fungi | tristanC: probably? i'm still trying to figure out if this vulnerability is far older than we first thought | 13:31 |
tristanC | and indicate that a more efficient usage needs to set it in a trusted context and override the default nodeset with an empty one? | 13:31 |
fungi | it's possible we've just broken a common pattern jobs (including some in our standard library) have been relying on for 1.5 years or longer | 13:32 |
fungi | in a point release, with no forewarning | 13:32 |
clarkb | I don't think that release has been made yet fwiw | 13:33 |
clarkb | fungi: the bug to command was introduced mid january 2019 | 13:36 |
clarkb | pabelanger added the zuul job around that period of time | 13:36 |
fungi | okay, so it has been roughly that long | 13:37 |
clarkb | I think we may have simply gotten lucky and it just worked | 13:37 |
clarkb | (once deployed to real world use cases) | 13:37 |
fungi | er, maybe i'm misparsing what you meant by "the bug to command was introduced mid january 2019" | 13:37 |
clarkb | fungi: I5ce1385245c76818777aa34230786a9dbaf723e5 | 13:38 |
fungi | you're saying the vulnerability we just patched has existed in zuul since mid january 2019 | 13:38 |
clarkb | yes | 13:38 |
clarkb | what this doesn't explain is why it is broken now | 13:39 |
clarkb | since I think it would be trusted if I read the config right but maybe its related to the limited include on that repo? | 13:39 |
fungi | okay, got it. i saw mentions this was introduced with work to be able to containerize executors, but this looks like it was part of the work toward multi-ansible | 13:39 |
clarkb | oh wait we didn't merge that change until march though | 13:40 |
clarkb | so the timeline is still a bit off | 13:41 |
clarkb | march 2019 | 13:41 |
clarkb | so for ~5 weeks the kata deployment would have been running? I wonder if it didn't work at that point? definitely an interesting question related to why it doesn't work now | 13:41 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: dco-license: remove the empty nodeset https://review.opendev.org/742430 | 13:41 |
clarkb | the change I identified may just be a reorg of existing code | 13:43 |
clarkb | digging in the history here is difficult but I'm trying to go back further and check it | 13:43 |
clarkb | ah yup it goes back further: If15b0a6166538debc52df41c06767978ef183b05 | 13:44 |
fungi | so the action-general to actiongeneral rename in https://review.openstack.org/575617 | 13:45 |
clarkb | https://review.opendev.org/#/c/575351/ is the origin I think (it shows a file add into a new dir) | 13:47 |
clarkb | ya I think it is very likely this has been a long standing bug on zuulv3 and as a result roles and jobs will face fallout from it. However, the bug is definitely at odds with the assumed security stance and therefore I think the fix remains valid, but we'll need to clean things up | 13:48 |
clarkb | on that note I'll review tristanC's dco-license change | 13:48 |
clarkb | for docker image promotion if we can replace that time delta calculation we should be good to go | 13:49 |
fungi | Submitted-at: Thu, 14 Jun 2018 18:36:22 +0000 | 13:49 |
fungi | so if this has been a bug in zuul basically since the inception of v3, we're probably looking at a *lot* of fallout from the fix | 13:50 |
tristanC | clarkb: it seems like moving the ppc to a trusted context is not enough, perhaps we need a new job to be defined in a trusted context to be able to use an empty nodeset | 13:50 |
clarkb | tristanC: ppc? | 13:51 |
tristanC | oops, this is for Project Pipeline Config | 13:51 |
clarkb | ya I think that is what we are seeing with kata too | 13:51 |
clarkb | since they define the pipeline and job in a trusted context | 13:52 |
clarkb | tobiash and corvus may have additional thoughts on this as far as ways to address the problem | 13:52 |
tristanC | i'm happy to clean things up, but it would be easier with clear instructions for how to fix what is expected to break | 13:52 |
clarkb | I seem to recall that if you consume an untrusted role in a job in a trusted project you don't get speculative testing of that role. This is why we don't get speculative testing of our base roles in a simple manner and use the whole base-test system | 13:54 |
clarkb | given that I would assume if you define the job in a trusted repo that would be sufficient, but defining a variant of an untrusted job seems to be insufficient based on evidence | 13:55 |
clarkb | tristanC: I'm not sure if you are in a spot where you can test that with dco-license? basically redefine it as a new job in a trusted repo and then run it on projects | 13:57 |
clarkb | (I'm trying to help with the opendev event so not in a good spot to do that sort of thing myself) | 13:57 |
tristanC | clarkb: sure, let me do some commit in one of our tenants | 13:58 |
corvus | it's all about where the playbook is invoked | 14:10 |
*** yolanda has quit IRC | 14:10 | |
corvus | if the playbook is listed in a job definition in a project-config repo, then the playbook itself and the roles repos are non-speculative and run in the trusted execution context | 14:11 |
corvus | fungi: fwiw, zuulv3 was released in march 2018, so the bug has been present for about half the time | 14:12 |
*** yolanda has joined #zuul | 14:12 | |
clarkb | corvus: I think the bug was introduced in https://review.opendev.org/#/c/575351/3 | 14:12 |
clarkb | I'm not sure if there were other mitigating factors at that period of time though | 14:13 |
tristanC | corvus: for kata-containers, the dco-license job should be listed in a project-config repo here: https://opendev.org/kata-containers/zuul-config/src/branch/master/zuul.d/project-templates.yaml | 14:13 |
clarkb | (its possible there were other checks for things that I'm missing in that code state) | 14:13 |
clarkb | tristanC: ya but the job itself where it lists the playbook is from zuul-jobs which is untrusted | 14:13 |
corvus | tristanC: it's the job definition that matters | 14:13 |
fungi | yeah, my "basically since the inception of v3" comment was based on Submitted-at: Thu, 14 Jun 2018 18:36:22 +0000 | 14:13 |
fungi | so ~3mos after 3.0.0 was tagged | 14:14 |
clarkb | there is a zuul/ansible/action/normal.py which I think handled the command case prior to that file being added, but once that file was added command was no longer passing through normal.py | 14:14 |
clarkb | but the ansible integration is not something I'm super familiar with so double check that | 14:15 |
corvus | i believe that's the case; is it important to double check it? | 14:15 |
clarkb | probably not | 14:16 |
tristanC | ok i can see `RUN START: [untrusted : opendev.org/zuul/zuul-jobs/playbooks/dco-license/run.yaml@master]` even when dco-license is set from a trusted-project. thus the doc update in https://review.opendev.org/742430 is incorrect | 14:16 |
corvus | we're pretty sure this has been a bug for a Long Time. :) | 14:16 |
clarkb | I'm just bringing it up because tristanC and fungi are worried we regressed since this worked before | 14:16 |
clarkb | but I'm asserting it was a bug then and a bug until yesterday and now we need to roll forward and fix the fallout | 14:16 |
clarkb | tristanC: the code change in 742430 is correct but not the doc string | 14:17 |
fungi | yeah, i'm less concerned we've regressed something unrelated to the vulnerability now that it looks like we've had this vulnerability for >2 years | 14:17 |
fungi | now it's more about how we help manage whatever widespread breakage will result from fixing a bug which jobs (including some in our standard library) may have been inadvertently relying on all that time | 14:18 |
tristanC | i agree we need to roll forward, but i'd like to know what can be done before the upgrade to 3.19.1 to avoid false positive failure | 14:18 |
corvus | i don't think there are any false positives? | 14:19 |
tristanC | corvus: for example https://zuul.opendev.org/t/zuul/builds?job_name=zuul-promote-image | 14:19 |
corvus | in order for dco-license to work as it does today, it will need to be *defined* in a config-project | 14:19 |
fungi | existing jobs starting to fail once the fix is applied, not exactly false positives | 14:19 |
corvus | right, that's a true positive | 14:19 |
tristanC | corvus: oh right, i meant job failing after the upgrade | 14:20 |
fungi | i think the question is how do we/users find and correct those jobs without suddenly starting to fail them | 14:20 |
corvus | i can't think of a way to do that | 14:21 |
corvus | run a second parallel system? | 14:22 |
tristanC | for example, we could ask our user to look for `START: [untrusted : opendev.org/zuul/zuul-jobs/playbooks/docker-image/promote.yaml` in job-output.txt and do X and Y before the upgrade | 14:22 |
fungi | yeah, me neither, so instead we likely just need to be really clear in our messaging about the fix | 14:22 |
corvus | tristanC: why not just fix that? :) | 14:22 |
corvus | we could spend a month or so creating a static lexical analysis tool to identify forbidden combinations; meanwhile the vulnerability is still out there | 14:23 |
clarkb | https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/promote-docker-image/tasks/promote-cleanup.yaml#L6-L8 | 14:23 |
clarkb | that is the issue specific to docker image promotion | 14:23 |
clarkb | I'm guessing we may be able to solve that in another way | 14:23 |
corvus | yeah, i say we comment that section out immediately, then come back and fix it later | 14:24 |
corvus | that cleanup can be delayed for a few days | 14:24 |
clarkb | https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/promote-docker-image/tasks/main.yaml#L28-L32 is the block to comment out I think | 14:24 |
corvus | we could take the unusual step of moving dco-license to zuul/base-jobs | 14:25 |
fungi | right, i feel like it's sufficient to just be *very* clear in the release note that admins should expect jobs which were previously working to suddenly start failing, and what the common solutions are to correct them | 14:25 |
corvus | or make a new repo for zuul-trusted-jobs | 14:25 |
fungi | at least in theory, people concerned about sudden breakage are deploying releases of zuul and reading release notes before each upgrade (even if it's just a point release) | 14:26 |
fungi | also we'd presumably say the same things or more in the announcement on the zuul-announce ml | 14:27 |
openstackgerrit | James E. Blair proposed zuul/zuul-jobs master: Temporarily disable tag cleanup in docker promote https://review.opendev.org/742447 | 14:28 |
corvus | fungi: did we solve the mystery about validate-dco? did it live its entire life inside of the expected open window for this bug? | 14:29 |
clarkb | we might be able to do ansible_date_time.iso8601 | to_datetime and zj_docker_tag.last_updated | to_datetime and do operations on them that way | 14:30 |
fungi | corvus: that seems to be what happened, yes | 14:31 |
clarkb | something like {{ ((ansible_date_time.iso8601 | to_datetime) - (zj_docker_tag.last_updated | to_datetime)).seconds > 86400 }} as the condition | 14:31 |
clarkb | corvus: ^ I need to find some food during my short opendev break but maybe that would work? | 14:31 |
corvus | clarkb: i'd like to defer it | 14:31 |
clarkb | ok | 14:31 |
corvus | clarkb: can you remove your +2 from https://review.opendev.org/742430 | 14:32 |
fungi | corvus: basically people have been using validate-dco in untrusted contexts running cmd tasks on executors for up to 1.5 years | 14:32 |
*** paladox has quit IRC | 14:32 | |
corvus | tristanC: see comment on https://review.opendev.org/742430 | 14:32 |
fungi | and there are likely more, this is just the first one we've noticed | 14:32 |
clarkb | corvus: note that change removes the null nodeset so that change would work | 14:32 |
corvus | clarkb: which of my two objections are you objecting to? | 14:32 |
clarkb | corvus: "This job, by virtue of being defined in an untrusted repo, can never execute local code on the executor. The only way to fix that is to define the job itself in a trusted repo." | 14:33 |
clarkb | corvus: if we modified it and the playbook to accommodate non-localhost execution that would work | 14:33 |
clarkb | I'm just pointing out we could solve this by having zuul-jobs define a job that ran on something other than localhost | 14:33 |
clarkb | and then also describe how you can define a trusted job that does run on localhost | 14:33 |
corvus | clarkb: yes, if we rewrote the playbook to run on the remote node then yes. the comment is still wrong, you could never make it trusted. | 14:34 |
clarkb | correct I've removed the +2 | 14:34 |
tristanC | corvus: so what should we do with the zuul-jobs jobs that have an empty `nodes: []` nodeset? | 14:35 |
corvus | tristanC: are there others? | 14:35 |
tristanC | corvus: promote-docker-image | 14:36 |
corvus | tristanC: https://review.opendev.org/742447 | 14:36 |
corvus | i think those are the only 2 | 14:36 |
tristanC | corvus: and about dco-license, should i update the playbook then? | 14:38 |
corvus | for validate-dco, i think we have 4 options: 1) update it to run on a node 2) tell people to write their own job+playbook in a config-project and use the role if they want it to be nodeless. 3) move validate-dco to base-jobs. 4) create zuul/zuul-trusted-jobs and move it there. | 14:39 |
corvus | it seems like 1+2 may be the least complex options | 14:39 |
clarkb | I think 1 is a good short term option as it should fix things for existing users with little to no input on their side | 14:42 |
clarkb | then we can work towards a different option longer term if there is demand | 14:42 |
corvus | 1 and 2 are not exclusive | 14:42 |
corvus | i lean toward both, and think that, for example, opendev should implement #2 for kata | 14:43 |
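To make option 2 concrete, here is a rough sketch of a nodeless DCO job defined entirely in a config-project (trusted) repo, reusing the validate-dco-license role from zuul-jobs. The job name and file paths are illustrative, not an actual opendev change:

```yaml
# zuul.d/jobs.yaml in a config-project (trusted) repo
- job:
    name: dco-license-trusted
    description: >-
      Nodeless DCO check; allowed to run on the executor because the job
      and its playbook are defined in a trusted config-project.
    run: playbooks/dco-license/run.yaml
    roles:
      - zuul: zuul/zuul-jobs      # makes validate-dco-license available if not inherited from the base job
    nodeset:
      nodes: []
```

```yaml
# playbooks/dco-license/run.yaml in the same config-project
- hosts: localhost
  roles:
    - validate-dco-license
```

As corvus notes above, what matters is where the playbook is invoked from: a variant defined in a trusted repo that still points at the zuul-jobs playbook remains untrusted.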
clarkb | https://review.opendev.org/#/c/741973/ resulted in a successful nodepool image job (that means if we can get cached wheels from somewhere we should have that working now) | 14:43 |
clarkb | corvus: yes, I like 1 and 2. re Kata apparently they are switching everything to github actions and so we may just be able to turn it off on our side | 14:44 |
corvus | clarkb: oh that's easy then | 14:44 |
clarkb | fungi: ^ can probably confirm that as he has been working with them through the github events being missed bug | 14:45 |
corvus | tristanC: i wonder if we can examine the job-output.json files for jobs and see if we ran command or shell tasks on localhost? | 14:47 |
corvus | we might be able to write a quick tool to do that and detect this case | 14:47 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: dco-license: remove the empty nodeset https://review.opendev.org/742430 | 14:47 |
corvus | (in the untrusted context) | 14:47 |
corvus | tristanC: do you think that would be useful? | 14:48 |
tristanC | corvus: yeah, that doesn't seem overly complicated: walking through the job-output.json and filtering for `command` module tasks that happen on localhost in an untrusted context | 14:49 |
tristanC | i would run that before doing the upgrade | 14:49 |
corvus | you'd need to have a bunch of builds to feed it | 14:49 |
corvus | or at least some suspect builds | 14:50 |
tristanC | i guess the operator could use direct sql access to retrieve a list of unique jobs over the last week or month? | 14:50 |
corvus | we have an api right? :) | 14:51 |
tristanC | s/job/build/ | 14:51 |
tristanC | the api is missing time range unfortunately | 14:51 |
tristanC | i would write a first function that returns a set of unique (job-name, tenant) tuples, and then another one that looks for untrusted localhost commands in a build's job-output | 14:53 |
corvus | the api sorts though, and provides time info | 14:53 |
*** paladox has joined #zuul | 14:53 | |
tristanC | and report the affected job context | 14:53 |
corvus | so just walk backwards | 14:53 |
corvus | tristanC: i'll write the second part; you want to write the first? | 14:54 |
tristanC | i can work on that now | 14:54 |
fungi | clarkb: corvus: sorry, groceries arrived in the middle of that... i don't know whether kata are the only opendev users who are running the dco-license job, though i would recommend anyone who's running it with gerrit to just switch to having gerrit enforce dco instead | 14:58 |
clarkb | fungi: ya I've just approved the fix to the job (running on normal nodes) | 14:59 |
fungi | and yeah, i'm not super concerned with fixing it for kata because they're switching to github actions for it anyway (so we can drop their tenant real soon hopefully) | 14:59 |
clarkb | and yes, I would suggest that anyone wanting to enforce dco do so via gerrit (and I don't think we're doing any dco checks for our third party ci) | 15:03 |
openstackgerrit | James E. Blair proposed zuul/zuul master: WIP: add a script to find untrusted execution tasks https://review.opendev.org/742458 | 15:05 |
corvus | tristanC: ^ there's the second half | 15:05 |
openstackgerrit | Merged zuul/zuul-jobs master: Temporarily disable tag cleanup in docker promote https://review.opendev.org/742447 | 15:07 |
*** bhavikdbavishi has joined #zuul | 15:10 | |
corvus | clarkb: i'll work on your timestamp idea now | 15:20 |
openstackgerrit | Merged zuul/zuul-jobs master: dco-license: remove the empty nodeset https://review.opendev.org/742430 | 15:23 |
corvus | fungi: ^ new jobs should be working now | 15:23 |
tristanC | corvus: i'm adding first half to the script | 15:33 |
openstackgerrit | James E. Blair proposed zuul/zuul-jobs master: Reinstate docker tag cleanup https://review.opendev.org/742461 | 15:40 |
corvus | clarkb: ^ | 15:41 |
clarkb | corvus: huh to_datetime can't handle the T and Z I guess? thats too bad | 15:42 |
corvus | clarkb: yeah, it spit out an error about the format not matching with my local testing under 2.9.5 | 15:42 |
clarkb | lgtm | 15:44 |
clarkb | I did double check and ansible_date_time doesn't seem to have a single value without the T and Z | 15:46 |
openstackgerrit | James E. Blair proposed zuul/zuul master: Add reno configuration settings https://review.opendev.org/742296 | 15:46 |
clarkb | we could construct it out of the .date and .time values though | 15:46 |
corvus | clarkb: yeah it's nuts | 15:46 |
clarkb | but I think the regex is clear enough to me so +2 | 15:47 |
corvus | clarkb: i figured i'd do the same thing for both values | 15:47 |
clarkb | ++ | 15:47 |
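A sketch of roughly what the normalized comparison looks like once both timestamps are stripped of the T and Z that to_datetime's default "%Y-%m-%d %H:%M:%S" format cannot parse. This is only illustrative; the change under review in 742461 is authoritative, and any fractional seconds in the registry's last_updated value would also need stripping. Note that a timedelta's .seconds attribute wraps at one day, so .total_seconds() is the safer accessor for an "older than 24 hours" test:

```yaml
# Hypothetical task; zj_docker_tag is assumed to come from the registry tag
# listing, as in the promote-docker-image role discussed above.
- name: Act on tags older than 24 hours        # placeholder action
  debug:
    msg: "tag {{ zj_docker_tag.name | default('unknown') }} is stale"
  when: >-
    ((ansible_date_time.iso8601 | regex_replace('[TZ]', ' ') | trim | to_datetime)
     - (zj_docker_tag.last_updated | regex_replace('[TZ]', ' ') | trim | to_datetime)
    ).total_seconds() > 86400
```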
corvus | anyone started on release notes for 3.19.1 yet? | 15:50 |
clarkb | I have not | 15:50 |
corvus | i'll do that | 15:50 |
*** zbr|rover is now known as zbr | 15:58 | |
corvus | btw, i think in zuulv4, when we know we have secured zk, we can think about dumping this restriction entirely (and rely only on bwrap) | 15:59 |
corvus | remote: https://review.opendev.org/742473 Add 3.19.1 release notes | 16:06 |
corvus | zuul-maint: ^ let me know if that covers everything | 16:06 |
clarkb | +2 that seems to capture it | 16:09 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: WIP: add a script to find untrusted execution tasks https://review.opendev.org/742458 | 16:10 |
corvus | tristanC: cool, i'm running that against opendev now | 16:12 |
*** marios is now known as marios|out | 16:12 | |
corvus | that seems to be working -- it's found some docker promote jobs | 16:13 |
corvus | tristanC: but i think you have a debug line in there still | 16:13 |
*** marios|out has quit IRC | 16:13 | |
corvus | tristanC: i left a comment pointing it out | 16:14 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: Add a script to find untrusted execution tasks https://review.opendev.org/742458 | 16:17 |
corvus | tristanC: ah i just found the uuid thing too | 16:18 |
corvus | tristanC: i think we can just skip those, it looks like it's happening for 'skipped' jobs | 16:19 |
corvus | (i don't think we have to add them to the failed_to_examine list) | 16:19 |
fungi | anyone else who followed the branched release notes discussion yesterday, reviews of https://review.opendev.org/742464 would be appreciated | 16:21 |
corvus | i've started setting the topics for changes we need to merge before the release to "zuul-release" | 16:22 |
*** hamalq has joined #zuul | 16:22 | |
clarkb | fungi: done | 16:23 |
*** hamalq_ has joined #zuul | 16:23 | |
corvus | tristanC: oh, and we might want to put the files in /tmp :) | 16:26 |
clarkb | I think I've reviewed all of those changes identified for the release that are not WIP | 16:27 |
clarkb | going to get a bike ride in now back in a bit | 16:27 |
*** hamalq has quit IRC | 16:27 | |
corvus | looking back 2 days for opendev, i don't see any other jobs | 16:29 |
corvus | i'm checking dashboard.zuul.ansible.com now | 16:31 |
*** bhavikdbavishi has quit IRC | 16:35 | |
*** bhavikdbavishi has joined #zuul | 16:37 | |
*** nils has quit IRC | 16:39 | |
corvus | looks like ansible's zuul will be okay based on the last 2 days | 16:46 |
corvus | tobiash: i think you can unwip https://review.opendev.org/742353 now, seems like we're close enough to go ahead and put that in | 16:51 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: Add a script to find untrusted execution tasks https://review.opendev.org/742458 | 16:53 |
corvus | tristanC: those changes all look good :) | 16:54 |
corvus | tristanC: except you left a tenant skip in there again :) | 16:55 |
tristanC | nice, i'm also happy to report none of our tenants seems to be affected | 16:55 |
tristanC | arg! :-) | 16:55 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: Add a script to find untrusted execution tasks https://review.opendev.org/742458 | 16:56 |
corvus | tristanC: that's also great news, this may not end up being as bad as we feared :) | 16:57 |
*** jcapitao has quit IRC | 16:57 | |
corvus | clarkb, fungi: https://review.opendev.org/742458 is ready for final review / merge i think | 16:58 |
tristanC | corvus: yes. In any case it's worth the effort to have such functions to inspect job-output.json; they might come in handy in the future | 16:59 |
*** jpena is now known as jpena|off | 17:03 | |
*** sshnaidm is now known as sshnaidm|afk | 17:09 | |
*** sshnaidm|afk has quit IRC | 17:23 | |
corvus | tristanC, clarkb, fungi: one of you want to +3 this? https://review.opendev.org/742353 | 17:25 |
corvus | i think that's the only thing missing a +w before we can tag | 17:25 |
*** sshnaidm has joined #zuul | 17:27 | |
corvus | cool, we're waiting on 4 changes to land; i'll work on the tag after they do | 17:27 |
*** sshnaidm is now known as sshnaidm|afk | 17:27 | |
*** sugaar has quit IRC | 17:43 | |
*** sanjayu_ has joined #zuul | 17:45 | |
openstackgerrit | Merged zuul/zuul master: Add reno configuration settings https://review.opendev.org/742296 | 17:48 |
*** saneax has quit IRC | 17:48 | |
*** rlandy|ruck is now known as rlandy | 17:56 | |
*** _erlon_ has joined #zuul | 18:02 | |
*** reiterative has quit IRC | 18:09 | |
*** reiterative has joined #zuul | 18:10 | |
*** reiterative has joined #zuul | 18:11 | |
noonedeadpunk | hi! | 18:14 |
*** bhavikdbavishi has quit IRC | 18:14 | |
noonedeadpunk | is there any way to get ansible requirements installed with a zuul pre step? | 18:14 |
noonedeadpunk | I mean like ansible-galaxy install -r {{ zuul.project.src_dir }}/requirements.yml ? | 18:15 |
noonedeadpunk | as I need a role to be executed which is not in required-projects (as it's a third-party github repo not owned by us) | 18:15 |
clarkb | corvus: looking now | 18:29 |
clarkb | https://review.opendev.org/#/c/742458/ is failing on a linter check | 18:29 |
clarkb | and the other seems approved | 18:29 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: Add a script to find untrusted execution tasks https://review.opendev.org/742458 | 18:30 |
corvus | i re+3d that | 18:33 |
*** sanjayu_ has quit IRC | 18:33 | |
corvus | noonedeadpunk: we have plans for incorporating that into zuul soon, but it isn't implemented yet. if you need to do that, you'll need to run a nested ansible. | 18:33 |
corvus | noonedeadpunk: or help us implement it :) | 18:34 |
corvus | noonedeadpunk: (but i think our implementation in zuul will probably only support collections) | 18:34 |
noonedeadpunk | ok, then I'll use git module I guess | 18:37 |
noonedeadpunk | but thanks for the fast answer | 18:38 |
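A sketch of the workaround noonedeadpunk lands on: fetch the third-party role onto the test node with the git module and run a nested ansible there, so nothing has to execute on the executor. The repository URL, paths, and playbook name are placeholders, and the two tasks could be split across the pre-run and run phases as appropriate:

```yaml
# Everything below runs on the test node, not on the executor.
- hosts: all
  tasks:
    - name: Fetch a third-party role that is not in required-projects
      git:
        repo: https://github.com/example/some-ansible-role   # placeholder
        dest: "{{ ansible_user_dir }}/roles/some-ansible-role"
        version: master

    - name: Run a nested ansible that consumes the role
      command: ansible-playbook -i localhost, -c local site.yaml   # placeholder playbook
      args:
        chdir: "{{ ansible_user_dir }}/{{ zuul.project.src_dir }}"
      environment:
        ANSIBLE_ROLES_PATH: "{{ ansible_user_dir }}/roles"
```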
clarkb | tristanC: do you have time for https://review.opendev.org/#/c/742461/ to address the docker image promote job? | 18:41 |
dmsimard | very interesting hack with the datetime :D | 18:43 |
openstackgerrit | Merged zuul/zuul master: Temporarily remove pending release notes https://review.opendev.org/742353 | 19:21 |
corvus | one change remaining | 19:22 |
tristanC | It's a bit premature, but I wrote some Haskell libraries to interface with Zuul and Gerrit. Let me know if you are interested in such projects, here is what the Zuul documentation looks like: https://docs.softwarefactory-project.io/zuul-haskell/ | 19:27 |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Collect partial subunit files https://review.opendev.org/742527 | 19:41 |
clarkb | ^^ is a change inspired by a job dansmith is trying to debug but it is timing out and we don't know why | 19:41 |
openstackgerrit | Merged zuul/zuul master: Add a script to find untrusted execution tasks https://review.opendev.org/742458 | 19:46 |
fungi | tristanC: not sure if you're aware, but the sf 3pci test-job-unittests-el7 seems to be failing on an ensure-pip task in pre, leading to retry_limit: https://softwarefactory-project.io/logs/27/742527/1/third-party-check/test-job-unittests-el7/2cb6d6e/job-output.txt.gz | 19:53 |
tristanC | fungi: just saw that indeed, something about epel not found? | 19:53 |
fungi | i think so. yeah | 19:54 |
fungi | clarkb: on 742527 it looks like partial_subunit_files.files is problematic | 19:55 |
fungi | maybe it needs to no-op when partial_subunit_files is empty? | 19:56 |
clarkb | I expected it to but I guess that isn't implied? also it seems that the existing testing for that runs localhost stuff? | 19:57 |
clarkb | fungi: can you link to where you found the partial subunit files issue? I've just found the localhost things so far | 19:57 |
tristanC | fungi: it seems like we didn't run the test on https://review.opendev.org/#/c/736070/ . iiuc that change made epel a requirement for ensure-pip | 19:57 |
fungi | i'm not sure i understand jinja's concept of "attribute" and whether that maps to a dict key in python | 19:57 |
fungi | as opposed to an actual python object attribute | 19:58 |
tristanC | oh, because the unittest job uses bindep, which ends up running ensure-pip | 20:00 |
clarkb | fungi: ya I think the issue is no files were found so .files doesn't get populated | 20:00 |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Collect partial subunit files https://review.opendev.org/742527 | 20:03 |
clarkb | fungi: ^ I'm hopeful that will address that issue, though we seem to have separate testing problems | 20:03 |
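One common way to make such a collection loop a no-op when nothing matched (or when the find task was skipped and the variable never got a .files key) is to default both levels of the registered result. This only illustrates the pattern, not necessarily how 742527 resolved it; the file pattern and follow-up command are placeholders:

```yaml
- name: Find partially written subunit files
  find:
    paths: "{{ ansible_user_dir }}"
    patterns: "*.subunit"                    # placeholder pattern
  register: partial_subunit_files

- name: Compress each partial subunit file before collection
  command: gzip --keep "{{ item.path }}"     # runs on the test node
  loop: "{{ (partial_subunit_files | default({})).files | default([]) }}"
```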
*** yolanda has quit IRC | 20:05 | |
*** yolanda has joined #zuul | 20:08 | |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Collect partial subunit files https://review.opendev.org/742527 | 20:25 |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Disable workspace setup testing https://review.opendev.org/742537 | 20:25 |
corvus | commit 8ff7ff70c736a91db5f7672dbde04afb56ace400 (HEAD -> stable/3.x, tag: 3.19.1, origin/stable/3.x, refs/changes/73/742473/1) | 20:40 |
corvus | zuul-maint: ^ does that look right for the 3.19.1 tag? | 20:40 |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Disable workspace setup testing and cleanup z-c testing https://review.opendev.org/742537 | 20:41 |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Collect partial subunit files https://review.opendev.org/742527 | 20:41 |
clarkb | corvus: looking | 20:42 |
fungi | corvus: yep, that looks like the current stable/3.x branch tip commit and the history for that is what i would expect | 20:43 |
clarkb | corvus: looking at the git log for 3.x do we need the github fix? | 20:44 |
clarkb | or did it branch before the bug was merged into master? | 20:44 |
*** nils has joined #zuul | 20:46 | |
corvus | clarkb: pretty sure it branched before; i think the github bug was very recent | 20:50 |
corvus | clarkb: you mean the one we just fixed right? | 20:50 |
clarkb | corvus: yup. If thats the case then ya that sha looks good to me | 20:50 |
corvus | clarkb: yeah, commit that introduced that was d94f3136873b7bcfcd967cbe8ffee7945d660189 | 20:53 |
tristanC | 3.19.1 at 8ff7ff70c736a91db5f7672dbde04afb56ace400 lgtm | 20:54 |
corvus | does not appear in 3.x history by my reckoning | 20:54 |
corvus | pushing tag now | 20:54 |
*** nils has quit IRC | 20:54 | |
corvus | oh. this is "funny". we didn't backport the container build release job | 20:55 |
corvus | so we won't end up with a 3.19.1 docker image | 20:56 |
corvus | i think i should dequeue the release jobs | 20:56 |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Disable base role testing that runs code on localhost https://review.opendev.org/742537 | 20:56 |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Collect partial subunit files https://review.opendev.org/742527 | 20:56 |
corvus | then we can add the jobs and enqueue-ref it without having to disable the pypi jobs | 20:57 |
*** hamalq_ has quit IRC | 20:57 | |
corvus | zuul-maint: ^ anyone agree/disagree? | 20:57 |
*** hamalq has joined #zuul | 20:58 | |
clarkb | corvus: that seems reasonable | 20:58 |
clarkb | be careful doing that with pypi though | 20:58 |
clarkb | (if we only do a partial upload we can't replace the file) | 20:58 |
clarkb | I guess we could tag a 3.19.2 if it came to that | 20:59 |
clarkb | (on the same commit) | 20:59 |
corvus | clarkb: i dequeued before the jobs even started | 20:59 |
clarkb | perfect | 20:59 |
corvus | hrm | 21:00 |
corvus | is it going to use the jobs in the tag? | 21:00 |
clarkb | oh in the past it wouldn't have | 21:00 |
clarkb | but now I think it will? | 21:01 |
corvus | yeah, but i think my change to "fix" that "erroneous behavior" has merged | 21:01 |
corvus | lemme go see what it does | 21:01 |
corvus | "Match tag items against containing branches" | 21:01 |
corvus | i think we just need to merge the job into the stable/3.x branch then it should run even though it wasn't in the tag | 21:02 |
clarkb | aha | 21:02 |
clarkb | it matches the branch head, not the tag state, that makes sense | 21:02 |
corvus | i'll backport the job adds | 21:02 |
clarkb | I need to pop out for a bit but here's an early +2 without seeing any code | 21:02 |
corvus | remote: https://review.opendev.org/742541 Update release jobs | 21:05 |
corvus | zuul-maint: ^ we need that in order to actually publish docker images of the release | 21:05 |
ianw | corvus: not sure if you saw https://github.com/pyca/cryptography/issues/5339 re: adding zuul to pyca/cryptography; i think we're going to have a task convincing people but i could use your advice | 21:16 |
corvus | ianw: i think that's mostly an opendev policy question -- about whether we want to have un-gated github projects in a tenant | 21:22 |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Disable base role testing that runs code on localhost https://review.opendev.org/742537 | 21:23 |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Collect partial subunit files https://review.opendev.org/742527 | 21:23 |
ianw | corvus: i was trying to think through what the worst thing that can happen is ... would a broken file manage to stop all reloading? | 21:24 |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Disable base role testing that runs code on localhost https://review.opendev.org/742537 | 21:34 |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Collect partial subunit files https://review.opendev.org/742527 | 21:34 |
tristanC | ianw: not sure if that is expected, but after https://review.opendev.org/#/c/736070/ i had to switch our el7 node python-path to /usr/bin/python3 as well as pre-install python-devel in the image to get ensure-pip to work again. Otherwise the job failed with an error because epel was not found | 21:40 |
tristanC | and we missed having the ensure-pip path trigger the test-job-unittests-el7 third-party test (which runs on a centos-7 image without epel) | 21:41 |
clarkb | ./test-playbooks/base-roles/fetch-subunit-output.yaml:71:5: [warning] comment not indented like content (comments-indentation) anyone understand why https://review.opendev.org/#/c/742537/5/test-playbooks/base-roles/fetch-subunit-output.yaml trips that? | 21:47 |
clarkb | I've dedented it too and am very confused | 21:47 |
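For reference, yamllint's comments-indentation rule expects a comment to be indented exactly like the content it precedes, so a comment left at a different depth than the following line trips the warning even though the YAML itself is valid. A minimal illustration (not the actual fetch-subunit-output test playbook):

```yaml
tasks:
    # WARNING: comment not indented like content (comments-indentation)
  - name: Example task
    debug:
      msg: hello

  # OK: the comment shares the indentation of the item it describes
  - name: Another task
    debug:
      msg: hello
```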
*** vishalmanchanda has quit IRC | 22:01 | |
*** rlandy is now known as rlandy|bbl | 22:26 | |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Disable base role testing that runs code on localhost https://review.opendev.org/742537 | 22:32 |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Collect partial subunit files https://review.opendev.org/742527 | 22:32 |
clarkb | finally managed to figure out yamllint | 22:32 |
clarkb | zuulians ^ I'm not sure that is the best approach there. It's sort of a "fix it fast with comments" approach that lets us come back and figure out a better solution later | 22:46 |
clarkb | reviews appreciated I think they should both pass now. I'm going to find dinner and a long nap though | 22:47 |
*** tosky has quit IRC | 23:03 | |
*** armstrongs has joined #zuul | 23:13 | |
ianw | tristanC: hrmm, yeah i guess; can you install the epel repo in the base image? with it disabled by default? | 23:20 |
*** yolanda has quit IRC | 23:23 | |
*** armstrongs has quit IRC | 23:23 | |
*** yolanda has joined #zuul | 23:24 | |
tristanC | ianw: i'd rather not, as epel is quite a big repository. and i'm not sure why it would be needed now; ensure-pip (needed for unittest) used to work fine without it. | 23:27 |
ianw | tristanC: well i guess you'll need to set ensure_pip_from_upstream and install it with get-pip.py out of band. packaged pip is on epel for centos7 so i don't see there's much else can be done? | 23:29 |
ianw | then you have to worry about other jobs installing the python-pip package over the top -- but perhaps in a unittest case where that won't happen it's a ok trade-off | 23:30 |
tristanC | ianw: i managed to fix our nodeset by setting the python-path to python3 in nodepool configuration. | 23:32 |
ianw | tristanC: right, that would have made ansible_python.version.major not match | 23:33 |
ianw | https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-pip/tasks/RedHat.yaml#L21 to be specific | 23:34 |
ianw | that seems valid too, as long as your unit tests/tox whatever is all running under python3 | 23:35 |
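The fix tristanC describes is a nodepool label setting, roughly the following shape; the exact placement varies by driver and the provider layout here is only a placeholder. Pointing python-path at python3 makes ansible_python report major version 3 on the node, so the python2 branch of ensure-pip (the one that needs EPEL on CentOS 7) is never taken:

```yaml
# nodepool.yaml sketch (placeholder provider/pool names; other label
# attributes such as the image and flavor are omitted)
providers:
  - name: example-provider
    pools:
      - name: main
        labels:
          - name: centos-7
            python-path: /usr/bin/python3
```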
ianw | i dunno, i'm open to suggestions. it seems if people are using ansible under python2 and requesting package installs of pip all we can do is install from epel which is what they are implicitly asking for | 23:37 |
tristanC | ianw: well i'm not sure, but i guess epel is now required by zuul-jobs to work on centos-7 with python2 | 23:40 |
ianw | i don't really agree with that -- epel is required to install python-pip from packages on centos7 if that's what you ask for | 23:41 |
ianw | which, i agree, is the default state we have chosen. but you can choose to install pip from upstream source if you would like | 23:41 |
tristanC | ianw: well i didn't ask for it, it seems to happen by default when using the unittest job | 23:42 |
ianw | right, but you sort of did by accepting the defaults, which we have chosen as installing from packages. but you *can* turn it off, if you'd like | 23:42 |
tristanC | ianw: here is the how our centos-7 node is created: https://softwarefactory-project.io/cgit/config/tree/containers/centos-7/Dockerfile | 23:43 |
tristanC | ianw: and that recently stopped working because "Repository epel not found." | 23:44 |
ianw | tristanC: well, the intent of ensure-pip has always been to install pip from packages in the default case, unless ensure_pip_from_upstream is set | 23:46 |
tristanC | we don't have the logs anymore, but the last successful test-job-unittests-el7 without epel7 was with https://review.opendev.org/#/c/730668/ | 23:56 |