Thursday, 2023-05-18

01:03 <@jim:acmegating.com> zuul-maint: how does this look for a zuul release? commit c1b0a00c607c13c52991e14056e6bcac4a49d5d5 (HEAD -> master, tag: 8.3.1, origin/master, gerrit/master, refs/changes/43/883443/3)
01:03 <@jim:acmegating.com> (tomorrow at this point)
03:04 -@gerrit:opendev.org- James E. Blair https://matrix.to/#/@jim:acmegating.com proposed: [zuul/nodepool] 883463: Use cached ids in node iterator more often https://review.opendev.org/c/zuul/nodepool/+/883463
07:25 <@rancher:matrix.org> > <@clarkb:matrix.org> the error with getProjectMetadata() implies the project has no config. I'm not sure this is a gitlab issue. It may just be an improper/incomplete configuration. I think you need to look in your logs for where that project is getting loaded to see the initial error
But it's strange that the same projects work fine on the older version of GitLab. Even the default "zuul/zuul-jobs" (opendev.org) throws the same error on the newer versions:
    opendev.org:
      untrusted-projects:
        - zuul/zuul-jobs:
            include:
              - job
07:26 <@rancher:matrix.org> [connection "opendev.org"]
name=opendev
driver=git
baseurl=https://opendev.org
07:27 <@rancher:matrix.org> I think these are the only references to that project.
07:27 <@rancher:matrix.org> * I think these are the only references to that project, it should work, right?
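For reference, a minimal sketch of the mapping being debugged here: the key under `source:` in the tenant config must exactly match a connection section name in zuul.conf, or Zuul cannot resolve the project to a connection and loads no config for it. This is an illustration assembled from the snippets above, not the poster's actual files; the tenant name is invented.

```yaml
# main.yaml (tenant config). The "opendev.org" key below must match a
# zuul.conf section such as:
#
#   [connection "opendev.org"]
#   driver=git
#   baseurl=https://opendev.org
#
- tenant:
    name: example-tenant        # hypothetical tenant name
    source:
      opendev.org:              # must equal the connection section name
        untrusted-projects:
          - zuul/zuul-jobs:
              include:
                - job
```

If the names disagree (e.g. the connection is named `opendev` but the tenant source key is `opendev.org`), errors like the getProjectMetadata() one above are a plausible symptom.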
08:01 <@newbie23:matrix.org> hi guys,
is it possible to configure Nodepool + Azure Compute Driver to define some sort of retention policy
so that nodes are kept around for a while and, if still up and running, they are reused for jobs?
Context: reusing nodes will enable several use cases involving cached data, but just using pre-defined pre-existing nodes might be expensive as there might be no workload
13:44 <@rancher:matrix.org> I now tried with github.com, but the following errors show: https://privatebin.net/?74639cd6d74df993#79prpzrqVynsUcQ9Cqtgfy5TyeyZPuugMFwffU2fo9BG
There's no mention of "gitlab" anywhere in the config files, and the default GitHub reference pipelines are used.
13:47 <@clarkb:matrix.org> your yaml tenant config is where I would expect the gitlab to be defined that is failing to map to the connection config
13:48 <@clarkb:matrix.org> What I was trying to get at before is that the failure is happening before the traceback you previously posted. You need to look in early startup to see why there is no metadata for that setup. A KeyError like this could be causing it
13:48 <@clarkb:matrix.org> I think
13:49 <@clarkb:matrix.org> > <@newbie23:matrix.org> hi guys,
> is it possible to configure Nodepool + Azure Compute Driver to define some sort of retention policy
> so that nodes are kept around for a while and, if still up and running, they are reused for jobs?
>
> Context: reusing nodes will enable several use cases involving cached data, but just using pre-defined pre-existing nodes might be expensive as there might be no workload
The metastatic and static drivers are going to be the ones you want to look at. Metastatic won't schedule them in an intelligent reuse manner. But depending on your needs this may be fine. With static you would set up static nodes and specific labels that could be used.
13:56 <@newbie23:matrix.org> > <@clarkb:matrix.org> The metastatic and static drivers are going to be the ones you want to look at. Metastatic won't schedule them in an intelligent reuse manner. But depending on your needs this may be fine. With static you would set up static nodes and specific labels that could be used.
Thanks for the pointer! I will look into it.
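A hedged sketch of the static-driver approach Clark describes: nodes you provision yourself are registered with Nodepool and handed back out after each job, so on-node caches survive between jobs. Label, pool, provider, and host names below are invented for the example.

```yaml
# nodepool.yaml sketch -- static driver with a pre-existing node
labels:
  - name: cached-builder          # hypothetical label jobs would request

providers:
  - name: static-provider         # invented provider name
    driver: static
    pools:
      - name: main
        nodes:
          - name: builder01.example.com   # your pre-existing host
            labels:
              - cached-builder
            username: zuul
```

Jobs that request the `cached-builder` label then keep landing on the same host, which is how cached data gets reused without Nodepool deleting the node after each build.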
14:05 <@rancher:matrix.org> > <@clarkb:matrix.org> your yaml tenant config is where I would expect the gitlab to be defined that is failing to map to the connection config
Yeah, but I didn't define one, only GitHub. I suppose the syntax is correct; correct me if I'm wrong. I'm new to this.
14:06 <@rancher:matrix.org> > <@clarkb:matrix.org> What I was trying to get at before is that the failure is happening before the traceback you previously posted. You need to look in early startup to see why there is no metadata for that setup. A KeyError like this could be causing it
Is every log displayed by default when cloning the repository (Quick-Start Guide)? What should I look for? I didn't see any error apart from Gerrit ones, which I don't use. I removed the Gerrit connection string.
14:09 <@jim:acmegating.com> Rancher: you might find the configurator at https://acmegating.com/acme-enterprise-zuul/#start helpful. it will help you make the configuration files with gitlab. you can use it on its own (it produces a docker-compose file like the quick-start), or just take the gitlab parts and splice them into the zuul quick-start.
14:55 <@jim:acmegating.com> 8.3.1 promote succeeded, i'll send out the email shortly
15:01 <@fungicide:matrix.org> thanks!
16:49 <@jim:acmegating.com> Clark: okay i held a quickstart node and i get different behavior on it than i do locally
16:50 <@jim:acmegating.com> locally when i run 'host gerrit' in a pod i get a response, but on the test node, i get NXDOMAIN
16:50 <@jim:acmegating.com> local dns:
16:51 <@jim:acmegating.com> search dns.podman
nameserver 10.89.1.1
16:51 <@jim:acmegating.com> test node:
16:51 <@jim:acmegating.com> nameserver 8.8.8.8
nameserver 8.8.4.4
16:52 <@jim:acmegating.com> same podman/podman-compose versions
16:54 <@jim:acmegating.com> maybe something about podman-dnsname
16:54 <@clarkb:matrix.org> I wonder if that is a side effect of some plugin installation? Like containernetworking-plugin?
16:54 <@jim:acmegating.com> exactly
16:54 <@jim:acmegating.com> i have golang-github-containernetworking-plugin-dnsname locally and it's not on the remote node
16:55 <@jim:acmegating.com> how does adding that to ensure-podman (without an option to enable, so always installed) sound?
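A sketch of what that addition to the ensure-podman role might look like; the task name and file placement are guesses, only the package name comes from the discussion above.

```yaml
# roles/ensure-podman/tasks/Ubuntu.yaml (hypothetical placement)
- name: Install dnsname CNI plugin so container name resolution works
  become: true
  ansible.builtin.package:
    name: golang-github-containernetworking-plugin-dnsname
    state: present
```

With the plugin present, containers on the same podman network get a `dns.podman` search domain and can resolve each other by service name, matching the "local dns" resolv.conf shown above.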
17:00 -@gerrit:opendev.org- James E. Blair https://matrix.to/#/@jim:acmegating.com proposed: [zuul/zuul-jobs] 883552: Add podman dns plugin to ensure-podman https://review.opendev.org/c/zuul/zuul-jobs/+/883552
17:01 -@gerrit:opendev.org- James E. Blair https://matrix.to/#/@jim:acmegating.com proposed on behalf of Tristan Cacqueray: [zuul/zuul] 687135: Replace docker by podman for quick-start https://review.opendev.org/c/zuul/zuul/+/687135
17:01 <@jim:acmegating.com> Clark: ^
18:11 -@gerrit:opendev.org- Zuul merged on behalf of James E. Blair https://matrix.to/#/@jim:acmegating.com: [zuul/zuul-jobs] 883552: Add podman dns plugin to ensure-podman https://review.opendev.org/c/zuul/zuul-jobs/+/883552
19:05 <@michael_kelly_anet:matrix.org> Is there an (easy) way to exclude a specific repo/project from a pipeline? eg: I'm PoCing some check jobs for someone in a large repo and I don't want Zuul to just start auto-merging everyone's changes (yet) because multiple different teams live in this repo and it'll be surprising.
19:06 <@michael_kelly_anet:matrix.org> Or is the sane thing to do here just to create a separate tenant to handle this case?
19:09 <@jim:acmegating.com> just don't add it to the pipeline
19:09 <@fungicide:matrix.org> projects have pipelines explicitly added to them, what's the situation where you're ending up with a project added that you didn't intend?
19:11 <@fungicide:matrix.org> maybe it's coming in via a project-template? in that case, the simple solution is probably to have a separate project-template which omits the problem pipeline and apply that to the project instead, or just don't use a project-template for that project, or split the existing project-template to move the pipeline in question into a second template that most projects get both of but that project only gets the one that doesn't have that pipeline...
19:11 <@jim:acmegating.com> or maybe via a catch-all project regex, in which case adjust the regex
19:12 <@michael_kelly_anet:matrix.org> This is happening before I've even added a zuul.yaml to the repo. :)
19:12 <@fungicide:matrix.org> do you add pipelines to that project in your config project maybe?
19:13 <@michael_kelly_anet:matrix.org> That project only exists in Zuul's universe in as much as I've added it to my tenant config. The config project has no awareness of it as such.
19:14 <@fungicide:matrix.org> and there's no project pattern matching it in your config project that might add pipelines?
19:14 <@michael_kelly_anet:matrix.org> Ah, there you have it.
19:14 <@fungicide:matrix.org> but to restate, zuul won't automatically add pipelines for a project just by adding it to the tenant config
19:14 <@michael_kelly_anet:matrix.org> I get what you're saying now.
19:15 <@michael_kelly_anet:matrix.org> Yeah, I think the content in my config project is derived from the quickstart/walkthrough, so yes, the regex in there is matching the entire world.
19:15 <@fungicide:matrix.org> that'd do it. so like corvus said, adjust that pattern
19:16 <@michael_kelly_anet:matrix.org> Sounds good.
19:16 <@michael_kelly_anet:matrix.org> Thanks for the pointer. :)
19:16 <@michael_kelly_anet:matrix.org> I was a bit excited that folks were going to hunt me down like the monster that I am :)
19:16 <@fungicide:matrix.org> my monster-hunting days are long over
19:17 <@michael_kelly_anet:matrix.org> Hopefully more folks will join the religion and see the light.
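A rough illustration of the "adjust that pattern" advice: a catch-all project stanza in a config project applies its pipelines to every project in the tenant, so narrowing the name pattern keeps one repo out. Project and job names are invented; the negative-lookahead trick assumes the regex flavor in use supports it, so treat this as a sketch to adapt.

```yaml
# Before: a quick-start-style catch-all stanza in the config project
# attaches pipelines to every project in the tenant.
- project:
    name: ^.*$
    check:
      jobs:
        - noop

# After: a narrowed pattern that excludes one repo (hypothetical name)
# from those pipelines while still matching everything else.
- project:
    name: ^(?!example/big-repo$).*$
    check:
      jobs:
        - noop
```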
20:04 <@clarkb:matrix.org> corvus: https://zuul.opendev.org/t/zuul/build/7f3ac7f8dd4f4d55a0ad41b4bcc0e30c/log/job-output.txt#734 is https://github.com/containers/podman/issues/6368#issuecomment-669836088 I think
20:04 <@clarkb:matrix.org> I'm not sure how to plumb that into podman compose though
20:06 <@clarkb:matrix.org> I think I haven't hit this in opendev because I'm not trying to run as a regular user there
20:07 <@clarkb:matrix.org> so that might be a workaround here if all else fails
20:10 <@jim:acmegating.com> Clark: oh interesting; look a few lines above that
20:10 <@jim:acmegating.com> 720-724
20:10 <@jim:acmegating.com> (this is during the image build of the "node" container)
20:11 <@jim:acmegating.com> that suggests that it is using cgroupfs but that's failing?
20:11 <@clarkb:matrix.org> oh ya it is
20:11 <@jim:acmegating.com> but at any rate, for our "test what the user runs" approach, we may want to see about using systemd anyway?
20:12 <@clarkb:matrix.org> yes, and systemd is there, and I think when you ssh in you get a session of some sort. But maybe it is too minimal for what podman wants
20:12 <@jim:acmegating.com> so maybe we should try the `sudo loginctl enable-linger 1000` or... at least learn what that is :)
20:13 <@jim:acmegating.com> Clark: i still have the held node at 213.32.76.175
20:13 <@clarkb:matrix.org> looks like sshd pam config is what sets up systemd sessions via ssh
20:15 <@jim:acmegating.com> yeah, the loginctl linger thing doesn't look like what we want.
20:16 <@jim:acmegating.com> Clark: wow i added the cgroup cli and it's working
20:16 <@jim:acmegating.com> so i guess when it said "falling back on cgroupfs" it really meant "just kidding"
20:18 <@clarkb:matrix.org> hahaha
20:18 <@clarkb:matrix.org> ok
20:19 <@jim:acmegating.com> https://github.com/containers/podman-compose/issues/209
20:19 <@clarkb:matrix.org> looking at the session stuff, our sshd_config does say use pam, and pam's sshd config includes common-session, which includes pam_systemd.so, which should do it
20:19 <@clarkb:matrix.org> but i don't have the XDG_SESSION_BUS env var logging in over ssh
20:19 <@jim:acmegating.com> that last comment appears to work
20:20 <@jim:acmegating.com> `podman-compose --podman-build-args="--cgroup-manager cgroupfs" ...`
20:20 <@jim:acmegating.com> but we might want to consider https://github.com/containers/podman-compose/issues/209#issuecomment-655276317
20:21 <@jim:acmegating.com> and maybe doing that in the ensure-podman role?
20:21 <@clarkb:matrix.org> I think the config option would be good if we think other people won't have this issue. Theoretically they won't if logged in via a desktop
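The config-file route being considered, rather than the `--podman-build-args` flag, would be a containers.conf setting; a hedged sketch (the `[engine]` table and `cgroup_manager` key are documented in containers.conf(5), but verify against the podman version in use):

```toml
# ~/.config/containers/containers.conf (or /etc/containers/containers.conf)
[engine]
# Force the cgroupfs manager instead of systemd, for environments such
# as a non-login ssh session where no systemd user session/dbus exists.
cgroup_manager = "cgroupfs"
```

Baking this into the role would avoid needing to pass the flag on every podman-compose invocation.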
20:32 -@gerrit:opendev.org- Clark Boylan proposed on behalf of Tristan Cacqueray: [zuul/zuul] 687135: Replace docker by podman for quick-start https://review.opendev.org/c/zuul/zuul/+/687135
20:32 <@clarkb:matrix.org> hopefully I did that right. We should know soon enough
20:35 -@gerrit:opendev.org- James E. Blair https://matrix.to/#/@jim:acmegating.com proposed: [zuul/zuul] 883319: Refresh builds even less https://review.opendev.org/c/zuul/zuul/+/883319
20:36 <@jim:acmegating.com> Clark: looks good technically (but it should actually be in a different playbook; i can move it once it works) -- but should we put that in the ensure-podman role too?
20:36 <@jim:acmegating.com> er, s/too/instead
20:36 <@clarkb:matrix.org> I thought about that, but I think this is only necessary if you aren't in a systemd session with dbus info
20:37 <@clarkb:matrix.org> and so it is a question of whether or not we expect that problem to be universal enough, on ubuntu at least, to just hard code it
20:37 <@jim:acmegating.com> Clark: given that zuul-jobs roles are basically for running on systems like this, i think it would be pretty universal?
20:38 <@clarkb:matrix.org> that's a good point, and I did confirm that we appear to have the pam stuff that would make this work.
20:38 <@clarkb:matrix.org> I also checked that libpam-systemd is installed, and it is
20:39 <@clarkb:matrix.org> if this works I can remove it from the zuul change and add it to the jammy tasks for ensure-podman
20:39 <@jim:acmegating.com> the libpam-systemd stuff is required for the cgroupfs manager?
20:40 <@clarkb:matrix.org> I think libpam-systemd is what sshd would trigger via pam to create systemd + dbus session stuff in your login environment
20:40 <@clarkb:matrix.org> and that isn't happening, which makes me think we aren't missing anything with our sshd and pam setup and instead it's something they just aren't doing
20:40 <@clarkb:matrix.org> wouldn't surprise me if on rhel you get this but not on ubuntu
20:40 <@jim:acmegating.com> okay, i think i grok now what you were saying, thx :)
20:40 <@jim:acmegating.com> > <@clarkb:matrix.org> if this works I can remove it from the zuul change and add it to the jammy tasks for ensure-podman
sounds like a plan
20:54 -@gerrit:opendev.org- Clark Boylan proposed on behalf of Tristan Cacqueray: [zuul/zuul] 687135: Replace docker by podman for quick-start https://review.opendev.org/c/zuul/zuul/+/687135
21:22 <@jim:acmegating.com> Clark: i'm seeing errors about dependencies... i wonder if we should just remove them from the compose file since they don't do anything useful anyway
21:23 <@clarkb:matrix.org> corvus: hrm, I think on the opendev gitea side of things we use them (dependencies) too and they work.
21:23 <@clarkb:matrix.org> but ya, if they aren't helpful then cleaning them up seems fine
21:24 <@jim:acmegating.com> i mean... in docker they "work" inasmuch as they'll cause things to start up in a certain order, but they don't actually cause docker to wait until the dependency is up and running
21:24 <@jim:acmegating.com> so as long as things are fast enough, they can make startup more smooth, but if anything is slow, then they might as well not be there
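jim's point about compose dependencies can be illustrated with a minimal (invented) fragment: `depends_on` in its plain list form only orders container *start*, it does not wait for the dependency to be ready to serve.

```yaml
# docker-compose.yaml sketch -- service names are illustrative
services:
  db:
    image: docker.io/library/mariadb
  scheduler:
    image: example/scheduler
    # "scheduler" starts after "db" is *started*, not after the
    # database is accepting connections; the application still has
    # to retry its connection on its own.
    depends_on:
      - db
```

Hence the observation above: if startup is fast the ordering smooths things over, but a slow dependency makes `depends_on` effectively a no-op.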
21:25 <@clarkb:matrix.org> https://zuul.opendev.org/t/zuul/build/bb0abe9346ed45e898a4752756722dd6/log/job-output.txt#1287 is this what you think is a dependency error?
21:25 <@jim:acmegating.com> if i'm reading the logs right, in podman it looks like we might be waiting on builds
21:26 <@jim:acmegating.com> Clark: no; `2023-05-18 21:16:49.074241 | ubuntu-jammy | Error: error generating dependency graph for container 5e613a47ab513f153865b6d6fd87adcac6952c9c1932cc2b174d9d1f53d88b3f: container 5bb4da0c942019132b5be057f2e86d0ca074bc1b8cf2be2e5dbf52091e0629ba depends on container 6e20e3f34ef8bfcfea4b9546004c11292fa7ae812dc6c1699990a1aae07a5483 not found in input list: no such container` from the streaming log
21:27 <@jim:acmegating.com> i think that says "executor depends on scheduler"; honestly i'm not sure what the deal with the scheduler is
21:27 -@gerrit:opendev.org- Clark Boylan proposed: [zuul/zuul-jobs] 883593: Force cgroupfs cgroup manager with podman on ubuntu https://review.opendev.org/c/zuul/zuul-jobs/+/883593
21:31 <@clarkb:matrix.org> ok ya, it's trying to run the executor and failing to find its dependencies, maybe due to transitivity too?
21:32 <@clarkb:matrix.org> but that's the first non zero exit code, so theoretically every container is running?
21:33 <@clarkb:matrix.org> I wonder if it is gerritconfig in particular, since it is fire and forget so may not be running anymore at that point
21:33 -@gerrit:opendev.org- Clark Boylan proposed on behalf of Tristan Cacqueray: [zuul/zuul] 687135: Replace docker by podman for quick-start https://review.opendev.org/c/zuul/zuul/+/687135
21:33 <@clarkb:matrix.org> lets try removing just gerritconfig to start
21:50 -@gerrit:opendev.org- Michael Kelly proposed:
- [zuul/zuul-operator] 853695: Prefix zuul-specific resources with instance name https://review.opendev.org/c/zuul/zuul-operator/+/853695
- [zuul/zuul-operator] 853696: Prefix nodepool specific resources with instance name https://review.opendev.org/c/zuul/zuul-operator/+/853696
- [zuul/zuul-operator] 867938: Prefix managed resources with instance name https://review.opendev.org/c/zuul/zuul-operator/+/867938
- [zuul/zuul-operator] 861488: helm: Add a basic helm chart for zuul-operator https://review.opendev.org/c/zuul/zuul-operator/+/861488
- [zuul/zuul-operator] 862390: helm: Add cert-manager as optional dependency https://review.opendev.org/c/zuul/zuul-operator/+/862390
- [zuul/zuul-operator] 863191: helm: Add pxc-operator as optional dependency https://review.opendev.org/c/zuul/zuul-operator/+/863191
- [zuul/zuul-operator] 865945: test: Restructure zuul-operator-functional-k8s layout https://review.opendev.org/c/zuul/zuul-operator/+/865945
- [zuul/zuul-operator] 863579: test: Introduce Helm-based functional test https://review.opendev.org/c/zuul/zuul-operator/+/863579
- [zuul/zuul-operator] 863474: k8s: Provide an option to disable PXC operator installer https://review.opendev.org/c/zuul/zuul-operator/+/863474
- [zuul/zuul-operator] 866231: k8s: Clean up cert-manager installer https://review.opendev.org/c/zuul/zuul-operator/+/866231
- [zuul/zuul-operator] 863475: k8s: Provide an option to disable cert-manager installation https://review.opendev.org/c/zuul/zuul-operator/+/863475
- [zuul/zuul-operator] 863586: helm: Remove unnecessary CRD access from clusterrole https://review.opendev.org/c/zuul/zuul-operator/+/863586
- [zuul/zuul-operator] 863476: k8s: Enable administrator to limit the watched namespace scope https://review.opendev.org/c/zuul/zuul-operator/+/863476
- [zuul/zuul-operator] 863477: k8s: Allow use of a default image version besides latest https://review.opendev.org/c/zuul/zuul-operator/+/863477
- [zuul/zuul-operator] 863571: web: Enable custom metadata for Service resources https://review.opendev.org/c/zuul/zuul-operator/+/863571
- [zuul/zuul-operator] 861279: bug: Select scheduler pod based on instance name on update https://review.opendev.org/c/zuul/zuul-operator/+/861279
- [zuul/zuul-operator] 863572: bug: Properly parameterize zookeeper-client-tls everywhere https://review.opendev.org/c/zuul/zuul-operator/+/863572
- [zuul/zuul-operator] 866295: k8s: Remove unused ClusterRole from rbac-admin https://review.opendev.org/c/zuul/zuul-operator/+/866295
- [zuul/zuul-operator] 866296: helm: Support clusteradmin role binding https://review.opendev.org/c/zuul/zuul-operator/+/866296
- [zuul/zuul-operator] 866297: k8s: Added deploy cluster admin template https://review.opendev.org/c/zuul/zuul-operator/+/866297
- [zuul/zuul-operator] 866406: k8s: Inject rbac.yaml into operator.yaml https://review.opendev.org/c/zuul/zuul-operator/+/866406
- [zuul/zuul-operator] 866407: k8s: Provide tools and checker for deploy templates https://review.opendev.org/c/zuul/zuul-operator/+/866407
- [zuul/zuul-operator] 863439: doc: Rework install doc to cover both template and helm install https://review.opendev.org/c/zuul/zuul-operator/+/863439
- [zuul/zuul-operator] 883595: deploy: Surface disk_limit_per_job in operator CRD https://review.opendev.org/c/zuul/zuul-operator/+/883595
- [zuul/zuul-operator] 883596: Broaden scope of potential config updates https://review.opendev.org/c/zuul/zuul-operator/+/883596
22:10 -@gerrit:opendev.org- Clark Boylan proposed on behalf of Tristan Cacqueray: [zuul/zuul] 687135: Replace docker by podman for quick-start https://review.opendev.org/c/zuul/zuul/+/687135
22:10 <@clarkb:matrix.org> corvus: ^ I think that is really close now. It is failing to get builds listed back after some down and up stuff. I think maybe zookeeper may be losing its data?
22:10 <@clarkb:matrix.org> I don't see a bind mount for zk's data storage. But it wasn't there under docker either... I'm a bit confused about that
22:10 <@jim:acmegating.com> Clark: just goes into a volume
22:11 <@clarkb:matrix.org> were we relying on docker volumes maybe and podman doesn't have that?
22:11 <@clarkb:matrix.org> ya, I wonder if podman isn't persisting volumes for us
22:11 <@clarkb:matrix.org> https://zuul.opendev.org/t/zuul/build/a4802526d4564247a0e3d8dfe98ae086/logs was the most recent build. The patchset I just pushed cleans up the container.conf config in zuul
22:12 <@clarkb:matrix.org> looks like podman does do volume management, but maybe it isn't doing it based on the image data and we need to list them?
22:13 <@jim:acmegating.com> fascinating
22:15 <@clarkb:matrix.org> this is my hunch at least, due to the failure and the scheduler generating new keys on what I think is the last time it started: https://zuul.opendev.org/t/zuul/build/a4802526d4564247a0e3d8dfe98ae086/log/container_logs/scheduler.log
22:16 <@jim:acmegating.com> Clark: verified locally that the volume doesn't survive an "up/down/up" cycle
22:17 <@jim:acmegating.com> looks like "down" removes the volume
22:17 <@clarkb:matrix.org> oof
22:19 <@clarkb:matrix.org> https://github.com/containers/podman-compose/issues/105 that implies it didn't do this at one time
22:20 <@jim:acmegating.com> volumes are still there, but it makes new ones
22:21 <@clarkb:matrix.org> https://github.com/containers/podman-compose/blob/devel/podman_compose.py#L2322-L2332 oh ok, that would explain it I guess
22:21 <@clarkb:matrix.org> I was about to say it doesn't delete them unless you pass the --volumes flag
22:21 <@clarkb:matrix.org> this is fun behavior
22:22 <@jim:acmegating.com> https://github.com/containers/podman-compose/issues/378#issuecomment-1019310463 is interesting info (not directly our problem)
22:22 <@clarkb:matrix.org> If we manually define them, I wonder if that makes podman-compose smart and it stops creating new ones
22:22 <@clarkb:matrix.org> but when they are dynamic based on the image it just naively recreates them every time
22:22 <@jim:acmegating.com> yeah, this is i think the first troubling behavior... guaranteed to result in data loss
22:23 <@jim:acmegating.com> (unless we find a way to get docker-compose behavior)
22:23 <@jim:acmegating.com> (because if we rely on "manually inspect the container image to see if any anonymous volumes have changed and update the compose file accordingly" we are hosed)
22:25 <@clarkb:matrix.org> agreed, this seems problematic
22:27 <@jim:acmegating.com> Clark: er, are we sure this isn't docker-compose behavior?
22:27 <@clarkb:matrix.org> corvus: well if it were the tests would fail I think?
22:27 <@jim:acmegating.com> i think i just repeated that test with d-c and got the same behavior
22:27 <@clarkb:matrix.org> how is the test passing with docker then?
22:29 <@clarkb:matrix.org> I may be misreading the job, but I think it does multiple up and downs and then later validates it can still fetch data that should be present, and that build data should be stored in mysql and/or zk and persisted via volumes?
22:29 <@clarkb:matrix.org> both mysql and zookeeper appear to lack explicit volumes for data dirs
22:30 <@jim:acmegating.com> oh, we use "stop" not "down"
22:30 <@jim:acmegating.com> the task names are lying
22:30 <@jim:acmegating.com> let me repeat local tests with those commands
22:32 <@jim:acmegating.com> Clark: with up/stop/up they both keep the volume, and i can see data persisting
22:33 <@clarkb:matrix.org> I don't understand the failure then. Maybe we need to hold a newer node and check it directly. Maybe I'm missing something with what the test job is doing
22:33 <@jim:acmegating.com> log says it's "recreating" the containers
22:34 <@jim:acmegating.com> so it must have decided they have a different configuration?
22:34 <@clarkb:matrix.org> oh, so we lose the volumes for a different reason maybe?
22:35 <@jim:acmegating.com> looks like that part of the process is changing an env variable, and i guess that doesn't cause docker-compose to recreate containers, but it does podman-compose
22:36 <@jim:acmegating.com> and actually that env var is used for a mount, so what we're really doing is changing a volume mount
22:40 <@jim:acmegating.com> Clark: yes, that appears to be a behavior difference. i can change a volume mount with docker-compose and it will keep the old anonymous volumes, but with podman-compose i get a new set
22:41 <@clarkb:matrix.org> corvus: what bit is changing?
22:41 <@clarkb:matrix.org> is it `ZUUL_TUTORIAL_CONFIG`?
22:41 <@jim:acmegating.com> yeah
22:42 <@jim:acmegating.com> https://opendev.org/zuul/zuul/src/branch/master/doc/source/examples/docker-compose.yaml#L71
22:42 <@clarkb:matrix.org> so bind mounts and maybe specific volumes would address this
22:44 <@jim:acmegating.com> i don't want to use bind mounts for data, that's an anti-goal for the tutorial, but if specific mounts for container data would solve it, that might be okay, but it's pretty sub-optimal (since again we'd be in a position of needing to track any changes to dependent containers) but at least it's not a data loss scenario
22:46 <@clarkb:matrix.org> or can we accomplish what the test is doing by not changing mounts?
22:47 <@clarkb:matrix.org> perhaps by stopping and then modifying the data under the mount instead?
22:47 <@jim:acmegating.com> Clark: local testing says defined volumes should work
22:47 <@jim:acmegating.com> Clark: not what i want to be doing in the repo
22:47 <@clarkb:matrix.org> and probably worth filing a bug too, since it is potentially lossy
22:48 <@jim:acmegating.com> (i really like how easy it is for users and for testing right now -- we can run the same system with any number of configs just by tweaking an env var)
22:49 <@jim:acmegating.com> i wonder what docker-compose does to change the mount without recreating
22:53 <@jim:acmegating.com> apparently it *does* recreate the container, it apparently just also reattaches the existing volumes
23:00 <@jim:acmegating.com> Clark: fyi, changing the image name will do this too
23:02 <@clarkb:matrix.org> corvus: that I half expected. I wonder if doing foo/bar -> docker.io/foo/bar does too
23:03 <@clarkb:matrix.org> that would be docker-compose specific, since podman doesn't support unqualified names
23:03 <@jim:acmegating.com> Clark: well, it seems like the best thing to do might be to add explicit volumes :/
23:04 <@jim:acmegating.com> (because, ideally, we want this to work with both things, and explicit volumes aren't bad, per se)
23:04 <@clarkb:matrix.org> ya, in theory we'll catch this if it breaks due to a change in implicit volumes not matching the explicit ones
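The "explicit volumes" fix being settled on would look roughly like this in a compose file (service, image, and volume names are invented for illustration): declaring the volumes at the top level makes them named rather than anonymous, so compose reattaches the same volumes even when it recreates the containers.

```yaml
services:
  zk:
    image: docker.io/library/zookeeper
    volumes:
      # Named volume in place of the image's anonymous volume, so the
      # data survives container recreation under podman-compose too.
      - zk-data:/data
  mysql:
    image: docker.io/library/mariadb
    volumes:
      - mysql-data:/var/lib/mysql

volumes:
  zk-data:
  mysql-data:
```

The cost noted above remains: any new anonymous volume added to a dependent image has to be tracked and mirrored here by hand, but a mismatch orphans data rather than destroying it.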
23:05 <@jim:acmegating.com> and we may want to make opendev admins aware of the behavior difference. at least we won't lose data in case of an error -- since the old volumes just get orphaned. they can be recovered.
23:05 <@clarkb:matrix.org> ++
23:05 <@jim:acmegating.com> i need to task switch away from this for a bit
23:07 <@clarkb:matrix.org> ya, I actually need to pop out for dinner momentarily

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!