Tuesday, 2019-06-04

*** sanjayu_ has quit IRC00:30
*** jamesmcarthur has quit IRC01:22
*** jamesmcarthur has joined #zuul01:31
*** jamesmcarthur has quit IRC01:43
*** jamesmcarthur has joined #zuul01:44
*** jamesmcarthur has quit IRC02:00
*** jamesmcarthur has joined #zuul02:00
*** rlandy has quit IRC02:12
pabelangerclarkb: yah, using pypi pre-releases today for wheels, since that is the easiest way to get a released wheel of nodepool / zuul, which avoids the need to ask for an official release tag every week.  The idea was to also provide additional feedback for users before tagging a release02:38
pabelangerpypi is nice, as it just works and we don't have to deal with mirror stuff02:38
*** jamesmcarthur has quit IRC02:39
*** jamesmcarthur has joined #zuul02:40
*** jamesmcarthur has quit IRC02:43
*** jamesmcarthur has joined #zuul02:43
pabelangerclarkb: as an example, https://github.com/ansible-network/windmill-config/blob/master/ansible/group_vars/nodepool.yaml#L36 was all that needed to change to pin to pre-release version of nodepool, and once we tag 3.6.1 / 3.7.0, we just bump that and pull again from pypi02:44
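A minimal sketch of the kind of pin being described here, assuming the deployment installs nodepool from PyPI through a pip version specifier held in group_vars; the variable name and version string below are hypothetical illustrations rather than the actual contents of the linked windmill-config file:

```yaml
# Hypothetical group_vars entry; the real setting lives in the linked
# windmill-config nodepool.yaml. Pinning to an exact pre-release pulls the
# dev wheel that was published to PyPI from master.
nodepool_pip_version: "nodepool==3.6.1.dev42"

# Once 3.6.1 / 3.7.0 is tagged, only this value changes and the host pulls
# the released wheel from PyPI again:
# nodepool_pip_version: "nodepool==3.7.0"
```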
openstackgerritTristan Cacqueray proposed zuul/zuul master: docs: add cleanup-run documentation  https://review.opendev.org/66214702:55
*** jamesmcarthur has quit IRC03:45
*** jamesmcarthur has joined #zuul03:46
*** jamesmcarthur has quit IRC03:49
*** jamesmcarthur_ has joined #zuul03:49
clarkbpabelanger: ya but now we create 20MB of data for pypi to store forever on every commit03:50
clarkbwhich has made pypi impossible to mirror due to other people doing similar03:51
openstackgerritTobias Henkel proposed zuul/zuul master: Parallelize github event processing  https://review.opendev.org/66281804:03
SpamapSclarkb: and worse, those kids won't get off pypi's lawn.05:01
openstackgerritMerged zuul/zuul master: git: only list heads and tags references  https://review.opendev.org/66118505:11
*** pcaruana has joined #zuul05:12
*** pcaruana has quit IRC05:23
*** pcaruana has joined #zuul05:28
*** raukadah is now known as chandankumar05:29
*** jamesmcarthur_ has quit IRC05:29
*** flepied has quit IRC05:37
*** badboy has joined #zuul05:38
badboyhi guys05:38
badboyI think there's a bug in nodepool/zuul05:38
badboyfor test purposes I've manually created 5 VMs with the same SSHD keys05:39
badboyfor some reason Zuul starts a job on two VMs but only one is mentioned in the logs05:39
badboyafter changing SSHD keys on the VMs this behaviour stopped05:40
*** sanjayu_ has joined #zuul05:44
*** ianychoi has quit IRC06:57
*** ianychoi has joined #zuul06:58
*** gtema has joined #zuul06:58
*** flepied has joined #zuul07:02
*** sanjayu__ has joined #zuul07:10
*** sanjayu_ has quit IRC07:13
*** themroc has joined #zuul07:18
openstackgerritTristan Cacqueray proposed zuul/nodepool master: nodepool: improve static driver request handling for multi labels  https://review.opendev.org/66295407:25
*** zbr has joined #zuul07:31
openstackgerritTobias Henkel proposed zuul/zuul master: Evaluate CODEOWNERS settings during canMerge check  https://review.opendev.org/64455707:33
*** threestrands has joined #zuul07:44
*** jpena|off is now known as jpena07:51
*** hashar has joined #zuul08:07
openstackgerritTobias Henkel proposed zuul/zuul master: Evaluate CODEOWNERS settings during canMerge check  https://review.opendev.org/64455708:08
*** flepied has quit IRC08:31
*** spsurya has joined #zuul08:51
*** lennyb has quit IRC08:53
*** threestrands has quit IRC09:14
*** themroc has quit IRC09:15
*** themroc has joined #zuul09:17
*** lennyb has joined #zuul09:21
openstackgerritMerged zuul/zuul master: test_v3: replace while loop with iterate_timeout  https://review.opendev.org/66211209:28
*** panda is now known as panda|ruck09:38
*** mugsie_ is now known as mugsie09:52
*** flepied has joined #zuul10:09
*** gtema has quit IRC10:43
*** gtema_ has joined #zuul10:43
*** gtema_ is now known as gtema10:43
*** jpena is now known as jpena|lunch11:34
*** yolanda has quit IRC11:44
*** panda|ruck is now known as panda|ruck|eat11:45
*** panda|ruck|eat is now known as panda|ruck11:51
openstackgerritTobias Henkel proposed zuul/zuul master: Mount tmpfs on ansible tmp dir  https://review.opendev.org/66301512:10
*** mugsie is now known as mugsie_12:18
*** mugsie_ is now known as mugsie12:18
*** badboy has quit IRC12:19
*** gtema has quit IRC12:19
*** rlandy has joined #zuul12:30
*** jpena|lunch is now known as jpena12:32
*** pwhalen has joined #zuul12:33
*** panda|ruck is now known as panda|ruck|eat12:37
*** spsurya has quit IRC12:40
*** gtema has joined #zuul12:48
*** rlandy has quit IRC12:49
*** panda|ruck|eat is now known as panda|ruck12:51
*** jamesmcarthur has joined #zuul13:00
*** rlandy has joined #zuul13:17
*** jamesmcarthur has quit IRC13:20
*** jamesmcarthur has joined #zuul13:20
*** jamesmcarthur has quit IRC13:25
pabelangerclarkb: yah, the main reason i was hoping to use pypi is to keep the same deployment location for both released and pre-release wheels. But I agree it comes with the cost of more storage on pypi.13:28
*** rlandy_ has joined #zuul13:29
*** rlandy has quit IRC13:30
pabelangerSpamapS: A while back you were talking about an autohold system, maybe just pausing a playbook, when a user added their ssh public key to the git content, am I remembering that right?13:32
clarkbpabelanger: ya I mean if pypi is ok with not being mirrorable and having gigabytes of input data every day due to other projects doing similar, then I don't think we should stop13:33
clarkbI don't know if they have an opinion though13:33
*** rlandy_ is now known as rlandy13:34
clarkbfwiw it would be nice if the mirror tooling could be more flexible and say only mirror proper releases13:34
clarkbbut it uses a journal so that likely gets messy13:34
pabelangeryah13:35
clarkb(you would have to intentionally skip things your local journal says you've got)13:35
*** hashar has quit IRC13:42
openstackgerritMatthieu Huin proposed zuul/zuul master: [DNM] admin REST API: docker-compose PoC, frontend  https://review.opendev.org/64353614:17
corvusmordred, clarkb: based on http://logs.openstack.org/79/656879/1/check/zuul-jobs-test-registry/0debd7f/ara-report/result/d99f3b3a-fdcb-40fd-9040-4a4643e96d70/  i'm starting to think that buildkit has a completely different backend, even to the point that it has a different transport system when pulling layers.  the system is configured to use "http://mirror.regionone.limestone.openstack.org:8082" as a14:19
corvusdocker_mirror, but buildkit forces that to https (and fails).  [note, that's not even gotten to the weird buildset mirror stuff yet, that's just a normal image build on a server configured to use our region mirror]14:19
corvusi suppose we can try this again once our mirrors are all ssl'd (which should be soon)14:20
mordredcorvus: ++14:20
corvusor, just to experiment, since that's a patch to zuul-jobs, it could also just pull out the docker mirror stuff temporarily14:21
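For context, the mirror setting in question is just a per-region registry URL handed to the job; a hypothetical sketch of that wiring is below. The variable name comes from the discussion above, but the job stanza and plumbing are illustrative, not the actual zuul-jobs configuration:

```yaml
# Hypothetical wiring; the real zuul-jobs/opendev plumbing may differ.
# Classic dockerd accepts this plain-http mirror as a registry mirror in
# /etc/docker/daemon.json, while buildkit appears to insist on https.
- job:
    name: zuul-jobs-test-registry
    vars:
      docker_mirror: "http://mirror.regionone.limestone.openstack.org:8082"
```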
*** zbr has quit IRC14:21
*** zbr has joined #zuul14:22
clarkbre mirrors I think the next step is to replace all the regions with new mirror builds, then update the new mirror builds with ssl'd docker proxies14:30
clarkbwhich won't be immediate but can probably happen fairly quickly14:31
openstackgerritJames E. Blair proposed zuul/zuul-jobs master: Return python artifact records to Zuul  https://review.opendev.org/66305314:39
*** sanjayu__ has quit IRC14:42
openstackgerritJames E. Blair proposed zuul/zuul-jobs master: Return javascript content artifact records to Zuul  https://review.opendev.org/66305614:51
*** chandankumar is now known as raukadah15:00
*** hashar has joined #zuul15:07
*** jamesmcarthur has joined #zuul15:11
*** themroc has quit IRC15:20
fungi"ERROR: Could not find a version that satisfies the requirement python-subunit (from ara<1.0.0) (from versions: none)" http://logs.openstack.org/70/662870/2/check/zuul-build-image/a0737a7/job-output.txt.gz#_2019-06-03_23_46_16_08924615:25
fungiis this a known issue?15:25
dmsimardfungi: no, let me check that out15:27
dmsimardfungi: only one hit in 48h according to logstash, could it be a fluke ?15:30
clarkbhttps://pypi.org/project/python-subunit/ it is a valid package name15:31
dmsimardthere have been other successful runs of zuul-build-image after that failure: http://zuul.openstack.org/builds?job_name=zuul-build-image -- I would be interested in doing a recheck15:33
*** spsurya has joined #zuul15:39
fungiyep, i'll try, just didn't want to do so if there was a known problem15:52
fungithanks!15:52
openstackgerritJames E. Blair proposed zuul/zuul-jobs master: Return python artifact records to Zuul  https://review.opendev.org/66305316:04
openstackgerritJames E. Blair proposed zuul/zuul-jobs master: Return javascript content artifact records to Zuul  https://review.opendev.org/66305616:04
openstackgerritJames E. Blair proposed zuul/zuul master: DNM: test zuul-jobs artifact changes Depends-On: https://review.opendev.org/663056  https://review.opendev.org/66308116:06
openstackgerritJames E. Blair proposed zuul/zuul master: DNM: test zuul-jobs artifact changes  https://review.opendev.org/66308116:06
*** gtema has quit IRC16:10
Shrewscorvus: thinking about your suggestion of keeping autohold removal at expiration... what mechanism do you think should do the deletion? e.g., should we delete expired requests on each zuul CLI autohold command? Is there a thread within the scheduler we could use to do this, or should another one be created for the cleanup?16:12
corvusShrews: we iterate over all the autohold requests every time a build completes; that might be a good time to do it16:17
corvusShrews: and that just made me think -- we should make sure this uses zk caching16:17
*** jangutter has quit IRC16:27
corvustobiash: does this mean anything to you? http://logs.openstack.org/81/663081/2/check/build-javascript-content/11e6a77/ara-report/result/c24c2599-0ff7-4c69-ad97-6f4991710577/16:28
corvustobiash: it does look like that job has been failing over the past few days: http://zuul.openstack.org/builds?job_name=publish-openstack-javascript-content16:28
clarkbcorvus: tristanC suggested we needed to rebase on top of the react-scripts update change16:28
clarkbcorvus: and/or merge that particular change. maybe we should recheck it to see if this issue goes away with that in?16:29
clarkb(then prepare to monitor zuul.openstack.org ?)16:29
corvusclarkb: what change?16:29
clarkbcorvus: https://review.opendev.org/#/c/659991/16:30
openstackgerritJean-Philippe Evrard proposed zuul/zuul-jobs master: Explicitly store facts for promote  https://review.opendev.org/66281716:30
clarkbI think we are all afeared of merging it as is because it broke zuul.openstack.org last time we did16:31
openstackgerritJames E. Blair proposed zuul/zuul master: DNM: test zuul-jobs artifact changes  https://review.opendev.org/66308116:31
clarkbbut tristanC did more testing and wasn't able to reproduce so maybe we just pick a time to merge it when tristanC is around to help debug16:31
corvusokay, i added a depends-on to that16:31
corvusthat should tell us if it will fix the javascript-content job (though it also still depends on my changes to that role, so there could be unrelated errors, but they would happen after the build)16:32
openstackgerritJean-Philippe Evrard proposed zuul/zuul-jobs master: Explicitly store date facts for promote  https://review.opendev.org/66281716:32
*** mattw4 has joined #zuul16:33
corvusalso, it will be nice that at the end of this process we'll be running the javascript-content job in check/gate16:33
*** mattw4 has quit IRC16:37
SpamapSpabelanger: yes, I have a role that will do that on a change that has an SSH key added in a certain path. Still early experimental phase.16:44
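A rough sketch of that idea, assuming the change commits an authorized key at a well-known path; the path, task names, and hold duration are hypothetical and not taken from SpamapS's actual role:

```yaml
# Hypothetical role tasks: pause the job when the change ships an SSH key.
- name: Look for a developer SSH key committed in the change
  stat:
    path: "{{ ansible_user_dir }}/{{ zuul.project.src_dir }}/.zuul-hold/authorized_keys"
  register: hold_key

- name: Hold the node for interactive debugging
  when: hold_key.stat.exists
  block:
    - name: Read the committed key from the node
      slurp:
        src: "{{ hold_key.stat.path }}"
      register: hold_key_content

    - name: Authorize the key for the zuul user
      authorized_key:
        user: zuul
        key: "{{ hold_key_content.content | b64decode }}"

    - name: Pause the playbook so the developer can ssh in
      pause:
        minutes: 60
```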
corvusclarkb: i still see the same error:16:49
corvus2019-06-04 16:46:03.884313 | ubuntu-bionic | EEXIST: file already exists, mkdir '/home/zuul/src/opendev.org/zuul/zuul/web/build'16:49
*** hashar has quit IRC16:53
openstackgerritJames E. Blair proposed zuul/zuul master: Revert "Create zuul/web/static on demand"  https://review.opendev.org/66309916:57
openstackgerritJames E. Blair proposed zuul/zuul master: DNM: test zuul-jobs artifact changes  https://review.opendev.org/66308116:57
corvusokay, moving on16:58
*** mattw4 has joined #zuul16:58
corvuspabelanger, mordred: it looks like there is some incomplete work related to the fetch-output system.  here is a job in zuul-jobs which fetches python sdists from the remote host and, presumably due to an error, renames them as the *file* called "artifacts".  i assume the intent was to put them in an artifact directory, but that directory doesn't exist, so it rsyncs them into a file:17:01
corvushttp://logs.openstack.org/81/663081/3/check/build-python-release/f57892a/ara-report/17:01
corvuswhat's the ultimate design for a job like that?17:01
corvusshould the tarball-post playbook no longer rsync to the executor and instead just move the file on the worker node?17:02
corvusis it safe to assume that job always fails and no one runs it, so we can go ahead and just make that change?17:02
corvusor should the job continue to do the rsync itself, but just create the artifacts dir beforehand, so it doesn't rely on fetch-output?17:03
corvusi guess the real question is -- do we expect jobs like that in zuul jobs to conform to the fetch-output interface? are there any?  how do we indicate that?17:03
mordredcorvus: those are all great questions (and a sad bug) ... I believe the final expectation would be that tarball-post no longer rsyncs to the executor and instead just moves the file on the worker node17:04
mordredalthough I believe we were trying to get there stepwise in a fashion that if the tarball-post job continued to rsync things it would not be a problem because the final rsync targets would be the same17:05
corvusokay, right now, no job in zuul-jobs uses the fetch-output role.  however, we do run it in the opendev base jobs.17:06
mordredbut the final result should be that jobs, in general, shouldn't be rsyncing back to the executor but instead focus on putting things into the directories on the worker - because that's the most flexible - and an operator can then either have a base job that rsyncs to the executor then copies things to where they go, or an operator could choose to just copy directly from the worker nodes, but in both cases the17:06
mordredzuul-jobs job content could be used17:06
corvusokay, so for that to happen, then i think we need to make a big effort to move the whole zuul community in that direction17:07
mordredyeah17:07
corvusbecause that's basically "at some point we're going to stop rsyncing in zuul jobs/roles and leave it up to you to have fetch-output in your base jobs"17:08
mordredyah. in talking about it here - I'm starting to think we should take the original emails on the topic and put them into a spec - so that it can be clearly documented17:08
*** jpena is now known as jpena|off17:09
corvusso we'll need to make roles compatible with both having and not having fetch-output, then have a flag day where we say everyone using zuul-jobs either needs to have fetch-output in their base job (or have a log system which doesn't require it, though none exist atm), then we start to remove the compat rsyncs17:09
mordred++17:09
corvusmordred: yeah, sounds good.  i bet it's not that hard really, we just need to lay out the steps, set dates, and send lots of emails.17:09
mordredyup17:09
corvusmordred: i think that means for the moment what i need to do is change the build-python-release job to mkdir artifacts on the executor, and then rm artifacts/* on the worker after rsyncing17:10
corvusthat should be backwards and forwards compatible17:10
corvus(ie, we're not going to rely on our fetch-output role for the moment, and then by doing the rm, we're not going to copy twice)17:11
mordredyou should be able to skip the rm - rsync should noop if the files are already synced, no?17:11
corvusgood point17:11
mordredbut - I agree in principle with the idea - make sure it doesn't yet rely on fetch-output but is also compatible with it17:11
corvusk, thx, i think i can proceed :)17:12
mordred\o/17:12
* mordred is going to find sandwich before infra meeting17:12
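A minimal sketch of the compatibility step corvus settles on above, as a post-run playbook for build-python-release; task names and exact paths are assumptions rather than the real zuul-jobs content:

```yaml
- hosts: all
  tasks:
    # Create the destination on the executor first, so rsync copies files
    # into a directory instead of creating a *file* named "artifacts".
    - name: Ensure the artifacts directory exists on the executor
      delegate_to: localhost
      file:
        path: "{{ zuul.executor.work_root }}/artifacts"
        state: directory

    # Keep pulling for now; if a base job's fetch-output role later syncs
    # the same files, rsync simply no-ops on already-copied content.
    - name: Copy the built sdist and wheel back to the executor
      synchronize:
        mode: pull
        src: "{{ ansible_user_dir }}/{{ zuul.project.src_dir }}/dist/"
        dest: "{{ zuul.executor.work_root }}/artifacts/"
```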
corvusit looks like the revert did fix the javascript issue, so that is buggy17:12
corvustobiash, clarkb, mordred, tristanC: https://review.opendev.org/66309917:13
corvusi'm also now mostly afk until infra mtg17:13
tobiashcorvus: yepp, looks like that was incomplete17:23
clarkbcorvus: oh interesting17:26
*** flepied has quit IRC17:30
*** jamesmcarthur has quit IRC17:46
openstackgerritTobias Henkel proposed zuul/zuul master: Revert "Revert "Create zuul/web/static on demand""  https://review.opendev.org/66310817:54
tobiashcorvus: I think that should work in python as well as for the js content generation job ^17:54
Shrewstobiash: looking at caching for zuul autoholds, i think we are double caching data in nodepool17:55
Shrewstobiash: we create a TreeCache, and then use events on that tree to maintain a second cache17:55
tobiashShrews: that are two distinct types of caches, the treecache caches the raw data and the second cache is the resulting object structure17:56
tobiashwe need that to not re-parse all the json over and over again17:56
*** themroc has joined #zuul17:58
Shrewstobiash: oh, i see now17:59
Shrewsso, intentional double caching17:59
tobiashyepp :)17:59
openstackgerritJames E. Blair proposed zuul/zuul-jobs master: Return javascript content artifact records to Zuul  https://review.opendev.org/66305617:59
Shrewstobiash: you probably explained that the first time around and i just forgot  :)18:00
*** themroc has quit IRC18:00
openstackgerritJames E. Blair proposed zuul/zuul-jobs master: Return javascript content artifact records to Zuul  https://review.opendev.org/66305618:00
openstackgerritJames E. Blair proposed zuul/zuul-jobs master: Return python artifact records to Zuul  https://review.opendev.org/66305318:00
openstackgerritJames E. Blair proposed zuul/zuul master: DNM: test zuul-jobs artifact changes  https://review.opendev.org/66308118:01
tobiashShrews: I don't know anymore, it's already a long time ago :)18:01
corvustobiash: thanks, i added a depends-on to your revert revert to test ^18:01
tobiashcorvus: thanks :)18:01
tobiashcorvus: do you remember the discussion about live log routing we had a couple of weeks ago? This might become a topic for me soon. I guess this should have a spec?18:04
corvustobiash: i don't entirely remember it, no.18:08
tobiashcorvus: ok, I'll write a spec ;)18:08
corvussounds like a plan :)18:08
*** themroc has joined #zuul18:11
openstackgerritJames E. Blair proposed zuul/zuul-jobs master: Return python artifact records to Zuul  https://review.opendev.org/66305318:13
*** spsurya has quit IRC18:24
openstackgerritJeremy Stanley proposed zuul/zuul-jobs master: Use password lookup for run-buildset-registry role  https://review.opendev.org/66311918:43
corvustobiash: revert revert looks good!18:51
tobiashcorvus: but it broke python this time :/18:52
corvuswell, at least, it looks good for the js-tarball job18:52
corvusyeah that18:52
tobiashlooks like I have to fix the setup hook once more18:52
openstackgerritJames E. Blair proposed zuul/zuul-jobs master: Return python artifact records to Zuul  https://review.opendev.org/66305318:53
*** ParsectiX has joined #zuul18:58
openstackgerritMerged zuul/zuul master: Revert "Create zuul/web/static on demand"  https://review.opendev.org/66309919:00
tjgreshahas anyone seen this from nodepool?  nodepool.log >> openstack.exceptions.SDKException: Server reached ACTIVE state without being allocated an IP address.     - then it deletes my newly spawned VMs which do have IP assigned19:15
ofososCan we do a quick show of hands? Do you guys want to have a slackbot in some way or another (e.g. as an external daemon)?19:16
fungitjgresha: that usually indicates a problem in your providing cloud19:16
clarkbfungi: tjgresha or in your clouds.yaml config (so that nodes don't come up with a valid ip addr)19:18
fungiahh, yeah if nodepool is looking for a different field in the api response and considers the addresses returned "private" or something along those lines19:18
tjgreshafungi: i can manually provision with the image generated by the DIB and uploaded (onto the same network, same tenant, as the same user, etc)19:20
clarkbtjgresha: does it come up with a valid "public" ip?19:21
tjgreshalet me do a quick recheck19:23
fungitjgresha: nodepool has an algorithm with which it tries to guess which of the addresses returned for a node are effectively public, but also provides a means to override that logic through configuration if you know how to identify the public interface19:25
fungiassuming this is the openstack driver, it's handled by the os-client-config library: https://docs.openstack.org/os-client-config/latest/user/network-config.html19:26
Shrewsthat's actually part of openstacksdk directly now19:27
fungi(so in the clouds.yaml file)19:27
tjgreshayes - clouds.yaml is pointed to the cloud in our lab19:27
fungiahh, neat, nodepool docs still refer to the oscc docs for it19:27
fungihttps://zuul-ci.org/docs/nodepool/configuration.html#attr-providers.[openstack].cloud19:27
Shrewsfungi: oops, we should update that19:28
fungithough i suppose it's more the implicit guessing heuristics which are what you're talking about19:28
tjgreshayes openstack cloud  =) sorry19:32
tjgreshafungi: so the cloud i am connected to has an internal ci-net and a public119:35
fungiif it's called "public1" and not "public" then you may have to configure that specifically in the network stanza19:36
tjgreshadouble checked.. yes i have public119:38
*** hashar has joined #zuul19:38
*** altlogbot_1 has joined #zuul19:41
*** jamesmcarthur has joined #zuul19:42
*** jamesmcarthur has quit IRC19:45
*** jamesmcarthur has joined #zuul19:45
tjgreshavia horizon, if i include public1 at launch time it results in an error..  that is not good.     i can include ci-net and then get a float from public1 tho19:47
fungioh, i see, so you're using something like rfc 1918 addressing on a ci-net interface and then mapping floating ips from a public1 pool? if so, that may need to be specified in the network stanza in your clouds.yaml19:49
tjgreshahmm ok .. i will give that a try    https://docs.openstack.org/os-client-config/latest/user/network-config.html#    << -- you mean this right19:53
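For anyone hitting the same problem, a minimal sketch of the network stanza fungi is pointing at, assuming an RFC 1918 ci-net with floating IPs drawn from public1; the cloud name and auth details are placeholders:

```yaml
# Illustrative clouds.yaml fragment; see the network-config documentation
# linked above for the full set of options.
clouds:
  mylab:
    auth:
      auth_url: https://cloud.example.com:5000/v3
      # credentials elided
    networks:
      - name: ci-net
        # Boot servers on the internal network and use it as the
        # destination for floating IP NAT.
        default_interface: true
        nat_destination: true
      - name: public1
        # Addresses from this network are what nodepool should treat as
        # the node's public IP.
        routes_externally: true
```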
SpamapSofosos: zuul-discuss may be a better way to gauge peoples' interest. http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss19:54
tjgreshafungi: just realized that was what you sent earlier  =)19:54
fungiyup19:55
fungithough Shrews may have other suggestions for better reference docs as he says openstacksdk is handling that stuff directly now19:56
tjgreshai will catch on eventually =)19:56
mordredhttps://docs.openstack.org/openstacksdk/latest/user/config/network-config.html19:56
mordredthe doc is pretty much the same though19:56
fungiwe can at least update the nodepool docs to point there now19:57
openstackgerritJeremy Stanley proposed zuul/nodepool master: Update os-client-config references to openstacksdk  https://review.opendev.org/66313120:05
fungiShrews: mordred: tjgresha: ^ documentation update20:05
mordredfungi: thanks!20:05
*** themroc has joined #zuul20:06
fungipumping my miserable contributor stats ;)20:06
ofososSpamapS: i'll move the discussion there20:10
*** themroc has quit IRC20:37
*** mattw4 has quit IRC20:55
*** mattw4 has joined #zuul20:58
*** jamesmcarthur has quit IRC21:00
*** jamesmcarthur has joined #zuul21:00
*** jamesmcarthur has quit IRC21:04
*** pcaruana has quit IRC21:07
*** hashar has quit IRC21:09
SpamapSofosos: good plan. IRC is often biased heavily toward US timezones and full-time-zuul'ers, and I want your thinking to get the full scrutiny of the community. :)21:09
*** jamesmcarthur has joined #zuul21:14
clarkband IRCers :P21:17
*** jamesmcarthur has quit IRC21:18
*** jamesmcarthur has joined #zuul21:20
fungiyeah, asking on irc who likes slack... probably going to get you a set of opinions which are more biased than you would want ;)21:27
*** jamesmcarthur has quit IRC21:28
*** jamesmcarthur has joined #zuul21:49
*** jamesmcarthur has quit IRC21:51
openstackgerritMerged zuul/nodepool master: Update os-client-config references to openstacksdk  https://review.opendev.org/66313121:55
*** ParsectiX has quit IRC22:01
*** ParsectiX has joined #zuul22:02
*** jamesmcarthur has joined #zuul22:04
*** mattw4 has quit IRC22:08
*** ParsectiX has quit IRC22:11
ofosos:)22:15
*** jamesmcarthur has quit IRC22:16
*** mattw4 has joined #zuul22:24
*** sanjayu__ has joined #zuul22:52
clarkbI'm saying hi over the cube wall to point out we just created a service announcement list for opendev services. We'll send planned outage notices and other change info to that list so we don't have to send them to all the various projects individually. If this sounds useful to you feel free to sign up at http://lists.opendev.org/cgi-bin/mailman/listinfo/service-announce22:57
clarkbIt should be very low traffic22:59
fungiwe can also send this info to the zuul-discuss ml if people would like23:01
corvuswe could also subscribe the zuul-discuss list if we feel that's appropriate23:23
fungisubscribing mailing lists to mailing lists can lead to weird behaviors, though less likely i suppose when they're all on the same server23:31
clarkbdifferent "vhosts" though. Not sure if that matters23:31
fungiif i ever get rolling again on mm3 work, it's multi-domain-capable (and i've tested that in an earlier poc with lists that only differ by domain name)23:33
