opendevreview | Ian Wienand proposed openstack/project-config master: nodepool elements: use yaml.safe_load https://review.opendev.org/c/openstack/project-config/+/816774 | 00:26 |
*** odyssey4me is now known as Guest4951 | 00:56 | |
*** rlandy|ruck|afk is now known as rlandy|ruck | 00:56 | |
opendevreview | Merged openstack/project-config master: nodepool elements: use yaml.safe_load https://review.opendev.org/c/openstack/project-config/+/816774 | 01:08 |
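For context, the change above swaps `yaml.load` for `yaml.safe_load` in the nodepool elements. A minimal illustration of the difference (not taken from the change itself; the file name is a placeholder):

```python
import yaml

# Legacy yaml.load() defaults could construct arbitrary Python objects from
# tagged YAML, which is unsafe on untrusted input. safe_load() restricts
# parsing to plain types (dicts, lists, strings, numbers).
with open("nodepool.yaml") as f:
    config = yaml.safe_load(f)
```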
*** fzzf2 is now known as fzzf | 02:34 | |
*** sshnaidm is now known as sshnaidm|off | 03:06 | |
*** frenzyfriday|sick is now known as frenzy_friday | 04:25 | |
opendevreview | Ian Wienand proposed openstack/project-config master: infra-package-needs: skip haveged start on 9-stream https://review.opendev.org/c/openstack/project-config/+/816782 | 06:41 |
opendevreview | Merged openstack/project-config master: infra-package-needs: skip haveged start on 9-stream https://review.opendev.org/c/openstack/project-config/+/816782 | 07:00 |
*** amoralej|off is now known as amoralej | 09:37 | |
amoralej | hi, after adding cs9 nodes in nodepool, i see the centos-9-stream labels in https://zuul.openstack.org/labels | 09:38 |
amoralej | i also see an image was properly created in https://nb02.opendev.org/centos-9-stream-0000000015.log | 09:38 |
amoralej | but when i try to use it in jobs in https://review.opendev.org/c/x/packstack/+/816796 | 09:38 |
amoralej | i get node_failure and i can't see any log message | 09:39 |
amoralej | any hint? | 09:39 |
amoralej | am i missing some step? do i need to wait to have the images pushed to the providers? | 09:41 |
opendevreview | daniel.pawlik proposed openstack/ci-log-processing master: Initial project commit https://review.opendev.org/c/openstack/ci-log-processing/+/815604 | 09:54 |
frickler | amoralej: where did you see that node_failure? I see three nodes building now, they might have similar issues getting ready like fedora had/has though | 09:56 |
frickler | oh, actually it looks like they are failing to boot. guess someone will need to debug, not sure how much progress we had with fedora. ianw might know | 09:58 |
opendevreview | Alfredo Moralejo proposed openstack/project-config master: Fix haveged installation in CentOS7 https://review.opendev.org/c/openstack/project-config/+/816813 | 10:07 |
amoralej | frickler, yep, it's working now | 10:08 |
amoralej | maybe just a timing issue | 10:08 |
amoralej | i see node started and failed later but couldn't upload logs | 10:09 |
*** jpena|off is now known as jpena | 10:36 | |
*** dviroel|out is now known as dviroel|rover | 10:38 | |
*** rlandy is now known as rlandy|ruck | 10:43 | |
amoralej | fyi, i broke the infra-package-needs element for centos7 and the image is failing to build. I sent https://review.opendev.org/c/openstack/project-config/+/816813 to fix it | 11:19 |
*** jcapitao is now known as jcapitao_lunch | 11:50 | |
*** jcapitao_lunch is now known as jcapitao | 12:31 | |
*** amoralej is now known as amoralej|lunch | 13:48 | |
dpawlik | fungi, clarkb: hey, so 2 vcpus, 2GB ram should be enough for the beginning to run gearman-worker, gearman-client and logscraper. If HA is required, the same instance spec with one shared volume, or maybe some pacemaker rule to check if the systemd service is alive / if the host is alive. All services should be running inside the container... I prefer to use centos 8 | 14:16 |
dpawlik | stream + podman, but it is a loose proposition :) | 14:16 |
fungi | dpawlik: cool, i'll check what images are available there. when you say 2 vcpus/2GB ram for gearman-worker, gearman-client and logscraper, are you wanting a separate vm for each of those or planning to start with all three on the same vm? also what hostname(s) do you prefer? | 14:18 |
clarkb | dpawlik: fwiw the way gearman works you shouldn't need pacemaker or a shared volume. The idea is that each gearman job is independent | 14:20 |
clarkb | then you can just ensure the services are running via your init system or similar | 14:20 |
clarkb | note I would not use centos 8 as it is EOL in less than 2 months... | 14:20 |
dpawlik | fungi: all services can be put on one host so far, it should be working normally, but I may be surprised. Dev test != production | 14:20 |
dpawlik | clarkb: so the logscraper was writing a "checkpoint" file recording the dates where it should start, so if the host disappears a second host can read the file and start scraping from that point instead of pushing the same data... | 14:21 |
fungi | dpawlik: when you say you prefer centos 8, do you mean stream 8? | 14:22 |
dpawlik | clarkb: c8stream should be running for a while | 14:23 |
dpawlik | fungi: about hostname, ah, that is always hard | 14:23 |
clarkb | dpawlik: ok but you said centos 8 which != centos 8 stream | 14:24 |
clarkb | (I'm trying to make people be more specific about that because it is important right now) | 14:24 |
dpawlik | clarkb: I wrote c8 stream ;) | 14:24 |
clarkb | dpawlik: oh the irc message length broke it | 14:24 |
clarkb | I'm sorry I misread that as a result | 14:24 |
clarkb | (the line ends as "centos 8") | 14:24 |
dpawlik | "logs" in latin are "acta" but acta word is not very nice in current world | 14:25 |
clarkb | dpawlik: if your system is running as a singleton with essentially a lock file then refactoring gearman out probably makes sense. gearman isn't ideal in that setup since it expects that all the jobs can run at all times if they have workers available | 14:25 |
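The checkpoint approach described above amounts to persisting the last-scraped position so a replacement host can resume rather than re-push the same data. A rough, hypothetical sketch (the path, file format, and function names are made up for illustration, not taken from logscraper):

```python
import json
from pathlib import Path

CHECKPOINT = Path("/var/lib/logscraper/checkpoint.json")  # hypothetical path

def load_checkpoint(default_ts):
    # A replacement host reads the last recorded position instead of
    # re-scraping everything and submitting duplicate data.
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())["last_timestamp"]
    return default_ts

def save_checkpoint(ts):
    # Record how far scraping has progressed after each successful batch.
    CHECKPOINT.write_text(json.dumps({"last_timestamp": ts}))
```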
dpawlik | so I will leave the hostname in your hands :) | 14:26 |
*** amoralej|lunch is now known as amoralej | 14:26 | |
dpawlik | clarkb: yeah. I spoke yesterday with TristanC about the logscraper and that the gearman worker/client can be deprecated in the future in the "logs workflow" and he has a new idea for the future: http://lists.zuul-ci.org/pipermail/zuul-discuss/2021-November/001744.html | 14:28 |
dpawlik | IMHO currently we should go like it was designed right now | 14:29 |
clarkb | dpawlik: ya I'm trying to respond to that email next. I really don't like that idea personally | 14:29 |
clarkb | There are a lot of problems with making elasticsearch tightly coupled to your CI system. I'm hoping to get that email out soon today | 14:29 |
dpawlik | aha | 14:30 |
fungi | i'm not opposed to zuul growing a feature to push logs into an elasticsearch, but opendev likely wouldn't make use of it | 14:31 |
fungi | in order to minimize the footprint of systems we're responsible for managing | 14:32 |
fungi | (and managing includes integration points in that case) | 14:32 |
* dpawlik I will have something to read on Monday morning | 14:34 | |
fungi | dpawlik: in absence of suggestions, i'll just call it logscraper01.openstack.org | 14:43 |
fungi | dpawlik: what ssh key do you want granted access to the server? | 14:44 |
dpawlik | fungi: some of those: https://github.com/danpawlik.keys | 14:45 |
fungi | will do, thanks | 14:45 |
dpawlik | can be first, can be second... I use both | 14:45 |
dpawlik | will do ansible playbook + finish role on monday | 14:45 |
dpawlik | thanks fungi | 14:54 |
fungi | clarkb: i'm trying to upload a centos 8 stream genericcloud qcow2 to vexxhost, but openstack image create is telling me "No such file or directory" for the local file i'm passing with the --file option. it definitely exists though as i tab-completed it, have you ever seen that behavior before? | 14:56 |
dpawlik | nope, always was working | 15:00 |
fungi | trying to install newer osc into a venv on bridge but it breaks trying to build cryptography's rust extensions | 15:11 |
fungi | aah, the venv module was using pip 9 by default, i think that had something to do with it | 15:13 |
fungi | okay, newer openstackclient seems to be uploading the image just fine now | 15:14 |
clarkb | ya I hadn't seen that before | 15:17 |
clarkb | Note that it is preferable to use raw files on vexxhost iirc | 15:17 |
fungi | doesn't seem like rh makes those available for download, at least not that i could find | 15:25 |
clarkb | qemu-img has a simple conversion process if you want to convert. It probably isn't as big of a deal in this case since its a small number of nodes. vexxhost complains more when we do lots of nodepool uploads and boots | 15:28 |
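The qemu-img conversion mentioned above is a one-liner; file names here are placeholders, not the actual image being uploaded:

```shell
qemu-img convert -f qcow2 -O raw centos-stream-genericcloud.qcow2 centos-stream-genericcloud.raw
```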
opendevreview | Andre Aranha proposed openstack/openstack-zuul-jobs master: Enable support for fips on the jobs https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/816855 | 15:32 |
opendevreview | Andre Aranha proposed openstack/openstack-zuul-jobs master: Enable support for fips on the jobs https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/816855 | 15:33 |
fungi | i'm also not sure we have enough free space on bridge for me to convert a centos image to raw anyway | 15:47 |
fungi | also fun, we're just about at disk quota in sjc1 | 15:49 |
fungi | "quota is 2000G and 1922G has been consumed" | 15:49 |
fungi | i'll make the root volume 40gb for now, but that only leaves us ~38gb remaining | 15:50 |
fungi | should i be putting this in ca-ymq-1 instead? | 15:50 |
clarkb | fungi: dpawlik: ok I just sent my novel. Apologies for the length on that email | 15:50 |
clarkb | fungi: I want to say that is where mnaser_ has said we should launch stuff these days | 15:51 |
fungi | sjc1 or ca-ymq-1? | 15:51 |
clarkb | ca-ymq-1 | 15:51 |
fungi | ahh, can do | 15:51 |
clarkb | He just did a DC upgrade and its all shiny there or something along those lines | 15:51 |
opendevreview | Andre Aranha proposed openstack/openstack-zuul-jobs master: Enable support for fips on the jobs https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/816855 | 16:20 |
opendevreview | Andre Aranha proposed openstack/openstack-zuul-jobs master: Enable support for fips on the jobs https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/816855 | 16:24 |
fungi | dpawlik: i told cloud-init to create a passwordless dpawlik account on logscraper01.openstack.org with your keys authorized and passwordless sudo access, please test it out and let me know if you have trouble getting in, but the console log seems to indicate the account was created successfully at first boot | 16:24 |
clarkb | fungi: huh how do you do that? | 16:26 |
clarkb | (I thought most of the cloud-init config for creating users was baked into the images, maybe it grew new features or I missed them entirely) | 16:26 |
fungi | clarkb: create a yaml file like this https://paste.opendev.org/show/810816 | 16:28 |
fungi | pass it with --user-data=cloud-init.yaml (or whatever you called the file) in the openstack server create command | 16:28 |
clarkb | ah user data | 16:29 |
fungi | that's how i've been doing it for my personal systems anyway and it's worked for me | 16:29 |
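The linked paste isn't reproduced here, but a cloud-config user-data file for this purpose typically looks something like the following (the username and key are placeholders for whatever the paste actually contained):

```yaml
#cloud-config
users:
  - name: dpawlik
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... dpawlik@example
```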
dpawlik | fungi: works well. Thank you! | 16:29 |
fungi | awesome, since we have more available disk there i gave it a 100gb rootfs, assuming you'll probably be pooling some large batches of files on it | 16:30 |
dpawlik | fungi: I will read email later or just leave it for reading during Monday coffee :) | 16:30 |
fungi | of course, have a great weekend! | 16:30 |
dpawlik | ack | 16:30 |
dpawlik | you too ! | 16:30 |
*** jpena is now known as jpena|off | 17:22 | |
*** amoralej is now known as amoralej|off | 17:42 | |
*** rlandy|ruck is now known as rlandy|ruck|brb | 18:59 | |
*** rlandy|ruck|brb is now known as rlandy|ruck | 19:24 | |
ade_lee | fungi, hey - is there a pip problem -- I had jobs starting to fail like this -- https://zuul.opendev.org/t/openstack/build/c0e34834b73844b7a245008c5ab0b91a | 21:07 |
ade_lee | ? | 21:07 |
ade_lee | this is setuptools problem .. | 21:08 |
rlandy|ruck | yep - we have tox failures ... https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d41/807465/14/check/openstack-tox-py38/d415d42/job-output.txt | 21:11 |
rlandy|ruck | 2021-11-05 20:44:07.405487 | ubuntu-focal | The conflict is caused by: | 21:12 |
rlandy|ruck | 2021-11-05 20:44:07.405497 | ubuntu-focal | jsonschema 3.2.0 depends on setuptools | 21:12 |
rlandy|ruck | 2021-11-05 20:44:07.405516 | ubuntu-focal | The user requested (constraint) setuptools===58.5.2 | 21:12 |
rlandy|ruck | 2021-11-05 20:44:07.405526 | ubuntu-focal | | 21:12 |
rlandy|ruck | https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_f45/815384/2/gate/openstack-tox-py36/f45f8dc/job-output.txt | 21:14 |
ade_lee | rlandy|ruck, yay .. | 21:15 |
rlandy|ruck | ade_lee: reporting bug for tracking | 21:16 |
rlandy|ruck | happy friday afternoon | 21:16 |
ade_lee | rlandy|ruck, yeah , I think this is a sign .. | 21:17 |
rlandy|ruck | ade_lee: go - run like the wind ... it's weekend time | 21:18 |
rlandy|ruck | will forward you the bug link | 21:18 |
ade_lee | rlandy|ruck, thanks | 21:18 |
rlandy|ruck | ade_lee: https://bugs.launchpad.net/tripleo/+bug/1950016 | 21:23 |
ade_lee | rlandy|ruck, ack my failure looks a little different, but the result is the same | 21:24 |
ade_lee | rlandy|ruck, I think its related to an update in global requirements recently | 21:25 |
rlandy|ruck | ade_lee: pls comment on the bug | 21:25 |
rlandy|ruck | so we keep the tracking in one place | 21:25 |
ade_lee | rlandy|ruck, ack - doing so now . | 21:25 |
rlandy|ruck | need to log off now | 21:27 |
rlandy|ruck | will deal with it on sunday if nobody gets to it before | 21:27 |
*** jonher_ is now known as jonher | 21:28 | |
*** yoctozepto8 is now known as yoctozepto | 21:28 | |
fungi | sorry, was eating, but catching up on the failures now | 21:28 |
fungi | first guess, a new jsonschema release doesn't like setuptools being pinned | 21:29 |
ade_lee | fungi, I just updated the bug -- I ran into a different failure on octavia jobs | 21:30 |
fungi | jsonschema 4.2.1 "Released: about 6 hours ago" | 21:30 |
fungi | also setuptools 58.5.3 was released yesterday, shortly after the constrained 58.5.2 | 21:34 |
fungi | i wonder if that's a clue | 21:34 |
fungi | looks like the setuptools constraints bump happened when https://review.opendev.org/816611 merged at 08:13:56 utc yesterday, so that's been in place for more than a day and a half | 21:37 |
ade_lee | fungi, not sure why - but I had jobs get beyond this point until just this afternoon | 21:38 |
fungi | a pbr constraints update merged today at 17:21 | 21:40 |
fungi | in theory we can reproduce the other error by trying to install tempest with constraints into a py38 venv | 21:49 |
fungi | yours is going to be harder to reproduce without using devstack, since it's installing a new setuptools into an already dirtied global environment | 21:51 |
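A sketch of the reproduction described above (the constraints URL shown is the usual upper-constraints location, assumed here rather than quoted from the failing job):

```shell
python3.8 -m venv /tmp/repro
/tmp/repro/bin/pip install -c https://releases.openstack.org/constraints/upper/master tempest
```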
ade_lee | fungi, johnsom just told me he's seeing similar in designate jobs | 21:52 |
fungi | a link would be helpful, so i can see if it's one of the two already observed cases, or yet a third one | 21:52 |
johnsom | https://zuul.opendev.org/t/openstack/build/d4ea4cb87413463c83b19f6d833c6bb4 | 21:53 |
fungi | though i expect what's happening is we're somehow sending pip into dependency hell | 21:53 |
ade_lee | fungi, happy Friday afternoon :) and sorry .. | 21:53 |
johnsom | I had hoped it was just a pypi/index issue, so went for some exercise, but maybe it's not as simple. | 21:54 |
ade_lee | johnsom, yeah that looks like my failure in octavia | 21:54 |
fungi | yeah, the designate failure is like the other one ade_lee linked, devstack trying to install/upgrade/downgrade a specific setuptools version in the dirtied devstack global environment | 21:54 |
fungi | it's possible the tripleo one has the same underlying cause, and looks like it may be easier to isolate since it's an initial install into a venv | 21:55 |
fungi | i'm attempting to reproduce that one on my workstation now | 21:57 |
fungi | Successfully installed PrettyTable-2.4.0 PyYAML-6.0 argparse-1.4.0 attrs-21.2.0 autopage-0.4.0 bcrypt-3.2.0 certifi-2021.10.8 cffi-1.15.0 charset-normalizer-2.0.7 cliff-3.9.0 cmd2-2.2.0 colorama-0.4.4 coverage-6.1.1 cryptography-35.0.0 debtcollector-2.3.0 entrypoints-0.3 extras-1.0.0 fasteners-0.14.1 fixtures-3.0.0 flake8-3.7.9 flake8-import-order-0.11 future-0.18.2 hacking-3.0.1 | 21:58 |
fungi | idna-3.3 iso8601-0.1.16 jsonschema-3.2.0 linecache2-1.0.0 mccabe-0.6.1 monotonic-1.6 msgpack-1.0.2 netaddr-0.8.0 netifaces-0.11.0 oslo.concurrency-4.5.0 oslo.config-8.7.1 oslo.context-3.4.0 oslo.i18n-5.1.0 oslo.log-4.6.1 oslo.serialization-4.2.0 oslo.utils-4.11.0 oslotest-4.5.0 packaging-21.2 paramiko-2.8.0 pbr-5.7.0 pycodestyle-2.5.0 pycparser-2.20 pyflakes-2.1.1 pyinotify-0.9.6 | 21:58 |
fungi | pynacl-1.4.0 pyparsing-2.4.7 pyperclip-1.8.2 pyrsistent-0.18.0 python-dateutil-2.8.2 python-subunit-1.4.0 pytz-2021.3 requests-2.26.0 rfc3986-1.5.0 setuptools-58.5.2 six-1.16.0 stestr-3.2.1 stevedore-3.5.0 testtools-2.5.0 traceback2-1.4.0 unittest2-1.1.0 urllib3-1.26.7 voluptuous-0.12.2 wcwidth-0.2.5 wrapt-1.13.3 | 21:58 |
fungi | well that's an unfortunate success | 21:59 |
johnsom | lol | 21:59 |
johnsom | I know that feeling | 21:59 |
clarkb | this kind of looks like the "we got stale data from pypi cdn" problem | 22:09 |
clarkb | 58.5.2 was released a couple of days ago and is present in the simple index listings from pypi directly and our ovh gra1 proxy cache | 22:09 |
clarkb | this means they didn't remove the package file we're pinned to, at least. But maybe it isn't present if we get a stale cdn backend instead (it seems this happens more frequently with new packages, I think due to the delay in updating that backup backend) | 22:10 |
fungi | oh, good point, i wonder if these ran exclusively in iweb mtl01 and vexxhost ca-ymq-1 | 22:10 |
clarkb | the failure I'm looking at is ovh gra1 so different continent | 22:10 |
clarkb | but still possible for that to have failures I guess | 22:10 |
opendevreview | Merged openstack/project-config master: Fix haveged installation in CentOS7 https://review.opendev.org/c/openstack/project-config/+/816813 | 22:12 |
fungi | ovh-gra1, ovh-gra1, ovh-gra1 | 22:12 |
fungi | i think i see a pattern, yeah | 22:13 |
johnsom | Yeah, it had the feeling of a bad pypi index issue thingy to me. Later runs passed, etc. | 22:13 |
fungi | so yes, i think clarkb is onto something, a fastly cdn endpoint in the vicinity of the ovh-gra1 data center with stale/fallback data could explain this | 22:14 |
clarkb | we should maybe suggest to pypa that pip should list the available versions when it makes that error message | 22:14 |
clarkb | I've got a policy of avoiding that repo though so I'll defer to someone else on it | 22:15 |
fungi | i've done a `curl -XPURGE https://pypi.org/simple/setuptools` | 22:16 |
fungi | in hopes that might wake it up | 22:16 |
fungi | also did the same with a trailing / just in case | 22:17 |
fungi | and did it from a shell on mirror.gra1.ovh.opendev.org in case that matters (though i don't think it's supposed to) | 22:18 |
clarkb | ya I think that is supposed to indicate to the entire CDN they need to refresh | 22:18 |
clarkb | it is theoretically possible they refresh from the backup backend though :/ | 22:19 |
fungi | yep | 22:19 |
fungi | especially if there's transatlantic connection issues or something | 22:19 |
johnsom | I needed to recheck my patch anyway, so I'll let you know if it pops up again | 22:24 |
clarkb | fungi: looking at the PBR PEP517 thing briefly before I call it a week I notice two things: it sets the package version to 0.0.0, and the sdist looks remarkably complete. | 22:27 |
clarkb | however, it seems that installing the resulting tar.gz or whl files doesn't do what we expect. For example, import bindep after installation fails because pbr isn't installed even though pbr is listed as a requirement | 22:28 |
clarkb | ya ok it didn't install parsley either | 22:29 |
clarkb | But the code seems to have been packaged and installed just without deps for some reason | 22:29 |
clarkb | oh and without entrypoints | 22:29 |
clarkb | aha the wheel is very incomplete | 22:30 |
clarkb | the pbr.json record says that the commit is for the last bindep release and not the latest checkout | 22:32 |
clarkb | ya I think what might be happening is we aren't triggering the pbr-y things to generate authors, requirements, entrypoints etc via pyproject.toml | 22:36 |
clarkb | if I compare the two sdists one built with pyproject.toml and the other setup.py those types of things seem to be the main difference | 22:36 |
clarkb | ya the requires file has all the requirements in it. Almost like pbr isn't firing with enough pbr stuff via the pep 517 entrypoints | 22:37 |
fungi | maybe the build entrypoint doesn't encompass as much as we expected? | 22:38 |
clarkb | ya I suspect that is what is going on. pbr/build.py just calls setuptools.build_meta.build_sdist as an example | 22:39 |
clarkb | whereas normal pbr instantiation does a lot more than that iirc (though I don't recall exactly how it takes over setuptools setup() yet) | 22:39 |
clarkb | fungi: it seems that the way pbr works is by overriding commands? So PBR definitely won't work when commands stop being called | 22:42 |
fungi | we'll probably need a new wrapper in pbr which calls all those things and use that as the build backend | 22:42 |
clarkb | ya rather than using setuptools build_meta | 22:43 |
fungi | it may also need to duplicate some setuptools functionality i guess, if it needs to figure out which tasks are appropriate to perform | 22:43 |
clarkb | wow I've paged back in a lot of python packaging magic just now. So roughly the way it works is pbr sets an entrypoint for distutils.setup_keywords to extend the arguments accepted by setup() | 22:49 |
clarkb | that points at pbr.core.pbr, which is a function called with the value of that keyword in setup(), and that takes over everything | 22:49 |
clarkb | so ya I think we need to refactor pbr.build to call into pbr.core.pbr (or extract pieces of that that it can call) | 22:50 |
fungi | pbr.really_build ;) | 22:50 |
clarkb | like for build sdist we need to make a subset of the things pbr() runs for an sdist build | 22:50 |
clarkb | its possible we might get away with simply passing the pbr processed configs to build_meta.build_sdist | 22:51 |
clarkb | so that version and requirements etc are all set properly | 22:51 |
clarkb | fungi: https://github.com/pypa/setuptools/blob/main/setuptools/build_meta.py#L239-L276 I think this may have worked properly when mordred first wrote it due to that | 22:57 |
clarkb | it appears that setuptools at some point replaced the build meta implementation | 22:57 |
clarkb | obviously not ideal to depend on that and use it but might be a good stopgap if we can test that and see that it works. Let me try it really quickly | 22:58 |
fungi | not surprising | 22:58 |
clarkb | though build_sdist doesn't exist in the legacy bridge | 22:58 |
clarkb | so maybe not | 22:58 |
clarkb | oh! | 22:59 |
clarkb | I think you must still have a setup.py? | 22:59 |
clarkb | if that's it then wow, but ok | 22:59 |
clarkb | ok ya progress. If I put the pbr setup.py back I get what looks like a good sdist but not a good wheel | 23:01 |
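For reference, "the pbr setup.py" being restored here is, by convention, just the stock pbr boilerplate (shown as the documented pattern, not necessarily the exact file in question):

```python
import setuptools

# pbr hooks into setuptools via the 'pbr' setup() keyword; setup_requires
# pulls pbr in before setup() runs so the keyword is recognized.
setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)
```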
clarkb | I totally misinterpreted how this would be implemented | 23:01 |
clarkb | however making a wheel from that sdist is still a problem | 23:02 |
clarkb | I'll have to pick this up again later. But this feels like progress at least. sdists seem to work more like I expect now. Version is set and there is a requires file in the sdist with the requirements | 23:04 |
clarkb | Something must be missing for making a proper wheel though | 23:04 |
clarkb | stephenfin: ^ fyi | 23:04 |
clarkb | fungi: heh and now if I pip install ./ in the bindep dir after restoring the setup.py it works. This makes me worry it isn't actually going through the pep517 system | 23:09 |
clarkb | I wonder if there is a way to tell. In any case I suspect this is actually 80% of the way there and we need to sort out why build doesn't work | 23:09 |
fungi | clarkb: i want to say pep 517 is about making an sdist from a source tree | 23:11 |
fungi | or maybe it's too late on a friday | 23:11 |
fungi | ahh, it's both according to the hooks list: https://www.python.org/dev/peps/pep-0517/#mandatory-hooks | 23:13 |
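Per the PEP 517 hooks list linked above, build_sdist and build_wheel are the mandatory backend hooks. A minimal delegating backend, roughly the pattern described for pbr/build.py earlier in the discussion (pbr-specific behaviour would have to be layered on top of these calls, which is the gap being debugged here):

```python
# A thin PEP 517 backend that simply forwards to setuptools' implementation.
from setuptools import build_meta as _setuptools_backend

def build_sdist(sdist_directory, config_settings=None):
    return _setuptools_backend.build_sdist(sdist_directory, config_settings)

def build_wheel(wheel_directory, config_settings=None, metadata_directory=None):
    return _setuptools_backend.build_wheel(
        wheel_directory, config_settings, metadata_directory)
```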
fungi | i love that pbr features prominently in an arbitrary example in the recommendations for build frontends section | 23:15 |