*** tosky has quit IRC | 00:04 | |
ianw | corvus: thanks for setting up; https://gerrit.googlesource.com/plugins/zuul-results-summary/+/refs/heads/main imported | 00:34 |
*** goneri has quit IRC | 01:10 | |
*** wuchunyang has joined #zuul | 01:16 | |
*** mordred has quit IRC | 02:38 | |
*** mordred has joined #zuul | 02:42 | |
*** mgoddard has quit IRC | 02:58 | |
*** rfolco has joined #zuul | 03:11 | |
*** rfolco has quit IRC | 03:32 | |
*** wuchunyang has quit IRC | 04:04 | |
*** bhavikdbavishi has joined #zuul | 04:04 | |
*** rlandy has quit IRC | 04:46 | |
*** bhavikdbavishi has quit IRC | 04:53 | |
*** bhavikdbavishi has joined #zuul | 04:54 | |
*** wuchunyang has joined #zuul | 04:54 | |
*** wuchunyang has quit IRC | 04:59 | |
*** vishalmanchanda has joined #zuul | 05:02 | |
*** hamalq_ has quit IRC | 05:16 | |
*** evrardjp has quit IRC | 05:33 | |
*** evrardjp has joined #zuul | 05:33 | |
*** wuchunyang has joined #zuul | 05:51 | |
*** bhavikdbavishi has quit IRC | 05:51 | |
*** bhavikdbavishi has joined #zuul | 06:18 | |
*** bhavikdbavishi1 has joined #zuul | 06:35 | |
*** bhavikdbavishi has quit IRC | 06:37 | |
*** bhavikdbavishi1 is now known as bhavikdbavishi | 06:37 | |
*** bhavikdbavishi has quit IRC | 06:43 | |
*** jfoufas1 has joined #zuul | 06:50 | |
*** zenkuro has quit IRC | 06:55 | |
*** zenkuro has joined #zuul | 06:56 | |
*** mach1na has joined #zuul | 07:05 | |
icey | I'm having trouble setting up Zuul's websocket behind nginx, I'm currently using something like https://pastebin.ubuntu.com/p/F9FtZxfM4J/ for the location that should be doing the websocket, but I'm getting exceptions like: `ws4py.exc.HandshakeError: Header Upgrade is not defined` when actually trying to load the log streams | 07:26 |
*** bhavikdbavishi has joined #zuul | 07:30 | |
*** bhavikdbavishi1 has joined #zuul | 07:33 | |
*** bhavikdbavishi has quit IRC | 07:35 | |
*** bhavikdbavishi1 is now known as bhavikdbavishi | 07:35 | |
*** mach1na has quit IRC | 07:41 | |
*** jfoufas1 has quit IRC | 07:43 | |
icey | and it turns out that it's my bad nginx config, nevermind :) | 07:45 |
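For context, a minimal sketch of the kind of nginx location block that resolves the `ws4py.exc.HandshakeError: Header Upgrade is not defined` error above: nginx drops the hop-by-hop `Upgrade`/`Connection` headers unless they are forwarded explicitly. This is not icey's actual config (the pastebin is not reproduced here), and the backend address assumes zuul-web on its default port 9000.

```nginx
# Sketch only: forward the websocket upgrade headers that nginx
# otherwise strips, which is what triggers the ws4py HandshakeError.
location ~ ^/api/tenant/.*/console-stream {
    proxy_pass http://127.0.0.1:9000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```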
*** piotrowskim has joined #zuul | 07:48 | |
*** jcapitao has joined #zuul | 07:58 | |
*** mach1na has joined #zuul | 08:01 | |
*** mach1na has quit IRC | 08:06 | |
*** mach1na has joined #zuul | 08:11 | |
*** rpittau|afk is now known as rpittau | 08:18 | |
*** tosky has joined #zuul | 08:33 | |
*** wuchunyang has quit IRC | 08:34 | |
*** mgoddard has joined #zuul | 08:46 | |
*** jpena|off is now known as jpena | 08:56 | |
*** bhavikdbavishi has quit IRC | 09:03 | |
*** mgoddard has quit IRC | 09:06 | |
*** mgoddard has joined #zuul | 09:07 | |
*** bhavikdbavishi has joined #zuul | 09:12 | |
zbr|rover | did anyone look at opendev-promote-javascript-deployment-tarball job on zuul? | 09:13 |
zbr|rover | it was broken for weeks | 09:13 |
zbr|rover | last success was 15 days ago. | 09:13 |
*** bhavikdbavishi1 has joined #zuul | 09:15 | |
*** bhavikdbavishi has quit IRC | 09:16 | |
*** bhavikdbavishi1 is now known as bhavikdbavishi | 09:16 | |
zbr|rover | apparently "{{ build.json[0] }}" is causing "list object has no element 0" failures | 09:17 |
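A hedged sketch of what zbr|rover's fix (767286) amounts to; the task and variable names here are illustrative rather than the role's actual ones:

```yaml
# Illustrative only: index into the result list only when it is
# non-empty, so an empty artifact list skips the step instead of
# raising "list object has no element 0".
- name: Select the first returned build
  set_fact:
    first_build: "{{ build.json[0] }}"
  when: build.json | length > 0
```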
*** LLIU82 has joined #zuul | 09:24 | |
LLIU82 | I have a change regarding gitlab timestamp for review. https://review.opendev.org/c/zuul/zuul/+/765990 | 09:25 |
openstackgerrit | Sorin Sbârnea proposed zuul/zuul-jobs master: Debug download-artifact https://review.opendev.org/c/zuul/zuul-jobs/+/767286 | 09:28 |
*** hashar has joined #zuul | 09:32 | |
*** vishalmanchanda has quit IRC | 09:36 | |
*** smyers_ has joined #zuul | 09:40 | |
*** smyers has quit IRC | 09:42 | |
*** smyers_ is now known as smyers | 09:42 | |
*** tosky_ has joined #zuul | 09:47 | |
openstackgerrit | Sorin Sbârnea proposed zuul/zuul-jobs master: Avoid download-artifact failure with no artifacts https://review.opendev.org/c/zuul/zuul-jobs/+/767286 | 09:48 |
*** tosky is now known as Guest24372 | 09:49 | |
*** tosky_ is now known as tosky | 09:49 | |
*** Guest24372 has quit IRC | 09:50 | |
*** mach1na has quit IRC | 09:52 | |
*** mach1na has joined #zuul | 09:54 | |
*** CraigR has joined #zuul | 09:56 | |
avass | We had ten seconds of sun today. That got punished with fog so thick I can't see further than five metres. | 09:57 |
*** mach1na has quit IRC | 10:00 | |
*** mach1na has joined #zuul | 10:00 | |
*** CraigR has quit IRC | 10:04 | |
*** nils has joined #zuul | 10:06 | |
*** vishalmanchanda has joined #zuul | 10:07 | |
*** wuchunyang has joined #zuul | 10:09 | |
zbr|rover | i guess if I see sun here, i will go out right away, but my hopes are low till march | 10:09 |
*** wuchunyang has quit IRC | 10:13 | |
icey | is the pipeline the only place that concurrency can be controlled? i.e. I have a job that I'd like to limit to X at one time | 10:20 |
*** LLIU82 has quit IRC | 10:23 | |
avass | icey: I guess you could limit the nodeset or use a semaphore for that | 10:41 |
avass | I mean limit the number of nodes in nodepool | 10:41 |
icey | avass: yeah, continuing with the docs, I also found a semaphore which seems like the perfect option | 10:41 |
icey | I'd rather not limit the nodes in the pool to control it, as there are multiple types of jobs that my changes should trigger (lint, unit tests, functional tests) and the functional tests are really the only ones that should have a limit separate from the nodepool | 10:42 |
avass | yeah but you can use separate labels for them :) | 10:49 |
avass | but if semaphores work I'd stick with that | 10:49 |
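A minimal sketch of the semaphore approach, following the Zuul docs; the names and the limit are illustrative:

```yaml
# A semaphore caps how many jobs holding it run at once, tenant-wide.
- semaphore:
    name: functional-tests
    max: 4

- job:
    name: my-functional-test
    semaphore: functional-tests
```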
avass | zbr|rover: you also affected by permanent overcast? | 10:51 |
zbr|rover | avass: more or less, England weather. | 11:06 |
avass | zbr|rover: we've had it since the beginning of november :( | 11:14 |
*** mach1na has quit IRC | 11:15 | |
*** holser has joined #zuul | 11:37 | |
avass | Any reason why registerConnections is only ever run during startup? | 11:41 |
*** jcapitao is now known as jcapitao_lunch | 11:42 | |
avass | and not during fullReconfigure for example | 11:42 |
*** msuszko has quit IRC | 11:45 | |
*** msuszko has joined #zuul | 11:47 | |
icey | avass: doing a different label actually sounds like a very cool idea as well for entirely different reasons so I might end up doing both :-P | 11:53 |
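And a sketch of the separate-label alternative: a dedicated nodepool pool whose max-servers caps only the functional-test label, leaving lint/unit jobs on other pools unaffected. Provider, label, and image names are assumptions:

```yaml
# nodepool.yaml (OpenStack driver shown as an example)
providers:
  - name: example-cloud
    pools:
      - name: functional
        max-servers: 4          # caps only labels in this pool
        labels:
          - name: functional-test-node
            diskimage: ubuntu-bionic
            flavor-name: m1.large
```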
lyr | Hi there | 11:56 |
lyr | nodepool dib-image-delete debian-10-0000000008 tells me "Cannot delete a build in progress". How can I force this? The build is dead or stuck atm, the log file hasn't changed for 10 min | 11:57 |
lyr | Most likely my fault, as I restarted the nodepool-builder service while it was building | 11:57 |
*** rfolco has joined #zuul | 12:01 | |
avass | tobiash: is the latest patch set on aws nodepool builder up to date? | 12:05 |
avass | I want to start experimenting :) | 12:05 |
avass | jonass: ^ ? | 12:06 |
jonass | avass: yes this is up to date (documentation and tests are not yet finished though). But it is already running in our int environment :) | 12:08 |
avass | jonass: nice :9 | 12:13 |
*** jfoufas1 has joined #zuul | 12:19 | |
tobiash | lyr: if you restarted the nodepool-builder a new build should be already running. The failed build attempt should be deleted automatically after the next successful build | 12:31 |
*** jpena is now known as jpena|lunch | 12:33 | |
*** zenkuro has quit IRC | 12:41 | |
*** zenkuro has joined #zuul | 12:41 | |
*** zenkuro has quit IRC | 12:49 | |
*** zenkuro has joined #zuul | 12:49 | |
*** jcapitao_lunch is now known as jcapitao | 12:59 | |
*** sduthil has joined #zuul | 13:05 | |
*** rlandy has joined #zuul | 13:12 | |
*** jpena|lunch is now known as jpena | 13:33 | |
*** bhavikdbavishi has quit IRC | 13:35 | |
avass | jonass: any planned changes to what already exists or have you found any critical errors with it so far? | 13:46 |
avass | wondering if it's worth setting up some experimental images connected to our zuul built with that | 13:47 |
avass | otherwise it's working great, though the import through s3 is a bit slow. not sure if that could be optimized easily though | 13:47 |
jonass | avass: Concerning the import, there i would just like to change it to vhd (this is supported by amazon, and then the upload is faster as the vhd images are only half as big). But maybe i do this with a later change, as the upload time is not that big of an issue. | 13:51 |
jonass | The import task itself is unfortunately rather slow (i think more than 20 minutes). But so far, i have not seen how this could be improved. | 13:51 |
jonass | But, i have not yet actively researched that. | 13:52 |
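The format switch jonass describes is a per-diskimage setting in the nodepool-builder config; a sketch with an assumed image name:

```yaml
diskimages:
  - name: ubuntu-bionic
    formats:
      - vhd    # roughly half the size of raw, so a faster S3 import
```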
avass | the only way I know of that could be faster is to enable dib to use a chroot and snapshot the chroot | 13:52 |
jonass | Otherwise, i think it should run okay so far. | 13:52 |
avass | that should bypass s3 completely | 13:52 |
avass | yeah it's not a problem but a bit annoying | 13:53 |
avass | I mean enable dib to use an ebs mounted chroot :) | 13:53 |
avass | and snapshot the ebs | 13:53 |
jonass | Ah ok, yes that's a good point, i haven't tried yet how long this takes on AWS but that could be faster | 13:54 |
avass | but that would require nb to run in aws so the s3 option is good to have as well | 13:54 |
tobiash | avass: since that's then an aws only solution I wonder if dib is the right tool for that kind of optimization | 13:55 |
avass | yeah exactly | 13:55 |
jonass | tobiash: yes I would also agree | 13:55 |
avass | at that point it could be better to allow nodepool to use packer which already does that | 13:55 |
mordred | yeah - it could be a nodepool specific thing - have dib make the image as normal, then just have the "upload" drive mount an ebs volume, unpack the image into it, unmount it and snapshot it | 13:55 |
tobiash | if such an optimization is wanted I guess one would need to look into aws-native ways of building images and leverage those as an optional way of image building | 13:56 |
avass | mordred: or that | 13:56 |
mordred | you could have that nodepool driver really only work if the builders are running in aws - but it might not be a terrible path | 13:56 |
tobiash | mordred: ok, that would be the third option to combine dib with an optional fast upload track | 13:57 |
mordred | and you keep the benefit of dib of starting from scratch instead of from existing cloud base images | 13:57 |
openstackgerrit | Dinesh Garg proposed zuul/zuul-jobs master: Allow customization of pip and chart testing binary https://review.opendev.org/c/zuul/zuul-jobs/+/767354 | 13:57 |
mordred | you could maybe even get fancy and detect whether the builder node is running in aws, if so use the ebs path, if not use the s3 path | 14:01 |
jonass | Also a good idea, should be doable | 14:04 |
avass | that should be possible | 14:04 |
mordred | ooh. IN FACT - to pull in an idea from avass ... (although this should maybe be an advanced knob) | 14:09 |
mordred | what dib does already is "create chroot, update files in chroot, create image from chroot contents" | 14:09 |
mordred | and you can control, iirc, the location of the chroot with dib command line arguments | 14:10 |
*** wuchunyang has joined #zuul | 14:11 | |
mordred | so maybe a setup where you just pre-create the chroot dir, mount an ebs volume in it, tell dib to use it - and to not delete the chroot when it's done | 14:11 |
mordred | that way you could avoid the "create image file" and "unpack image file" steps - really only an advantage if you're _only_ working on aws - as otherwise you'd want the image files to get created as normal | 14:11 |
*** jfoufas1 has quit IRC | 14:12 | |
mordred | it might be harder because it would combine knowledge of uploading at building time - which is a bit of a layer violation in the current model | 14:12 |
*** wuchunyang has quit IRC | 14:15 | |
fungi | zbr|rover: i think we had some earlier discussions about deprecating the tarball deployment method for the tarball bits, but i don't recall if anyone identified what broke with that job | 14:17 |
zbr|rover | fungi: well, I made a simple fix for the moment. have you seen it? | 14:17 |
zbr|rover | if results is empty, avoid a failure | 14:18 |
fungi | nope, sorry, still caffeinating | 14:18 |
fungi | which one? | 14:18 |
zbr|rover | https://review.opendev.org/c/zuul/zuul-jobs/+/767286 | 14:18 |
* zbr|rover going for another coffee | 14:18 | |
avass | mordred: could be useful, but letting nodepool mount an ebs is probably going to be a big time saver anyway and I don't think packing/unpacking is gonna take long compared to that | 14:19 |
avass | and we'll probably be using both aws and azure, so you can expect azure support for nodepool builder in the future :) | 14:20 |
fungi | zbr|rover: did you see tristanC's comment on that yet? should we be skipping the upload in such cases? | 14:22 |
zbr|rover | fungi: yes, but I still think that the current behavior is unacceptable (exception). | 14:23 |
zbr|rover | if desired, I could also add a "fail" for the case where the list of artifacts is empty. | 14:23 |
fungi | zbr|rover: just wondering if uploading an empty tarball will be even worse than breaking | 14:23 |
zbr|rover | IMHO having a job that fails consistently for a long time is worse, as it creates the bad habit of ignoring failed jobs. | 14:24 |
zbr|rover | in these two weeks, we likely had >100 merges to the repo, but we did nothing. I do not think I was the only one observing that failure. | 14:25 |
zbr|rover | likely we can remove the job from running, but we should also fix the role to avoid such an ugly/confusing ansible failure | 14:26 |
fungi | i'm not disagreeing that we should fix the job, just suggesting that it's "failed safe" and not uploaded incomplete/empty tarballs which users might wind up pulling | 14:26 |
fungi | if we work around that and tell it to upload anyway even when expected files are missing, that seems like it would be broken in a much worse way | 14:27 |
mordred | avass: ++ | 14:28 |
fungi | zbr|rover: right now the breakage means users who are relying on those tarballs to fetch the javascript bits of the webui are stuck on an old stale copy, rather than getting a new broken one | 14:28 |
avass | mordred: I thought the azure driver was finished? | 14:29 |
mordred | avass: yes! it is | 14:29 |
mordred | duh | 14:29 |
zbr|rover | fungi: for the moment, I will update the role to end with a fail call if there is nothing, instead of silently passing. ok? | 14:29 |
* mordred isn't really fully here :) | 14:29 | |
avass | haha | 14:30 |
avass | mordred: though we'll see if we bump into some problems using it. I see that it doesn't have any private-ip toggle like the aws driver does, for example | 14:30 |
fungi | zbr|rover: do you happen to know why it's not building those files any longer? | 14:31 |
zbr|rover | fungi: nope, i did not have time to investigate (very busy week) | 14:31 |
openstackgerrit | Paul Belanger proposed zuul/zuul-jobs master: DNM - debug tox siblings https://review.opendev.org/c/zuul/zuul-jobs/+/767361 | 14:35 |
tobiash | avass: the azure driver is finished (without image support), however I don't know of any productive user yet, so expect problems when scaling up ;) | 14:39 |
avass | tobiash: yeah that's gonna be fun since we need azure for nested virt currently. I want to see a zuul-native solution for that but it's getting some friction from non-zuul users :( | 14:44 |
tobiash | avass: you're sure you want to use nested virt productively? | 14:44 |
avass | no | 14:44 |
tobiash | we have a half-year odyssey behind us and ended up with static nodes on bare metal for this use case | 14:45 |
avass | yeah the problem is that it's a different organization creating the tool with a completely different ci setup | 14:45 |
tobiash | avass: I can just give you this advice: don't try it on openstack, and on azure carefully read the fine print ;) | 14:46 |
tobiash | (and have a fall-back plan) | 14:48 |
avass | tobiash: yeah they're gonna be using the same solution with nested virt in azure, so our fallback plan is pointing to zuul/nodepool ;) | 14:48 |
fungi | our experience with nested virt acceleration (at least for linux) is that it is highly sensitive to the versions of the three kernels involved (host, guest, nested-guest) | 14:49 |
tobiash | exactly, poc virtual validation worked, then under load it became highly unstable with all sorts of issues embedded deep in the kernel | 14:50 |
tristanC | fungi: it seems like we don't rebuild the javascript tarball when a change doesn't modify web/ content, thus ideally we shouldn't try to promote in that situation | 14:50 |
tobiash | especially if the nested-guest is a custom ecu image -> don't even try it | 14:50 |
avass | tobiash: :) | 14:51 |
fungi | tristanC: oh, we should skip the whole job if web content doesn't change? | 14:51 |
tristanC | fungi: if that is possible yes | 14:51 |
fungi | if it's actually run in the promote pipeline (change-merged trigger) i think it should be possible. if it's run in the post pipeline (ref-updated) then file filters won't work | 14:52 |
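What tristanC suggests maps to Zuul's job.files matcher; a sketch (and, per fungi, this only helps in pipelines with change context, such as a change-merged-triggered promote pipeline):

```yaml
# Sketch only: run the promote job only when web/ content changed.
- job:
    name: opendev-promote-javascript-deployment-tarball
    files:
      - web/.*
```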
*** hashar has quit IRC | 15:03 | |
*** ikhan has quit IRC | 15:05 | |
*** ikhan has joined #zuul | 15:05 | |
openstackgerrit | Paul Belanger proposed zuul/zuul-jobs master: DNM - debug tox siblings https://review.opendev.org/c/zuul/zuul-jobs/+/767361 | 15:07 |
pabelanger | mordred: so, we have a use case of a project that isn't python (so no setup.cfg) and uses tox. I'm trying to see how to get tox siblings to still work^ any ideas on how we can skip comparing the pkg name? | 15:11 |
mordred | pabelanger: I thought we had support for non-python projects already | 15:12 |
openstackgerrit | Dinesh Garg proposed zuul/zuul-jobs master: Allow customization of pip and helm binary https://review.opendev.org/c/zuul/zuul-jobs/+/767354 | 15:12 |
mordred | pabelanger: but - maybe what we did was just add code to read setup.cfg directly to support projects without a setup.py | 15:12 |
pabelanger | Hmm, I _think_ a setup.cfg file is needed | 15:12 |
mordred | pabelanger: maybe it's ok to do a check for setup.cfg existing and if it doesn't to skip the package name comparison? | 15:13 |
pabelanger | wfm | 15:14 |
mordred | pabelanger: oh - it's more complicated than that | 15:17 |
mordred | no - nevermind. the more complicated code I was looking at was for identifying cloned repos as siblings. that does need the package to be a python package (else it can't really get installed) | 15:19 |
mordred | pabelanger: we currently bail early if there's no setup.cfg - I think we might want to add a flag to override that behavior, otherwise it's a behavior change that might have hard to understand impact somewhere | 15:20 |
pabelanger | yah, I think that is fair | 15:20 |
pabelanger | let me try to get it working first | 15:20 |
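A hedged Ansible sketch of the check mordred describes; `zuul_work_dir` is a real zuul-jobs variable, but the task layout is illustrative (the change pabelanger ends up proposing, 767361, adds a `tox_package_name` variable instead):

```yaml
- name: Check whether the project has a setup.cfg
  stat:
    path: "{{ zuul_work_dir }}/setup.cfg"
  register: setup_cfg

- name: Compare the package name against setup.cfg
  # Placeholder for the role's existing comparison logic; the
  # override flag mordred suggests would extend this condition.
  debug:
    msg: "package name comparison would run here"
  when: setup_cfg.stat.exists
```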
openstackgerrit | Paul Belanger proposed zuul/zuul-jobs master: DNM - debug tox siblings https://review.opendev.org/c/zuul/zuul-jobs/+/767361 | 15:21 |
*** jfoufas1 has joined #zuul | 15:28 | |
avass | should squashfs-tools be part of nodepool builder? | 15:31 |
avass | had to install that to get ubuntu-bionic build working. am I missing something obvious? | 15:32 |
*** bhavikdbavishi has joined #zuul | 15:32 | |
openstackgerrit | Dinesh Garg proposed zuul/zuul-jobs master: Allow customization of pip and helm binary https://review.opendev.org/c/zuul/zuul-jobs/+/767354 | 15:33 |
openstackgerrit | Dinesh Garg proposed zuul/zuul-jobs master: Allow customization of pip and helm binary https://review.opendev.org/c/zuul/zuul-jobs/+/767354 | 15:53 |
*** jpena is now known as jpena|off | 15:57 | |
fungi | pabelanger: sorry to be a contrarian, but if it's not a python project then why tox? wouldn't something like make be better for non-python cases (granted some of us have waffled for years on whether make might be better even for python cases)? | 15:58 |
*** bhavikdbavishi has quit IRC | 15:58 | |
fungi | not that i'm necessarily opposed to supporting non-python projects with our python-specific roles, just curious about the use case more than anything | 15:59 |
*** jpena|off is now known as jpena | 15:59 | |
pabelanger | fungi: mostly because of tox-siblings, I know of no other way to use depends-on (via zuul-jobs) without writing a bunch of new code | 15:59 |
fungi | oic | 15:59 |
fungi | and yeah we have some similar mechanisms for container image based jobs, but it does require a bit of finesse to implement | 16:00 |
fungi | i guess this is something which has python-based dependencies but isn't itself in python? | 16:01 |
pabelanger | right, the use-case is actually a container project that uses a python project to build the container | 16:01 |
pabelanger | so, we use tox as the entrypoint, given zuul has a lot of jobs around it, over makefiles | 16:01 |
openstackgerrit | Paul Belanger proposed zuul/zuul-jobs master: Create tox_package_name for tox role https://review.opendev.org/c/zuul/zuul-jobs/+/767361 | 16:13 |
mhu | hello zuul-maint, can we get https://review.opendev.org/c/zuul/zuul-client/+/765999 https://review.opendev.org/c/zuul/zuul-client/+/765203 https://review.opendev.org/c/zuul/zuul-client/+/765313 and https://review.opendev.org/c/zuul/zuul-client/+/765553 reviewed please? | 16:18 |
fungi | pabelanger: cool, so in that case tox-siblings makes sense, because you have python projects you need to install with it | 16:18 |
fungi | even if the triggering project is not itself in python | 16:19 |
*** hashar has joined #zuul | 16:29 | |
openstackgerrit | Paul Belanger proposed zuul/zuul-jobs master: Create tox_package_name for tox role https://review.opendev.org/c/zuul/zuul-jobs/+/767361 | 16:29 |
*** zenkuro has quit IRC | 16:32 | |
mordred | fungi: I think we've had that use case with other things ourselves - like zuul-jobs and system-config | 16:32 |
mordred | but we just added a setup.cfg :) | 16:32 |
pabelanger | yah, we've used setup.cfg for the most part too. | 16:35 |
pabelanger | but, it's difficult to explain to non-python folks why we need it in CI :) | 16:36 |
pabelanger | the good news is, they are mostly too busy loving speculative containers to fuss too much | 16:36 |
fungi | pabelanger: we could add a role which renames this_is_not_a_setup.cfg to setup.cfg and then you just insert that immediately before the tox roles? ;) | 16:40 |
fungi | .sibling-deps.ini | 16:40 |
fungi | whatever you like, the filename could even be a configurable parameter | 16:40 |
*** wuchunyang has joined #zuul | 16:59 | |
pabelanger | ++ | 17:00 |
pabelanger | unrelated, and likely wrong channel | 17:00 |
pabelanger | has anyone used poetry yet, to manage python dependencies? | 17:00 |
*** wuchunyang has quit IRC | 17:03 | |
*** rpittau is now known as rpittau|afk | 17:11 | |
tristanC | pabelanger: yeah, and i found it super convenient | 17:22 |
pabelanger | tristanC: looking to maybe create a plugin for it, to get version info out of pbr | 17:22 |
pabelanger | seems possible, by looking at something like: https://github.com/mtkennerly/poetry-dynamic-versioning | 17:23 |
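For reference, wiring up that plugin is a pyproject.toml stanza along these lines (keys as documented in the plugin's README; treat the details as an assumption):

```toml
[tool.poetry-dynamic-versioning]
enable = true
vcs = "git"
style = "pep440"
```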
*** jcapitao has quit IRC | 17:25 | |
tristanC | pabelanger: that plugin seems like the way to go | 17:26 |
tristanC | pabelanger: shouldn't we add poetry job to zuul/zuul-jobs? | 17:29 |
pabelanger | mordred: fungi: https://review.opendev.org/c/zuul/zuul-jobs/+/767361 seems to do what I need for tox siblings | 17:29 |
*** hamalq has joined #zuul | 17:30 | |
avass | mordred, pabelanger: I think it would help if pip supported something like 'pip develop' on a project, so that when you're installing that dependency it gets installed from a local version instead | 17:34 |
avass | nim does this really well: https://github.com/nim-lang/nimble#nimble-develop | 17:34 |
*** hamalq_ has joined #zuul | 17:34 | |
*** hamalq has quit IRC | 17:38 | |
tristanC | avass: i think that's one of the reason projects are using poetry instead of pip | 17:41 |
avass | tristanC: Yeah I've heard of that but haven't looked into it yet | 17:43 |
webknjaz | @pabelanger: I've had a bad experience with Poetry + the maintainer is rather unreliable. So I stick with pip + pip-tools.. | 17:45 |
avass | poetry looks a lot like npm for python though | 17:46 |
fungi | pabelanger: for clarity around the poetry discussion, you're looking for a mechanism whereby when you tell poetry to install a project from source and that project is using pbr via setuptools to generate package metadata, poetry would be able to read the additional git ref into out of the package's pbr json metadata file? | 17:46 |
pabelanger | fungi: yup, that is right | 17:46 |
webknjaz | @avass: In a way, yes. But it also makes questionable decisions that break things | 17:46 |
pabelanger | today, you have to hardcode the version string in pyproject.toml | 17:47 |
fungi | er, read the additional git ref info out of the package's pbr json metadata file | 17:47 |
fungi | pabelanger: oh, the entire version? that's a different problem, and not even pbr specific | 17:47 |
fungi | pbr embeds the version into the normal package metadata, so anything which can read the installed package version should be able to get it | 17:48 |
pabelanger | yah, ansible-builder uses poetry for reasons (I don't fully understand). But they also want to use pbr, to avoid hardcoding version strings in git, and to make releases easier | 17:48 |
fungi | the only separate pbr-specific metadata is for stuff like tracking the originating git ref id | 17:48 |
fungi | so if you can make poetry read standard package metadata to get the version, that should be all you need unless you want the extra git bits | 17:49 |
fungi | in modern python stdlib that's something like importlib.metadata.distribution(project_name).version | 17:51 |
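Spelled out, fungi's stdlib suggestion (Python 3.8+):

```python
# Reads the version that pbr (or any other build backend) wrote into
# the standard package metadata at build/install time.
from importlib.metadata import distribution, version

print(distribution("pbr").version)
print(version("pbr"))  # shorthand for the same lookup
```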
avass | tbh from what I've seen, using lockfiles for all your dependencies causes more problems than versions being bumped now and then and breaking, like pip does | 17:52 |
webknjaz | @pabelanger: there's a 3-year old feature request asking for SCM-based versioning that is still getting "me too"s. And I've even implemented a PoC for that. But the author does not allow core contributions from people who are not him and this is a known blocker. | 17:52 |
fungi | it doesn't seem like this case would need poetry to "support" scm-based versioning, just reading versions from normal package metadata (whatever builds the package just needs to write the version into the metadata) | 17:54 |
fungi | python packages have standardized metadata for precisely this reason | 17:55 |
fungi | it's an interface. things which can write the standard metadata and things which can read the standard metadata will be interoperable | 17:55 |
fungi | because... standard | 17:55 |
webknjaz | I've also heard rumors that the person isn't known for finishing the projects. And I know that now he refuses to collaborate with the rest of the packaging community unless everyone starts using his own dependency resolver. So combined with all of this, I've just given up on Poetry. | 17:56 |
*** jpena is now known as jpena|off | 18:02 | |
tristanC | webknjaz: that's unfortunate, the tool is very convenient to use though | 18:03 |
webknjaz | As usual, there's ups and downs | 18:04 |
avass | webknjaz: sounds like the project needs a fork ;) | 18:11 |
*** nils has quit IRC | 18:29 | |
*** wuchunyang has joined #zuul | 19:00 | |
*** wuchunyang has quit IRC | 19:05 | |
*** jfoufas1 has quit IRC | 19:55 | |
*** SpamapS has quit IRC | 20:53 | |
*** rfolco has quit IRC | 20:56 | |
*** vishalmanchanda has quit IRC | 20:56 | |
*** hashar has quit IRC | 21:13 | |
*** rlandy is now known as rlandy|bbl | 21:43 | |
*** wuchunyang has joined #zuul | 23:02 | |
*** wuchunyang has quit IRC | 23:06 | |
*** piotrowskim has quit IRC | 23:19 | |
ianw | any thoughts on making a type of job that is dependent on others, but always runs, even if some of the dependencies fail? like a "finally" block i guess | 23:21 |
ianw | the context is the mirror jobs, which currently only run the release job if every platform build job completes | 23:22 |
ianw | i could reorganise this without such a thing, just a thought | 23:23 |
fungi | so dependent as in ordered to run after, not dependent on the outcome | 23:33 |
corvus | ianw: yeah, i think we've batted around the idea of a 'cleanup' job | 23:38 |
ianw | fungi: yes, in essence | 23:39 |
corvus | could add another boolean flag like this one https://zuul-ci.org/docs/zuul/reference/job_def.html#attr-job.dependencies.soft | 23:40 |
ianw | yeah, that's what i was thinking. but just thinking at this stage, don't want to yak shave :) | 23:40 |
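The existing knob corvus links looks like the sketch below; note that `soft` only covers a dependency that is skipped or not run at all, so the run-even-on-failure behavior ianw wants would still need the new, so-far-hypothetical flag discussed above. Job names are illustrative:

```yaml
- job:
    name: mirror-release
    dependencies:
      - name: build-mirror-ubuntu-focal
        soft: true   # runs even if the dependency is skipped,
                     # but not (yet) when it fails
```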