@clarkb:matrix.org | I don't think it can hurt to have options. I personally haven't bothered with uv yet because it lacked a number of features last I looked, but those may not be important to everyone | 00:02 |
---|---|---|
@clarkb:matrix.org | Seems like it does support a lot more pip functionality now. I wonder if it respects constraints rules | 00:09 |
@fungicide:matrix.org | uv (and poetry) seem to have more of a focus on lockfiles based on what i've seen discussed in the lockfiles pep, so it wouldn't surprise me if they don't see much importance in pip's constraints functionality | 00:11 |
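For readers unfamiliar with the feature being compared, here is a small sketch of how pip constraints behave (file name is arbitrary; the uv flag shown is its pip-compatible equivalent):

```shell
# A constraints file caps versions but never causes anything to be
# installed by itself.
cat > constraints.txt <<'EOF'
requests<3
urllib3!=2.0.2
EOF
# Only explicitly requested packages get installed; constraints just bound
# their versions if they appear anywhere in the dependency tree:
#   pip install -c constraints.txt requests
# uv's pip-compatible interface accepts the same flag:
#   uv pip install -c constraints.txt requests
```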
-@gerrit:opendev.org- Zuul merged on behalf of Felix Edel: [zuul/zuul] 937876: Incorporate tenants stats in Toolbar section https://review.opendev.org/c/zuul/zuul/+/937876 | 11:23 | |
@fungicide:matrix.org | mordred: i suspect this task didn't expect cpython's free-threading "t" suffixed versions: https://zuul.opendev.org/t/openstack/build/2d57f006e1b548f3954d9780c7adf346/console#3/0/11/ubuntu-noble | 14:33 |
@fungicide:matrix.org | any opinion on whether we should directly filter them out, change the sort, or what? | 14:34 |
@fungicide:matrix.org | it looks like tox has decided that py313 shouldn't use the 3.13.1t that's being installed there (per the failure in that build) | 14:35 |
@joao15130:matrix.org | Hello, During the execution of the ansible playbook, I've observed some messages like below: | 15:29 |
2025-01-24 15:26:01.763247 | localhost | Timeout exception waiting for the logger. Please check connectivity to ... | ||
The connectivity is fine, and it looks like it happens only when it needs the logger. | ||
Any idea? | ||
@clarkb:matrix.org | Jean Pierre Roquesalane: this probably means you aren't running the logger subsystem. There is a role for that in zuul-jobs: start-zuul-console | 15:49 |
@clarkb:matrix.org | fungi: those are python builds without the gil enabled? seems like a tox bug? | 15:50 |
@joao15130:matrix.org | this is the playbook that is run: | 15:50 |
[root@zuul zuul-config]# cat playbooks/base/pre.yaml | ||
- hosts: localhost | ||
roles: | ||
- role: emit-job-header | ||
zuul_log_path_shard_build: true | ||
- ensure-output-dirs | ||
- hosts: all | ||
pre_tasks: | ||
- name: Start zuul console daemon | ||
zuul_console: | ||
roles: | ||
- add-build-sshkey | ||
- prepare-workspace-git | ||
- validate-host | ||
- log-inventory | ||
@joao15130:matrix.org | and it sometimes worked on static VMs; I'm observing this now that I'm using a nodepool-based VM | 15:51 |
@joao15130:matrix.org | on an openstack cloud | 15:51 |
@clarkb:matrix.org | it could just be a race. You're running stuff before you start the console which creates a block of time where it won't be able to connect. Does the message go away and you start to get the console afterwards? | 15:52 |
@joao15130:matrix.org | the start console is done at the very beginning before that ansible execution | 15:52 |
@clarkb:matrix.org | in your example two roles are run before you start the console | 15:53 |
@joao15130:matrix.org | I'd like to debug it but I don't know how I can tell nodepool not to delete the instance at the end of the job | 15:53 |
@joao15130:matrix.org | yes that's true | 15:53 |
@joao15130:matrix.org | you suggest to move at the very first line? | 15:54 |
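The reordering Clark is pointing at could look something like this sketch (role names taken from the playbook pasted above; the console daemon gets its own play before anything else runs):

```yaml
- hosts: all
  pre_tasks:
    - name: Start zuul console daemon
      zuul_console:

- hosts: localhost
  roles:
    - role: emit-job-header
      zuul_log_path_shard_build: true
    - ensure-output-dirs

- hosts: all
  roles:
    - add-build-sshkey
    - prepare-workspace-git
    - validate-host
    - log-inventory
```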
@clarkb:matrix.org | you can use the zuul autohold feature (either via the web ui if you have login setup there or the zuul-client) to request zuul and nodepool not delete nodes when jobs fail | 15:54 |
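A hypothetical zuul-client invocation for this (tenant/project/job names are placeholders, and it needs admin credentials against your scheduler's web API; the command is built into a variable and echoed so this sketch is safe to run as-is):

```shell
# Ask Zuul/Nodepool to keep the node alive after the next failure of this job.
# Remove the echo and run the command directly once credentials are set up.
cmd='zuul-client --zuul-url http://localhost:9000 autohold --tenant example-tenant --project example-project --job failing-job --reason "debugging logger timeout" --count 1'
echo "$cmd"
```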
@dfajfer:fsfe.org | I have it when the job starts and after it finishes when the logs are already there | 15:54 |
@dfajfer:fsfe.org | but it's like Clark says | 15:54 |
@clarkb:matrix.org | Not necessarily; it's an asynchronous system, so sometimes it is ok to accept connection retries | 15:54 |
@joao15130:matrix.org | ok but I couldn't find a way to make it work as of now | 15:55 |
@joao15130:matrix.org | I don't see the create request button in the autohold tab on the Zuul UI | 16:01 |
@clarkb:matrix.org | are you logged in with admin credentials? | 16:02 |
@joao15130:matrix.org | I'm just into the UI on port 9000 | 16:02 |
@joao15130:matrix.org | I'm using the UI which comes as part of the quick-start tutorial | 16:03 |
@joao15130:matrix.org | it may be limited in terms of features... | 16:03 |
@clarkb:matrix.org | There is an add on tutorial to set up with keycloak to provide authentication | 16:04 |
@joao15130:matrix.org | Ok I'll look into it | 16:04 |
@joao15130:matrix.org | other than that, no idea on how to overcome the timeout issue? | 16:04 |
@clarkb:matrix.org | https://zuul-ci.org/docs/zuul/latest/tutorials/keycloak.html | 16:05 |
@clarkb:matrix.org | not without more information. It could be firewalls (are the ports open between you and the VMs?) could be timing. Maybe the console system is crashing for some reason | 16:05 |
@joao15130:matrix.org | ok | 16:05 |
@joao15130:matrix.org | all ports are open | 16:06 |
@joao15130:matrix.org | I think I know where it comes from | 16:14 |
@joao15130:matrix.org | my openstack cloud has two networks | 16:14 |
@joao15130:matrix.org | one private on which the VM is hooked up | 16:14 |
@joao15130:matrix.org | one public on which we assign floating IPs | 16:14 |
@joao15130:matrix.org | nodepool creates VM on this cloud | 16:15 |
@joao15130:matrix.org | But I think that somewhere in the process there is a misconfiguration | 16:15 |
@joao15130:matrix.org | Maybe I need to connect the VMs directly to the public network instead of using the floating IP mechanism | 16:16 |
@joao15130:matrix.org | because at the end the IP of the VM from inside the system is the private IP and not the public one | 16:16 |
@joao15130:matrix.org | that's the whole point of a floating IP | 16:16 |
@joao15130:matrix.org | do you think this kind of network layout can impact Zuul? | 16:17 |
@clarkb:matrix.org | If zuul is talking to the floating IP it should work. But if it is trying to talk to the private ip that could explain it | 16:18 |
@joao15130:matrix.org | Don't know | 16:18 |
@joao15130:matrix.org | I'm observing that devstack has the floating IP in the configuration file | 16:18 |
@joao15130:matrix.org | instead of the private | 16:18 |
@joao15130:matrix.org | and it breaks devstack installation | 16:19 |
@joao15130:matrix.org | I'm wondering if that can harm zuul as well | 16:19 |
@clarkb:matrix.org | we (OpenDev) run zuul against at least one cloud using floating IPs | 16:19 |
@clarkb:matrix.org | in general it should work | 16:19 |
@dfajfer:fsfe.org | i'm pretty sure i use it the way you do Jean and it just works | 16:20 |
@joao15130:matrix.org | hmm, can it be a configuration difference between yours and mine | 16:20 |
@joao15130:matrix.org | because devstack is triggered with public IPs inside the configuration file | 16:21 |
@joao15130:matrix.org | which is unknown from the VM | 16:21 |
@joao15130:matrix.org | which comes from here in the job definition: | 16:22 |
devstack_localrc: | ||
SERVICE_HOST: "{{ hostvars['controller']['nodepool']['public_ipv4'] }}" | ||
HOST_IP: "{{ hostvars['controller']['nodepool']['public_ipv4'] }}" | ||
#PUBLIC_BRIDGE_MTU: '{{ external_bridge_mtu }}' | ||
DATABASE_TYPE: mysql | ||
RABBIT_HOST: "{{ hostvars['controller']['nodepool']['public_ipv4'] }}" | ||
DATABASE_HOST: "{{ hostvars['controller']['nodepool']['public_ipv4'] }}" | ||
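If the guest itself only knows its private address, one thing to try is pointing these settings at nodepool's `private_ipv4` instead (the generated inventory typically carries both `public_ipv4` and `private_ipv4`, assuming the provider fills them in); a hedged sketch of that change:

```yaml
devstack_localrc:
  SERVICE_HOST: "{{ hostvars['controller']['nodepool']['private_ipv4'] }}"
  HOST_IP: "{{ hostvars['controller']['nodepool']['private_ipv4'] }}"
  RABBIT_HOST: "{{ hostvars['controller']['nodepool']['private_ipv4'] }}"
  DATABASE_HOST: "{{ hostvars['controller']['nodepool']['private_ipv4'] }}"
```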
@fungicide:matrix.org | clarkb: not so sure it's a bug in tox, i think pyenv is setting up a python3.13t executable rather than python3.13, tox probably needs something like a py313t testenv to use that, but more generally i think it's a bug in our pyenv role that we're preferring the temporary t-suffixed versions over normal python builds | 16:46 |
@clarkb:matrix.org | fungi: do you think tox -e py313t would work? if so then ya I guess it isn't a tox bug. Also I didn't think our pyenv role prefers no gil over gil. That might just be pyenv if you ask it for 3.13? | 16:47 |
@fungicide:matrix.org | also i think this is the experimental free-threading variant, not the nogil variant | 16:48 |
@fungicide:matrix.org | or do those wind up being the same thing | 16:48 |
@fungicide:matrix.org | ah, yeah, they're the same thing. right | 16:49 |
@fungicide:matrix.org | i was confusing that with the jit variant | 16:49 |
@clarkb:matrix.org | https://github.com/pyenv/pyenv/issues/3015 seems related | 16:49 |
@fungicide:matrix.org | agreed | 16:50 |
@clarkb:matrix.org | it is possible that pyenv may not be useful here anymore if it cannot reliably build the python we want (though the issue doesn't really say that as much as it says it produces confusing results) | 16:51 |
@clarkb:matrix.org | side note when looking at uv yesterday it apparently does pyenv like things now too | 16:51 |
@fungicide:matrix.org | but anyway, our role should ideally be telling pyenv to build 3.13.1 rather than 3.13.1t when 3.13 is requested | 16:51 |
@clarkb:matrix.org | fungi: ya my read though is that it should be producing 3.13 and 3.13t, and they might just be the same thing. However they may not have updated that issue after disambiguating | 16:51 |
@fungicide:matrix.org | our ansible asks pyenv for a list of available versions, filters them for ^3.13, sorts, then picks the last (highest) result, which is the one with the t on the end rather than the one without | 16:52 |
@clarkb:matrix.org | so we probably need to filter out the t variants if no t is present in our original version request | 16:53 |
@fungicide:matrix.org | right, that's my take as well | 16:53 |
@clarkb:matrix.org | it's also possible that filtering doesn't work if you do supply 3.13t | 16:53 |
@fungicide:matrix.org | we'd have to do some less naive parsing of the requested base version, or add a separate field or something | 16:54 |
@fungicide:matrix.org | but also i'm wary of over-engineering a solution when cpython upstream has stated that these "t" versions are a temporary thing | 16:55 |
@clarkb:matrix.org | in that case maybe just do a grep -v '.*t$' before piping to tail | 16:55 |
@clarkb:matrix.org | and say the role doesn't support no gil builds for now | 16:56 |
@fungicide:matrix.org | yeah, at least in the short term that ought to address the recent breakage for 3.13 jobs | 16:58 |
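The selection logic being discussed can be simulated against a fake version list (the real role queries `pyenv install --list`; the three versions and the `sort -V` step here are stand-ins for illustration):

```shell
# Simulated `pyenv install --list` output (the real list is much longer).
versions='3.13.0
3.13.1
3.13.1t'
# Current behavior: filter for the requested prefix, sort, take the highest,
# which picks up the free-threaded "t" build:
old=$(printf '%s\n' "$versions" | grep '^3\.13' | sort -V | tail -n1)
# With Clark's proposed grep -v, the "t" variants drop out before the tail:
new=$(printf '%s\n' "$versions" | grep '^3\.13' | grep -v 't$' | sort -V | tail -n1)
echo "$old"   # 3.13.1t
echo "$new"   # 3.13.1
```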
@fungicide:matrix.org | i'll propose that and see what others think | 16:58 |
-@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed: [zuul/zuul-jobs] 940158: ensure-python: Skip "t" versions in pyenv https://review.opendev.org/c/zuul/zuul-jobs/+/940158 | 17:06 | |
@clarkb:matrix.org | as a quick hack to make things work that makes sense to me. Then followups can add support for the special builds if people are interested in them | 17:07 |
@fungicide:matrix.org | https://review.opendev.org/c/openstack/pbr/+/940117 should hopefully tell us if it worked | 17:09 |
@fungicide:matrix.org | looks like it has (the previously failing openstack-tox-py313 job succeeded with depends-on, though the buildset is still about half an hour out from reporting back on the change) | 18:16 |
-@gerrit:opendev.org- Brian Haley proposed: [zuul/zuul-jobs] 940074: Update ensure-twine role https://review.opendev.org/c/zuul/zuul-jobs/+/940074 | 23:08 | |
-@gerrit:opendev.org- Brian Haley proposed: [zuul/zuul-jobs] 940074: Update ensure-twine role https://review.opendev.org/c/zuul/zuul-jobs/+/940074 | 23:18 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!