opendevreview | OpenStack Proposal Bot proposed openstack/project-config master: Normalize projects.yaml https://review.opendev.org/c/openstack/project-config/+/942969 | 02:25 |
opendevreview | Merged openstack/project-config master: Normalize projects.yaml https://review.opendev.org/c/openstack/project-config/+/942969 | 03:03 |
ralonsoh | fungi, hello! I have one question: is it possible in zuul to configure a queue (with multiple jobs) to re-check them at least once if one of them fails? | 06:48 |
ralonsoh | this is because we have the check queue for neutron-tempest-plugin, that has a lot of jobs and some of them are not very stable | 06:48 |
ralonsoh | when one fails, the whole queue needs to be re-triggered | 06:49 |
fungi | ralonsoh: no, zuul is designed to run jobs that pass consistently and ideally only fail when there's a bug. openstack has a history of recheck-spamming changes that introduce new nondeterministic bugs until they merge, so the assumption is that you're identifying the problems in each failure and removing tests or jobs if they aren't reliable | 13:31 |
ralonsoh | fungi, makes sense, thanks! | 15:48 |
clarkb | removing tests, or removing jobs, or fixing the problems | 15:49 |
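The usual remedy fungi and clarkb allude to is to mark an unstable job non-voting while it is investigated, so a flaky failure no longer blocks the change or forces a recheck of the whole queue. A minimal sketch, assuming a hypothetical job name in neutron-tempest-plugin's .zuul.yaml:

```yaml
# Hypothetical fragment; the job name is illustrative. A non-voting job
# still runs and reports its result, but its failure does not block the
# change or restart the rest of the queue.
- project:
    check:
      jobs:
        - neutron-tempest-plugin-flaky-scenario:
            voting: false
```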
opendevreview | Jeremy Stanley proposed openstack/project-config master: Wind down/clean up Rackspace Flex SJC3 resources https://review.opendev.org/c/openstack/project-config/+/943065 | 21:23 |
M0weng[m] | Not sure if this is the right place to ask about this, but it seems like the last three keystone runs have all failed on the `keystone-dsvm-py3-functional-federation-ubuntu-jammy-k2k` job for the same reason:... (full message at <https://matrix.org/oftc/media/v1/media/download/AcRrcoC6WfpWG-3Kw-gccNvFgD0i60MJkz_HerbOnEKx_pseJD6r540tRozYa2rcFnT_KB4tbxpU17jbFAamlGlCeVlu61OQAG1hdHJpeC5vcmcvZnZqVUpGUlp4dm9CeW96S1dXQURSQWNH>) | 23:13 |
M0weng[m] | (this is for the openstack/keystone queue of the gate pipeline) | 23:14 |
M0weng[m] | * (this is for the openstack/keystone queue of the gate and check pipelines) | 23:14 |
clarkb | M0weng[m]: note your first message was too long for irc so got bundled up in a url to click (keeping them under 512 bytes avoids this). Can you link to the logfile you collected that from? | 23:15 |
clarkb | (thats just a general habit I'm trying to get people into when discussing failures) | 23:15 |
clarkb | I'm not sure this is the correct place. Seems like osc made some change requiring a new parameter and devstack can't handle it? | 23:16 |
M0weng[m] | Ohh ok, I didn't know that! Yes here's an example: https://zuul.opendev.org/t/openstack/build/463acae9c078449eb7d67469e139773e | 23:16 |
clarkb | thanks! | 23:16 |
clarkb | this is the command that was run: `oscwrap role add --group 2149986712c5497789c670dd42852f17 --domain 00f93dba92fa47d9b8c49f9f65cd89e7` (which proxies to a running osc process) and the help output says that command needs a role to be provided | 23:17 |
clarkb | openstackclient did get a release two days ago | 23:17 |
clarkb | I'm wondering if this is a new requirement of the new release | 23:17 |
M0weng[m] | Ahh I think I worked on that openstackclient change actually haha... | 23:20 |
M0weng[m] | I think role add was always supposed to have required a role as a positional argument, but possibly it wasn't being enforced before... | 23:22 |
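For context, the difference between the logged invocation and what openstackclient's `role add` expects: the role must be supplied as a positional argument ahead of the options. A sketch using the IDs from the log (the role name "member" is an assumption, suggested by the `member_role` variable discussed just below):

```shell
# The command from the failing build -- no role between "add" and the options:
oscwrap role add --group 2149986712c5497789c670dd42852f17 \
    --domain 00f93dba92fa47d9b8c49f9f65cd89e7

# What the client expects -- a role name or ID as the positional argument
# ("member" is illustrative):
openstack role add member \
    --group 2149986712c5497789c670dd42852f17 \
    --domain 00f93dba92fa47d9b8c49f9f65cd89e7
```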
clarkb | on the line above it says local member_role= | 23:25 |
clarkb | maybe that variable stopped getting set? | 23:25 |
clarkb | or maybe it was never set before and until it was enforced it was not a problem | 23:26 |
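A minimal bash sketch of that hypothesis (the function name is hypothetical and the empty assignment stands in for whatever lookup failed to populate the variable): an unquoted empty variable expands to zero words, so the positional role silently disappears from the command.

```bash
# Illustrative only: reproduces the suspected failure mode around the
# "local member_role=" line quoted from the devstack log above.
grant_member_role() {
    local member_role=""   # stand-in for a lookup that returned nothing
    # Unquoted and empty, $member_role expands to nothing, so this runs as:
    #   openstack role add --group <id> --domain <id>
    # Older openstackclient tolerated the missing positional role;
    # the new release enforces it and the job fails.
    openstack role add $member_role --group "$1" --domain "$2"
}
```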
clarkb | I need to pop out now and enjoy some sunlight before it is gone. But I would look and see why member_role is not being set and fix that as the likely path forward | 23:27 |
M0weng[m] | Enjoy the sunshine! :) | 23:27 |