opendevreview | OpenStack Proposal Bot proposed openstack/ironic master: Imported Translations from Zanata https://review.opendev.org/c/openstack/ironic/+/954844 | 03:48 |
rpittau | good morning ironic! o/ | 06:41 |
-opendevstatus- NOTICE: the gerrit service (https://review.opendev.org) is currently down, please be patient while we work on restoring it | 07:34 | |
queensly[m] | Good morning | 07:58 |
dtantsur | TheJulia: nice! would you prefer to keep my patch stacked? I'd opt for finishing it on master and merging since it's nearly ready, but would like to hear from you. | 09:16 |
rpittau | Everyone please be aware that I'm going to remove the -W to tkajinam patches to remove support for python 3.9, meaning after those patches merge python 3.9 can't be used anymore | 10:46 |
rpittau | Please let me know if you have anything against it before EOW | 10:46 |
dtantsur | rpittau: we've migrated bifrost, haven't we? | 10:47 |
tkajinam | gerrit is down so we still have some time to wait for any last-minute feedback :-) | 10:47 |
tkajinam | https://opendev.org/openstack/bifrost/src/branch/master/zuul.d/project.yaml | 10:49 |
tkajinam | it seems it still runs some jammy job | 10:49 |
tkajinam | * jobs | 10:49 |
rpittau | dtantsur: we should be running everything with python>=3.10 at the moment, but I will double check | 10:49 |
rpittau | tkajinam: yep, those are leftovers | 10:49 |
tkajinam | yeah | 10:50 |
rpittau | tkajinam: if you have time for a patch I will approve it, otherwise I'll check later today or tomorrow | 10:50 |
tkajinam | rpittau, ack. will ping you in case I push something. it may depend on the recovery time of gerrit, though | 10:51 |
tkajinam | I hope it comes back online soon | 10:51 |
tkajinam | (seeing the discussion in #opendev) | 10:51 |
rpittau | tkajinam: ack thanks | 10:52 |
*** iurygregory_ is now known as iurygregory | 11:15 | |
iurygregory | gerrit is on pto? | 11:15 |
iurygregory | Service Unavailable | 11:16 |
hjensas_ | iurygregory: There was a thing "NOTICE: the gerrit service (https://review.opendev.org) is currently down, please be patient while we work on restoring it" - it's taking some time of absence. :) | 11:25 |
*** hjensas_ is now known as hjensas | 11:25 | |
iurygregory | hjensas, we forgot to give cookies to gerrit =( | 11:26 |
-opendevstatus- NOTICE: the gerrit service (https://review.opendev.org) is back up. We believe the restoration is complete. If you notice any issues please report them in #opendev ASAP | 11:55 | |
rpittau | and gerrit is back :) | 12:16 |
TheJulia | dtantsur: keep going with your patch, we can layer on top. I'm seeing some really odd threading behavior, but it very well may also just be the size of my fake nodes (~5005). | 13:12 |
TheJulia | dtantsur: A few thoughts, we do need to reduce/set a default stack size to something more sane. With threading the virtual size ballooned, but luckily actual usage wasn't really *that* different. | 13:14 |
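[Editor's aside, not part of the log: a minimal stdlib illustration of capping the per-thread stack size, which addresses the "virtual size ballooned" observation above. On Linux each thread typically reserves several MiB of virtual address space for its stack by default, so many threads inflate VSZ even when RSS barely moves. The 512 KiB value is arbitrary for the demo, not Ironic's actual setting.]

```python
import threading

# Cap the stack for threads created after this call. The default
# per-thread stack reservation (often 8 MiB of virtual memory on
# Linux) is what makes VSZ balloon when spawning many threads,
# even though resident usage stays much lower.
threading.stack_size(512 * 1024)  # 512 KiB, illustrative value

results = []
t = threading.Thread(target=lambda: results.append("ok"))
t.start()
t.join()
print(results[0])
```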
TheJulia | I think part of my issue is the takeover code path and what I did to... simulate remote BMCs. I'm sort of hoping my local conductor has chilled out since I left it running all night. :) | 13:21 |
opendevreview | Riccardo Pittau proposed openstack/bifrost master: Remove Python 3.9 upper constraints pin https://review.opendev.org/c/openstack/bifrost/+/955061 | 13:21 |
TheJulia | it chilled out, still weirdness: Jul 15 13:18:13 ubu ironic[34965]: 2025-07-15 13:18:13.704 34965 WARNING ironic.conductor.manager [None req-b45d2723-337d-4353-b2ab-1a8d189dc1d0 - - - - - -] There are no more conductor workers for power sync task. 3 workers have been already spawned.: ironic.common.exception.NoFreeConductorWorker: Requested action cannot be performed due to lack of free conductor workers. | 13:22 |
TheJulia | :( | 13:22 |
dtantsur | TheJulia: probably instrument spawn_worker with something that logs thread creation, thread deletion and how long it took? We can even leave it there behind a new flag. | 13:32 |
TheJulia | Yeah, that is what I was thinking. I'm a little annoyed by takeover, but since that entire process finished, the other weirdness continues. It's sort of a chess board where I have to feel the pieces to move them around, but I'm somehow wearing gloves at the moment. | 13:35 |
TheJulia | The other weird thing, api, creating a node, all *feel* faster | 13:42 |
TheJulia | it's just... weird | 13:42 |
dtantsur | magic | 13:50 |
TheJulia | heh | 13:56 |
dtantsur | I only need to transfer the Projects directory to the new laptop, and I'll be ready to code again. Hopefully. If nothing else breaks. | 14:07 |
guilhermesp | hello team! it's me again, losing the rest of the sanity I still have with those Lenovo machines lol. This time I think it might be a bug (?) with bios.{factory_reset,apply_configuration} and redfish... it's intermittent... has anyone seen this? https://gist.githubusercontent.com/guilhermesteinmuller/aa219edd07e8e2ddbdbda92451e29083/raw/7eb426c789b831a91d8824fb34b5de6df3aa411b/gistfile1.txt --- I mean... the deploy image is | 14:19 |
guilhermesp | there, and this is just happening on bios steps | 14:19 |
dtantsur | cardoe: does my memory serve me well: you were interested in asyncio for sushy? | 14:49 |
cardoe | yeah | 14:50 |
dtantsur | cardoe: I'm in a discussion about scaling sensor collection with sushy, and it does not look like we can handle it without asyncio | 14:51 |
dtantsur | like, 3500 nodes, response time averaging at 5 seconds, try to do it with a reasonable number of OS threads | 14:51 |
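[Editor's aside, not part of the log: the arithmetic here is stark — 3500 nodes at ~5 s per response is ~17,500 thread-seconds of mostly idle waiting per sweep, so with a pool of N blocking threads a full sweep takes roughly 17500/N seconds. With asyncio the waits overlap and a sweep costs about one response time per concurrency batch. A stdlib-only toy (simulated BMCs, no sushy or httpx, scaled-down numbers) showing a bounded-concurrency sweep:]

```python
import asyncio
import time

NODES = 200        # scaled down from 3500 for the demo
LATENCY = 0.05     # scaled down from ~5 s per BMC response
CONCURRENCY = 100  # cap on in-flight requests

async def poll_bmc(node, sem):
    """Pretend to fetch sensor data from one BMC."""
    async with sem:
        await asyncio.sleep(LATENCY)  # the I/O wait where an OS thread would block
        return node, "ok"

async def sweep():
    sem = asyncio.Semaphore(CONCURRENCY)
    return await asyncio.gather(*(poll_bmc(n, sem) for n in range(NODES)))

start = time.monotonic()
results = asyncio.run(sweep())
elapsed = time.monotonic() - start
# Serially this would take NODES * LATENCY = 10 s; overlapped it takes
# roughly ceil(NODES / CONCURRENCY) * LATENCY plus overhead.
print(f"{len(results)} nodes in {elapsed:.2f}s")
```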
dtantsur | cardoe: do I also recall it right that you had some ideas about hybrid sync/async sushy? like, keep compatibility? | 14:54 |
opendevreview | Riccardo Pittau proposed openstack/bifrost master: Remove Ubuntu Jammy jobs https://review.opendev.org/c/openstack/bifrost/+/955077 | 14:54 |
TheJulia | dtantsur: This is going to seem a little weird, but I'm starting to suspect a lot of the weirdness is around the code pattern in _spawn_worker itself... specifically trying to return the submit call. Which seems crazy, but splitting it apart slightly changes behavior drastically | 14:59 |
dtantsur | hmm | 14:59 |
TheJulia | FWIW, I'm going to leave the process running for a while, gather a bit more data | 15:00 |
* TheJulia raises an eyebrow | 15:03 | |
TheJulia | yeeeahhhhh.. it's running in the same time span now for checks as it was with eventlet, with a super simple change | 15:07 |
TheJulia | I guess I'll check all handling around thread/worker invocations; it likely explains why we're slowly ramping up to 300 workers. I'll just let it idle long enough to see if it logs an inability to launch another worker | 15:09 |
cardoe | dtantsur: yeah. If we use the httpx library it’s easy. Not sure if we got that into global-requirements yet. | 15:17 |
dtantsur | Okay, thanks! I'm curious to build some sort of a PoC this week. Do you have anything already? | 15:18 |
TheJulia | cardoe: it's in g-r | 15:25 |
dtantsur | \o/ | 15:25 |
dtantsur | I wonder if any team is already using it for the same purpose | 15:25 |
dtantsur | skyline-apiserver, hmm, okay | 15:33 |
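[Editor's aside, not part of the log: one way the "hybrid sync/async" compatibility idea discussed above can look — a single async core with a thin sync wrapper, so existing blocking callers keep working while new code awaits. This is a generic sketch with invented names, not sushy's or httpx's actual design; for reference, httpx itself ships separate sync (`httpx.Client`) and async (`httpx.AsyncClient`) clients rather than one facade.]

```python
import asyncio

class SensorClient:
    """Async core with a sync compatibility wrapper (hypothetical)."""

    async def get_sensors_async(self, node):
        # Real code would await an HTTP call to the BMC here (e.g. via
        # an async HTTP client); we simulate the I/O wait.
        await asyncio.sleep(0.01)
        return {"node": node, "temp_c": 42}

    def get_sensors(self, node):
        # Sync facade: drive the coroutine to completion on a private
        # event loop. Only safe when no loop is already running in
        # this thread, which is the legacy-caller case it targets.
        return asyncio.run(self.get_sensors_async(node))

client = SensorClient()
print(client.get_sensors("node-1"))                      # legacy blocking call
print(asyncio.run(client.get_sensors_async("node-2")))   # new async call
```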
opendevreview | Riccardo Pittau proposed openstack/bifrost master: Remove Ubuntu Jammy jobs https://review.opendev.org/c/openstack/bifrost/+/955077 | 15:52 |
clif | Has anyone run into this error when trying to run "ironic-dbsync" under devstack? : "sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.plugins:dbcounter" | 16:01 |
clif | I'm having trouble diagnosing this as very few places seem to reference "dbcounter" | 16:02 |
clif | ironic's codebase seems to have no mention of it | 16:03 |
opendevreview | Verification of a change to openstack/ironic-python-agent master failed: Fix software RAID creation on different physical devices https://review.opendev.org/c/openstack/ironic-python-agent/+/953122 | 16:11 |
clif | perhaps my problem is in my approach: I'm trying to create a db migration for my port model change, I thought I'd try to use devstack to create it since there should be a migrated db stood up after stack.sh finished. I used "tox -epy3 --devenv" to create a virtualenv which I could accomplish this under. | 16:11 |
clif | aha, it is part of devstack, installing it in my virtualenv solved this, rubber ducking ftw | 16:15 |
TheJulia | clif: ugh, yeah. That is plugin from devstack. I'd honestly just turn it off, there is a devstack setting for that. | 16:27 |
opendevreview | Dmitry Tantsur proposed openstack/ironic master: [WIP] Switch from local RPC to automated JSON RPC on localhost https://review.opendev.org/c/openstack/ironic/+/954755 | 16:51 |
dtantsur | started working on unit tests ^^ and cleaned up the code a bit | 16:52 |
TheJulia | If we set an upper bound on the runtime of a power sync worker, what do we think it should be? | 18:11 |
TheJulia | It *looks* like the issue I've been chasing is the power sync worker self-exhausting the pool, and it's eventually hitting itself. | 18:26 |
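[Editor's aside, not part of the log: a minimal stdlib reproduction of the failure shape described here — a task on a bounded pool submits follow-up work to the same pool and blocks waiting for it. With every worker occupied, the inner task can never start, so the outer task starves itself. The names are illustrative, not Ironic's code.]

```python
import concurrent.futures
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=1)  # a tiny 'conductor worker pool'

def power_sync_task():
    # The sync task spawns sub-work on the same pool...
    inner = pool.submit(lambda: "power state")
    # ...and blocks waiting for it. We occupy the only worker, so the
    # inner task sits queued forever: the pool has exhausted itself.
    return inner.result(timeout=0.5)

outer = pool.submit(power_sync_task)
try:
    outer.result(timeout=2)
    outcome = "completed"
except concurrent.futures.TimeoutError:
    outcome = "starved"
print(outcome)
pool.shutdown(wait=True)
```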
opendevreview | Merged openstack/ironic-python-agent master: Fix software RAID creation on different physical devices https://review.opendev.org/c/openstack/ironic-python-agent/+/953122 | 18:29 |
TheJulia | but, in some ways, I'm also running in a "large scale" painful state for a deployment | 18:30 |
opendevreview | Julia Kreger proposed openstack/ironic master: trivial: fix benchmark data generation script https://review.opendev.org/c/openstack/ironic/+/955099 | 20:13 |
Generated by irclog2html.py 4.0.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!