arne_wiebalck | Snowy good morning, Ironic! | 07:41 |
---|---|---|
rpittau | almost snowy good morning ironic! o/ | 08:13 |
ftarasenko | Gm! :) | 08:13 |
ftarasenko | ajya: I have a question for you regarding https://review.opendev.org/c/openstack/sushy/+/864845 It looks like it does not help clean the server, and it waits for a long time. Can you check the log provided at https://paste.openstack.org/show/bFsw5RZJGUuCeYZsHIcg/ | 08:13 |
rpittau | wondering if the tox issue has been fixed | 08:18 |
ajya | Good morning Ironic | 08:19 |
ajya | ftarasenko: the errors don't seem to be directly related to the issue, but maybe they are somehow caused by it. It looks like the periodic task is failing to check that the operation has completed successfully, but it shouldn't be doing that here at all (it's waiting for the task to complete on the sushy side instead). I'll try to reproduce it. | 08:22 |
ftarasenko | ajay: I'm waiting for cleaning to finish from last evening - more than 12h( Do you need more info or this is enough? | 08:23 |
ftarasenko | ajya: sorry) ^ | 08:23 |
ajya | ftarasenko: definitely not, it should time out as configured (10-20 minutes?), but it seems that because of these periodic errors the timeout is not triggering. That's a separate thing that could be improved. | 08:25 |
rpittau | still not fixed everywhere https://review.opendev.org/q/topic:bug%252F1999183 | 09:49 |
ajya | ftarasenko: I was able to reproduce the "'NoneType' object is not iterable" error when RAID cleaning runs more than once on the same node. The cause is that it didn't delete parts of driver internal info after the 1st run. I'll work on a patch to fix this. But cleaning still completed. Maybe there is another error somewhere blocking cleaning? | 10:55 |
ajya | This issue is not related to the OperationApplyTime fix; it is pre-existing. | 10:55 |
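[Editor's note: a minimal sketch of the failure mode ajya describes above, using hypothetical names rather than the actual Ironic code or the actual patch. A key left behind in the node's driver internal info after the first cleaning run comes back as None on the second run, and iterating over it raises the TypeError:]

```python
def finish_raid_cleaning(driver_internal_info):
    """Hypothetical sketch: 'raid_configs' is left behind by the first
    cleaning run and may come back as None on the second run."""
    raid_configs = driver_internal_info.get('raid_configs')
    # Buggy variant: `for config in raid_configs:` raises
    # "TypeError: 'NoneType' object is not iterable" when the value
    # is None. Guarding with an empty default avoids the crash:
    for config in raid_configs or []:
        pass  # process each pending RAID config here
    # Deleting the stale key after use prevents the second-run error:
    driver_internal_info.pop('raid_configs', None)
    return driver_internal_info
```

The guard handles a None value defensively; removing the stale key after the run is what prevents it from being seen again at all.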
ftarasenko | ajya: I see nothing more in the Ironic logs. I have the Ironic Zed version installed. | 10:56 |
ajya | ftarasenko: can you try again with a fresh node, just to see if it's different? | 11:00 |
ftarasenko | ajya, yep. Can I try to re-add the same node, or do I need to find a new one? | 11:08 |
ajya | ftarasenko: can use the same | 11:10 |
ftarasenko | ajya, I'll try now, but I think that I did the same already. Should I turn on debug logs in the conductor, or is that not required? | 11:12 |
iurygregory | morning Ironic | 11:24 |
ajya | ftarasenko: turn them on, and let's see what sushy does | 11:27 |
ajya | hi iurygregory | 11:28 |
iurygregory | ajya, o/ | 11:38 |
opendevreview | Slawek Kaplonski proposed openstack/ironic master: [grenade] Explicitly enable Neutron ML2/OVS services in the CI job https://review.opendev.org/c/openstack/ironic/+/866993 | 11:48 |
opendevreview | Merged openstack/ironic-lib master: No longer override install_command in tox.ini https://review.opendev.org/c/openstack/ironic-lib/+/866032 | 11:50 |
opendevreview | Slawek Kaplonski proposed openstack/ironic master: [grenade] Explicitly enable Neutron ML2/OVS services in the CI job https://review.opendev.org/c/openstack/ironic/+/866993 | 11:52 |
opendevreview | Merged openstack/python-ironicclient master: No longer override install_command in tox.ini https://review.opendev.org/c/openstack/python-ironicclient/+/866033 | 11:59 |
opendevreview | Merged openstack/sushy master: Handle proper code_status in unit test https://review.opendev.org/c/openstack/sushy/+/866782 | 12:01 |
ftarasenko | ajya: It looks like I have no problems with the fresh node. I tried multiple times to clean. Let me check some more deployments; it will take 1-2 days. | 13:00 |
ftarasenko | Thank you | 13:00 |
ajya | ftarasenko: ok, meanwhile I'll work on the patch for the NoneType issue to get it fixed, in case it is somehow causing this. | 13:02 |
opendevreview | Aija Jauntēva proposed openstack/ironic master: Fix "'NoneType' object is not iterable" in RAID https://review.opendev.org/c/openstack/ironic/+/867117 | 14:45 |
ajya | ftarasenko: the patch that fixes the NoneType issue ^, but I need to run more tests to make sure I haven't broken other cases, so W-1 for now | 14:46 |
opendevreview | Riccardo Pittau proposed openstack/ironic bugfix/20.2: All jobs should still run on focal and point to stable/zed https://review.opendev.org/c/openstack/ironic/+/866972 | 15:16 |
rpittau | bye everyone, have a great weekend! o/ | 15:29 |
opendevreview | Merged openstack/ironic-prometheus-exporter stable/wallaby: Remove bifrost integration job https://review.opendev.org/c/openstack/ironic-prometheus-exporter/+/866597 | 19:26 |
NobodyCam | Good afternoon OpenStack Folks | 20:02 |
TheJulia | greetings NobodyCam | 20:06 |
NobodyCam | :) happy almost weekend TheJulia :) | 20:07 |
NobodyCam | Crazy question | 20:07 |
NobodyCam | I was just looking at the redfish driver, | 20:07 |
NobodyCam | has anyone ever needed to override the boot device with an actual device id (i.e. one from the efibootmgr list)? | 20:09 |
NobodyCam | instead of setting boot device to PXE I'd like to set something like "Boot0003" | 20:11 |
opendevreview | Merged openstack/ironic master: Follow-up to Redfish Interop Profile https://review.opendev.org/c/openstack/ironic/+/866190 | 20:19 |
opendevreview | Merged openstack/ironic bugfix/20.2: All jobs should still run on focal and point to stable/zed https://review.opendev.org/c/openstack/ironic/+/866972 | 20:19 |
opendevreview | Verification of a change to openstack/bifrost master failed: Use ansible 6.x https://review.opendev.org/c/openstack/bifrost/+/865969 | 20:20 |
TheJulia | NobodyCam: so, not quite like that | 20:54 |
TheJulia | the conundrum is inconsistent vendor exposure of the records, and you can change the record number in userspace | 20:54 |
TheJulia | so it becomes... problematic | 20:54 |
TheJulia | Which is why we do it deep inside of the ramdisk typically | 20:54 |
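[Editor's note: for context, a hedged sketch of what NobodyCam is asking about, not what Ironic actually does. From userspace on a UEFI system, a specific entry such as "Boot0003" can be selected for the next boot via efibootmgr's BootNext mechanism; as TheJulia points out, this is fragile because entry numbers are exposed inconsistently by vendors and can be renumbered from userspace. The function names here are hypothetical:]

```python
import subprocess

def entry_number(entry: str) -> str:
    # 'Boot0003' -> '0003', the bare hex number efibootmgr expects.
    return entry[4:] if entry.startswith('Boot') else entry

def set_boot_next(entry: str) -> None:
    # 'efibootmgr -n XXXX' sets BootNext, which applies to the next
    # boot only; 'efibootmgr -o' would rewrite the persistent boot
    # order instead. Requires root on a UEFI system with efivarfs.
    # Fragile in practice: entry numbering is vendor-specific and can
    # change out from under you, hence doing this in the ramdisk.
    subprocess.run(['efibootmgr', '-n', entry_number(entry)], check=True)
```

Usage would be `set_boot_next('Boot0003')`, matching the entry NobodyCam mentions above.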
NobodyCam | +++ | 21:00 |
*** tosky is now known as Guest1527 | 22:34 | |
*** tosky_ is now known as tosky | 22:34 | |
opendevreview | Julia Kreger proposed openstack/ironic-specs master: Add a shard key https://review.opendev.org/c/openstack/ironic-specs/+/861803 | 23:10 |
TheJulia | JayF: ^ | 23:10 |
JayF | TheJulia: that edit to alternatives section is A++++ | 23:11 |
TheJulia | yeah, once I realized the comment might not have really grokked the meaning, I reworked it | 23:12 |
TheJulia | granted, we *intentionally* modeled RBAC so you *could* connect a nova to ironic with a specific project | 23:13 |
TheJulia | but... if you're doing that, you have a very very specific use/interaction | 23:13 |
JayF | Do not conflate two things here: | 23:13 |
TheJulia | and that is a ton of overhead to maintain | 23:13 |
JayF | there's a difference between: "you can connect multiple Nova services to the same Ironic with owner/project" | 23:13 |
JayF | and | 23:13 |
JayF | "you can scale a single Nova service to a single Ironic with owner/project" | 23:13 |
JayF | because the first is a sensible case, even post-shard-key | 23:13 |
TheJulia | oh yeah | 23:14 |
JayF | the second is just prima facie wrong | 23:14 |
TheJulia | agree 10000% | 23:14 |
TheJulia | I don't want to think of the pain to do the latter | 23:14 |
JayF | I mean, it's even more ridiculous operator coordination | 23:14 |
JayF | "for each nova compute; give it its own service user with different project permissions" | 23:15 |
TheJulia | yup | 23:15 |
JayF | my eyes want to start bleeding just from thinking about how gross that documentation would be | 23:15 |
* TheJulia suggests a calming weekend instead | 23:15 | |
JayF | lol | 23:15 |
JayF | 45 minutes left until my EOD :) | 23:15 |
TheJulia | speaking of which, I'm going to go help with some stuff outside | 23:15 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!