Friday, 2022-12-09

arne_wiebalckSnowy good morning, Ironic!07:41
rpittaualmost snowy good morning ironic! o/08:13
ftarasenkoGm! :)08:13
ftarasenkoajya: have a question for you regarding https://review.opendev.org/c/openstack/sushy/+/864845 It looks like it does not help to clean the server, and it waits for a long time. Can you check the log provided https://paste.openstack.org/show/bFsw5RZJGUuCeYZsHIcg/ 08:13
rpittauwondering if the tox issue has been fixed08:18
ajyaGood morning Ironic08:19
ajyaftarasenko: the errors don't seem to be directly related to the issue, but maybe they are somehow caused by it. Looks like the periodic task is failing to check that the operation has completed successfully, but it shouldn't be doing that here at all (it's waiting for the task to complete on the sushy side instead). I'll try to reproduce it.08:22
ftarasenkoajay: I'm waiting for cleaning to finish from last evening - more than 12h( Do you need more info or this is enough?08:23
ftarasenkoajya: sorry) ^08:23
ajyaftarasenko:  definitely no, it should time out as configured (10-20 minutes?), but it seems that because of these periodic errors the timeout is not triggering. That's a separate thing that could be improved.08:25
rpittaustill not fixed everywhere https://review.opendev.org/q/topic:bug%252F199918309:49
ajyaftarasenko: I was able to reproduce the "'NoneType' object is not iterable" error when running RAID cleaning more than once on the same node. The cause is that it didn't delete parts of driver internal info after the 1st run. I'll work on a patch to fix this. But cleaning still completed. Maybe there is another error somewhere blocking cleaning?10:55
ajyaThis issue is not related to OperationApplyTime fix, and is pre-existing.10:55
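The failure mode ajya describes can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual ironic driver code: a periodic task iterates a key in the node's driver internal info that an earlier, completed RAID cleaning run left behind as `None` instead of deleting, and the key name `raid_configs` here is invented for the example.

```python
# Hypothetical sketch of the "'NoneType' object is not iterable" bug:
# a leftover key in driver internal info holds None after the first
# cleaning run, and the next run iterates it without checking.

def check_raid_jobs(driver_internal_info):
    """Iterate pending RAID jobs; crashes if the key was left as None."""
    jobs = driver_internal_info["raid_configs"]
    return [j for j in jobs]  # raises TypeError when jobs is None

def check_raid_jobs_fixed(driver_internal_info):
    """Treat a leftover None the same as 'no pending jobs'."""
    jobs = driver_internal_info.get("raid_configs") or []
    return [j for j in jobs]

# After a completed first cleaning run the key was left behind as None:
info = {"raid_configs": None}

try:
    check_raid_jobs(info)
except TypeError as exc:
    print(exc)  # 'NoneType' object is not iterable

print(check_raid_jobs_fixed(info))  # []
```

Deleting the key (or normalizing `None` to an empty list) when the job finishes is the kind of fix the patch below takes aim at.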
ftarasenkoajya: I see nothing more in Ironic logs. I have Ironic Zed version installed.10:56
ajyaftarasenko: can you try again with fresh node created just to see if it's different?11:00
ftarasenkoajya, yep. Can I try to readd the same node or need to find a new one?11:08
ajyaftarasenko: can use the same11:10
ftarasenkoajya, I'll try now, but I think that I did the same already. Should I turn on debug logs in the conductor, or is that not required?11:12
iurygregorymorning Ironic 11:24
ajyaftarasenko: turn on, and let's see what sushy does11:27
ajyahi iurygregory11:28
iurygregoryajya, o/11:38
opendevreviewSlawek Kaplonski proposed openstack/ironic master: [grenade] Explicitly enable Neutron ML2/OVS services in the CI job  https://review.opendev.org/c/openstack/ironic/+/86699311:48
opendevreviewMerged openstack/ironic-lib master: No longer override install_command in tox.ini  https://review.opendev.org/c/openstack/ironic-lib/+/86603211:50
opendevreviewSlawek Kaplonski proposed openstack/ironic master: [grenade] Explicitly enable Neutron ML2/OVS services in the CI job  https://review.opendev.org/c/openstack/ironic/+/86699311:52
opendevreviewMerged openstack/python-ironicclient master: No longer override install_command in tox.ini  https://review.opendev.org/c/openstack/python-ironicclient/+/86603311:59
opendevreviewMerged openstack/sushy master: Handle proper code_status in unit test  https://review.opendev.org/c/openstack/sushy/+/86678212:01
ftarasenkoajya: It looks like I have no problems with fresh node. Tried multiple times to clean. Let me check some more deployments, it would take 1-2 days.13:00
ftarasenkoThank you13:00
ajyaftarasenko: ok, meanwhile I'll work on the patch for NoneType issue to get it fixed and in case it is somehow causing this.13:02
opendevreviewAija Jauntēva proposed openstack/ironic master: Fix "'NoneType' object is not iterable" in RAID  https://review.opendev.org/c/openstack/ironic/+/86711714:45
ajyaftarasenko:  patch that fixes the NoneType issue ^, but I need to run more tests to make sure I haven't broken other cases, so W-1 now14:46
opendevreviewRiccardo Pittau proposed openstack/ironic bugfix/20.2: All jobs should still run on focal and point to stable/zed  https://review.opendev.org/c/openstack/ironic/+/86697215:16
rpittaubye everyone, have a great weekend! o/15:29
opendevreviewMerged openstack/ironic-prometheus-exporter stable/wallaby: Remove bifrost integration job  https://review.opendev.org/c/openstack/ironic-prometheus-exporter/+/86659719:26
NobodyCamGood afternoon OpenStack Folks20:02
TheJuliagreetings NobodyCam20:06
NobodyCam:) happy almost weekend TheJulia :)20:07
NobodyCamCrazy question20:07
NobodyCamI was just looking at the redfish driver, 20:07
NobodyCamhas anyone ever needed to overwrite the boot device with an actual device id (i.e. one from the efibootmgr list)?20:09
NobodyCaminstead of setting boot device to PXE I'd like to set something like "Boot0003"20:11
opendevreviewMerged openstack/ironic master: Follow-up to Redfish Interop Profile  https://review.opendev.org/c/openstack/ironic/+/86619020:19
opendevreviewMerged openstack/ironic bugfix/20.2: All jobs should still run on focal and point to stable/zed  https://review.opendev.org/c/openstack/ironic/+/86697220:19
opendevreviewVerification of a change to openstack/bifrost master failed: Use ansible 6.x  https://review.opendev.org/c/openstack/bifrost/+/86596920:20
TheJuliaNobodyCam: so, not quite like that20:54
TheJuliathe conundrum is inconsistent vendor exposure of the records, and you can change the record number in userspace20:54
TheJuliaso it becomes... problematic20:54
TheJuliaWhich is why we do it deep inside of the ramdisk typically20:54
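TheJulia's point about unreliable record numbers can be shown with a small sketch. This is a hypothetical example (the output below is made up, and the parsing is not ironic-python-agent code): two machines, or the same machine after userspace edits, can expose the same entry under different `BootXXXX` numbers, so matching on the human-readable label is safer than hardcoding something like `Boot0003`.

```python
import re

# Made-up efibootmgr-style output for illustration; real output and
# record numbers vary by vendor and can be renumbered in userspace.
SAMPLE = """\
BootCurrent: 0001
BootOrder: 0001,0003,0000
Boot0000* UEFI Shell
Boot0001* ubuntu
Boot0003* UEFI PXEv4 (MAC:AA:BB:CC:DD:EE:FF)
"""

def boot_entries(efibootmgr_output):
    """Map human-readable labels to their BootXXXX record numbers."""
    entries = {}
    for line in efibootmgr_output.splitlines():
        m = re.match(r"Boot([0-9A-Fa-f]{4})\*?\s+(.*)", line)
        if m:
            entries[m.group(2)] = m.group(1)
    return entries

entries = boot_entries(SAMPLE)
print(entries["UEFI PXEv4 (MAC:AA:BB:CC:DD:EE:FF)"])  # 0003
```

Resolving the label to a record number at the last moment, inside the ramdisk on the node itself, sidesteps the renumbering problem TheJulia describes.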
NobodyCam+++21:00
*** tosky is now known as Guest152722:34
*** tosky_ is now known as tosky22:34
opendevreviewJulia Kreger proposed openstack/ironic-specs master: Add a shard key  https://review.opendev.org/c/openstack/ironic-specs/+/86180323:10
TheJuliaJayF: ^23:10
JayFTheJulia: that edit to alternatives section is A++++23:11
TheJuliayeah, once I realized the comment might not have really grokked the meaning, I reworked it23:12
TheJuliagranted, we *intentionally* modeled RBAC so you *could* connect a nova to ironic with a specific project23:13
TheJuliabut... if you're doing that, you have a very very specific use/interaction23:13
JayFDo not conflate two things here:23:13
TheJuliaand that is a ton of overhead to maintain23:13
JayFthere's a difference between: "you can connect multiple Nova services to the same Ironic with owner/project"23:13
JayFand 23:13
JayF"you can scale a single Nova service to a single Ironic with owner/project"23:13
JayFbecause the first is a sensible case, even post-shard-key23:13
TheJuliaoh yeah23:14
JayFthe second is just prima facie wrong23:14
TheJuliaagree 10000%23:14
TheJuliaI don't want to think of the pain to do the latter23:14
JayFI mean, it's even more ridiculous operator coordination23:14
JayF"for each nova compute; give it its own service user with different project permissions"23:15
TheJuliayup23:15
JayFmy eyes want to start bleeding just from thinking about how gross that documentation would be23:15
* TheJulia suggests a calming weekend instead23:15
JayFlol23:15
JayF45 minutes left until my EOD :) 23:15
TheJuliaspeaking of which, I'm going to go help with some stuff outside23:15

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!