Friday, 2021-06-25

opendevreviewArun S A G proposed openstack/ironic master: Add support for configdrive in anaconda interface  https://review.opendev.org/c/openstack/ironic/+/78039803:12
iurygregorygood morning Ironic! TGIF06:31
arne_wiebalckGood morning iurygregory and Ironic!06:36
iurygregorymorning arne_wiebalck o/06:36
rpittau|afkgood morning ironic! Happy Friday! o/07:13
*** rpittau|afk is now known as rpittau07:15
arne_wiebalckGood morning rpittau o/07:18
rpittauhey arne_wiebalck :)07:18
iurygregorymorning rpittau o/07:36
rpittauhey iurygregory :)07:36
jandershey iurygregory arne_wiebalck rpittau and Ironic o/07:59
jandersHappy Friday07:59
iurygregoryhey janders o/07:59
rpittauhey janders :)08:00
arne_wiebalckgood morning janders o/08:12
opendevreviewVerification of a change to openstack/ironic failed: Fix ramdisk boot option handling  https://review.opendev.org/c/openstack/ironic/+/79751708:21
dtantsurmorning ironic08:43
dtantsurTheJulia: do you suggest we block https://review.opendev.org/c/openstack/ironic/+/796879 until September? :-/08:46
dtantsurI'm not vehemently against that, but it may put the FJ drivers at a disadvantage because of the remaining inconsistency08:47
dtantsurJayF: hey, could we have some resolution on https://review.opendev.org/c/openstack/ironic/+/797512 please? Do you suggest we never improve or otherwise change error messages any more?08:48
opendevreviewArne Wiebalck proposed openstack/ironic master: Cache AgentClient on Task, not globally  https://review.opendev.org/c/openstack/ironic/+/79767408:58
arne_wiebalckdtantsur: I updated the task number to link it with the story ^^ (I guess this is how it is linked)08:59
dtantsuryep, looks about right09:00
opendevreviewDmitry Tantsur proposed openstack/ironic master: [WIP] Refactor validate_image_properties  https://review.opendev.org/c/openstack/ironic/+/79787509:02
cennemorning everyone .09:02
dtantsurhi cenne 09:03
cenneo/09:03
iurygregorymorning dtantsur and cenne o/09:04
arne_wiebalckhi cenne o/09:04
cennehey arne_wiebalck o/09:05
arne_wiebalckdtantsur: here is the issue I mentioned yesterday, it looks very similar to the one with stacked heartbeats (I initially thought it was the same, but the error still appears): https://storyboard.openstack.org/#!/story/200900809:05
rpittaugood morning dtantsur and cenne :)09:06
cennehey rpittau o/09:07
cennelol. : testtools.matchers._impl.MismatchError: 'bios' is not 'bios' 09:08
cenneshould have seen that coming09:08
opendevreviewVerification of a change to openstack/ironic failed: Fix ramdisk boot option handling  https://review.opendev.org/c/openstack/ironic/+/79751709:10
dtantsurcenne: heh, are you trying to compare string with "is"?09:30
cenneHad by mistake.  Fixed it to assertEqual09:30
dtantsurgood :)09:31
dtantsuryeah, it's a common mistake, I must admit09:31
dtantsurPython 3.9 finally started issuing a syntax warning for it09:31
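The `is`-vs-`==` mixup cenne and dtantsur are discussing can be shown in a few lines of Python (a standalone illustration, not code from the patch under review):

```python
# Standalone illustration of the pitfall: `is` compares object identity,
# `==` compares values. Two equal strings need not be the same object.
a = "bios"
b = "".join(["bi", "os"])  # same characters, but built at runtime as a new object

print(a == b)   # True: the values are equal
print(a is b)   # False in CPython: different objects despite equal content

# In testtools/unittest terms: use assertEqual for value checks, and
# reserve assertIs for genuine singletons such as None or True.
```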
janderssee you on Monday Ironic o/09:41
arne_wiebalckbye janders o/09:41
rpittaubye janders :)09:42
jandershaving really bad connectivity interruptions on both main link and the tethered 5G09:42
jandersI think telcos are trying to tell me I should finish work for this week09:42
arne_wiebalckdtantsur: the issue seems indeed very close to the heartbeat issue: whenever two RPCs to continue deployment are sent in quick succession, the ERROR is thrown (I updated the story with some more logging to show this)09:42
jandershave a great weekend everyone09:42
rpittauthanks janders, you too09:43
iurygregorybye janders enjoy the weekend09:44
dtantsuroh, so no SPUC today?09:49
rpittaudtantsur: I can't today :/09:50
arne_wiebalckdtantsur: I cannot today09:51
dtantsurokay :)09:51
arne_wiebalckdtantsur: hold on a sec before you accept my challenge09:51
arne_wiebalckdtantsur: it may be due to the fact that the image does not have the heartbeat patch yet, and that this is indeed the same issue09:51
dtantsurit may also be related to https://review.opendev.org/c/openstack/ironic/+/75635409:52
arne_wiebalckthanks! looking at the IPA logs, it seems the heartbeat is close to finishing the async command ... let me confirm there is an issue09:54
dtantsurwe still have kernel panics in the CI :( https://zuul.opendev.org/t/openstack/build/bd544ddfe344493bb44dbaaa08c52f17/log/controller/logs/ironic-bm-logs/node-1_no_ansi_2021-06-25-08:53:36_log.txt10:08
arne_wiebalck"funny" you mention this: I am struggling with kernel panics in our deployment as well for some days ... seems to be due to a half-downloaded initrd file (and does not seem to be related to Ironic from what I see atm)10:13
opendevreviewcenne proposed openstack/ironic master: [WIP] Add `boot_mode` and `secure_boot` to node object and expose in api  https://review.opendev.org/c/openstack/ironic/+/79705510:15
dtantsurarne_wiebalck: have you tried updating ipxe?10:18
arne_wiebalckdtantsur: you mean updating the bios f/w10:24
arne_wiebalckdtantsur: ?10:24
dtantsurarne_wiebalck: do you have iPXE pre-flashed on your hardware?10:25
arne_wiebalckdtantsur: I don't think we use iPXE10:26
arne_wiebalckdtantsur: we do not re-flash the hardware, if that is your question10:26
dtantsurwhat do you use then, plain PXE with TFTP?10:27
arne_wiebalckdtantsur: TFTP, then HTTP, yes10:27
arne_wiebalckdtantsur: the image download is via http10:27
dtantsur"then HTTP" == either UEFI boot or iPXE, which one?10:27
arne_wiebalckUEFI boot10:27
dtantsurahh10:28
dtantsurI haven't encountered it yet. maybe upgrading firmware may help then.10:28
arne_wiebalckdtantsur: we had cases in the past where re-installing the BIOS helped, but the hardware colleagues are not convinced yet10:28
arne_wiebalckdtantsur: but we are running out of ideas, so ... :)10:31
arne_wiebalckdtantsur: anyway, not related to your issue10:32
cennedtantsur: when you have time, can you help me out here? : https://review.opendev.org/c/openstack/ironic/+/79705510:37
opendevreviewDmitry Tantsur proposed openstack/ironic master: Refactor deploy_utils.validate_image_properties  https://review.opendev.org/c/openstack/ironic/+/79787510:38
cenneI don't understand why two of the unit tests I wrote are failing.10:38
dtantsurlemme see10:38
cenneconductor.test_utils.CacheBootModeTestCase.test_failed_boot_mode and test_failed_secure10:40
dtantsuryeah, I'm looking at them now10:41
iurygregory.10:41
cennethank you10:42
cennehey iurygregory10:43
iurygregoryhey cenne 10:44
dtantsurcenne: I've spotted the issue, now checking the rest of the patch10:53
cenneThanks.  Looking at it now10:54
* dtantsur has posted the comments10:54
cennedtantsur: do you want me to split node_states_boot_mode function too?11:02
dtantsurcenne: not necessarily11:02
cenneokay11:03
* cenne embarrassed about missing out s/vendor/boot_mode even after going over that code many times11:08
dtantsurit took me quite a bit of figuring out too, don't worry11:14
*** rpittau is now known as rpittau|bbl11:17
opendevreviewChing Kuo proposed openstack/bifrost master: Fix Redeploy Playbook  https://review.opendev.org/c/openstack/bifrost/+/79807911:52
opendevreviewcenne proposed openstack/ironic master: [WIP] Add `boot_mode` and `secure_boot` to node object and expose in api  https://review.opendev.org/c/openstack/ironic/+/79705512:36
opendevreviewkamlesh chauvhan proposed openstack/ironic master: Upgrade oslo.db version  https://review.opendev.org/c/openstack/ironic/+/79681112:39
TheJuliadtantsur: I don't think we should block it if they can confirm that the machine is at least getting started (hence the reference to check the system console). It sounds like they are having a couple different issues, and understanding them *is* kind of key in my mind.12:53
*** rpittau|bbl is now known as rpittau12:56
dtantsurokay (and good morning)13:01
dtantsurand I see someone is working on it13:01
TheJuliadtantsur: awesome13:02
dtantsurTheJulia: ironic.common.exception.IRMCOperationError: iRMC iRMC set_power_state failed. Reason: HTTPConnectionPool(host='192.168.6.250', port=80): Read timed out. (read timeout=60)13:02
TheJuliadtantsur: I guess what I'm really afraid of is if we break it.13:03
TheJuliawheeeeeeeee13:03
TheJuliathat sounds already broken quite nicely13:03
dtantsurso the last build failed because of something internal to them apparently? I'll mention on the bug13:03
TheJuliaat least their logs are available again13:03
TheJuliaI mean, I could even be good at that, as long as they are good with it13:03
opendevreviewVerification of a change to openstack/ironic failed: Fix ramdisk boot option handling  https://review.opendev.org/c/openstack/ironic/+/79751713:05
dtantsurthis is insanity, why do my patches always fail in the gate so many times?13:06
dtantsurTheJulia: btw "it looks like the CI job uses virtual media", no, it uses irmc-ipxe13:07
TheJuliaoh, I could have sworn I saw it getting loaded in the log13:07
TheJuliawell, devstack13:08
dtantsurwhich is honestly upsetting, because covering virtual media is highly desired..13:08
TheJuliait is all a blur13:08
dtantsurDEBUG oslo_service.service [-] enabled_boot_interfaces        = ['irmc-pxe']13:08
* TheJulia sighs13:08
dtantsurnot even ipxe, just plain pxe13:08
TheJuliale sigh13:08
TheJuliamaybe comment that?13:08
dtantsurI will13:08
TheJuliadtantsur: w/r/t failures all of the redundant jobs maybe?!13:09
TheJuliaor super slim lines of coverage13:09
TheJuliagmann: by chance did you post a patch for tempest's client initialization?13:09
opendevreviewJulia Kreger proposed openstack/ironic-inspector master: Ignored error state cache for new requests  https://review.opendev.org/c/openstack/ironic-inspector/+/78524513:13
opendevreviewJulia Kreger proposed openstack/ironic master: Only return the requested fields from the DB  https://review.opendev.org/c/openstack/ironic/+/79227413:14
TheJuliaajya|afk: ^ typo fixes13:16
opendevreviewJulia Kreger proposed openstack/ironic master: Add note regarding configuration drives to tuning docs  https://review.opendev.org/c/openstack/ironic/+/78962313:17
opendevreviewVerification of a change to openstack/ironic failed: Fix ramdisk boot option handling  https://review.opendev.org/c/openstack/ironic/+/79751713:28
* TheJulia blinks13:28
arne_wiebalckTheJulia: dtantsur: I guess https://review.opendev.org/c/openstack/ironic-python-agent/+/796882 is a backport candidate?13:39
TheJuliamarked as such :)13:40
arne_wiebalckTheJulia: ty13:41
TheJuliaiurygregory: I'm not totally in the middle of https://review.opendev.org/c/openstack/ironic-specs/+/785742, but I posted a question about what I wonder might be another way to approach it. Do we *really* need the field?13:41
iurygregoryTheJulia, you mean event_types?13:43
iurygregoryit's required to create a subscription https://redfish.dmtf.org/schemas/v1/EventDestination.v1_0_6.json13:44
iurygregorydoes this answer your question?13:46
TheJuliabut *does* it need to be defined by a user, i guess is where I'm going with the question13:48
TheJuliait may be needed to create it, but is it *really* needed13:48
iurygregoryif we don't let the user specify the event types, how should we handle this?13:49
iurygregorywe will use a default? "Alert" for example?13:50
iurygregoryfor the OCP use case it would be ok since we are interested in Alerts from Redfish...13:51
iurygregorybut this would make the API very limited, I'm wondering if people will be ok with it13:52
iurygregoryif we as a community agree this is the way to go I'm completely fine with this path =)13:52
trandleskkillsfirst, TheJulia where are we meeting at 10 MDT? Is it just on IRC or are we going to get on bluejeans/webex/zoom?14:13
opendevreviewJulia Kreger proposed openstack/ironic master: Implements node history: database  https://review.opendev.org/c/openstack/ironic/+/76800914:13
TheJuliaiurygregory: something is better than nothing :)14:16
TheJuliaand if the field is going away, then how do we navigate that I guess is the question14:17
TheJuliadropping it sooner makes things easier to navigate in like 2 years from now, when it will be a thing of the long-ago past14:17
TheJuliatrandles: kkillsfirst: meetpad.opendev.org/ironic?14:17
iurygregoryTheJulia, humm so for now we would consider all subscriptions will be "Alert" and in the future when vendors have the support for RegistryPrefix and ResourceType we just add that to the api ?14:18
TheJuliaarne_wiebalck: i fixed the down revision on the node history patch. I think it is okay to merge in all honesty, although I raised some questions regarding indexes, like if we ever want to offer api query by conductor then we would want it indexed now and not later.14:18
TheJuliaiurygregory: I *feel* like that is the logical path since we can't see anything but the vague outline of the future14:19
iurygregoryso inside ironic when doing the request to redfish we would just "hardcode" to send "Alert" in the request? 14:19
TheJuliafor now, yeah I think so14:19
iurygregoryTheJulia, ack I will update the spec and mention this =)14:20
iurygregoryand re-work my code a bit XD14:20
TheJuliait is just an idea, it seems like that is a way to unblock it and not have us have a deprecation on a field soon()14:20
iurygregoryyeah makes sense to me =)14:20
trandlesTheJulia: that works for me!14:21
TheJuliaIf anyone is willing, I'd <3 for reviews on my query performance improvement series of patches14:21
TheJuliaI also suspect every large operator would love it14:22
TheJuliarloo: I've pondered adding chassis_uuid by join, but maybe it shouldn't be a default field populated?14:22
TheJuliaof course, doing detail view for all nodes *is* going to be slow14:23
* rloo wonders why we never removed chassis_uuid but anyway...14:23
TheJuliaI guess, removal would be breaking14:24
TheJuliabut it could be by microversion.14:24
rlooit seems to me that if someone explicitly wants chassis_uuid, we ought to populate it. so in a general 'detail', we should show it. 14:24
TheJuliaso as is, without a join, it will be the secondary query per node14:25
TheJuliayeah, needs to be joined then14:25
TheJuliaAt least it is a 1-1 mapping14:26
TheJuliaerr, 1 to many, but yeah14:26
TheJulia1 row ever for nodes14:26
rloohave others complained about chassis-uuid aside from us? give me a few min, i want to see what we do downstream. it has been years ago...14:27
arne_wiebalckTheJulia: I guess there is no harm to have it indexed now, is there?14:27
TheJuliarloo: I think you guys just removed the lookup since you don't use it14:27
TheJuliaarne_wiebalck: not really, uuid would be slightly bloated because it would all be unique, but it's a solid hint to query planning/optimizers14:27
TheJuliaconductor indexes would be super efficient if queried by conductor, since you would, like in your environment, have <30 mappings; it would basically just have to give you the result set over the wire14:28
TheJuliano actual hunting/matching field level values required14:28
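The indexing trade-off arne_wiebalck and TheJulia are weighing (a unique uuid index is bulky but harmless, while a low-cardinality conductor index makes group-filtered queries cheap) can be sketched with SQLAlchemy; the table and column names below are illustrative, not Ironic's actual schema:

```python
# Illustrative schema only (not Ironic's): every uuid value is distinct,
# so its index carries one entry per row, while an index on a
# low-cardinality column like conductor_group is compact and lets a
# group-filtered query skip the full table scan.
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Node(Base):
    __tablename__ = "nodes"
    id = sa.Column(sa.Integer, primary_key=True)
    uuid = sa.Column(sa.String(36), unique=True)          # high cardinality
    conductor_group = sa.Column(sa.String(255), index=True)  # low cardinality

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

# SQLAlchemy names a Column(index=True) index ix_<table>_<column>.
index_names = sorted(ix.name for ix in Node.__table__.indexes)
print(index_names)
```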
rloook, we added a config option downstream, which of course is always turned on, to set chassis_uuid to None for GET /v1/nodes & /v1/nodes/detail. 14:29
arne_wiebalckTheJulia: right14:29
rlooaccording to notes (from juno...!) "Performance improvement was 15s for 10k nodes as compared to 50s with chassis_id"14:30
TheJuliarloo: wow14:30
TheJuliayeah, every single less additional query after the initial result sets helps a lot14:31
arne_wiebalckTheJulia: could you update the patch accordingly? I guess you know already exactly what to add.14:31
TheJuliaarne_wiebalck: sure, and yeah14:32
rlooi suspect that was done, before we changed the API to specify fields, so maybe it isn't that important now.14:32
arne_wiebalckTheJulia: awesome, thanks a lot!14:33
* arne_wiebalck goes back to upgrade preparations14:33
rloois chassis_uuid the only thing where we did a 2nd db lookup? I thought we did others but don't recall (and maybe you already addressed those).14:34
TheJuliarloo: so... now we do have a couple other possible ones on if you ask for specific fields or detail view of the node. Like... allocations is another. That too could be a join instead with a little work. Node traits currently does an under the hood left outer join and row by row de-duplication (yes, crazy!), but my performance series changes that and actually starts moving us to using selectinload14:35
TheJuliafor all list operations, so it is much more targeted and efficient14:35
rlooif possible, we should try to use the same 'mechanism' to improve performance. i'd be worried about trying to maintain/grok why retrieval of diff fields from the db are done differently. 14:37
TheJuliafwiw, with the current state of master branch on the nova query for nodes, I'm at ~18.49 seconds per 10k nodes if I scale the result set size down14:38
TheJuliafor single value fields, we can actually join. The selectinload are more for relationship joins where we get lists of fields attached to the base node object14:39
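The selectinload strategy TheJulia describes can be sketched on a toy schema (these models are illustrative, not Ironic's): instead of one lazy query per parent row, the relationship rows are fetched in a single batched `IN (...)` query.

```python
# Toy sketch of avoiding N+1 loading with selectinload (not Ironic's
# actual models). Touching a lazy relationship per row would issue one
# query per node; selectinload batches all the traits into one extra
# query alongside the node listing.
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, relationship, Session, selectinload

Base = declarative_base()

class Node(Base):
    __tablename__ = "nodes"
    id = sa.Column(sa.Integer, primary_key=True)
    traits = relationship("Trait", backref="node")

class Trait(Base):
    __tablename__ = "traits"
    id = sa.Column(sa.Integer, primary_key=True)
    node_id = sa.Column(sa.Integer, sa.ForeignKey("nodes.id"))
    name = sa.Column(sa.String(255))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all(
        [Node(traits=[Trait(name=f"trait-{i}")]) for i in range(3)]
    )
    session.commit()

    # One query for nodes + one batched IN query for traits,
    # instead of one lazy query per node.
    nodes = session.query(Node).options(selectinload(Node.traits)).all()
    names = [t.name for n in nodes for t in n.traits]
```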
arne_wiebalckTheJulia: this is the nova query in the nova ironic driver to get the list of all nodes?14:39
TheJuliaarne_wiebalck: yes.... my tests locally have 113542 nodes (creating nodes takes a long time and I stopped it short of 150k)14:40
opendevreviewcenne proposed openstack/ironic master: [WIP] Add `boot_mode` and `secure_boot` to node object and expose in api  https://review.opendev.org/c/openstack/ironic/+/79705514:40
TheJuliaarne_wiebalck: latest test is ~210 seconds to work through everything, granted no transport latency or final spitting out through the webserver, but all of that is fairly low overhead compared to object conversions and data lookups.14:41
arne_wiebalckTheJulia: is this the one which is used to compare the number of nodes?14:42
arne_wiebalckTheJulia: let me try to find a link for you, checking with Belmiro ...14:43
JayFdtantsur: anytime I put a comment with a +1, like on https://review.opendev.org/c/openstack/ironic/+/797512, consider it exceedingly optional. I'll -1 if I feel strongly about something.14:43
TheJuliaarne_wiebalck: the query nova uses?14:43
dtantsurJayF: I get it, but I'm afraid people may take it as a soft objection and not review..14:44
arne_wiebalckTheJulia: background is that we were looking into why the resource tracker time still grows and it seems nova is getting all nodes (but just to compare the numbers and to print a warning)14:44
arne_wiebalckTheJulia: nova-conductor per group still gets all nodes14:44
TheJuliaarne_wiebalck: seriously?!?14:44
arne_wiebalckTheJulia: it seems so14:45
TheJuliajust to get a count14:45
arne_wiebalckTheJulia: so, we removed that last week14:45
arne_wiebalckTheJulia: this cuts the RT time in half and suppresses a warning14:45
* arne_wiebalck goes and finds a link to the code ...14:45
TheJulia*so* field level query improvements + just asking for the list of UUIDs would be enough to get a result set that is filtered *all* the way down and as minimal as possible to respond to the api query14:46
TheJuliaoh... just fix nova completely14:46
TheJuliaNobodyCam: hey!14:46
rlooTheJulia: Seems like the two choices for handling these fields are 1. join; 2. second db call. We're doing 2 now. How much performance gain to do 1 and how much work to do 1? I mean, I'd say do 1 unless it is a lot of work or hard to understand/maintain code.14:47
rpittaubye everyone, see you in 10 days! o/14:47
*** rpittau is now known as rpittau|afk14:47
arne_wiebalckbye rpittau|afk o/14:48
arne_wiebalckTheJulia: https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L965414:48
TheJuliarloo: performance on a join would be good, it would be another object in the result set up front. The joins are relatively quick all things considered and save the resulting 1000 db calls that would occur from the second db call14:49
TheJulia1 joined query is much nicer to the db than 1001 db queries14:49
TheJuliaand all the round trip time on the back end of that14:49
rloothen lets go with joins.14:49
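The join approach being agreed on here can be sketched on a toy schema (again illustrative, not Ironic's actual models): one LEFT OUTER JOIN returns the chassis uuid alongside each node, avoiding a follow-up query per node.

```python
# Toy sketch (not Ironic's schema) of fetching a parent identifier
# (think chassis_uuid) with a single joined query instead of one extra
# lookup per node.
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Chassis(Base):
    __tablename__ = "chassis"
    id = sa.Column(sa.Integer, primary_key=True)
    uuid = sa.Column(sa.String(36))

class Node(Base):
    __tablename__ = "nodes"
    id = sa.Column(sa.Integer, primary_key=True)
    chassis_id = sa.Column(sa.Integer, sa.ForeignKey("chassis.id"))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    c = Chassis(uuid="chassis-uuid-1")
    session.add(c)
    session.flush()  # assigns c.id
    session.add_all([Node(chassis_id=c.id), Node(chassis_id=c.id)])
    session.commit()

    # One LEFT OUTER JOIN yields (node id, chassis uuid) per row;
    # no follow-up query per node is needed.
    rows = (
        session.query(Node.id, Chassis.uuid)
        .outerjoin(Chassis, Node.chassis_id == Chassis.id)
        .all()
    )
```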
rloowrt what arne_wiebalck mentioned about nova -- we should be able to improve that. seems like someone actually asked to have an API that returned only numbers. but even if we don't go that route, the actual ironic query could filter out more stuff.14:52
arne_wiebalckTheJulia: this line takes about 800secs for our ~8000 nodes ... not sure we are talking about the same query14:52
arne_wiebalckTheJulia: but we removed this one (mind you, Nova is still on Stein, but the code does look the same on master)14:53
TheJuliaarne_wiebalck: I'll look in a moment14:59
TheJuliaarne_wiebalck: yeah, there is a HUGE amount of overhead under the hood in ironic that would cause that to be stupidly slow15:01
arne_wiebalckTheJulia: :-D15:01
TheJuliabecause it is asking the db for everything about the node, and then passing that through all the object transitions in ironic, and then going through policy and sanitization15:02
TheJuliaand then finally getting spit out15:02
arne_wiebalckTheJulia: I guess that code was copied over from the VM driver, it makes sense there (and maybe in general), but has to be adapted for Ironic and 1000s of nodes15:02
TheJuliayeah15:02
arne_wiebalckTheJulia: from what we saw, it only compares the number and prints a warning :)15:02
TheJuliarofl15:03
TheJuliaThat seems pointless then15:03
arne_wiebalckYes, we should fix it there. I think Belmiro has not suggested this yet, since we are some versions behind on nova and he did not yet check if that works the same way on master.15:04
opendevreviewJulia Kreger proposed openstack/ironic master: Implements node history: database  https://review.opendev.org/c/openstack/ironic/+/76800915:04
arne_wiebalckI doubt it changed, though.15:04
opendevreviewcenne proposed openstack/ironic master: [WIP] Add `boot_mode` and `secure_boot` to node object and expose in api  https://review.opendev.org/c/openstack/ironic/+/79705515:04
TheJuliayeah, unlikely15:04
arne_wiebalckYeah, seems like this is just a "quick" consistency check ... :-D15:05
TheJuliaarne_wiebalck: I also renamed the stock index kaifeng included, since indexes are not divided by table they are against, they are namespaced across the entire db15:05
arne_wiebalckOn master, I do not see the result used anywhere besides LOG.warning()15:06
arne_wiebalckTheJulia: node history: oh, I see, makes sense15:08
TheJuliawow, that is literally the only place that method is used in nova...15:08
TheJuliacould we just return None and have that be a signal to skip the check?!15:08
TheJulia(I realize that *sounds* awful, so maybe an attribute on the driver or *something*)15:09
arne_wiebalckwe need to decide if we need this check15:09
arne_wiebalckif yes, it should be conductor_group aware I guess15:10
TheJuliayeah, that wouldn't be too hard to do although the compare may still fail15:10
opendevreviewMerged openstack/ironic master: CI: change ilo_deploy_iso to deploy_iso  https://review.opendev.org/c/openstack/ironic/+/79688615:11
arne_wiebalckwhich raises the question how useful it is15:11
TheJuliayeah15:14
TheJuliaso it does do one thing15:18
TheJuliathat actually helps performance15:18
dtantsurgoing a bit earlier today, have a great weekend15:19
arne_wiebalckbye dtantsur o/15:19
opendevreviewMerged openstack/ironic-python-agent master: Coalesce heartbeats  https://review.opendev.org/c/openstack/ironic-python-agent/+/79688215:21
TheJuliaactually, it doesn't15:23
TheJuliait doesn't touch the cache at all, doesn't update it, it just queries needlessly15:23
arne_wiebalcknice15:23
TheJuliawe could literally use the cache for that query15:23
TheJuliaI get why nova does it, they want to flag discrepancies in their power sync loop, but we're already keeping a cache15:25
gmannTheJulia: not yet, I am going to do that after my breakfast15:25
TheJuliagmann: ack15:25
TheJuliagmann: thanks15:25
opendevreviewMerged openstack/ironic-python-agent master: Only mount the ESP if not yet mounted  https://review.opendev.org/c/openstack/ironic-python-agent/+/79604515:25
TheJuliaarne_wiebalck: so the get available nodes tracking *does* refresh the cache it looks like *and* is partition key aware15:29
TheJuliaso using the local in-driver cache seems to be the way to go15:29
TheJuliaand would remove the transactions over the wire unless the cache is just empty15:30
arne_wiebalckTheJulia: oh, that is good15:43
arne_wiebalckTheJulia: ofc, the comparison will still fail, there will be only a warning with no consequences, and all nova-compute processes will ask for all nodes15:46
TheJuliaarne_wiebalck: so I'm not sure it would, because I think the cache is group aware and the count is by node anyway15:48
TheJuliaerr, nova-compute instance15:48
TheJuliaso I think the warning will go away as well15:48
arne_wiebalckthe cache on the ironic side, you mean? but is nova-compute asking for a group as well? (maybe I misunderstand)15:50
TheJuliaarne_wiebalck: so, nova-compute processes keep a local cache of the ironic nodes.15:54
TheJuliasome of the hits go to the cache instead of back to ironic. Specific power sync status checks *do* go back to ironic in the end, but the query is just kind of redundant.15:54
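The cache-first idea being discussed can be sketched generically (this is not nova's actual driver code, and the names are hypothetical): answer cheap questions such as a node count from the locally cached listing, and only hit the wire when refreshing.

```python
# Generic sketch, not nova's driver: keep a local cache of nodes
# refreshed by a periodic pass, and serve cheap queries ("how many
# nodes?") from the cache instead of issuing another full listing.
class NodeCache:
    def __init__(self, fetch_all):
        self._fetch_all = fetch_all  # callable hitting the remote API
        self._nodes = {}

    def refresh(self):
        # One wire round-trip repopulates the whole cache.
        self._nodes = {n["uuid"]: n for n in self._fetch_all()}

    def count(self):
        # Served locally; no extra API call.
        return len(self._nodes)

calls = []
def fake_fetch():
    calls.append(1)
    return [{"uuid": "a"}, {"uuid": "b"}]

cache = NodeCache(fake_fetch)
cache.refresh()
print(cache.count())   # answered from the cache
cache.count()          # repeated counts cost nothing over the wire
```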
opendevreviewArne Wiebalck proposed openstack/ironic-python-agent stable/wallaby: Only mount the ESP if not yet mounted  https://review.opendev.org/c/openstack/ironic-python-agent/+/79812415:58
arne_wiebalckTheJulia: oh, I thought power sync would go back to Ironic and the db16:00
TheJuliaarne_wiebalck: it *does* eventually16:00
arne_wiebalckok16:02
opendevreviewArne Wiebalck proposed openstack/ironic-python-agent stable/wallaby: Coalesce heartbeats  https://review.opendev.org/c/openstack/ironic-python-agent/+/79812916:37
arne_wiebalckHave a good week-end everyone, see you next week o/16:43
TheJuliaI guess no spuc today?17:05
JayFI can join if folks are there17:09
JayFnot fully braining well this morning lol17:09
gmannTheJulia: try with this https://review.opendev.org/c/openstack/tempest/+/79813017:11
gmannTheJulia: how it creates the network for project scope only, as there we have the project_id which is needed for network creation17:12
TheJuliaJayF: https://pbs.twimg.com/profile_images/1234391573/brainslug_Fry_400x400.jpg17:12
TheJuliaJayF: nobody is there, and I feel like I have a brain slug17:12
TheJuliagmann: ack17:12
JayFYeah. Kinda the same. Exhausted for no reason, can't think well today17:13
TheJuliaI'm going to go have a nice lunch with the wife, it has been a really long week17:14
opendevreviewJulia Kreger proposed openstack/ironic-tempest-plugin master: Use get_service_clients framework with basic Secure RBAC  https://review.opendev.org/c/openstack/ironic-tempest-plugin/+/79752117:15
gmannTheJulia: this basically creates network resources only for the project scope, and this is something for which we need a general solution: "how can all the openstack services' project-mapped resources be operated by a system scope token"17:15
TheJuliagmann: ack, which makes me think our tempest tests may fail/break on it, but better now than later17:15
gmannespecially cross service dependent resources  17:15
TheJulia++17:15
TheJuliaI'll check the results of the job after lunch17:15
gmannk17:16
TheJuliaI also suspect I'm working some this weekend :(17:17
cenneIt's Jupiter in retrograde, first Friday. That's why. :p Minds are clouded. Follow your hearts.17:18
cennejk.17:18
TheJuliacenne: perfect answer :)17:21
cenneHappy  weekend.  Sorry for silliness.  Spilled over in lieu of spuc. 17:23
TheJulia:)17:23
TheJuliasilliness++17:23
cenneI'll17:23
TheJuliaWe encourage silliness in this project!17:23
* TheJulia says with a serious face trying to hold back laughing17:23
cenne^u^ 17:25
cenneI'll  take leave now. Have a goof day. 17:25
JayFo/17:26
TheJuliahave a wonderful weekend!17:26
cenneumm..  *good day ofc. :p17:26
cenneo/17:26
opendevreviewJulia Kreger proposed openstack/ironic-tempest-plugin master: Add Wallaby jobs  https://review.opendev.org/c/openstack/ironic-tempest-plugin/+/79814317:47
* TheJulia goes and finds lunch17:56
TheJuliagmann: thanks for revising the change, hopefully the job will now be good19:37
gmannTheJulia: yeah hope so, I am monitoring that in case.20:15
JayFTheJulia: https://review.opendev.org/c/openstack/ironic/+/797984 WDYT about my suggestion there? If you're meh on it, I'll land the patch as-is20:31
TheJuliaJayF: If I edit it real quick and update it, I think you could +2+A it then21:27
opendevreviewJulia Kreger proposed openstack/ironic master: Deprecate [pxe]ip_version parameter  https://review.opendev.org/c/openstack/ironic/+/79798421:29
TheJuliagmann: that tempest error is bizarre21:34
* TheJulia goes and makes coffee21:36
gmannI did the recheck for victoria jobs, should be green21:39
TheJuliawell, one of the jobs failed with a key error which made no sense21:44
TheJuliaI need to go check the code of ironic's tempest plugin, it might need another change21:44
TheJuliacoffee() first21:44
opendevreviewMerged openstack/ironic master: Change UEFI ipxe bootloader default  https://review.opendev.org/c/openstack/ironic/+/79797321:50
TheJuliahmm, looks unrelated21:53
TheJuliaso something is broken with regards to the memory checking, which seems totally unrelated. it is failing to apply the rule in inspector because the criteria is not being met22:04
TheJuliaand technically it ends up booting with just shy of the memory required for the test to pass...22:07
TheJuliagmann: I see what it is22:14
TheJuliahttps://review.opendev.org/c/openstack/ironic-python-agent/+/788921 needs to be cherry picked to victoria22:14
gmannk, sorry I wasted the recheck there, but should be ok as it's friday and less load in the gate :)22:15
TheJuliano worries, yeah... it is bizarre but apparently due to a bad patch on lshw in centos22:19
opendevreviewJulia Kreger proposed openstack/ironic-python-agent stable/victoria: Fix getting memory size in some lshw output  https://review.opendev.org/c/openstack/ironic-python-agent/+/79816822:22
TheJuliaAnd... I need to actually backport that one further22:22
opendevreviewJulia Kreger proposed openstack/ironic-python-agent stable/ussuri: Add function to calculate memory  https://review.opendev.org/c/openstack/ironic-python-agent/+/79817022:27
opendevreviewJulia Kreger proposed openstack/ironic-python-agent stable/ussuri: Fix getting memory size in some lshw output  https://review.opendev.org/c/openstack/ironic-python-agent/+/79817122:27
opendevreviewJulia Kreger proposed openstack/ironic-python-agent stable/train: Add function to calculate memory  https://review.opendev.org/c/openstack/ironic-python-agent/+/79817222:30
opendevreviewJulia Kreger proposed openstack/ironic-python-agent stable/train: Fix getting memory size in some lshw output  https://review.opendev.org/c/openstack/ironic-python-agent/+/79817322:30
* TheJulia files a downstream bug22:44
TheJuliagmann: I just rechecked a job to hopefully tie devstack+tempest+ironic-python-agent changes together to see if we can actually run our tempest tests with scope enforcement set. Wish it luck :)22:57
TheJuliaerr, ironic-tempest-plugin changes22:57
opendevreviewJulia Kreger proposed openstack/ironic master: Set stage for objects to handle selected field lists.  https://review.opendev.org/c/openstack/ironic/+/79227523:04
opendevreviewJulia Kreger proposed openstack/ironic master: API to pass fields to node object list  https://review.opendev.org/c/openstack/ironic/+/79229623:04
opendevreviewJulia Kreger proposed openstack/ironic master: Allow node_sanitize function to be provided overrides  https://review.opendev.org/c/openstack/ironic/+/79488023:04
opendevreviewJulia Kreger proposed openstack/ironic master: Use selectinload for all list queries  https://review.opendev.org/c/openstack/ironic/+/79733723:05
gmannTheJulia: +123:06
* TheJulia gets out the jeopardy music23:12

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!