Thursday, 2023-06-08

mnederlofHi all, would someone be able to help me to review the change i made? (https://review.opendev.org/c/openstack/nova/+/884595)06:47
fricklermnederlof: based on the bug you reference (which was marked as invalid for nova) maybe sean-k-mooney might want to have a look. but also next week the summit is happening so things might be slow until some time after that06:50
mnederlofyeah it was marked invalid, because it needed to be a feature, so i created a bp for it as well. but i can imagine the summit takes some preparation time this week as well :)06:58
opendevreviewMerged openstack/nova-specs master: Re-propose using extend volume completion action for 2023.2  https://review.opendev.org/c/openstack/nova-specs/+/87723308:18
sean-k-mooneymnederlof: frickler due to budget and other considerations i'm actually not attending this summit. i had planned to go to the last physical ptg but it was canceled09:57
sean-k-mooneymnederlof: placing the config option in workarounds implies it will be deleted at some point, so it likely should be elsewhere10:00
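(For context: the usual nova pattern is to register driver-specific options under the [libvirt] group in nova/conf rather than [workarounds]. A minimal, hypothetical oslo.config sketch follows — the option name is invented for illustration and is not the one in the patch under review:)

    # hypothetical sketch only, not the patch under review; the option name is invented
    from oslo_config import cfg

    libvirt_group = cfg.OptGroup('libvirt', title='Libvirt driver options')

    libvirt_opts = [
        cfg.BoolOpt('rbd_flatten_images',
                    default=False,
                    help='If true, flatten RBD-backed instance disks cloned '
                         'from Glance images so they no longer depend on the '
                         'parent image snapshot.'),
    ]

    def register_opts(conf):
        # register under [libvirt] so the option is not implied to be temporary
        conf.register_group(libvirt_group)
        conf.register_opts(libvirt_opts, group=libvirt_group)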
sean-k-mooneymnederlof: you can get the same effect already without modifying nova10:01
sean-k-mooneywell almost10:01
sean-k-mooneyif you disable access to the direct storage url in glance and stop exposing multiple locations10:01
sean-k-mooneythen nova can't detect that it's the same ceph cluster and we will download the image via glance and flatten it10:02
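(The glance-side workaround sean-k-mooney is describing amounts to two standard glance-api.conf options; a sketch, assuming a ceph-backed glance:)

    [DEFAULT]
    # With both of these disabled, nova cannot see the RBD location of the
    # image, so it downloads it through the glance API and writes a
    # standalone (flattened) copy into its own pool instead of doing a
    # COW clone of the image snapshot.
    show_image_direct_url = False
    show_multiple_locations = False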
mnederlofsean-k-mooney: yeah you could do that with glance, but then you lose the speed improvement when using a clone/flatten by ceph directly (which is significant)10:35
mnederlofand thanks for the review, i will continue to work on these other points you mentioned10:36
sean-k-mooneyyou will need to attend the nova team meeting and ask for the blueprint to be approved as a specless blueprint, by the way. it's possible that you may be asked to provide a spec for this10:37
sean-k-mooneywe have a deadline for approvals in each cycle and in this case it's july 6th10:38
sean-k-mooneythe next team meeting will be canceled due to the ptg so the next opportunity to do this will be tuesday the 20th10:38
sean-k-mooneybauzas: without doing a full review (feel free to do one if you want) do you think https://review.opendev.org/c/openstack/nova/+/884595 would need a spec10:39
sean-k-mooneythere is no api change, which means we have some flexibility, and given it's config driven and only in the libvirt driver the question really is what is the upgrade impact10:41
sean-k-mooneymnederlof: have you spent any time thinking about what happens if you change this option after the cloud is deployed10:41
sean-k-mooneyi.e. you have many vms using non-flattened storage, possibly with existing snapshots. then you enable flattening10:42
sean-k-mooneywill you retroactively flatten all the existing snapshots or just the new ones10:42
mnederlofsean-k-mooney: only new ones will be flattened, as this happens during the instance create procedure, so if you change it after deployment, an operator needs to flatten the previously created instances manually, using `rbd flatten <nova-pool>/<instance uuid>_disk`11:29
mnederlofwhich is something an operator is already doing sometimes, if they want to delete a glance image which has children11:29
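(A sketch of that manual cleanup with standard rbd commands; the pool, image and snapshot names are placeholders:)

    # list the instance disks that still depend on the glance image snapshot
    rbd children <glance-pool>/<image-uuid>@snap
    # flatten one of them so it no longer references the parent image
    rbd flatten <nova-pool>/<instance-uuid>_disk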
sean-k-mooney"{{ hostvars[inventory_hostname]['ansible_facts']['default_ipv4']['address'] }}"11:48
sean-k-mooneyhttps://opendev.org/zuul/zuul-jobs/src/commit/839de7f8996838162ae0de6a9f6ba28f968381bc/roles/nimble/defaults/main.yaml#L411:49
opendevreviewAmit Uniyal proposed openstack/nova master: Reproducer for dangling bdms  https://review.opendev.org/c/openstack/nova/+/88145711:54
opendevreviewAmit Uniyal proposed openstack/nova master: Delete dangling bdms  https://review.opendev.org/c/openstack/nova/+/88228411:54
sean-k-mooneyanyone care to review this backport? i just realised it's still open https://review.opendev.org/c/openstack/nova/+/82980413:24
dansmithsean-k-mooney: can you hit this? https://review.opendev.org/c/openstack/nova/+/88535213:49
dansmithsean-k-mooney: how is that backport not just a feature? it even says so in the reno13:50
dansmithI know it has a bug and a bug-like description in the commit message, but it seems like a feature13:51
dansmithand it also went straight to wallaby?13:51
dansmithI guess I missed the discussion, so I'll go look13:52
dansmithhmm, I'm not finding it13:54
dansmithoh, feb 2022, not 2023 so was this already in yoga?13:56
sean-k-mooneydansmith: there was some pathological behavior which affected amd hosts that this helped address 14:03
sean-k-mooneyi need to join a meeting but let me loop back with the context, i don't have it to hand14:03
dansmithack14:03
opendevreviewAmit Uniyal proposed openstack/nova master: Reproducer for dangling bdms  https://review.opendev.org/c/openstack/nova/+/88145714:15
opendevreviewAmit Uniyal proposed openstack/nova master: Delete dangling bdms  https://review.opendev.org/c/openstack/nova/+/88228414:15
ykarelsean-k-mooney, bauzas when you get chance please revisit https://review.opendev.org/c/openstack/nova/+/86841914:16
bauzasykarel: ack14:16
sean-k-mooneydansmith: so looping back, https://bugs.launchpad.net/nova/+bug/1978372 was the amd bug where the balancing feature could be used to mitigate the impact. however while looking at that i just remembered that gibi also addressed the issue directly by adding some caching https://review.opendev.org/c/openstack/nova/+/845896 so backporting that would also address the issue with a large14:37
sean-k-mooneynumber of host numa nodes14:37
sean-k-mooneyit does not just affect amd by the way, but they just have a higher probability of higher core/numa node counts14:38
sean-k-mooneyso https://bugs.launchpad.net/nova/+bug/1978372/comments/6 was the initial workaround, but if we backport gibi's change instead we don't need the balancing feature for this bug14:40
dansmithsean-k-mooney: yeah, seems a lot better to backport that fix from gibi15:32
sean-k-mooneyack in that case care to leave that as a comment and we can backport that instead15:33
dansmithnot only is it closes-bug instead of partial-bug and a fix instead of a feature in the reno, but it addresses the actual problem without adding new conf knobs and behaviors15:33
dansmithsean-k-mooney: I've already commented and got an unhelpful response from the owner15:34
dansmithbut yeah I'll leave my -1 then15:34
sean-k-mooneycool15:34
opendevreviewVasyl Saienko proposed openstack/os-vif master: Don't break traffic if port already exists  https://review.opendev.org/c/openstack/os-vif/+/88512715:52
dansmithbauzas: we could probably benefit from a similar strategy: https://review.opendev.org/c/openstack/os-brick/+/885558/1/README.rst19:01
