Thursday, 2021-12-02

opendevreviewmelanie witt proposed openstack/nova master: Add stub unified limits driver  https://review.opendev.org/c/openstack/nova/+/71213700:47
opendevreviewmelanie witt proposed openstack/nova master: Assert quota related API behavior when noop  https://review.opendev.org/c/openstack/nova/+/71214000:47
opendevreviewmelanie witt proposed openstack/nova master: Make unified limits APIs return reserved of 0  https://review.opendev.org/c/openstack/nova/+/71214100:47
opendevreviewmelanie witt proposed openstack/nova master: DNM Run against unmerged oslo.limit changes  https://review.opendev.org/c/openstack/nova/+/81223600:47
opendevreviewmelanie witt proposed openstack/nova master: Add logic to enforce local api and db limits  https://review.opendev.org/c/openstack/nova/+/71213900:47
opendevreviewmelanie witt proposed openstack/nova master: Enforce api and db limits  https://review.opendev.org/c/openstack/nova/+/71214200:47
opendevreviewmelanie witt proposed openstack/nova master: Update quota_class APIs for db and api limits  https://review.opendev.org/c/openstack/nova/+/71214300:47
opendevreviewmelanie witt proposed openstack/nova master: Update limit APIs  https://review.opendev.org/c/openstack/nova/+/71270700:47
opendevreviewmelanie witt proposed openstack/nova master: Update quota sets APIs  https://review.opendev.org/c/openstack/nova/+/71274900:47
opendevreviewmelanie witt proposed openstack/nova master: Tell oslo.limit how to count nova resources  https://review.opendev.org/c/openstack/nova/+/71330100:47
opendevreviewmelanie witt proposed openstack/nova master: Enforce resource limits using oslo.limit  https://review.opendev.org/c/openstack/nova/+/61518000:47
opendevreviewmelanie witt proposed openstack/nova master: Add legacy limits and usage to placement unified limits  https://review.opendev.org/c/openstack/nova/+/71349800:47
opendevreviewmelanie witt proposed openstack/nova master: Update quota apis with keystone limits and usage  https://review.opendev.org/c/openstack/nova/+/71349900:47
opendevreviewmelanie witt proposed openstack/nova master: Add reno for unified limits  https://review.opendev.org/c/openstack/nova/+/71527100:47
opendevreviewmelanie witt proposed openstack/nova master: WIP Enable unified limits in the nova-next job  https://review.opendev.org/c/openstack/nova/+/78996300:47
*** tbachman_ is now known as tbachman03:02
opendevreviewBalazs Gibizer proposed openstack/nova master: Reproduce bug 1952941  https://review.opendev.org/c/openstack/nova/+/82012108:52
gibiif somebody has good ideas how to do a proper OVO data migration (including writing the upgraded data back to the DB) in case an OVO is persisted in two different tables (instance_extra.numa_topology and request_spec.numa_topology) then let me know. See ^^ as reference09:07
gibiI can create a solution that has two separate code paths, one for the instance_extra and one for the request_spec case, but that is i) ugly ii) not generic enough to handle the case when the object will be persisted in a 3rd table iii) creates a bad precedent for future OVO data migrations09:09
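A toy sketch of the situation gibi is describing: one serialized object (a plain dict standing in for an OVO primitive) is persisted as a JSON blob in two different "tables", and a migration applied only on the instance_extra load path leaves the request_spec copy stale. All names here are hypothetical illustrations, not nova code.

```python
import json

# two "tables", each holding its own serialized copy of the same object
instance_extra = {"numa_topology": json.dumps({"cpuset": [0, 1]})}
request_specs = {"numa_topology": json.dumps({"cpuset": [0, 1]})}

def migrate(primitive):
    """Data migration: split legacy 'cpuset' into 'cpuset'/'pcpuset'."""
    if "pcpuset" not in primitive:
        primitive["pcpuset"] = primitive.pop("cpuset")
        primitive["cpuset"] = []
    return primitive

def load_instance_numa():
    prim = migrate(json.loads(instance_extra["numa_topology"]))
    # write-back happens here, but only to instance_extra
    instance_extra["numa_topology"] = json.dumps(prim)
    return prim

def load_request_spec_numa():
    # no migration on this path: stale data flows to the scheduler
    return json.loads(request_specs["numa_topology"])
```

Loading via the instance path returns and persists the new format, while the request-spec path still hands back the old one, which is the asymmetry behind the bug.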
*** efried1 is now known as efried09:48
bauzasmorning09:51
gibibauzas: o/ good morning09:52
bauzasgibi: we did some data migrations in the past09:52
bauzasbut if you need to update two objects, well, wow09:52
gibibauzas: yes, and the pcpuset one is broken :)09:53
gibibauzas: it is one OVO class that is persisted to two db tables09:53
* bauzas facepalms09:53
gibiInstanceNUMACell is part of instance_extra.numa_topology as well as request_spec.numa_topology09:53
bauzasI guess we don't use a same ovo object for NUMATopology ?09:53
gibisame ovo class for both 09:54
bauzaslemme look at your bug above ^09:54
bauzashah09:54
gibisure09:54
gibiI'm hacking on a solution atm09:54
bauzasokay, I think I understand the problem09:59
bauzaswe updated instance_extra 10:02
bauzaswhen we called the InstanceNumaTopology object10:02
bauzasbut if you don't hydrate this object this way, we don't update it10:03
gibiyepp10:03
bauzasI thought we had a db upgrade too before moving to Wallaby then10:03
bauzasat least a nova-upgrade check10:03
bauzastelling that some objects weren't updated10:04
gibiI don't think so as the code still has todos to remove the migration code once we are sure that the objects are loaded once10:04
bauzasat least that's what I'd do if I would write some upgrade change10:04
gibibut even if we had a blocking migration that would miss request spec too10:04
bauzasgibi: because we persist it in the RequestSpec API DB ?10:05
gibiyepp10:05
bauzasok I see the problem10:05
bauzasso in theory the cell db is upgraded10:05
gibiI assume that if we forgot to migrate the request spec then we forgot to add a blocking migration for that too10:05
bauzasbut the api db continues to have old values10:05
gibiyes10:05
gibior more precisely when an instance is loaded it has a proper value, but if a request spec is loaded it still has the old value10:06
bauzasok, so we need to use this migrate method for the requestspec object then10:06
bauzasgibi: yup, understood10:06
gibiyes, but we need to split the code as in case of request spec we need to persist the updated value to a different table10:06
gibihence my pain 10:06
bauzashah10:06
gibiit is ugly and non generic10:06
bauzasgot it10:07
bauzaswell, this should have been done this way in Victoria either way, right?10:07
bauzasproblem is, we assume that InstanceNumaTopology object is only persisted by instance_extra db table10:08
bauzasright?10:08
gibiright10:09
bauzasif so, the object is broken10:09
bauzasI mean the object persistency10:09
gibithe problem is that the InstanceNUMATopology object does not know that it is used in two different contexts10:09
bauzaswe need to say we have to persist in two db tables10:09
bauzasgibi: yup, hence my word 'broken'10:09
bauzasin general we have object classes that are backed by a single db table10:10
gibiI can add generic code to _obj_from_primitives that is called in both cases and does the data migration generically, but in that code I cannot decide which table to write the data back to10:10
bauzasand we directly map the object fields with the db values10:10
bauzasgibi: agreed with your problem, this is a pain to fix10:10
bauzasthe design itself is having flaws10:10
bauzaswe somehow need to have ovo objects that know which db table they are related10:11
bauzasbut honestly, I somehow feel those necessarily have to be two different objects10:11
bauzaswe can nest objects under others10:12
bauzasbut given RequestSpecs is at the API DB, we can't just hydrate the value from the cell DB values10:12
bauzasor, the other way to consider that, is that we only use RequestSpecs.InstanceNUMATopology as a non-persisted object10:13
bauzasbut we would need to scatter/gather the values from the cell DB before we hydrate such reqspec nested object10:13
bauzasgibi: see ?10:14
bauzasone way to address this would be to gather the instancenumatopology object from the cell DB at the api level before we hydrate the requestspec10:14
gibiI'm not even sure that for a single instance the InstanceNUMATopology in the instance and in the request spec are the same10:14
gibii.e. instance multicreate reuses a single request spec object afaik10:15
bauzasthis is fine10:15
bauzasthis is just for scheduling decisions10:15
bauzaswe don't need to persist this10:15
bauzasthat's why we always said we were fine with one single reqspec per boot call in case of multicreate10:16
bauzasand in case of a move op, getting the generic reqspec is ok10:16
gibithen I don't get it10:16
gibithe failure happens during the scheduling of a migration of a pre Victoria instance10:16
gibiin the NumaTopology filter10:17
bauzasgibi: ok, so at the API, we need to gather the instance numa topology from the cell DB corresponding to where the instance is stored10:17
bauzasand we will then hydrate the reqspec with the proper values of the instance10:18
gibino no, I get that. what I don't get is why the NumaTopology filter depends on this value10:18
gibiwhat does the request_spec.numa_topology describe? 10:19
gibiis it what was requested?10:19
gibiand the instance_extra.numa_topology represents what was actually selected10:19
bauzasexcellent question10:19
gibibut then in multicreate there is one request but more than one actual selection 10:20
gibiand those selections can have different values, different pcpu ids10:20
gibianyhow I have to go back hacking on a solution to fix an urgent downstream issue. The fix needs to be backportable so I will do the ugly code path duplication10:21
gibilater on we can think about a proper fix that reimagine the ovo usage10:21
bauzasgood luck10:22
gibithanks10:22
gibiyou will see the result :D10:22
gibibauzas: how could this ever work? https://github.com/openstack/nova/blob/7670303aabe16d1d7c25e411d7bd413aee7fdcf3/nova/objects/request_spec.py#L727-L73010:48
gibias far as I see db_spec is a dict10:48
gibidb_spec.save() is a type error10:48
bauzasgibi: no this isn't a dict10:52
gibiis something connected to a db model?10:53
bauzasgibi: IIRC it returns the class object10:53
bauzasoh wait, sec10:54
bauzasit returns some SQLA object 10:55
gibiOK I think I see now, thanks10:56
gibithe unit test simulates the db in a wrong way :/10:57
bauzashttps://www.commitstrip.com/wp-content/uploads/2016/01/Strip-Voyage-dans-le-temps-650-finalenglish-2.jpg10:58
bauzasgibi: I guess we need to pdb it10:58
bauzasmy object-persistency-fu is a bit rusty10:59
gibilol10:59
gibiso the test mocks in a dict instead of a proper object from the db, hence my problem10:59
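A toy illustration of the test pitfall gibi hit: the production path expects a DB-model-like object with a `.save()` method, so a unit test that stubs the row with a plain dict simulates the db wrongly. The class and function names below are invented for illustration, not nova's real API.

```python
class FakeDBModel(dict):
    """Dict-like row that also offers .save(), like an ORM model."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.saved = False

    def save(self, session=None):
        # a real model would flush to the DB here
        self.saved = True

def migrate_row(db_spec):
    """Production-style code path: mutate the row then persist it."""
    db_spec["spec"] = "migrated"
    db_spec.save()  # a plain dict blows up here with AttributeError
    return db_spec
```

With a model-like stub the call chain works; with a bare dict, `db_spec.save()` is the type error discussed above.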
gibimoving on11:00
gibistephenfin: do you remember if we have a blocking db migration for the cpuset -> pcpuset data migration?11:14
sean-k-mooneyyay another libosinfo bug that can break live migration this time.11:29
sean-k-mooneygibi: i did not think we needed one11:29
gibisean-k-mooney: if we want to ever remove the data migration code from InstanceNUMATopology then we need a blocking migration to ensure that all the objects are loaded at least once11:30
sean-k-mooneygibi: right but I thought we were not going to drop that11:31
sean-k-mooneyi mean we can but we do it on load to avoid a data migration11:32
sean-k-mooneygibi: back to your instance numa topology question11:32
sean-k-mooneythe one in the request spec is not the same as the one in the instance11:32
sean-k-mooneythe request spec version will not have actual cpus etc assigned 11:33
sean-k-mooneyit just has the constraints11:33
sean-k-mooneye.g. 2 numa nodes with pinned cpus11:33
sean-k-mooneythe one in the instance actually will be fully populated11:33
sean-k-mooneyonce it has landed on the compute node11:33
sean-k-mooneyand been assigned cores and saved back to the db11:33
gibisean-k-mooney: yeah, so we cannot use the data in instance_extra.numa_topology to populate request_spec.numa_topology due to multicreate11:34
gibiwe need to independently handle request_spec.numa_topology and migrate it11:35
sean-k-mooneywell we should not be populating the request spec version and persisting it11:35
sean-k-mooneygibi: we clone the request spec to avoid modifying it https://github.com/openstack/nova/blob/7670303aabe16d1d7c25e411d7bd413aee7fdcf3/nova/scheduler/filters/numa_topology_filter.py#L64-L7111:38
sean-k-mooneyso that multi create works11:38
sean-k-mooneystephen fixed that in https://github.com/openstack/nova/commit/bff2030ecea8a1d21e03c61a7ece02f40dc25c5d11:38
sean-k-mooneyhttps://bugs.launchpad.net/nova/+bug/165597911:39
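A simplified sketch of the cloning pattern referenced above (the fix for bug 1655979): the filter fits a copy of the requested topology to each host, so host-specific results never leak into the shared, persisted request spec. All names and the "fitting" logic are hypothetical stand-ins for the real NUMATopologyFilter.

```python
import copy

def fit_to_host(host_free_cpus, topology):
    """Toy 'fitting': assign the first N free host cores."""
    topology["assigned"] = host_free_cpus[:topology["pcpu_count"]]
    return topology

def numa_filter(host_free_cpus, request_spec):
    # deep-copy so fitting one multi-create instance never mutates
    # the single request spec shared across the whole boot request
    candidate = copy.deepcopy(request_spec["numa_topology"])
    fitted = fit_to_host(host_free_cpus, candidate)
    return len(fitted["assigned"]) == fitted["pcpu_count"]
```

After filtering, the shared spec still carries only the constraints, never the per-host assignment.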
gibisean-k-mooney: so there will be a request spec for each multicreated instance during scheduling but only the first request spec is persisted, the rest are just temporary11:39
gibior we call create() on the clones as well?11:39
gibiohh11:40
sean-k-mooneyno only one will be persisted 11:40
sean-k-mooneywe make a copy and use that in the filter11:40
gibiand that one has numa_topology ? 11:40
sean-k-mooneythe initial request spec has a partly populated numa_topology object11:41
sean-k-mooneywhich is intentional11:41
sean-k-mooneyit's created by numa_get_constraints https://github.com/openstack/nova/blob/8d9785b965657d42f20e1ad7234f570077a387d7/nova/virt/hardware.py#L185811:42
sean-k-mooneyin the api11:42
sean-k-mooneyhttps://github.com/openstack/nova/blob/ff4b396abff80dea5a54dfe830a7db3a97a7360c/nova/compute/api.py#L106811:43
sean-k-mooneyas part of _validate_and_build_base_options11:43
sean-k-mooneythat is what is stored in the request_spec11:43
gibiso that creates numa cell objects with cpuset values11:44
sean-k-mooneyno the cpuset values will not be populated11:45
gibihttps://github.com/openstack/nova/blob/8d9785b965657d42f20e1ad7234f570077a387d7/nova/virt/hardware.py#L155311:45
gibihttps://github.com/openstack/nova/blob/8d9785b965657d42f20e1ad7234f570077a387d7/nova/virt/hardware.py#L204111:45
gibibased on this they will11:45
sean-k-mooneywell ok they will https://github.com/openstack/nova/blob/a027b45e46fb3a63166b8b86ef7f99b0b04bcec8/nova/virt/hardware.py#L1552-L155311:46
sean-k-mooneyfrom https://github.com/openstack/nova/blob/a027b45e46fb3a63166b8b86ef7f99b0b04bcec8/nova/virt/hardware.py#L2040-L205111:46
gibiyepp 11:46
sean-k-mooneythey won't be mapped to host cores at this point11:47
gibitrue11:47
sean-k-mooneybut we will know how many floating or pinned cores are required11:47
sean-k-mooneygibi: what is your current issue by the way11:48
sean-k-mooneyi only read back part of the irc log11:48
gibiOK. So i) we need to do the cpuset -> pcpuset data migration in request_spec.numa_topology that was missed originally in Victoria. And we cannot avoid that by populating request_spec.numa_topology from instance_extra.numa_topology 11:48
gibisean-k-mooney: the cpuset -> pcpuset data migration was only done in instance_extra.numa_topology and not in request_spec.numa_topology11:49
gibiso when a pre Victoria pinned instance is migrated after upgrade to Victoria the scheduler blows11:49
gibias it tries to read out pcpuset but it is not populated11:50
gibisee https://review.opendev.org/c/openstack/nova/+/82012111:50
gibiso I'm working on a bugfix for it, but it will be ugly as the InstanceNUMATopology object is persisted in two tables but the OVO itself does not know about that fact11:51
gibiso I cannot add a generic solution11:51
gibiI have to keep the upgrading of the two tables in separate paths11:51
sean-k-mooneygibi well we can just do the migration in the ovo no?11:52
sean-k-mooneyrather than in the db11:52
gibithe ovo does not know which table to update 11:52
gibithe current migration persists the data after the data migration11:52
gibibut only in instance_extra11:52
sean-k-mooneywell it only gets saved indirectly via either a request_spec.save or instance.save11:52
gibinope11:52
sean-k-mooneywhere do we save the topology directly11:53
gibisec...11:53
gibihttps://github.com/openstack/nova/blob/7670303aabe16d1d7c25e411d7bd413aee7fdcf3/nova/objects/instance_numa.py#L20311:53
gibihere11:53
sean-k-mooneyright so that is in the data migration code11:54
gibiso when an instance is loaded that needed a migration the instance_extra is changed in the DB during the load11:54
sean-k-mooneybut normally we do not do that11:54
sean-k-mooneyya only for old instances11:54
sean-k-mooneywe can do the same in the request spec11:54
sean-k-mooneywhen it's loaded, if it has an old instance object it can also just save it11:55
gibiyes, that is what I'm doing but that is ugly11:55
gibi i) ugly ii) not generic enough to handle the case when the object will be persisted in a 3rd table iii) creates a bad precedent for future OVO data migrations11:56
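A minimal sketch (hypothetical names, toy storage) of the fix shape being discussed: the request-spec load path detects a legacy payload, delegates the format change to a shared migration helper, and writes the result back to its own table so each row is migrated at most once.

```python
import json

class FakeDB:
    """Stand-in for the request_specs table: pk -> serialized blob."""
    def __init__(self):
        self.rows = {1: json.dumps({"cpuset": [0, 1]})}

    def save(self, pk, blob):
        self.rows[pk] = blob

DB = FakeDB()

def migrate_numa_primitive(prim):
    """Shared cpuset -> pcpuset format change, plus a changed flag."""
    if "pcpuset" not in prim:
        prim["pcpuset"] = prim.pop("cpuset")
        prim["cpuset"] = []
        return prim, True
    return prim, False

class RequestSpec:
    numa_topology = None

    @classmethod
    def get_by_id(cls, pk):
        prim = json.loads(DB.rows[pk])
        prim, changed = migrate_numa_primitive(prim)
        if changed:
            # write back to *this* table so the row migrates at most once
            DB.save(pk, json.dumps(prim))
        spec = cls()
        spec.numa_topology = prim
        return spec
```

The downside gibi calls out stays visible here: the instance_extra load path would need its own copy of this load-then-save-back dance.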
sean-k-mooneythe https://github.com/openstack/nova/blob/7670303aabe16d1d7c25e411d7bd413aee7fdcf3/nova/objects/request_spec.py#L245-L26211:56
gibiI will do it as I have to provide a fix today downstream that is backportable, but I hate it11:56
sean-k-mooneygibi: well you should not be updating multiple tables really11:56
gibisean-k-mooney: those are only called if the request spec is created from an instance object11:57
sean-k-mooneyyou are either updating the instance copy or the request spec copy11:57
sean-k-mooneythey are not meant to have the same data in them11:57
gibiyes, but from the OVO perspective it does not know which11:57
gibiso I cannot have a generic _obj_from_primitive call in InstanceNUMATopology that does the migration as that call has no info where to persist the result11:58
sean-k-mooneythe instance request spec object won't11:58
sean-k-mooneybut the request spec will know it's the request spec11:58
sean-k-mooneyyou should be fixing this in the RequestSpec object, not in InstanceNUMATopology11:58
gibiso now the RequestSpec object has to know that there is an InstanceNUMATopology data migration that needs to be run, that is baaad encapsulation11:59
sean-k-mooneynot really11:59
gibithe data format change should be encapsulated in the InstanceNUMATopology in my eyes11:59
sean-k-mooneythe instance numa topology object is a child object of the request spec11:59
sean-k-mooneygibi: well it can be encapsulated there12:00
sean-k-mooneybut the request spec should call InstanceNUMATopology to parse the data and also to generate the updated serialised version12:00
sean-k-mooneythen the request spec should save that as part of itself12:01
gibibut that is bad encapsulation12:01
gibithe data migration should be automatic and opaque from the client of the object12:01
sean-k-mooneygibi: how? it's just creating the InstanceNUMATopology from a json blob and then saving it as a json blob12:01
sean-k-mooneygibi: it would be automatic if we bumped the RequestSpec ovo version when the ovo version of one of its contained objects is increased12:03
sean-k-mooneybut we don't12:03
sean-k-mooneyso the request spec just needs to detect that the contained ovo is an older version than the current one12:03
sean-k-mooneythen construct the new one using from_primitive and then call to_primitive to get the updated version and save it back12:04
sean-k-mooneygibi: we should be doing the data migration in the from_primitive function12:04
sean-k-mooneyin InstanceNUMATopology12:04
gibisean-k-mooney: but then you don't know from the client side if re-persisting is needed or not12:04
sean-k-mooneythat will make it transparent to the client and keep the encapsulation12:04
sean-k-mooneygibi: well you would if the version changed but I guess that is a catch-2212:05
sean-k-mooneywe might need a way to ask the InstanceNUMATopology ovo if it needs migration12:05
sean-k-mooneyby passing it the primitive object from the db and having it return true/false12:06
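A sketch of sean-k-mooney's suggestion: give the child object a side-effect-free predicate that the owner of each table can call on the raw primitive, so the owner knows whether a write-back is needed while the format change itself stays encapsulated. The class and method names are hypothetical, not the real InstanceNUMATopology API.

```python
class InstanceNUMATopologySketch:
    @staticmethod
    def needs_migration(primitive):
        """True if this DB primitive is in the pre-Victoria format."""
        return "pcpuset" not in primitive

    @staticmethod
    def migrate(primitive):
        """Encapsulated cpuset -> pcpuset format change, in place."""
        primitive["pcpuset"] = primitive.pop("cpuset")
        primitive["cpuset"] = []
        return primitive
```

RequestSpec (or Instance) would call `needs_migration()` on load and, if true, call `migrate()` and save the result to its own table, without knowing the details of the format change.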
stephenfingibi: I think you've figured this out already but no, no blocking DB migration yet. We could add one but it hasn't been done12:07
stephenfinreading back through logs. That looks like a hairy bug12:07
gibistephenfin: thanks. I will not have time to add the blocking migration now, but maybe at some point in the future12:08
sean-k-mooneyis this only needed when we have mixed cpus12:11
gibinope12:11
gibiwhat you need is a pre victoria pinned instance migrated post Victoria12:11
sean-k-mooneyok so it's from the PCPU in placement changes12:11
opendevreviewBalazs Gibizer proposed openstack/nova master: Migrate RequestSpec.numa_topology to use pcpuset  https://review.opendev.org/c/openstack/nova/+/82015312:12
gibisean-k-mooney, bauzas, stephenfin: ^^ that is my first stab to fix this12:12
sean-k-mooneygibi: as a quick hack you can just update the numa topology object in the filter 12:12
sean-k-mooneythat won't actually fix it, will it. well it will work but it won't update the request spec version12:13
gibiyeah that would be an option too If I can assume that only the numa filter depends on this12:14
sean-k-mooneyah you also do it there12:14
gibiI do it now in the RequestSpec loading12:14
sean-k-mooneyyep just looking at it now12:15
sean-k-mooneywe use it in the api also12:15
sean-k-mooneywhen validating rebuilds12:15
sean-k-mooneyalthough I'm not sure we actually depend on the cpu sets directly12:15
sean-k-mooneyI think we generate new objects from the old and new image12:15
sean-k-mooneyand assert they are the same12:16
stephenfingibi: that's how I'd have done it also12:16
sean-k-mooneystephenfin: in the filter or gibi's patch12:17
stephenfingibi's patch12:17
sean-k-mooneyya i was assuming we would do it on load12:18
sean-k-mooneyto avoid the blocking migration12:18
stephenfindoing it in the filter leaves us having to support that forever. As a temporary fix, sure, but it's not viable long-term12:18
sean-k-mooneygibi: I'm not sure we can assume we can remove this immediately after yoga12:20
sean-k-mooneywe might want to keep it for a few releases in the event that people do an in-place upgrade12:20
stephenfinsean-k-mooney: when do we ever remove things when we said we would12:20
gibi:D12:20
sean-k-mooneylol true12:20
gibisean-k-mooney: technically if we add a blocking migration then we can remove this in Yoga12:21
gibiprobably we wont add that 12:21
sean-k-mooneygibi: we could also do it as a nova manage command with a nova status check12:22
gibiyepp12:22
bauzashonestly, my take is that we shouldn't persist this object in the API DB12:22
gibiif we trust the users to run the upgrade check12:22
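A heavily hedged toy sketch of the "nova-manage command plus nova-status check" idea: an upgrade check that counts rows still carrying the legacy format and fails until an operator-run migration clears them. The row shape and function names are invented; this is only the shape of such a check, not nova's real implementation.

```python
import json

def count_unmigrated(rows):
    """Count serialized numa_topology blobs still in the old format."""
    return sum(1 for blob in rows if "pcpuset" not in json.loads(blob))

def upgrade_check(rows):
    """nova-status-style check: fail until all rows are migrated."""
    n = count_unmigrated(rows)
    return "Success" if n == 0 else f"Failure: {n} unmigrated rows"
```

A matching nova-manage-style command would load and re-save each flagged row (triggering the on-load migration) until the check passes.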
sean-k-mooneybauzas: no the current behavior is correct12:22
bauzasif it's just for asking the scheduler using it12:22
sean-k-mooneybauzas: we need to persist it to persist the numa constraints12:22
sean-k-mooneywell technically we can generate it again12:23
bauzaswhich service is persisting it ?12:23
sean-k-mooneyif we have the embedded image and flavor12:23
sean-k-mooneyit captures the numa scheduling requirements12:23
bauzasI need to taxi my daughter now12:23
bauzasbut let's discuss this after12:23
sean-k-mooneyif we don't persist it we need to regenerate it from the embedded flavor and image12:23
bauzasor from the instance one when moving , nope ?12:25
bauzasanyway, me moving12:25
sean-k-mooneyno the instance object is fully populated. it might work but it would not work for initial scheduling12:26
sean-k-mooneywe would still need to store it in the request spec to not change the filter api12:26
bauzassean-k-mooney: that's something we already do12:44
bauzassean-k-mooney: we either populate from the image and/or the flavor first12:44
bauzasbut then for move ops, we get it from somewhere else12:44
sean-k-mooneywhere? we currently populate it from the instance or when we build the request spec from its components12:47
sean-k-mooneybauzas: gibi we could make it a property and construct it from the image and flavor if we needed to12:48
sean-k-mooneyand stop persisting it12:48
sean-k-mooneyit might slow down scheduling somewhat as numa_get_constraints is not super cheap but it's also not that expensive either12:51
* gibi is on a call12:52
sean-k-mooneyit's a pure function of its inputs https://github.com/openstack/nova/blob/7670303aabe16d1d7c25e411d7bd413aee7fdcf3/nova/virt/hardware.py#L1858 but we would have to make sure to use the correct flavor on resize12:52
gibisean-k-mooney: you mean not persist the numa_topology in the RequestSpec at all? Just create it from the image/flavor every time we need it?13:48
sean-k-mooneygibi: yep that is what bauzas is suggesting13:50
sean-k-mooneywhich we could actually do in the case of the request spec13:50
bauzassorry, a bit busy on and off13:50
bauzasbut yeah I don't like to persist the same DB table in another DB13:51
sean-k-mooneythe numa constraint function is relatively cheap in comparison to the actual numa affinity process13:51
sean-k-mooneybauzas: it's really two different things13:51
sean-k-mooneywe just use the same object13:51
sean-k-mooneyone is the numa affinity request object13:51
sean-k-mooneyand the other is the final affinity object which stores the assigned cpus and numa nodes13:51
sean-k-mooneywe just reused the same object for both13:52
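A sketch of the "stop persisting it" direction that later became the [WIP] patch: expose the request spec's numa topology as a property recomputed from the embedded flavor and image on each access, so there is nothing to data-migrate on upgrade. `numa_get_constraints` here is a trivial stand-in for the real pure function in nova/virt/hardware.py; everything else is hypothetical.

```python
def numa_get_constraints(flavor, image_meta):
    """Toy stand-in: derive only the *request* (counts, policies),
    never host-assigned cores, from flavor/image properties."""
    if flavor.get("hw:cpu_policy") == "dedicated":
        return {"pcpuset_count": flavor["vcpus"]}
    return None

class RequestSpecSketch:
    def __init__(self, flavor, image_meta):
        self.flavor = flavor
        self.image_meta = image_meta

    @property
    def numa_topology(self):
        # regenerated on demand from persisted inputs, so the second
        # serialized copy (and its upgrade problem) disappears
        return numa_get_constraints(self.flavor, self.image_meta)
```

The trade-off sean-k-mooney notes stays: each access pays the (cheap but nonzero) constraint computation, and resize has to pass the correct flavor in.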
gibiOK. I got it thanks.13:52
sean-k-mooneygibi: if we go with the generate-it-when-needed approach I would prefer not to backport that, mainly because I don't want to have to think about whether we have it or not when looking at different downstream releases13:53
sean-k-mooneybut I'm not against it on master 13:53
gibiyeah I would not backport that either13:53
gibiLets see if I can get some time adding that to master 13:54
gibibut I will go with the current patch as a backportable thing13:54
bauzassean-k-mooney: I understand that's two different things13:54
bauzasbut using the same object is creating some concerns13:54
gibiwe need that to move back til victoria13:54
gibibauzas: totally agree, I think it is worth removing it from the second table13:55
bauzas++13:55
sean-k-mooneygibi: I reviewed that and +1'd it by the way. the only thing I would add is a release note but it looks correct to me13:55
sean-k-mooneywell actually 13:55
sean-k-mooneydid you update the func test13:55
sean-k-mooneyi closed the review13:56
sean-k-mooneyok you did not13:56
sean-k-mooneyso ya, release note and then you need to fix the reproduce func test13:57
gibisure I will add a reno13:57
sean-k-mooneywhich passed...13:57
gibithe reproduce was fixed but it is not a func test it is a unit13:57
sean-k-mooneyoh ok13:57
sean-k-mooneyya it is13:57
sean-k-mooneyhttps://review.opendev.org/c/openstack/nova/+/820153/1/nova/tests/unit/objects/test_request_spec.py13:57
gibicreating a pre victoria instance in the func test is far from trivial13:58
gibiso I dropped that direction13:58
sean-k-mooneysorry I normally expect those to be func tests but ya backporting the func test to victoria would be non trivial13:59
sean-k-mooneyin this specific case i think a unit test is ok13:59
gibithat's not what I meant. I mean creating a func test on master that simulates a pre Victoria instance is hard13:59
gibiI would need to dig into the DB anyhow to backlevel the data to pre-victoria as our object layer will persist the master version14:00
*** whoami-rajat__ is now known as whoami-rajat14:00
sean-k-mooneyyes so you would have to boot the vm14:00
sean-k-mooneythen update its request spec14:00
sean-k-mooneythen migrate it and assert it explodes14:00
sean-k-mooneygibi: I assumed you meant because we will be missing several of the helper functions14:01
gibinot that ^^14:01
gibibut the "then update its request spec"14:01
gibithat would be heavy DB digging in a func test 14:01
sean-k-mooneygibi: ya you would effectively have to directly execute sql14:03
sean-k-mooneywhich I think is overkill14:04
gibiand in that sql manipulate a highly nested json dict :D14:04
sean-k-mooneyand this is all in a json blob in the db too so its not exactly nice to update via sql either14:04
sean-k-mooneyexactly14:04
gibiexactly :D14:04
sean-k-mooneyjinx :)14:04
sean-k-mooneyso ya its understandable that you chose to go the unit test route14:04
gibiI have a downstream env where they can reproduce the issue so I will have real test result hopefully in a day14:04
gibithat env is a complicated one simulating an upgrade from Mitaka to Victoria :D14:05
gibiso when I say pre-Victoria instance that is actually a Mitaka instance :D14:06
sean-k-mooneyi suspect the only reason we have not hit this downstream is our last release was based on train14:06
sean-k-mooneyand the next one which will be based on wallaby is not releasing until next year14:06
gibiyepp, the bug is introduced in Victoria14:06
sean-k-mooneythat is after the initial cpu in placement work14:07
sean-k-mooneythat was in train14:07
sean-k-mooneythis is part of the mixed cpu feature i think14:07
sean-k-mooneyya part of https://specs.openstack.org/openstack/nova-specs/specs/victoria/implemented/use-pcpu-vcpu-in-one-instance.html14:07
sean-k-mooneyhttps://specs.openstack.org/openstack/nova-specs/specs/victoria/implemented/use-pcpu-vcpu-in-one-instance.html#work-items14:08
sean-k-mooneyit added the pcpuset field14:08
sean-k-mooneyinitially after the pcpu in placement change we just used cpuset since all cores were either pinned or not and we could tell that from the config cpu_policy14:10
gibiyepp that is my understanding too14:10
opendevreviewBalazs Gibizer proposed openstack/nova master: Migrate RequestSpec.numa_topology to use pcpuset  https://review.opendev.org/c/openstack/nova/+/82015314:14
gibinow with release notes :)14:14
sean-k-mooney:)14:17
sean-k-mooneyI'll wait for ci to finish and I'll try and re-review later14:17
gibithanks14:19
*** akekane__ is now known as abhishekk14:58
*** tosky_ is now known as tosky16:17
*** priteau is now known as Guest738816:38
*** priteau_ is now known as priteau16:38
opendevreviewBalazs Gibizer proposed openstack/nova master: [WIP]Stop persisting RequestSpec.numa_topology  https://review.opendev.org/c/openstack/nova/+/82021517:43
*** artom__ is now known as artom19:36
*** tbachman_ is now known as tbachman19:59
*** tbachman_ is now known as tbachman20:10
*** tbachman_ is now known as tbachman21:05

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!