Monday, 2022-09-05

bauzasgood morning Nova08:01
bauzasgibi: ack, saw the Critical bug 08:02
bauzaswe still have 2 weeks for RC1, hopefully should be OK08:02
gibigood morning08:12
gibiyeah, hopefully we can agree on where to fix it, how to fix it, and if we fix it in oslo then hopefully we can release a new oslo.concurrency version08:13
fricklerFYI placement seems to have issues with the latest oslo.db release https://zuul.opendev.org/t/openstack/build/d7bcb42a3c4a466cac375badba13b18b not that it will likely matter much with all the projects that are completely broken by it08:25
sean-k-mooney[m]frickler: that's just a deprecation warning not an actual failure. placement is mostly SQLAlchemy 2.0 compatible already08:45
sean-k-mooney[m]looks like we have missed one change but that should not be hard to address08:46
gibifrickler, sean-k-mooney[m]: I'm on it08:46
fricklersean-k-mooney[m]: yes, just mentioned it because it would block that requirements patch. certainly not a big thing compared to the other blockers there08:47
gibiwe intended to store dicts in a cache but we stored row objects instead08:47
sean-k-mooney[m]frickler:  well as it stands it's just breaking the test as we treat warnings as errors. i doubt it's breaking placement at all08:48
sean-k-mooney[m]but our cache is not working correctly so that's a latent bug08:48
sean-k-mooney[m]gibi https://github.com/openstack/placement/commit/c68d472dca6619055579831ad5464042f745557a i guess we missed this use case08:54
sean-k-mooney[m]the test is using the dict interface instead of the field interface08:55
gibiyeah, good point, we can store row objects in our rc caches if we access the columns by attribute access and not by dict access08:55
gibilet me check that it works08:56
gibias it provides an easier solution08:56
sean-k-mooney[m]https://github.com/openstack/placement/blob/13bbdba06da19f85c05a2a9e1fbdb9d1813c3b47/placement/objects/resource_class.py#L220 we are trying to use _mapping to convert them to dict on get all08:56
sean-k-mooney[m]well no, to convert them via a dict to a ResourceClass, which should be fine08:57
gibihere we have Row objects in the caches https://github.com/openstack/placement/blob/13bbdba06da19f85c05a2a9e1fbdb9d1813c3b47/placement/attribute_cache.py#L15508:58
gibiwhich is actually fine08:59
gibias we use attribute access everywhere08:59
gibiexcept in that one func test that now fails08:59
gibihm, no09:00
gibithat cache is broken09:00
gibias sometimes we store dicts09:00
gibisometimes we store Row objects09:00
gibihttps://github.com/openstack/placement/blob/13bbdba06da19f85c05a2a9e1fbdb9d1813c3b47/placement/attribute_cache.py#L16309:00
sean-k-mooney[m]https://github.com/openstack/placement/blob/48f31d446be5dd8743392e6d1e45ed8183a9ce1b/placement/attribute_cache.py#L148-L15109:01
sean-k-mooney[m]we store db rows when we load them from the db09:01
gibiyep so this is a type mess09:02
sean-k-mooney[m]ya stephen was complaining about this in a different patch09:03
sean-k-mooney[m]so the all_cache09:04
sean-k-mooney[m]currently has the row09:05
sean-k-mooney[m]but that could just be a new tuple, right09:05
sean-k-mooney[m]https://github.com/openstack/placement/blob/48f31d446be5dd8743392e6d1e45ed8183a9ce1b/placement/attribute_cache.py#L15109:05
sean-k-mooney[m] self._all_cache = {r[1]: r for r in res} ->  self._all_cache = {r[1]: (r[0], r[1]) for r in res}09:06
gibiso while 'self._all_cache = {r[1]: r._mapping for r in res}' fixes the currently failing test case, it breaks a bunch of gabbi tests09:06
sean-k-mooney[m]what else is in that row beyond the id and string09:07
gibiupdated_at and created_at09:08
gibiit is in the select above the fetchall09:08
sean-k-mooney[m]ah base is the timestamp mixin09:08
sean-k-mooney[m]also the value of the cache is meant to be a dict not a tuple so my version is incorrect09:09
gibiyeah I have to figure out why the gabbi tests fail with my change09:10
gibitechnically Row._mapping is not a dict just a dict-like object09:10
gibiso it might be a problem09:10
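For reference, a minimal sketch of the Row access styles under discussion, assuming SQLAlchemy 2.0 and an illustrative table rather than placement's real schema: attribute and positional access work directly on a Row, while dict-style access has to go through Row._mapping, which is a read-only, dict-like view rather than a real dict.

    from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, select

    metadata = MetaData()
    # illustrative table, not placement's actual resource_classes definition
    resource_classes = Table(
        "resource_classes", metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(255)),
    )

    engine = create_engine("sqlite://")
    metadata.create_all(engine)
    with engine.begin() as conn:
        conn.execute(resource_classes.insert().values(id=1, name="VCPU"))

    with engine.connect() as conn:
        row = conn.execute(select(resource_classes)).fetchone()

    print(row.name)              # attribute access: fine on a 2.0-style Row
    print(row[1])                # positional access: also fine
    print(row._mapping["name"])  # dict-style access goes through ._mapping
    print(dict(row._mapping))    # ._mapping is dict-like; copy it if a real dict is needed
    # row["name"] would raise under SQLAlchemy 2.0, which is what broke the test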
sean-k-mooney[m]do you want to do dict(**r._mapping)09:10
sean-k-mooney[m]by the way, to make these dicts not rows09:11
gibiyeah I can try that09:11
sean-k-mooney[m]although if we are expecting rows and using .id09:11
sean-k-mooney[m]you would need a named tuple instead09:11
gibithat cache stores dicts so if there is .id access now that would fail anyhow 09:12
sean-k-mooney[m]named tuple is the only “standard”  class that will give you the field and dict style access09:12
sean-k-mooney[m]well it's storing row objects currently09:13
sean-k-mooney[m]it's meant to be a dict09:13
gibinamedtuple does not give you dict access09:14
gibiit sometimes stores Row, sometimes stores dict09:14
gibihttps://github.com/openstack/placement/blob/13bbdba06da19f85c05a2a9e1fbdb9d1813c3b47/placement/attribute_cache.py#L184-L18909:14
gibiso it seems we started using that cache also in a mixed mode09:15
gibiat some places we use attribute access on it09:15
gibihence the gabbi test failures09:15
sean-k-mooney[m]https://github.com/openstack/placement/blame/13bbdba06da19f85c05a2a9e1fbdb9d1813c3b47/placement/objects/trait.py#L151 ya stephen added a fixme when they noticed that09:16
sean-k-mooney[m]namedtuple gives you indexed access, i.e. r[0]09:16
sean-k-mooney[m]but i guess it won't give you r[‘id’]09:16
gibihttps://github.com/openstack/placement/blame/13bbdba06da19f85c05a2a9e1fbdb9d1813c3b47/placement/objects/resource_class.py#L71-L74 so we assume Row here09:17
gibiso we need to decide which way we go. a) Store Row (or namedtuple) objects in caches and keep attribute access b) store dict and go with dict access09:18
sean-k-mooney[m]https://github.com/openstack/placement/commit/b3fe04f081a096258468d032560f46cdfe77e144 stephen tried to remove these assumptions in ^09:18
sean-k-mooney[m]well that will work as a named tuple09:19
gibiRow and namedtuple are pretty much compatible, yes09:19
sean-k-mooney[m]i would avoid random dicts personally09:19
gibiack09:19
sean-k-mooney[m]and either store the row or named tuple09:20
sean-k-mooney[m]I'm not sure if there is a reason not to store the row object09:20
sean-k-mooney[m]does it increase memory09:20
sean-k-mooney[m]or have any other side effect we would not want09:20
gibistoring Row is OK; the doc said dict, hence my statement that the cache is broken09:23
sean-k-mooney[m]ack09:23
gibiwe need to mix Row and namedtuple as I can only create namedtuple here https://github.com/openstack/placement/blob/13bbdba06da19f85c05a2a9e1fbdb9d1813c3b47/placement/attribute_cache.py#L157-L16809:24
gibibut they are compatible09:24
gibiso I only leave a note09:24
gibiwhen I convert that to namedtuple09:24
sean-k-mooney[m]if you use a named tuple on line 155 as well in refresh_from_db I think we don't need to mix types but cool, I'll review when you push09:39
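As a rough illustration of the namedtuple direction being discussed (the helper name build_caches and the exact field list below are made up for the sketch; the real field order has to match the SELECT in the cache refresh code), storing one well-defined type keeps the attribute access used elsewhere working:

    import collections

    # attribute access (entry.id, entry.name) stays compatible with how Row is
    # used elsewhere; the field order must match the SELECT in the real code
    _CacheEntry = collections.namedtuple(
        "_CacheEntry", ["id", "name", "updated_at", "created_at"])

    def build_caches(rows):
        """Build the id and all caches from a fetchall() result, storing one type only."""
        entries = [_CacheEntry(*row) for row in rows]
        id_cache = {entry.name: entry.id for entry in entries}
        all_cache = {entry.name: entry for entry in entries}
        return id_cache, all_cache

    # stand-in for the DB fetchall() result
    rows = [(1, "VCPU", None, None), (2, "MEMORY_MB", None, None)]
    id_cache, all_cache = build_caches(rows)
    assert all_cache["VCPU"].id == 1   # attribute access keeps working
    assert id_cache["MEMORY_MB"] == 2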
songwenping_sean-k-mooney[m],gibi: hi, nova-scheduler gets a 504 gateway timeout from allocation_candidates when creating a VM that requests 8 GPUs on our client's env; there are 13 GPU computes, and every compute has 8 GPUs.09:47
opendevreviewBalazs Gibizer proposed openstack/placement master: Make us compatible with oslo.db 12.1.0  https://review.opendev.org/c/openstack/placement/+/85586209:47
gibisean-k-mooney[m]: ^^09:47
gibistephenfin: ^^09:47
gibisean-k-mooney[m]: also here is my stab at the nova only fair lock fix https://review.opendev.org/c/openstack/nova/+/855717 there is the oslo version of the fix https://review.opendev.org/c/openstack/oslo.concurrency/+/85571409:50
songwenping_i inserted similar test data on my devstack env, and the result is the same; perhaps the api of allocation_candidates/limit... needs to be optimized.09:51
gibibut I have to go back and think about the unit tests as it seems they are unstable09:51
opendevreviewAmit Uniyal proposed openstack/nova master: Adds check for VM snapshot fail while quiesce  https://review.opendev.org/c/openstack/nova/+/85217109:53
sean-k-mooney[m]songwenping_:  this is physical gpu passthrough?09:54
sean-k-mooney[m]not vGPU correct09:54
songwenping_yes09:54
songwenping_pgpu passthrough09:54
sean-k-mooney[m]those are not tracked in placement then09:55
auniyal_Hi09:55
auniyal_please review these09:55
auniyal_https://review.opendev.org/c/openstack/nova/+/85217109:55
auniyal_https://review.opendev.org/c/openstack/nova/+/85449909:55
auniyal_backporting09:55
auniyal_https://review.opendev.org/c/openstack/nova/+/85497909:55
auniyal_https://review.opendev.org/c/openstack/nova/+/85498009:55
sean-k-mooney[m]songwenping_:  on master we now can track pci devices in placement but in any other release pci devices are not tracked in placement09:56
songwenping_sean-k-mooney[m]: we use cyborg to manage pgpu, the request url is :curl -g -i -X GET "http://10.7.20.73/placement/allocation_candidates?limit=1000&group_policy=none&required1=CUSTOM_GPU_NVIDIA%2CCUSTOM_GPU_PRODUCT_ID_1DB6&required2=CUSTOM_GPU_NVIDIA%2CCUSTOM_GPU_PRODUCT_ID_1DB6&required3=CUSTOM_GPU_NVIDIA%2CCUSTOM_GPU_PRODUCT_ID_1DB6&required4=CUSTOM_GPU_NVIDIA%2CCUSTOM_GPU_PRODUCT_ID_1DB6&required5=CUSTOM_GPU_NVIDIA%2CCUSTOM_GPU_PRODUCT09:56
songwenping__ID_1DB6&required6=CUSTOM_GPU_NVIDIA%2CCUSTOM_GPU_PRODUCT_ID_1DB6&resources=MEMORY_MB%3A32%2CVCPU%3A1&resources1=PGPU%3A1&resources2=PGPU%3A1&resources3=PGPU%3A1&resources4=PGPU%3A1&resources5=PGPU%3A1&resources6=PGPU%3A1" -H "Accept: application/json" -H "OpenStack-API-Version: placement 1.29" -H "User-Agent: openstacksdk/0.99.0 keystoneauth1/4.6.0 python-requests/2.27.1 CPython/3.8.10" -H "X-Auth-Token: gAAAAABjFbkj8nz24Q7A6J0qmjpdZHfWM09:56
songwenping_vZidJOT9iYhV2MQCngcYQHhSQmjsGJofkYoT087tAISpf3IniDGwPTHXz_-8x-1nF60WavSYFgEd-5l3_ENrGumaHuU1yfhMJqZu06IR4SXacjA1g6ImSSEfLbfQ9zrPouB0roFokHPmPy3-UpnFZE"09:56
gibisean-k-mooney[m]: is it via cyborg? because then it might be tracked in placement09:56
sean-k-mooney[m]ok09:56
sean-k-mooney[m]in that case perhaps they are hitting the combinatorial explosion we were worried about with tracking VFs directly09:58
gibiprobably there are too many possible candidates09:58
sean-k-mooney[m]there are 13 choose 8 combinations09:58
sean-k-mooney[m]that's 128709:58
gibithat is not that much09:58
gibibut if there are 1000 computes09:58
sean-k-mooney[m]1287*100009:58
gibior just 10009:58
gibithen that is sizeable09:59
sean-k-mooney[m]ya it will grow quickly09:59
sean-k-mooney[m]sorry no10:00
songwenping_there are extra 8 computes without gpu.10:00
sean-k-mooney[m]it's 13 hosts each with 8 gpus and the vm is asking for 810:00
sean-k-mooney[m]so it won't explode like that, there is only 1 allocation possible per host10:00
gibinope10:01
gibiif you have 8 groups and 8 gpus10:01
gibithen each group can be satisfied by each gpu10:01
sean-k-mooney[m]you think it would be n squared10:01
sean-k-mooney[m]oh hum maybe10:01
gibi8!10:01
gibi4032010:02
sean-k-mooney[m]ya… that would be bad10:03
sean-k-mooney[m]i hope that's not what's happening or that will cause issues for pci devices in placement too10:03
gibithis is coming from the fact that placement treats groups and RPs individually, even if we have very similar RPs and very similar groups10:03
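A quick sanity check of the numbers in this exchange, assuming the case discussed here of 8 identical suffixed groups mapped onto 8 identical GPU provider RPs on each of the 13 hosts:

    import math
    from itertools import permutations

    gpus_per_host = 8
    groups = 8   # one suffixed group per requested PGPU

    # distinct mappings of 8 identical groups onto 8 GPU RPs on a single host
    per_host = math.perm(gpus_per_host, groups)   # 8! == 40320
    print(per_host)

    # the same count, enumerated the way placement effectively does it
    assert sum(1 for _ in permutations(range(gpus_per_host), groups)) == per_host

    print(per_host * 13)   # worst case across the 13 GPU computes in this deployment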
gibiI can make a test case for 8 PCI devs :)10:04
gibiin nova func test10:04
gibiso we can prove it10:04
sean-k-mooney[m]perhaps start with 410:04
sean-k-mooney[m]i mean if it's a gabbi test then sure 810:05
gibiack. first I will go and try to stabilize the fair lock unit tests then I will look at the 4-8 PCI issue10:05
sean-k-mooney[m]we really need to not do all possible permutations on the placement side10:05
gibiif this is real factorial then we need to introduce a per-host limit for the a_c query10:06
sean-k-mooney[m]ya i was debating if that was the only option10:06
gibiif the groups are different by trait then we need all permutations10:06
gibiotherwise we might not find the right one10:06
sean-k-mooney[m]well an allocation candidate by definition meets the requirements10:07
gibithe trick here is that we have identical groups and identical RPs10:07
sean-k-mooney[m]so it depends on why this is happening10:07
gibito find a_c placement needs to iterate permutations I think in the general case10:07
sean-k-mooney[m]anyway something to look into i guess10:07
gibiyepp10:07
sean-k-mooney[m]I'm going to grab a coffee and take my blood pressure meds and then check on freya, be back in about 10 mins10:08
gibiack10:09
gibifor me coffee is the blood pressure med10:09
sean-k-mooneygibi: should fasteners reintroduce the workaround they had10:32
gibithat is a way too but I have less authority over that project10:35
sean-k-mooneyisn't it an oslo deliverable too10:35
sean-k-mooneyor has it moved out of openstack10:36
sean-k-mooneyoh its not an openstack project10:36
sean-k-mooneyi thought it used to be at one point10:37
sean-k-mooneyi guess we can fix it in oslo but we should let them know that under eventlet the reentrant guarantee is broken10:38
sean-k-mooneyhttps://github.com/harlowja/fasteners#-overview10:38
sean-k-mooneythen note that it should be reentrant10:38
sean-k-mooneygibi: https://github.com/harlowja/fasteners/issues/8610:44
sean-k-mooneythat's also interesting10:44
sean-k-mooneyhttps://github.com/harlowja/fasteners/pull/87/files10:44
sean-k-mooney  elif not self.has_pending_writers:10:44
sean-k-mooney                    elif (self._writer == me) or not self.has_pending_writers:10:44
gibiI'm not sure I follow how this connects to our current problem.10:49
sean-k-mooney it's relying on threading.current_thread10:50
sean-k-mooneyand now it allows you to reacquire the lock if that is the same10:50
sean-k-mooneybut with spawn_n10:50
sean-k-mooneythat means two greenthreads could get the same lock if they run on the same os thread10:51
gibiyes, ReaderWriterLock relies on current_thread for reentrancy10:51
sean-k-mooneythis was changed in January10:52
sean-k-mooneycould you try downgrading fasteners to 0.17.210:52
gibibut the fact that ReaderWriterLock depends on current_thread was not introduced there, it was there before https://github.com/harlowja/fasteners/pull/87/files#diff-bdd827bd84626190e8a93d1a50782b998b426261511e653de5bb775e9082e1f3L16910:52
sean-k-mooneygibi: it did not used to in the reader writer case10:52
sean-k-mooneyright but it used to prevent getting the reader lock if there were any writers10:53
gibioslo depends on the writer lock it seems https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L28810:53
gibiand the writer part had reentrancy before 0.17.210:53
sean-k-mooneyI'm going to try your oslo reproducer and downgrade it just to see10:54
gibiand writer lock is affected independently from the 0.17.2 https://github.com/harlowja/fasteners/pull/87/files#diff-bdd827bd84626190e8a93d1a50782b998b426261511e653de5bb775e9082e1f3L20810:54
sean-k-mooneysynchronized is taking a write_lock ya?10:56
sean-k-mooneyif so then it's not related to that change10:56
sean-k-mooneybut the is_writer code is not eventlet safe10:57
sean-k-mooneylikely because of the workaround you mentioned they removed10:57
sean-k-mooneyhttps://github.com/harlowja/fasteners/commit/467ed75ee1e9465ebff8b5edf452770befb9391310:57
sean-k-mooneyso 0.15 dropped that10:58
gibiyes, it has been broken since 0.1511:00
gibiit is affecting master and yoga11:01
gibiin xena we have < 0.1511:01
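A minimal sketch of the failure mode being described, assuming fasteners >= 0.15 (i.e. without the old eventlet workaround) and an eventlet-monkey-patched process: green threads started with spawn_n can report the same threading.current_thread(), so ReaderWriterLock can treat the second acquisition as reentrant instead of blocking it.

    import eventlet
    eventlet.monkey_patch()

    import threading
    import fasteners

    lock = fasteners.ReaderWriterLock()
    events = []

    def worker(name):
        # spawn_n greenlets are not wrapped as GreenThreads, so both workers may
        # see the same threading.current_thread() and the "exclusive" write lock
        # lets the second one straight in
        with lock.write_lock():
            events.append("%s enter as %s" % (name, threading.current_thread().name))
            eventlet.sleep(0.1)   # yield so the other greenlet gets to run
            events.append("%s exit" % name)

    eventlet.spawn_n(worker, "a")
    eventlet.spawn_n(worker, "b")
    eventlet.sleep(0.5)
    print(events)
    # with the bug present the entries can interleave, e.g.
    # ['a enter as ...', 'b enter as ...', 'a exit', 'b exit']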
sean-k-mooneyhttps://github.com/harlowja/fasteners/issues/9611:12
sean-k-mooneyat least they can triage ^ and decide if it's something they want to fix11:13
gibithanks11:20
opendevreviewBalazs Gibizer proposed openstack/nova master: Fix fair internal lock used from eventlet.spawn_n  https://review.opendev.org/c/openstack/nova/+/85571711:33
gibiadded the fasteners issue link and hopefully stabilized the unit test ^^11:33
opendevreviewBalazs Gibizer proposed openstack/nova stable/yoga: Fix fair internal lock used from eventlet.spawn_n  https://review.opendev.org/c/openstack/nova/+/85571811:34
sean-k-mooneyI'm going to propose two reverts to fasteners and associate them with the issue link too in case they decide that that is appropriate11:34
gibiack11:35
sean-k-mooneyhttps://github.com/harlowja/fasteners/pull/9711:40
gibithanks11:47
*** tosky_ is now known as tosky12:23
opendevreviewBalazs Gibizer proposed openstack/nova master: Show candidate combinatorial explosion by dev number  https://review.opendev.org/c/openstack/nova/+/85588513:00
gibisean-k-mooney: here are the numbers of the combinatorial explosion13:00
gibi^^13:00
gibiin case of a single device per RP (i.e. PCI or PF) we have, worst case, a factorial amount of candidates13:01
gibiin case a single RP provides more than one device (n VFs for a PF RP) then, worst case, we have exponential candidates13:01
gibiand placement first generates all of them then limits the result based on the limit queryparam https://github.com/openstack/placement/blob/723da65faf66cc9b8d02f3756387dc58437e62af/placement/objects/research_context.py#L289-L29213:04
gibiso this probably needs a bit of (probably massive) refactoring if we want to avoid placement blowing up on 8 devices13:13
gibiwe need to inline the limit somehow 13:13
gibibut by that we would potentially lose viable candidates13:15
gibiso placement alone cannot decide where to limit13:16
gibiSo modeling similar PFs of PCI devs does not help as it would lead to the VF scenario. 13:23
gibiAlso we cannot model count=n as a single group as placement never splits a suffixed group to fit it into multiple RP13:24
gibiwhile for nova it would be enough to have a small number of candidates per compute host, as long as we have nova-side PCI filtering we need all candidates from placement as we don't know which will fulfill the nova-side filtering13:26
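To make the "inline the limit" trade-off above concrete, an illustrative sketch (not placement's real code, which builds a product over per-group candidate lists and filters it afterwards): materializing every mapping and slicing at the end does all the work anyway, while limiting lazily only ever yields the first mappings in a fixed order, so candidates that nova-side PCI filtering might still need are never produced.

    import itertools

    rps = ["rp%d" % i for i in range(8)]      # 8 identical GPU RPs on one host
    groups = 8                                # 8 identical suffixed request groups
    limit = 10

    # today's shape: generate every group->RP mapping, then apply the limit
    all_candidates = list(itertools.permutations(rps, groups))   # 40320 items
    limited = all_candidates[:limit]

    # inlining the limit keeps time and memory bounded...
    lazy = list(itertools.islice(itertools.permutations(rps, groups), limit))

    # ...but only returns the first few mappings in a fixed iteration order
    print(len(all_candidates), len(limited), len(lazy))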
sean-k-mooneygibi: ya so this is exactly why we did not want each VF to be an RP13:44
sean-k-mooneywe were very concerned it would explode like this13:44
* sean-k-mooney just back from dentist so reading back13:44
sean-k-mooneygibi: this feels a bit like the numa node combinatorial issue13:46
sean-k-mooneyis this happening in sql or in python13:46
gibithis is in python13:46
gibiin case of numa we re-tried host - guest numa mapping multiple times13:46
sean-k-mooneyack, i wonder if we can use itertools.combinations there instead of permutations in that case13:46
gibiwe use itertools.product to generate all mapping between RPs and groups13:47
sean-k-mooneyhum ya i wonder if we really need all13:47
gibifrom placement perspective we need all13:47
sean-k-mooneydo you have a pointer to the code13:47
gibinova might be able to limit it by providing all the PCI filtering information to placement13:48
sean-k-mooneygibi: we likely can take a per-host limit and then use a generator to limit the amount we return13:48
sean-k-mooneygibi: to provide all the info we would also need to pass the numa topology info which would change the tree structure13:49
sean-k-mooneydoable but a lot of work13:49
gibithe per-host limit has the problem that if nova still filters out PCI devices after placement then we need to make sure that placement returns enough candidates to fulfill that extra filtering13:49
sean-k-mooneyhttps://github.com/openstack/placement/blob/c68d472dca6619055579831ad5464042f745557a/placement/objects/allocation_candidate.py#L364-L38713:49
gibiyeh I linked it above :)13:50
sean-k-mooneygibi: ya it's the same issue with the current request limit13:50
sean-k-mooneyyou linked to the research context13:50
sean-k-mooneyunless i missed it13:50
gibiahh sorry yes13:51
gibiin the commit message I linked to the product call13:51
gibihttps://review.opendev.org/c/openstack/nova/+/855885/1//COMMIT_MSG#2613:51
sean-k-mooneyack13:51
gibiat the moment I don't think this is easy to fix and given my time allocation for the next months I won't start on it.13:54
gibisongwenping_ might have time and ideas to hack on it13:54
gibiI left the above functional test on top of the PCI series so we will not forget that this needs to be fixed13:55
gibibut this makes me question if we want to merge the scheduling support in AA 13:55
gibiit might be useful for small deployments (<8 devs per host)13:55
gibibut it is dangerous for big deployments13:56
gibibauzas: do we have a PTG etherpad? 13:56
sean-k-mooneyI'm wondering if we need to have a way to limit this from the api query13:56
bauzasgibi: not yet, but I can create one13:56
gibibauzas: I could use one :)13:57
bauzasas you want13:57
gibisean-k-mooney: if we limit the a_c query then we need to give hints to placement about which order to iterate the candidates to fill the limited response13:58
gibisean-k-mooney: but I'm not sure I can express what we need13:59
gibisean-k-mooney: it is: skip those candidates that are "too similar" to an already found candidate13:59
sean-k-mooneygibi: I'm wondering if we can avoid it by generating a sufficiently diverse set of combinations14:01
gibii.e. in case RP1(2), RP2(2), G1(1), G2(1) -> (RP1-G1, RP2-G2) and (RP1-G2, RP2-G1) might be too similar if G1 and G2 ask for the same RC and traits14:01
gibiyeah, a diverse set of candidates == skip the too similar ones :)14:01
gibibut what is diverse might not be universal14:02
gibilike if RP2 is remote_managed=True in nova then placement still sees the same symmetry but the two RPs are not equivalent from nova perspective14:02
sean-k-mooneyright so we want to ignore order when looking at equivalence provided the request group is the same14:04
sean-k-mooneyi feel like there is definitely a way to optimise so that we don't generate a product14:04
sean-k-mooneythat is going to over produce results14:04
sean-k-mooneybut off the top of my head I'm not sure of the correct way to proceed14:05
gibibauzas: I've created https://etherpad.opendev.org/p/nova-antelope-ptg14:05
sean-k-mooneylooking at https://docs.python.org/3/library/itertools.html#itertools-recipes14:06
gibisean-k-mooney: I agree on the second part "but off the top of my head I'm not sure of the correct way to proceed" but I'm not sure we can do better than actually iterating the product14:06
sean-k-mooneymaybe take(per_host_limit, random_product(...))14:07
gibiyeah random is a way out even if a dirty one14:09
sean-k-mooneygibi: if we can't avoid the need to generate the product I'm wondering if we can break the implicit lexicographical ordering14:09
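The take() and random_product() helpers mentioned above are from the itertools recipes in the Python docs; below is a hedged sketch of how a per-host cap could sample the mapping space rather than enumerate it, purely exploratory and not an agreed placement design. Note random_product returns one tuple per call, so it is wrapped in an iterator here, and duplicate RPs within a sampled mapping would still need the existing consistency filtering.

    import itertools
    import random

    def take(n, iterable):
        """Return the first n items of the iterable as a list (itertools recipe)."""
        return list(itertools.islice(iterable, n))

    def random_product(*args, repeat=1):
        """Random selection from itertools.product(*args) (itertools recipe)."""
        pools = [tuple(pool) for pool in args] * repeat
        return tuple(map(random.choice, pools))

    # 4 identical suffixed groups, each satisfiable by the same 4 RPs
    group_candidates = [["rp0", "rp1", "rp2", "rp3"]] * 4

    per_host_limit = 5
    stream = iter(lambda: random_product(*group_candidates), None)  # endless sampler
    sampled = take(per_host_limit, stream)
    print(sampled)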
opendevreviewAmit Uniyal proposed openstack/nova master: add regression test case for bug 1552777  https://review.opendev.org/c/openstack/nova/+/85590014:10
opendevreviewAmit Uniyal proposed openstack/nova master: Adds check for instance resizing  https://review.opendev.org/c/openstack/nova/+/85590114:10
gibiif nova could define a requested order based on information that nova has but placement doesn't, then yes, ordering can be a solution. that is basically nova asking placement to generate diverse candidates with a definition of diverse provided by nova in the a_c query14:10
gibiOK I tried to document this issue on the PTG etherpad and by that I will move this to background processing :)14:18
opendevreviewBalazs Gibizer proposed openstack/nova master: Fix fair internal lock used from eventlet.spawn_n  https://review.opendev.org/c/openstack/nova/+/85571714:34
opendevreviewBalazs Gibizer proposed openstack/nova stable/yoga: Fix fair internal lock used from eventlet.spawn_n  https://review.opendev.org/c/openstack/nova/+/85571814:36
gibibauzas, sean-k-mooney: ML thread about the fair lock issue https://lists.openstack.org/pipermail/openstack-discuss/2022-September/030325.html16:22
bauzasgibi: thanks16:22
sean-k-mooneycool 16:23
gibiI did not find any direct evidence that other projects than nova and taskflow are affected16:26
opendevreviewBalazs Gibizer proposed openstack/nova master: Fix fair internal lock used from eventlet.spawn_n  https://review.opendev.org/c/openstack/nova/+/85571716:30
opendevreviewBalazs Gibizer proposed openstack/nova stable/yoga: Fix fair internal lock used from eventlet.spawn_n  https://review.opendev.org/c/openstack/nova/+/85571816:31
sean-k-mooney gibi bauzas https://blueprints.launchpad.net/nova/+spec/non-admin-hw-offloaded-ovs17:24
sean-k-mooneyi can probably hack up a poc of that this week i guess but I'm unsure if i will have time to take that to completion17:24
bauzascycle highlights ready to review https://review.opendev.org/c/openstack/releases/+/85597417:26
sean-k-mooneydo we have any features to highlight for non-libvirt drivers17:28
sean-k-mooneywere there any important ironic improvements17:28
sean-k-mooneyor hyperv17:28
sean-k-mooneybased on https://docs.openstack.org/releasenotes/nova/unreleased.html#new-features i guess not17:29
sean-k-mooneygibi: bauzas ... so there is also a libvirt bug at play for hardware offloaded ovs18:05
sean-k-mooneyhttps://github.com/libvirt/libvirt/commit/8708ca01c0dd38764cad3e483405bdeb05ac2e9618:06
whoami-rajatbauzas, hey, we're past client freeze but my API feature is in and just wanted to mention the OSC and novaclient patches required by my feature19:03
whoami-rajatnovaclient https://review.opendev.org/c/openstack/python-novaclient/+/82716319:03
whoami-rajatOSC: https://review.opendev.org/c/openstack/python-openstackclient/+/83101419:03
*** haleyb_ is now known as haleyb20:12

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!