Tuesday, 2022-06-14

opendevreviewTakashi Kajinami proposed openstack/nova master: libvirt: Add new option to require multipathd for volume attachment  https://review.opendev.org/c/openstack/nova/+/84566002:41
opendevreviewTakashi Kajinami proposed openstack/nova master: libvirt: Add new option to require multipathd for volume attachment  https://review.opendev.org/c/openstack/nova/+/84566002:43
opendevreviewTakashi Kajinami proposed openstack/nova master: libvirt: Add new option to require multipathd for volume attachment  https://review.opendev.org/c/openstack/nova/+/84566002:46
opendevreviewMerged openstack/nova stable/xena: Fix eventlet.tpool import  https://review.opendev.org/c/openstack/nova/+/84073303:26
opendevreviewMerged openstack/nova stable/xena: Graceful recovery when attaching volume fails  https://review.opendev.org/c/openstack/nova/+/82943303:26
opendevreviewMerged openstack/nova stable/yoga: Fix segment-aware scheduling permissions error  https://review.opendev.org/c/openstack/nova/+/84073204:07
opendevreviewMerged openstack/nova stable/yoga: Isolate PCI tracker unit tests  https://review.opendev.org/c/openstack/nova/+/84083004:07
opendevreviewMerged openstack/nova stable/yoga: Remove unavailable but not reported PCI devices at startup  https://review.opendev.org/c/openstack/nova/+/84083104:22
opendevreviewTakashi Kajinami proposed openstack/nova master: libvirt: Add new option to require multipathd for volume attachment  https://review.opendev.org/c/openstack/nova/+/84566004:24
opendevreviewTakashi Kajinami proposed openstack/nova master: libvirt: Add new option to require multipathd for volume attachment  https://review.opendev.org/c/openstack/nova/+/84566004:26
opendevreviewMerged openstack/nova stable/wallaby: Fix inactive session error in compute node creation  https://review.opendev.org/c/openstack/nova/+/81180904:35
opendevreviewTakashi Kajinami proposed openstack/nova master: libvirt: Add new option to require multipathd for volume attachment  https://review.opendev.org/c/openstack/nova/+/84566006:45
gibigood morning07:32
bauzasgood morning07:47
gibisean-k-mooney, artom: I have concerns in https://review.opendev.org/c/openstack/nova/+/82404807:54
*** lajoskatona_ is now known as lajoskatona07:59
bauzasgibi: following the discussion we had last week about VGPU allocations https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L8203-L820608:34
bauzashttps://images.squarespace-cdn.com/content/518f5d62e4b075248d6a3f90/1385326046909-827X09Z2RML6LYHWQBVI/git-blame2.jpg?format=1500w&content-type=image%2Fjpeg08:37
gibibauzas: ahh, it was wishful thinking08:42
bauzasgibi: I just wrote a quick functest and this is weird08:43
bauzasI asked for VGPU=2 and I got only one allocation08:43
bauzasso I need to dig into the code08:43
gibione allocation of two vgpus from the same pgpu?08:43
bauzasjust a plain simple flavor with resources:VGPU=208:44
bauzaswhich should have created a single allocation of VGPU=208:44
bauzaswhen arriving before the above code08:44
bauzasagainst one single RP08:45
gibiyepp placement does not split allocation against a single RC between RPs08:45
bauzasbut I don't know why when I introspect, I see :08:45
bauzas(Pdb) vgpu_allocations08:45
bauzas{'1df35fc7-41f4-4ef4-b995-7fd561c6a391': {'resources': {'VGPU': 1}}}08:45
gibiohh, that is strange08:45
gibiyou should have 2 in the allocation08:45
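(For context, this is the shape the allocation should have for a flavor with resources:VGPU=2 — a single allocation of 2 against one provider, since placement never splits a single resource class across providers; the RP UUID is just the one from the pdb output above:)

    expected_vgpu_allocations = {
        '1df35fc7-41f4-4ef4-b995-7fd561c6a391': {'resources': {'VGPU': 2}},
    }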
bauzasyeah08:45
bauzascorrect08:45
gibibut then you have a nice problem to debug :)08:46
bauzasI'm checking the max limit of the inventory08:46
bauzasthat could be the reason08:46
bauzasif we cap to 1, then there are no ways to create an allocation of 208:46
bauzasbut... the scheduler should have failed, right?08:46
gibibut then you should get no allocation candidates08:47
bauzasyeah that08:47
bauzassomething got messed up somewhere and I still need to investigate whether this is just a fixture issue08:47
gibido you have the placement log from the a_c query? 08:48
gibidid nova request 1 VGPU in the a_c query?08:48
bauzasI'll set the DEBUG level08:49
bauzasshit OS_DEBUG=1 doesn't seem to work with functional tests08:50
gibibauzas: yep it does not, I was not able to track that down last time08:51
bauzaspdb'ing a bit further down then08:52
bauzasanyway, looks like a nice bone to snag08:52
bauzasoh f***08:53
bauzasforget08:53
bauzasit's PEBKAC08:53
* bauzas facepalms08:53
* bauzas hides08:53
bauzasI wrote stupid code08:53
bauzashttps://paste.opendev.org/show/bgjrdJhdinSqXXIY6EWm/08:54
bauzasway better now this is fixed08:55
bauzaslet's pretend this whole conversation never existed08:55
bauzasbut my original point remains08:56
bauzaswe could let operators isolate the allocations between different named groups08:56
bauzasin their flavors08:56
bauzasthe point is, we will just swallow all of them but one08:57
gibiahh different flavor :)08:57
* bauzas blushes08:57
bauzasI'm just about to modify the flavor to ask for one VGPU per group08:58
bauzasand I'm pretty sure we'll end up with only one mdev 08:58
ygk_12345hi all can anyone help me with this  https://bugs.launchpad.net/nova/+bug/197806509:14
gibiygk_12345: please check the request-id in the conductor and scheduler logs as well09:18
ygk_12345gibi: All I can find are those messages from all nova logs. They are still stuck in scheduling and building state. Even now, out of 10 vms, only 8 are created fine; the other two are in building state09:21
ygk_12345gibi: I have added those logs now to the case, pls check them09:22
gibiI saw. that is an awfully small amount of log for an instance boot. do you see more logs for those VMs that booted successfully?09:22
ygk_12345gibi: let me check that09:23
opendevreviewRajat Dhasmana proposed openstack/python-novaclient master: Add support to rebuild boot volume  https://review.opendev.org/c/openstack/python-novaclient/+/82716309:25
ygk_12345gibi: i have added the log09:28
gibiI don't think you are actually finding all the logs. for a successful boot you should see many log lines in the conductor / scheduler and compute services09:30
bauzasgibi: sorry to interrupt you but my a_c language is a bit rusty 09:32
gibibauzas: no worries09:32
bauzasgibi: /placement/allocation_candidates?group_policy=isolate&in_tree=adbdb144-d84b-4b80-b03b-a0bc520d91ba&limit=1000&resources=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A2&resources1=VGPU%3A1&resources2=VGPU%3A1&root_required=%21COMPUTE_STATUS_DISABLED gives me no valid candidates and I wonder why09:32
bauzasthe two RPs support different types but I don't ask for them09:33
bauzasI just ask for one VGPU per RP09:33
gibido you have the PGPU RP in the tree of in_tree=adbdb144-d84b-4b80-b03b-a0bc520d91ba ?09:35
gibiso you request two vgpus from two different pGPUs (isolate) 09:36
gibiI mean one vgpu from each pgpu09:36
gibithat should work if you have two pGPU RPs09:37
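(A rough sketch of the flavor extra specs that produce an allocation candidates query like the one quoted above, assuming nova's numbered request-group syntax; shown as a plain extra_specs dict and the group numbers are illustrative:)

    extra_specs = {
        'resources1:VGPU': '1',    # first request group asks for one VGPU
        'resources2:VGPU': '1',    # second request group asks for another VGPU
        'group_policy': 'isolate', # force the two groups onto different RPs (pGPUs)
    }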
gibicould you paste the output of https://github.com/gibizer/osc-placement-tree ? (or publish the test you are running so I can reproduce it?)09:39
bauzassorry, was doing other things09:41
bauzasgibi: I can install it in the func venv09:42
bauzasoh, you said it in the README :)09:42
bauzaslemme pip it09:42
bauzasgibi: I need to disappear for gym reasons but I'll work on it this afternoon09:48
bauzasI just installed, I just need to add the placement api module in my test class09:48
gibibauzas: have a nice workout. feel free to ping me later09:48
bauzasgibi: <309:51
bauzasgibi: found my problem09:51
bauzasI was only having one child RP09:51
* bauzas needs to go off but thanks09:51
gibiack, good to hear that09:51
gibibauzas, sean-k-mooney: an interesting performance bug https://bugs.launchpad.net/nova/+bug/197837209:55
sean-k-mooneyfirst thought is they don't know how to use the hw:numa_* extra specs09:59
sean-k-mooneyit's very bad practice to use hw:numa_cpus or hw:numa_mem with symmetric numa topologies10:00
sean-k-mooneyit's explicitly an anti-pattern10:00
sean-k-mooneyand should not be done10:00
sean-k-mooneyit won't affect the performance but they also misunderstand the relationship between hw:cpu_threads and hw:cpu_max_threads10:01
sean-k-mooneyif hw:cpu_threads is specified then max is ignored10:01
sean-k-mooneysame for sockets10:02
sean-k-mooneyso that flavor definition annoys me on a fundamental level :)10:02
sean-k-mooneygibi: honestly I'm not that surprised, we know that this is a very slow algorithm10:03
gibithen look at the minimal reproduction that has no flavor :)10:03
sean-k-mooneyI did some experiments a few years ago reimplementing it from scratch10:04
sean-k-mooneybut the reason I did not continue with that was that we were going to implement numa in placement any day now10:04
gibialso we are emitting 18G debug logs during that run which is pretty extreme10:04
sean-k-mooneypeople complained there was not enough logging in the hardware module :P10:05
sean-k-mooneyhttps://github.com/SeanMooney/cpu-pinning10:06
sean-k-mooneythis was my experimental approach 10:06
gibias far as I understand this call does not call out to any external system (no fs, db, rabbit calls) so it runs in pure python. So taking minutes indicates that we are doing something crazy10:08
sean-k-mooneywe are trying to validate cpu (floating and pinned), ram, pci, and pmem affinity based on the rules set in the flavor with regards to thread affinity, device affinity and any symmetric or asymmetric numa requirements10:10
sean-k-mooneygibi: by the way, we have known about this basically from the start, that it's quadratic or worse as you scale the number of host/guest numa nodes10:16
sean-k-mooneyit's why the numa topology filter should always be last in your filter list if you enable it10:16
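(A quick way to see the blow-up being described — the fitting loop walks host-cell permutations, and their count grows factorially with the number of host and guest NUMA cells; the numbers below are only an illustration, not nova code:)

    import math

    for host_cells in (4, 8, 16):
        for guest_cells in (2, 4, 8):
            if guest_cells <= host_cells:
                # orderings of guest_cells host cells picked out of host_cells
                print(host_cells, guest_cells, math.perm(host_cells, guest_cells))
    # e.g. 16 host cells and 8 guest cells gives 518,918,400 orderings in the worst case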
sean-k-mooneyI am not sure if I can reproduce that simple example in my other implementation but I'm going to give it a try quickly10:19
sean-k-mooney[11:27:50]❯ ./run.sh 10:28
sean-k-mooneyCourtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead. You can set PIPENV_VERBOSITY=-1 to suppress this warning.                                            10:28
sean-k-mooney                                  10:28
sean-k-mooney0:0,1:32,2:512,3:54410:28
sean-k-mooney0:1,1:2,2:3,3:410:28
sean-k-mooney0:5,1:69,2:133,3:19710:28
sean-k-mooney0:33,1:513,2:34,3:514,4:35,5:515,6:36,7:610:28
sean-k-mooney0:2047,1:1983,2:1919,3:1855,4:2015,5:1951,6:1887,7:182310:28
sean-k-mooneybug repoducer10:28
sean-k-mooney----------------------------------------10:28
sean-k-mooney0:0,1:3,2:6,3:9,4:12,5:15,6:18,7:21,8:48,9:51,10:54,11:57,12:60,13:63,14:66,15:69,16:1,17:4,18:7,19:10,20:13,21:16,22:19,23:22,24:49,25:52,26:55,27:58,28:61,29:64,30:67,31:70,32:2,33:5,34:8,35:11,36:14,37:17,38:20,39:23,40:50,41:53,42:56,43:59,44:62,45:65,46:68,47:7110:28
sean-k-mooneyreal    0m1.004s10:28
sean-k-mooneyuser    0m0.893s10:28
sean-k-mooneysys     0m0.094s10:28
sean-k-mooneygibi: is ^ better10:28
sean-k-mooneyor 10:29
sean-k-mooneyreal    0m0.986s10:29
sean-k-mooneyuser    0m0.887s10:29
sean-k-mooneysys     0m0.078s10:29
sean-k-mooneyif I just run the reproducer10:29
gibithe reproducer took 6 mins to run on my laptop10:32
gibiso yours seems to be a loooot faster10:32
sean-k-mooneyhttps://github.com/SeanMooney/cpu-pinning/commit/018a1d0a9abeeec40d3e3ccbfd640363158433c510:34
sean-k-mooneythat is the same case right10:34
sean-k-mooney1 socket, 16 numa nodes and 48 cores with 2 threads per cpu for 96 threads10:35
sean-k-mooneythen boot a 48 core guest with cpu pinning and the prefer thread policy with 1G hugepages 10:35
sean-k-mooneyand 488GB of ram10:35
gibithere is some usage too in the reproducer10:35
gibithe first 7 host numa cells are used10:36
sean-k-mooneyack, I can add that and see if it makes any difference10:36
sean-k-mooneybasically the same10:50
sean-k-mooneyreal    0m1.074s10:50
sean-k-mooneyuser    0m0.936s10:50
sean-k-mooneysys     0m0.115s10:50
sean-k-mooneyhttps://github.com/SeanMooney/cpu-pinning/blob/master/pinning.py#L378-L415=10:51
sean-k-mooneythe issue with my alternative implementation is it only supports hugepages and cpu pinning 10:52
sean-k-mooneyit is missing pci support and mixed cpu support10:52
sean-k-mooneyI'll run their reproducer to have a comparison on the same hardware10:53
sean-k-mooneyI'm going to make coffee while this runs...10:59
gibidansmith, bauzas: what do you think, do we want to / can we do something about this? https://bugs.launchpad.net/nova/+bug/1978549 It feels like a bug but I'm not sure I want to go back and add a db migration to stein in nova or touch the placement db init script.11:03
sean-k-mooneygibi: don't we tend to not drop the columns right away 11:06
sean-k-mooneyto cater for rolling upgrades11:06
gibisure 11:06
sean-k-mooneywe stop the usage and then a few releases later we can drop the column11:06
gibibut then we somehow forgot to add it to the placement db init script11:07
gibiso if you have a DB that was created before placement was moved out of the nova repo then you have can_host11:07
sean-k-mooneywell that would only be an issue if you were initing on a version that used it, right11:07
gibiwe removed the can_host column but without a DB migration 11:08
gibithe removal happened when the new db init was created for the moved-out placement11:08
sean-k-mooneyi see11:08
sean-k-mooneygibi: real    12m32.814s11:10
sean-k-mooneymaybe my 4 year old laptop is starting to show its age11:10
sean-k-mooneygibi: we could just document this as a workaround for the db issue11:11
sean-k-mooneythe fix is only required if moving from in-nova to out-of-tree placement right11:12
sean-k-mooneyif you are doing a db restore11:12
sean-k-mooneyand even then only if you are migrating db backends11:12
sean-k-mooneyif you were just doing a db export and import in mysql it would be fine because the import would create the tables11:13
sean-k-mooneyin their case they likely are just migrating the data and using placement-manage to init an empty db11:13
gibiI agree to only document the workaround. I'm not sure where to put that documentation though. As adding a reno now in zed in placement for an issue introduced in stein in nova does not feel right11:16
sean-k-mooneywe can do a stable only reno11:16
sean-k-mooneyon stein11:16
sean-k-mooneyor update the existing one for splitting out placement which I assume exists somewhere11:17
gibihm there is admin/upgrade-to-stein.rst11:18
gibithat would be OK11:18
gibithanks11:18
opendevreviewBalazs Gibizer proposed openstack/placement master: Add WA about resource_providers.can_host removal  https://review.opendev.org/c/openstack/placement/+/84573011:32
sean-k-mooneygibi: I'm not going to do this now but if I were to update my poc to implement all or a subset of the current numa_fit_instance_to_host function 11:33
sean-k-mooneywould we consider that backportable if it was opt-in and both implementations could be in the code in parallel, even if only one would be used on a compute host at a time11:33
gibiwhy do we need both? does yours produce a different result than the current?11:34
sean-k-mooneymine does not do everything the current one does11:34
gibiyepp, then we might need a config flag11:34
sean-k-mooneyso I'm wondering if we could make it incremental11:35
sean-k-mooneyand also have the option to run both as filters 11:35
sean-k-mooneyso mine currently only handles hugepages and cpu pinning11:35
sean-k-mooneyit's missing pci devices and mixed cpus and maybe pmem11:36
sean-k-mooneypmem enables a numa topology but I don't think we provide affinity for pmem11:36
sean-k-mooneyso it's really just mixed mode and pci devices that would initially be missing11:36
sean-k-mooneymy thought is you could enable both filters and have the fast one run first, so you would only validate hosts with the slow version if we knew the cpu and ram request were valid11:37
gibioverall I'm OK to make it incremental or even selectable, but I'm not sure stable cores will like it11:38
gibithis will include a new bunch of code to stable branches11:38
sean-k-mooneyso basically you would do pci_passthroughfilter,numa_v2,numa_topology_filter11:38
gibithat is a liability11:39
sean-k-mooneyyep11:39
sean-k-mooneyit is 11:39
gibiwhat if we just add what we have to master as selectable, then we improve on it later on master11:39
sean-k-mooneywe could yes11:39
sean-k-mooneyI'm just trying to think if there is anything we can do to address the bug report on older branches too11:40
sean-k-mooneyI don't think there is anything trivial that can be fixed on the older branches to make the performance acceptable.11:40
gibiI don't know. maybe we can look at the reproducer in a profiler11:41
sean-k-mooneyI think it's to do with our usage of itertools permutations11:42
sean-k-mooneywe do a linear loop over all permutations of numa nodes when trying to fit the instance11:42
sean-k-mooneywe early-out once the instance fits but that is really not efficient with more than a handful of numa nodes11:43
sean-k-mooneygibi: I can try and take a look at it briefly11:44
gibisean-k-mooney: only if you want :) I'm not asking to take this11:45
gibiI'm not promising either that I can fire up a profiler today11:45
sean-k-mooneyhehe11:45
sean-k-mooneythis is not new11:45
sean-k-mooneyso I don't think it's super high priority but I was halfway through fixing up my vdpa patches11:46
sean-k-mooneyso I want to finish those today11:46
sean-k-mooneyso I'm somewhat interested in whether there is a minimal fix we can do to make it better11:46
sean-k-mooneyI just don't want to spend all day looking at it11:46
sean-k-mooneyso I'll give it till the top of the hour11:46
sean-k-mooneywe can take another look later in the week or next week11:47
gibiack11:47
* sean-k-mooney forgot how nice it is to have a working debugger11:57
opendevreviewMerged openstack/nova stable/wallaby: Add service version check workaround for FFU  https://review.opendev.org/c/openstack/nova/+/84420211:57
gibi:)11:59
sean-k-mooneyit's amazing how much simpler nova code becomes without eventlets11:59
sean-k-mooneyactually there might be a small tweak we can do12:01
sean-k-mooneywe currently do this 12:02
sean-k-mooneyfor host_cell_perm in itertools.permutations(12:02
sean-k-mooney        host_cells, len(instance_topology)12:02
sean-k-mooney    ):12:02
sean-k-mooneyso we get the next permutation for the full pinning12:02
sean-k-mooneyI wonder if we could do this one numa node at a time12:02
sean-k-mooneyso loop over the instance numa cells12:03
sean-k-mooneyand try to pin them one at a time12:03
sean-k-mooneythen try to pin the rest12:03
sean-k-mooneyI think that would be a lot faster12:04
sean-k-mooneyas currently, if the first 7 numa nodes are full, we have to try all permutations of those 7 numa nodes before we will try the 8th I think12:04
sean-k-mooneyif I refactor this we will do 7 tests for the first guest numa node, it will match on the 8th host numa node, and then we will try fitting the next guest numa node12:05
gibiwould that just do the permutation generation with loops? sure the order of the search would be different 12:05
gibiso it might optimize for the current case12:06
sean-k-mooneythe worst case performance would still be the same but I think it would improve best and average case12:06
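(A rough sketch of the incremental idea being discussed — pin one guest cell at a time and backtrack, instead of generating whole host-cell permutations up front; fits_on() and the cell objects are hypothetical stand-ins, not the actual nova code:)

    def fit_incremental(host_cells, guest_cells, used=None):
        """Return one host cell per guest cell, or None if nothing fits."""
        used = used or []
        if not guest_cells:
            return used  # every guest cell has been placed
        guest = guest_cells[0]
        for host in host_cells:
            # fits_on() is assumed to check cpu/ram/hugepage capacity of the host cell
            if host in used or not fits_on(guest, host):
                continue  # this host cell is already taken or too small, try the next
            rest = fit_incremental(host_cells, guest_cells[1:], used + [host])
            if rest is not None:
                return rest  # the remaining guest cells fit as well
        return None  # backtrack: no host cell works for this guest cell

The worst case is still the same exhaustive search, but a full host cell is skipped after one cheap check instead of being revisited in every permutation.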
gibiso we optimize based on some heuristics that the first numa nodes are more likely to be filled than the later numa nodes12:11
sean-k-mooneywell, until master it did a linear search12:11
sean-k-mooneyso yes the first numa nodes were always filled first deterministically12:11
sean-k-mooneywe recently added numa node balancing12:11
gibitrue, so using the old linear fill, it makes sense to add a heuristic to the search too12:12
sean-k-mooneyalso my entire system just hard locked up while I was debugging that, even with it paused12:12
gibiwondering if just simply using permutations(reversed(host_cells)) would be enough to optimize too12:12
sean-k-mooneyperhaps I should close some browser tabs12:12
sean-k-mooneywell we are not sorting the host_cells12:13
gibithe other day I ran out of memory on my laptop, so now I added some swap12:13
sean-k-mooneybased on available ram, disk, pci devices and if you asked for them12:13
sean-k-mooneyI ran out of swap but still had 8GB of ram free12:14
sean-k-mooneyI do have a 2 node devstack and openshift running in 3 8G vms currently too12:14
sean-k-mooneyoh my email client is only using 4G of ram today, that is nice of it. it was using 15 last week...12:16
sean-k-mooney1910527.379325] Out of memory: Killed process 1338765 (.qemu-system-x8) total-vm:13382508kB, anon-rss:7832544kB, file-rss:0kB, shmem-rss:4kB, UID:0 pgtables:17324kB oom_score_adj:012:17
sean-k-mooneyyep I ran out of memory, I'll stop the vms for now12:17
sean-k-mooneythat's totally a good sign for the efficiency of this code right12:18
sean-k-mooneygibi: :)12:29
sean-k-mooneyimport nova.conf12:29
sean-k-mooneyCONF = nova.conf.CONF12:29
sean-k-mooneyCONF.compute.packing_host_numa_cells_allocation_strategy = False12:29
sean-k-mooneythat fixes the issue12:29
sean-k-mooneywe disable the numa balancing by default for "backwards compatibility"12:29
sean-k-mooneygibi: if you enable it we sort by free memory, so all the used nodes go to the end12:30
sean-k-mooneyso the first permutation fits12:30
sean-k-mooney[13:30:55]➜ time python t.py 12:31
sean-k-mooneyInstanceNUMATopology(cells=[InstanceNUMACell(8),InstanceNUMACell(9),InstanceNUMACell(10),InstanceNUMACell(11),InstanceNUMACell(12),InstanceNUMACell(13),InstanceNUMACell(14)],emulator_threads_policy=None,id=<?>,instance_uuid=<?>)12:31
sean-k-mooneyreal    0m1.489s12:31
sean-k-mooneyuser    0m1.373s12:31
sean-k-mooneysys     0m0.095s12:31
sean-k-mooneygibi: not as fast as my out of tree version but pretty close12:31
gibiahh12:31
sean-k-mooneyand fully feature complete12:31
gibithat is an easy workaround for the particular case12:32
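(For operators who want the spread behaviour, the corresponding nova.conf setting would presumably be the option quoted above — a sketch, assuming it is available on the deployed release:)

    [compute]
    packing_host_numa_cells_allocation_strategy = false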
sean-k-mooneywell the spread approach is generally better provided you don't need really large vms12:32
sean-k-mooneythat depend on fully filling the numa nodes to spawn12:32
sean-k-mooneybut yes12:32
sean-k-mooneygibi: ok added a comment https://bugs.launchpad.net/nova/+bug/1978372/comments/612:38
sean-k-mooneygibi: we have already started backporting the pack/spread behavior to xena; they reported it on wallaby12:38
sean-k-mooneyso if we just continue the backport of the sorting behavior we could close it as a dupe I guess12:39
gibisean-k-mooney: thanks12:39
sean-k-mooneythat does raise the question of whether we should change the default for pack vs spread on master and/or still look at the other implementation in the future12:39
gibiI think we can change the default on master now12:40
gibiif we want12:40
gibiprobably not on stable12:40
sean-k-mooneyright, not stable, but I can propose a patch to change the default and add a release note for people to review12:44
gibiyeah, lets try12:47
gibiI have to refresh myself about the trade off though12:48
sean-k-mooneyspread will try to put vms on empty numa nodes first; pack does the reverse, trying to use all available space on a numa node before using the next one12:49
sean-k-mooneyif you have 2 numa nodes but 3 vms, 1 that needs a full numa node and 2 that each need half a numa node12:50
sean-k-mooneythen with pack all 3 will fit in any order12:50
sean-k-mooneywith spread, unless the big vm is booted first, only 2 of the 3 will schedule12:50
sean-k-mooneygibi: ^ that's the main trade-off12:50
gibithanks12:51
sean-k-mooneybut if you spread you get better cpu/memory performance in the guest until the node is full and then it's the same12:51
gibiI think both ways are valid, but if we know that the scheduling performance is better in the spread case then that might be enough reasoning to switch12:52
gibithe default12:52
sean-k-mooneyyep both are valid but I personally prefer spread as the default12:52
sean-k-mooneyit does need to be configurable12:53
sean-k-mooneyso that operators can choose12:53
sean-k-mooneyyou could technically have different values on scheduler vs compute too12:53
sean-k-mooneyyou could use spread in the scheduler and pack on computes12:53
bauzasgibi: good news, I was able to ask for multiple vGPUs12:54
sean-k-mooneycurrently we recompute the pinning on the compute anyway since we throw away the scheduler info12:54
bauzasbut yeah, we hit the warning, so we only have one VGPU12:55
sean-k-mooneybauzas: using generic mdevs to use different device classes for different host gpus?12:55
sean-k-mooneyor via multi create12:55
bauzassean-k-mooney: lemme upload the functest12:55
sean-k-mooneyack12:55
opendevreviewSylvain Bauza proposed openstack/nova master: Add a functest for verifying multiple VGPU allocations  https://review.opendev.org/c/openstack/nova/+/84574713:09
bauzassean-k-mooney: ^13:09
bauzassean-k-mooney: tl;dr: given the nvidia driver issue (you can't ask for more than one VGPU per pGPU per instance), operators would then want to spread the vGPUs between multiple pGPUs13:10
opendevreviewAlexey Stupnikov proposed openstack/nova stable/victoria: Test aborting queued live migration  https://review.opendev.org/c/openstack/nova/+/84574813:12
opendevreviewAlexey Stupnikov proposed openstack/nova stable/victoria: Add functional tests to reproduce bug #1960412  https://review.opendev.org/c/openstack/nova/+/84575313:26
opendevreviewAlexey Stupnikov proposed openstack/nova stable/victoria: Clean up when queued live migration aborted  https://review.opendev.org/c/openstack/nova/+/84575413:29
opendevreviewSylvain Bauza proposed openstack/nova master: WIP : Support multiple allocations for vGPUs  https://review.opendev.org/c/openstack/nova/+/84575713:42
bauzasgibi: ^13:42
gibibauzas: left feedback :)14:15
bauzascool, this is just a WIP tho14:16
bauzasgibi: thanks for commenting, those are good thoughts14:17
gibiI didn't even realize that it is a WIP, it has a functional test :D14:18
bauzasZuul will hit my face as I think I'm closing some gaps14:19
bauzasand I'll need to change some UTs14:19
ygk_12345can someone look into this https://bugs.launchpad.net/oslo.messaging/+bug/1978562 please14:27
*** dasm|off is now known as dasm14:31
opendevreviewBalazs Gibizer proposed openstack/nova master: Clean up mapping input to address spec types  https://review.opendev.org/c/openstack/nova/+/84576514:32
opendevreviewBalazs Gibizer proposed openstack/nova master: Clean up mapping input to address spec types  https://review.opendev.org/c/openstack/nova/+/84576514:36
opendevreviewElod Illes proposed openstack/placement stable/victoria: Add periodic-stable-jobs template  https://review.opendev.org/c/openstack/placement/+/84577015:02
opendevreviewArtom Lifshitz proposed openstack/nova master: libvirt: remove default cputune shares value  https://review.opendev.org/c/openstack/nova/+/82404815:03
opendevreviewAlexey Stupnikov proposed openstack/nova stable/victoria: Test aborting queued live migration  https://review.opendev.org/c/openstack/nova/+/84574815:09
opendevreviewAlexey Stupnikov proposed openstack/nova stable/victoria: Add functional tests to reproduce bug #1960412  https://review.opendev.org/c/openstack/nova/+/84575315:10
opendevreviewAlexey Stupnikov proposed openstack/nova stable/victoria: Clean up when queued live migration aborted  https://review.opendev.org/c/openstack/nova/+/84575415:11
opendevreviewBalazs Gibizer proposed openstack/nova master: Remove unused PF checking from get_function_by_ifname  https://review.opendev.org/c/openstack/nova/+/84577515:21
*** lajoskatona_ is now known as lajoskatona15:24
opendevreviewBalazs Gibizer proposed openstack/nova master: Fix type annotation of pci.Whitelist class  https://review.opendev.org/c/openstack/nova/+/84578015:30
ygk_12345 can someone look into this https://bugs.launchpad.net/oslo.messaging/+bug/1978562 please15:35
bauzasreminder:  nova meeting in 20 mins15:40
opendevreviewBalazs Gibizer proposed openstack/nova master: Move __str__ to the PciAddressSpec base class  https://review.opendev.org/c/openstack/nova/+/84578115:50
bauzas#startmeeting nova16:00
opendevmeetMeeting started Tue Jun 14 16:00:00 2022 UTC and is due to finish in 60 minutes.  The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot.16:00
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:00
opendevmeetThe meeting name has been set to 'nova'16:00
bauzashowdy back16:00
dansmitho/16:00
bauzasand welcome on our first June nova meeting16:00
gibio/16:00
melwitto/16:01
* bauzas just hopes we'll have more people joining 16:01
bauzasbut we can start16:01
Ugglao/16:01
bauzas#topic Bugs (stuck/critical) 16:02
bauzas#info No Critical bug16:02
elodilleso/16:02
bauzas#link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New 14 new untriaged bugs (+0 since the last meeting)16:02
bauzas#link https://storyboard.openstack.org/#!/project/openstack/placement 26 open stories (0 since the last meeting) in Storyboard for Placement 16:02
bauzas#info Add yourself in the team bug roster if you want to help https://etherpad.opendev.org/p/nova-bug-triage-roster16:02
bauzasI have to admit publicly I feel ashamed16:02
bauzasI forgot about the baton when it was mine16:02
bauzasthrow me tomatoes16:02
melwitt🍅16:03
bauzasbut alas I triaged one bug :)16:03
bauzasapparently, preparing the Summit trip and doing bug triage doesn't mix on my side16:03
gibiI took the baton from bauzas in Berlin in person \o/16:03
bauzasliterally16:03
gibithere was a normal amount of bug inflow16:04
gibihttps://etherpad.opendev.org/p/nova-bug-triage-2022060716:04
bauzasit could have been an olympic torch16:04
gibidoes that count as carry on baggage?16:04
melwittit could be your personal item16:05
gibi:)16:05
bauzasdepends on the size I guess16:05
bauzasor it could be seen as a sport gear16:05
bauzasanyway16:05
gibiI saw two interesting bugs 16:05
bauzasthanks gibi16:05
gibi    https://bugs.launchpad.net/nova/+bug/1978372 numa_fit_instance_to_host() algorithm is highly ineffective on higher number of NUMA nodes 16:05
gibisean-k-mooney updated me that this is a known inefficiency of our algo16:06
bauzaswhat kind of hardware has such a large number of NUMA nodes?16:06
* bauzas is always unpleasantly surprised by all the new things that are created around him16:06
gibiI'm not sure16:07
gibibut I accept that it is possible16:07
bauzas8 NUMA nodes seems large to me, but I'm not a tech savvy16:07
sean-k-mooneybauzas: most recent amd servers16:07
sean-k-mooney16 numa nodes is not uncommon now16:07
bauzasmy brain hurts.16:08
sean-k-mooneyyou can get 16 numa nodes in a single socket now16:08
sean-k-mooneyand I have seen systems with 6416:08
* bauzas is network-bound by his gigabit switches at home while he can download at 1016:09
sean-k-mooneyour current packing default falls apart after about 4-8 numa nodes16:09
gibiso right now we are slow by default, but if numa spread is enabled instead of the default pack then it is much better, as sean-k-mooney discovered16:09
bauzasanyway, sounds like an opportunity for optimization then16:09
sean-k-mooneyI have a patch in flight to change the default16:10
bauzasthe whole packing strategy is hidden within the code16:10
sean-k-mooneyI'll work on the release note and push it later16:10
sean-k-mooneywe can continue the discussion there if you like16:10
bauzassure16:10
sean-k-mooneybauzas: yep it's also not part of the api contract and never was16:10
* bauzas shrugs16:11
gibiso the other bug I would like to mention16:11
gibi    https://bugs.launchpad.net/nova/+bug/1978549 Placement resource_providers table has dangling column "can_host"   16:11
bauzasanyway, I understand people wondering why our packing strategy should struggle only after 16 nodes to iterate16:11
gibiI marked it as wontfix with a small note in the placement documentation16:11
gibihttps://review.opendev.org/c/openstack/placement/+/84573016:11
gibithis was a mistake back in stable/stein16:12
gibiand I don't want to go back and touch DB migrations there16:12
gibithat is all I had for bug triage this week16:13
bauzasgibi: can_host is not part of the DB contract ?16:13
bauzasI mean the model16:13
gibiit was removed from the DB model in stein16:14
gibibut we never added a DB migration to drop the column from the schema16:14
gibibut when Placement was split out of nova a new initial DB schema was defined, now without can_host16:14
gibihence the inconsistency16:14
gibion the schema level16:14
gibibut nothing uses can_host16:14
bauzasoh I understand16:14
bauzasbut, if this is post-Stein, the table is removed anyway, no ?16:16
bauzasas said in 'Finalize the upgrade'16:16
sean-k-mooneyI think the issue here is they are doing a postgres to mariadb migration16:17
sean-k-mooneyso they were using placement-manage to create the new db schema16:17
sean-k-mooneythen trying to do a data migration16:17
sean-k-mooneyand their original db had the column16:17
sean-k-mooneybut the target does not16:17
sean-k-mooneyso if they drop the column on the source db then do the data migration it would be fine16:18
bauzasok, I didn't want to enter into the details, let's move on, I think what gibi did is a safe bet 16:18
gibiack16:18
bauzasmelwitt: are you OK with bug triaging this week or do you want me to do it as punishment for my negligence last week?16:19
bauzasthe latter is fine to me16:19
melwittbauzas: sure, maybe better bc I am out on pto next week16:19
bauzasmelwitt: cool, then I'll steal it from gibi16:20
melwittcool thanks16:20
bauzas #info Next bug baton is passed to bauzas16:20
bauzasif you don't mind, I'll pass you the baton next week16:20
bauzasor we could give it to anyone else 16:21
melwittyes that is ok16:21
bauzasmoving on16:21
bauzas#topic Gate status 16:21
* gibi feels sudden emptiness in his life16:21
bauzas#link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs 16:21
bauzas#link https://zuul.openstack.org/builds?project=openstack%2Fplacement&pipeline=periodic-weekly Placement periodic job status 16:21
bauzas#link https://zuul.opendev.org/t/openstack/builds?job_name=nova-emulation&pipeline=periodic-weekly&skip=0 Emulation periodic job runs16:21
melwitt😂 gibi16:21
bauzas#info Please look at the gate failures and file a bug report with the gate-failure tag.16:21
bauzas#info STOP DOING BLIND RECHECKS aka. 'recheck' https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures16:21
bauzasvoilà16:22
bauzashadn't anything to tell gate-wise16:22
bauzasanything anyone ?16:22
bauzasgibi: I don't feel exactly empowered with the baton y'know16:22
bauzasok, next topic then16:23
bauzas#topic Release Planning 16:23
bauzas#link https://releases.openstack.org/zed/schedule.html16:23
bauzas#info Zed-2 is in 4 weeks, mind your specs16:24
bauzasas a reminder, we'll have a SpecApprovalFreeze on Zed-216:24
bauzasfwiw, here is the current list of accepted blueprints, including specless ones : https://blueprints.launchpad.net/nova/zed16:25
bauzas(I eventually updated it one hour before...)16:25
artomOh snap, that's in 2 weeks16:25
bauzasno16:25
bauzasJuly 1416:25
artomwtf brain16:25
bauzasunless I'm counting wrong16:25
artomSorry, ignore me, carry on16:25
bauzasthat being said, as the clock ticks, next week, we'll discuss of a spec review day 16:26
bauzasjust sayin16:26
bauzasnext topic16:26
bauzas#topic OpenInfra Summit 16:27
bauzaslemme just do a quick wrap-up16:27
bauzas#info bauzas, gibi and stephenfin attended the summit16:27
bauzas#info Nova meet-and-greet Operators feedback session on Wednesday,  June 8, 2:50pm - 3:20pm got positive feedback16:27
gibi(there was beer)16:27
bauzaswe had a large audience16:27
sean-k-mooneygibi: was it tasty16:27
chateaulavfun times16:27
gibigood beer16:27
bauzasgibi: not during sessions tho16:27
gibiyeah, bad timing16:27
sean-k-mooneybauzas: that's good to hear, was the nova session well attended16:27
bauzaslet's claim this was a productive session16:28
bauzassean-k-mooney: packed room16:28
sean-k-mooneyexcellent16:28
artomFull glass, full room, nice16:28
bauzasI think this was well deserved, most of the operators thought we had been disconnected for a bit too long16:28
bauzasand the PTG thing doesn't help16:28
bauzasone outcome is at least a strong need for a nova recap at every possible gathering16:29
bauzasat least they were expecting one at the Summit, but I have to admit I didn't make it16:30
bauzasand the OpenInfra Live thing was in April16:30
bauzasI guess only a few of them saw it16:30
sean-k-mooneythe project updates went live on youtube 2 weeks ago16:30
bauzasI know16:30
sean-k-mooneybut i woudl assume many did not see it16:30
bauzasthat's the OpenInfra Live thing I mentioned16:30
bauzasapparently, people pay more attention to cycle highlights when it's in-person16:31
sean-k-mooneyright, but while that may have been in april, the videos only got published on youtube in june16:31
bauzasanyway, something easily solvable16:31
bauzasone other thing, communication16:32
bauzasnot a surprise, our ML isn't read16:32
bauzasand given they lag a lot, they don't think this is a valuable time to chime in16:32
artomWait, so ops show up to Summit, but don't read the ML? How do they know when Summit is? ;)16:32
bauzas(they lag by the number of releases)16:33
bauzasartom: easy answer : Twitter16:33
sean-k-mooneyand infra foundation marketing16:33
gibiyes, we were asked to tweet more16:33
bauzassomeone very seriously explained to me they'd prefer nova tweets16:33
artomWe as in the developers? o_O16:33
gmannyeah summit info is communicated in many other ML and places not only openstack-discuss16:33
gibiartom: yes, please :)16:33
melwitthuh.16:33
sean-k-mooneycould we auto-tweet the release notes somehow16:33
dansmiththat's crazy, IMHO16:34
bauzassean-k-mooney: they know about our prelude16:34
bauzasbut again, they laaaag16:34
* sean-k-mooney remembers the april fools 'twitter as a message bus' spec16:34
artomIf they lag releases, what's the point of tweeting, presumably about stuff we're working on *now*?16:34
chateaulavbuild interest and involvement16:35
bauzassean-k-mooney: I'm half considering registering a Twitter handle like @Yo_the_Openstack_Nova_gang16:35
gmannchateaulav: +116:35
artomBut what if I'm an anti-social curmudgeon?16:35
bauzasartom: heh16:35
bauzasanyway, that one wasn't an easy problem to solve16:36
gmannoperator involvement with developers is one of the key and open issue in board meeting too. and TC also raised it to them. 16:36
bauzasfwiw, I proposed them to only register to 'ops' and 'nova' ML tags16:36
gmannone idea is to combine the events, the ops meetup and the developers one, but let's see16:36
bauzasboth in conjunction16:36
chateaulavI am too, don't have to be a genius. just start with little things: if it goes to the ML and you think it's worthwhile then tweet it and reference the ML archive and irc chat16:36
bauzasgmann: please16:36
sean-k-mooneygmann: the best way to address that would probably be to converge the events and bring back the dev summit 16:37
bauzasgmann: I feel the community is more fragmented now we're split16:37
gmannsean-k-mooney: +116:37
gmannyeah, true16:37
bauzaschateaulav: as I said, I begged them to correctly use the ML tags16:37
gibisean-k-mooney: +116:37
gmannwe got separated when we combined the things :)16:37
bauzasand I ask people to *not* make use of [nova][ops] for communicating unless we agree here on the need to engage16:38
bauzas#action nova team will only exceptionally make use of [nova][ops] for important communication to ops. If you're an Ops, feel free to register to both tags in the ML 16:39
artomTbf, the ML lately seems to be openstack-support, anecdotally16:39
sean-k-mooneynot entirely16:39
dansmithmostly16:39
bauzasartom: alas, we merged openstack@, openstack-dev@ and openstack-ops@16:39
sean-k-mooneywe do discuss gate issues and some dev issues16:39
artomSo maybe there is room for dev -> ops announcement-type stuff.16:40
bauzasopenstack@ was the place for troubleshooting16:40
artomsean-k-mooney, right, I'm missing a "mostly" in there16:40
artom@Nova_PTL twitter account? :D16:40
bauzasanyway, there are things the nova team can solve and there are other things that are way out of our team scope :)16:40
sean-k-mooneyso we split the events and merged the lists. if only we had done the reverse :)16:40
sean-k-mooneyI'm not sure there is much we can do right now to address this topic16:41
bauzassean-k-mooney: correct and I want to move on16:41
bauzasthis wasn't an ask to find a solution, just a feedback16:41
artomYou can't tell us "plz tweet moar" and not expect the convo to derail :P16:41
bauzas#link https://etherpad.opendev.org/p/r.ea2e9bd003ed5aed5e25cd8393cf9362 readonly etherpad of the meet-and-greet session16:41
bauzasartom: I personally stopped twitting except for exceptional opportunities, so I'm not the one to blame16:42
bauzasnow, back to productive things16:42
* artom never tweets, but always twit16:42
bauzasyou'll see a long list of complains16:42
sean-k-mooneysome of which have been addressed in newer releases16:43
bauzasI encourage any of you to go read the etherpad and amend it (with the write URL of course) 16:43
gmannbauzas: any feedback on RBAC scope things if that is discussed in nova sessions also other than ops meetup ?16:43
bauzassean-k-mooney: yeah, I've seen you munging a lot of them, thanks16:43
bauzasgmann: I asked about it, this was way too advanced for them16:44
bauzasbut I pointed them to the links to the new rules and personas16:44
bauzasalso, this was a 30-min session,16:44
gmannbauzas: ok16:45
bauzasso, please understand we were basically only able to scratch the surface16:45
gibigmann: we had another session around service roles16:45
sean-k-mooneysome of the pain points are on our backlog16:45
sean-k-mooneyso it's good that operators have validated that they still care about them16:45
gmannbauzas: I understand, just checking in case there was any specific feedback from the nova sessions 16:45
bauzasfor the pain points, I'll diligently try to make sure all of them are addressed16:45
sean-k-mooneyI'm thinking of iothreads and virtio-multiqueue16:45
bauzasgmann: honestly this was frustrating16:46
gmanngibi: ack. do you have a link for that? I will combine those to discuss in the RBAC meeting next week16:46
gibigmann: my short summary on the service roles https://meetings.opendev.org/irclogs/%23openstack-nova/%23openstack-nova.2022-06-13.log.html#t2022-06-13T06:43:5216:46
bauzasgive me 30 more mins and I could have made operators sign off on sending herds of contributors to the nova project16:46
gmanngibi: thanks 16:46
bauzasdon't be surprised if I'm pinging some of you16:47
bauzasI want the etherpad to be curated16:47
gibigmann: and this is the session etherpad but it is a bit of a mess https://etherpad.opendev.org/p/deprivilization-of-service-accounts16:47
bauzasgibi: this wasn't a mess16:47
bauzasthis was rather a prank, I guess16:47
gibithen I was pranked :)16:48
bauzasexactly my point16:48
bauzasstep 1 : propose a forum session16:48
bauzasstep 2: let gibi see it 16:48
bauzasstep 3 : make sure gibi will attend it16:48
sean-k-mooneyin principle we should be able to label all endpoints that are used for inter-service communication as needing the service role16:48
bauzasstep 4: don't attend your own session and let gibi lead it instead16:48
bauzasstep 5 : profit.16:48
gibi(just background: when I entered the room for that session I was cornered, as there was nobody else who could lead the session)16:49
sean-k-mooneythe service role really should not be able to access any api other than the inter-service apis16:49
sean-k-mooneythat would allow us to entirely drop our use of the admin role eventually16:50
gmannsean-k-mooney: yes, that is the direction we are going, and there has to be a careful audit to verify this. 16:50
bauzascan we stop on the forum discussions ?16:50
sean-k-mooneyyes we can move on16:50
bauzasanyone having a last question or remark ?16:50
bauzas(just timeboxing, sorry)16:50
sean-k-mooneyno worries16:50
bauzas#topic Review priorities 16:50
bauzas#link https://review.opendev.org/q/status:open+(project:openstack/nova+OR+project:openstack/placement+OR+project:openstack/os-traits+OR+project:openstack/os-resource-classes+OR+project:openstack/os-vif+OR+project:openstack/python-novaclient+OR+project:openstack/osc-placement)+label:Review-Priority%252B116:50
bauzas#link https://review.opendev.org/c/openstack/project-config/+/837595 Gerrit policy for Review-prio contributors flag. Naming bikeshed in there.16:50
bauzas#action bauzas to propose a revision of https://review.opendev.org/c/openstack/project-config/+/83759516:51
bauzas#link https://docs.openstack.org/nova/latest/contributor/process.html#what-the-review-priority-label-in-gerrit-are-use-for Documentation we already have16:51
bauzasthat's it on my side16:51
bauzasI encourage cores to make use of the flag if they wish16:51
bauzas#topic Stable Branches 16:52
bauzaselodilles: your time16:52
elodilles#info stable/train is blocked - melwitt's fix: https://review.opendev.org/c/openstack/nova/+/844530/16:52
elodilles#info stable branch status / gate failures tracking etherpad: https://etherpad.opendev.org/p/nova-stable-branch-ci16:52
elodillesrelease patches proposed (yoga, xena, wallaby): https://review.opendev.org/q/project:openstack/releases+is:open+intopic:nova16:52
sean-k-mooneyyep, I was going to proceed with merging https://review.opendev.org/c/openstack/nova/+/844530 but wanted to ask if there were any objections 16:52
sean-k-mooneyI have also commented on the release patches16:52
sean-k-mooneymost of the patches I wanted to land have now landed overnight16:53
bauzascool, I'll do a bit of reviews then16:53
bauzasany other point to raise about stable ?16:53
elodillesnothing else i think16:54
bauzascool16:54
elodillessean-k-mooney bauzas : thanks for looking at the release patches 16:54
bauzaslast point then16:54
sean-k-mooneyelodilles: happy to16:54
bauzaselodilles: I have to do it, sean-k-mooney told me already :)16:54
elodilles:]16:54
bauzas#topic Open discussion 16:54
bauzasthere were nothing on the agenda16:54
bauzasfor the sake of those last 5 mins, any item to raise ?16:55
melwittbauzas: I realized it would be better if I did bugs this week bc if I'm out next week, that's even less time 😆 16:55
melwittI won't be at the next meeting but I can put my bug etherpad link on the agenda for yall16:55
bauzasmelwitt: I'm both flexible and ashamed16:55
bauzasmelwitt: pick anytime you want16:55
bauzasand I'll do the overlap16:55
sean-k-mooneythe only item I was going to raise was releases. we had a request to do a stable release last week but that is proceeding anyway16:55
melwittbauzas: ok, I will do this week. sorry for the confusion16:55
bauzasmelwitt: np16:56
bauzasI guess not a lot of people are reading our weekly minutes and even fewer of them do bug triage16:56
bauzasbut, not a reason for anarchy with no meetings and agenda ! :D16:57
bauzas(and proper highlights)16:57
gibiwe should try to have our meeting on twitter ;)16:57
bauzasOK, I guess we can call the wrap16:57
bauzasgibi: I was surprised no one debated on the tool itself16:58
* sean-k-mooney looks side eyed at gibi16:58
bauzasI could instagram nice pictures of me coding16:58
bauzaslike, me outside coding16:58
bauzasme inside in my office room16:58
melwittstart a twitch channel16:58
sean-k-mooneytotally we should all just stream our coding on twitch :)16:58
bauzasI'm feeling too old16:59
bauzasbut at least I'm happy to hear the TC be young-minded with Tik-Tok releases16:59
gibi:D16:59
bauzason that last word,16:59
bauzas#endmeeting16:59
opendevmeetMeeting ended Tue Jun 14 16:59:59 2022 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:59
opendevmeetMinutes:        https://meetings.opendev.org/meetings/nova/2022/nova.2022-06-14-16.00.html16:59
opendevmeetMinutes (text): https://meetings.opendev.org/meetings/nova/2022/nova.2022-06-14-16.00.txt16:59
opendevmeetLog:            https://meetings.opendev.org/meetings/nova/2022/nova.2022-06-14-16.00.log.html16:59
bauzasoh, snap17:00
sean-k-mooneybauzas: I would expect that from artom, not you :P17:00
bauzasI haven't thanked you all17:00
gibithanks bauzas 17:00
elodillesyepp, thanks bauzas o/17:00
artomsean-k-mooney, I'm angry with myself for not thinking of that17:01
bauzasartom: sean-k-mooney: can't wait for OpenInfra Live events on Tik-Tok17:01
* bauzas reads everything wrong17:02
sean-k-mooneycan't wait to see powerpoint in portrait mode17:02
chateaulavthe funny thing is, there is a twitter profile: @OpenStackNova17:03
chateaulavbut it's a usergroup....17:03
sean-k-mooneybauzas: isn't there an announcement feature in the channel bot17:03
bauzaschateaulav: NOOOOO17:05
gibiwe can always try TheRealOpenStackNova :)17:06
chateaulavpretty sure that can be revoked or something as it is an actual organization project name17:06
chateaulavgibi: true, sounds better too17:06
bauzasor rather: https://tenor.com/view/luke-skywalker-no-star-wars-mark-hamill-hanging-on-gif-1136845517:06
sean-k-mooneyI'm pretty sure https://twitter.com/OpenStackStatus is tied into irc status messages17:07
sean-k-mooneybauzas: if we really wanted to get stuff on twitter and make announcements we probably could have a similar feed for nova or the community in general17:07
sean-k-mooneyand allow ptls to make announcements via the bot17:07
bauzaslike I said, I was surprised no one argued about Twitter with the whole group debating over 10 mins about Mastodon vs. Twitter17:08
chateaulavdefinitely, then you don't have to tweet. just irc as normal17:08
* bauzas wonders if people would hear more about us if I was playing in Geordie Shore17:10
bauzasor Jersey Shore17:10
opendevreviewArtom Lifshitz proposed openstack/nova master: libvirt: remove default cputune shares value  https://review.opendev.org/c/openstack/nova/+/82404817:23
kpdevHi, on a freshly installed 17:46
kpdevubuntu 20.04, I see tox fails on stable/xena17:46
kpdevlots of workarounds are mentioned in threads, e.g. use setuptools==58.0.017:47
kpdevbut it still does not resolve the issue. Has anyone faced and solved a similar issue in the past?17:48
kpdevCollecting suds-jurko>=0.6   Using cached suds-jurko-0.6.zip (255 kB)   Preparing metadata (setup.py): started   Preparing metadata (setup.py): finished with status 'error'   error: subprocess-exited-with-error    × python setup.py egg_info did not run successfully.   │ exit code: 1   ╰─> [1 lines of output]       error in suds-jurko setup command: use_2to3 is invalid.       [end of output]    note: This er17:48
sean-k-mooneypotentially dumb question but has a hw:bfv extra spec to enable the boot from volume workflow via flavor ever come up18:27
sean-k-mooneyI know the idea of a cinder images backend was discussed and then never implemented18:28
sean-k-mooneybut what about hw:bfv=True hw:bfv_type=mass-storage-volume-type hw:bfv_delete_on_terminate=True18:29
sean-k-mooneyI was just looking at https://etherpad.opendev.org/p/r.ea2e9bd003ed5aed5e25cd8393cf9362#L23518:29
sean-k-mooneyand that seemed to be a way to achieve that without having to add a cinder images backend18:30
sean-k-mooneyit would allow operators to define flavors that always use cinder remote storage, and they could define the size with the normal disk value18:31
sean-k-mooneyjust something to think about 18:32
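(Purely to visualise the proposal above — these hw:bfv* extra specs do not exist in nova today, they are just what the suggestion might look like on a flavor, shown as an extra_specs dict:)

    extra_specs = {
        'hw:bfv': 'true',                           # boot from a cinder volume instead of local disk
        'hw:bfv_type': 'mass-storage-volume-type',  # cinder volume type to create the root volume with
        'hw:bfv_delete_on_terminate': 'true',       # delete the root volume when the server is deleted
    }
    # the flavor's normal disk value would then size the root volume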
*** dasm is now known as dasm|off21:02
zigoNova (from Yoga) fails with jsonschema 4.6.0: https://ci.debian.net/data/autopkgtest/unstable/amd64/n/nova/22676760/log.gz22:44
zigoIt'd be nice if someone could investigate.22:44
sean-k-mooneyzigo: that's probably because the upper constraint for yoga is 3.2.0 https://github.com/openstack/requirements/blob/stable/yoga/upper-constraints.txt#L583=22:46
sean-k-mooneyzigo: so that is not tested/supported on yoga22:46
zigosean-k-mooney: It's not tested anywhere, not even on Master.22:46
zigoAnd that's my point... :)22:46
sean-k-mooneythen we need to unpin it on master and fix issues there22:47
sean-k-mooneybut we don't normally backport that support to stable branches22:47
zigoIf we have a patch in master, that's enough for me (very often, I push backports of this kind of patch as Debian only patches...).22:48
sean-k-mooneyI don't see where that is being pinned22:49
sean-k-mooneyit's in upper-constraints22:49
sean-k-mooneybut that is auto-generated when there are pypi releases22:49
sean-k-mooneyhttps://github.com/openstack/requirements/search?q=jsonschema22:49
sean-k-mooneyso it's not clear why it is still on 3.2.022:50
sean-k-mooney4.6 is the most recent release https://pypi.org/project/jsonschema/22:50
sean-k-mooney3.2 is from november 18th 201922:51
sean-k-mooneywe should probably bring this up in #openstack-qa or infra22:51
gmannjsonschema 4.6.0 will be pretty new for nova and we might need to test it properly, not just nova but all projects using it, before we bump it from 3.2.0 to 4.6.022:59
sean-k-mooneygmann: I'm more concerned by the fact that we have not updated this in 2 years23:00
sean-k-mooneythere were plenty of releases in the 4 series23:00
sean-k-mooneythat were not automatically updated in requirements23:00
sean-k-mooney4.6.0 came out in june but there were plenty of other releases since november 201923:01
sean-k-mooneythat should have been picked up in xena and yoga23:01
gmannyeah. we might not need the new things as we are also using Draft4Validator which is enough for nova's needs. but yes, before bumping to a new version we should also bump Draft4Validator to Draft7Validator23:04
gmannand with proper testing, as we need to check backward compatibility rather than any new features, which we do not need as such 23:04
sean-k-mooneygmann: I'm more worried that this is not the only lib that was not updated23:06
gmannthat is possible, I am not 100% sure how constraints generation works for non-openstack deps on their new releases. prometheanfire in the requirements channel can tell us if there is a bug in the requirements scripts 23:10
sean-k-mooneygmann: I think it's meant to be triggered periodically, pulling the latest release from pypi23:11
sean-k-mooneyI'm trying to run it manually with tox -e generate -- -p $(which python3) -r ./global-requirements.txt23:12
sean-k-mooneyI'm getting failures in the command execution23:13
sean-k-mooney  error in anyjson setup command: use_2to3 is invalid.23:13
sean-k-mooneyhttps://zuul.openstack.org/job/propose-update-constraints23:14
sean-k-mooneyI think that is meant to do it23:14
sean-k-mooneyhttps://zuul.openstack.org/builds?job_name=propose-update-constraints23:15
sean-k-mooneythat is running fine, I don't see anything that calls generate23:15
sean-k-mooneyhum, looks like it's defined in project-config23:17
sean-k-mooneyhttps://opendev.org/openstack/project-config/src/branch/master/playbooks/proposal/propose_update.sh#L31-L4223:18
sean-k-mooneymore or less the same issue with anyjson if I do23:21
sean-k-mooney.venv/bin/generate-constraints -b blacklist.txt -p python3.8 -r global-requirements.txt > upper-constraints.txt23:21
sean-k-mooneywhich is basically what the job does23:21
sean-k-mooneyit gets much further if I comment out anyjson but I'm missing some bindeps which I can't install on my laptop23:26
sean-k-mooneyI'll try and run this in a devstack env tomorrow23:26
