Tuesday, 2020-05-12

00:43 <openstackgerrit> xuyuanhao proposed openstack/nova master: Optimization the soft-delete logical  https://review.opendev.org/724260
01:25 <openstackgerrit> xuyuanhao proposed openstack/nova master: Optimization the soft-delete logical  https://review.opendev.org/724260
02:03 <openstackgerrit> Wenping Song proposed openstack/nova master: error may occur when filter scheduler with accelerator  https://review.opendev.org/722651
03:16 <openstackgerrit> xuyuanhao proposed openstack/nova master: Optimization the soft-delete logical  https://review.opendev.org/724260
04:29 <openstackgerrit> Wenping Song proposed openstack/nova master: error may occur when filter scheduler with accelerator  https://review.opendev.org/722651
05:44 <openstackgerrit> Harshavardhan Metla proposed openstack/nova master: [Nova] Add reference to Placement installation guide  https://review.opendev.org/726936
06:38 <gibi> good morning Nova
06:47 <ikla> how do I pass through multiple PCI devices in nova?
06:53 <gibi> ikla: https://docs.openstack.org/nova/pike/admin/pci-passthrough.html#configure-a-flavor-controller
07:00 <sean-k-mooney> ikla: you just create a flavor with multiple aliases referenced
07:09 <ikla> I have rocky
07:10 <jkulik> hi, is there something like sub-flavors somewhere on nova's roadmap? Having a flavor that maps to different specific flavors per hardware generation would be great.
07:16 <ikla> do I need to define the devices in the nova controller?
07:18 <gibi> ikla: the alias needs to be defined both on the controller and on the computes; the passthrough_whitelist needs to be defined on each compute
07:19 <gibi> jkulik: I don't know of such a feature on the roadmap. I think you could implement that on top of nova
07:20 <ikla> do I have multiple lines with passthrough_whitelist? or can I do it all on one line?
07:21 <bauzas> good morning nov
07:21 <bauzas> err, nova even
07:22 <gibi> ikla: you can have a full json dict, see the config doc https://docs.openstack.org/nova/rocky/configuration/config.html#pci.passthrough_whitelist
07:22 <gibi> ikla: or even a list of such dicts
07:22 <gibi> bauzas: o/
07:25 <ikla> same for alias?
07:26 <gibi> ikla: I think alias is a single dict, so you have to have multiple lines of alias = {...} in your config to have multiple aliases
07:28 <ikla> thanks
07:28 <ikla> passthrough cannot have multiple lines?
07:29 <jkulik> gibi: by on top of nova, do you mean on the client? Even if we controlled the client, which we don't necessarily, we would have to decide on the precise flavor before going into scheduling afaics. So if a user doesn't care about the hardware version, we would have to make a decision anyway instead of taking "what's free" automatically.
07:32 <ikla> nevermind :)
07:34 <gibi> ikla: I think passthrough_whitelist also can be used multiple times
07:36 <ikla> does passthrough_whitelist work on rocky or does it need to be prefixed with pci_?
07:37 <gibi> ikla: in the [pci] section you can use the passthrough_whitelist config
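(For reference, the config shape being described here might look like the following; the vendor/product IDs and alias names are hypothetical. The whitelist lines go only on the compute nodes, the alias lines on the controller and the computes.)

    # nova.conf on each compute node: which PCI devices may be passed through;
    # the option can be repeated, one JSON dict (or list of dicts) per line
    [pci]
    passthrough_whitelist = {"vendor_id": "8086", "product_id": "10ed"}
    passthrough_whitelist = {"vendor_id": "15b3", "product_id": "1016"}

    # nova.conf on the controller *and* the computes: one alias option per alias
    [pci]
    alias = {"name": "nic-a", "vendor_id": "8086", "product_id": "10ed", "device_type": "type-VF"}
    alias = {"name": "nic-b", "vendor_id": "15b3", "product_id": "1016", "device_type": "type-VF"}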
07:50 <openstackgerrit> Harshavardhan Metla proposed openstack/nova master: [Nova] Add reference to Placement installation guide  https://review.opendev.org/726936
07:59 <openstackgerrit> James Page proposed openstack/nova stable/queens: hardware: fix memory check usage for small/large pages  https://review.opendev.org/726867
07:59 <openstackgerrit> James Page proposed openstack/nova stable/queens: Fix overcommit for NUMA-based instances  https://review.opendev.org/726868
08:19 <openstackgerrit> sean mooney proposed openstack/nova-specs master: move implemented specs in ussuri  https://review.opendev.org/721278
08:19 <sean-k-mooney> gibi: sorry, meant to do that last week ^ if you want any changes let me know and I'll respin the patch
08:29 <huaqiang> stephenfin: There are ~20 patches in bp/use-pcpu-and-vcpu-in-one-instance, covering the py checks, the realtime policy, and fixes for hardware.py
08:29 <huaqiang> these are not depending on each other so much
08:29 <huaqiang> how about letting me re-arrange the order
08:30 <huaqiang> and resolving some dependencies with slight changes to make it possible to review them in parallel
08:32 <openstackgerrit> Nalini Varshney proposed openstack/nova master: Add migration to make key field type VARBINARY in aggregate_metadata table,  https://review.opendev.org/725522
08:35 <bauzas> gibi: impressive count on open bugs https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
08:35 <bauzas> gibi: how have you triaged them?
08:48 <gibi> bauzas: most of them did not have enough information
08:48 <gibi> bauzas: and during the last two weeks we got only about 5 new bug reports, so the low inflow helped a lot
08:49 <gibi> sean-k-mooney: thanks, I will check it
08:50 <gibi> jkulik: what do you want to achieve? Do you want to move users from old HW gen to new HW gen over time?
08:51 <stephenfin> huaqiang: Sure. The only issue with that is that future rebases are harder. If you can keep the existing +2s though, go for it
08:52 <jkulik> gibi: we have different hardware versions in place at the same time, where, because of NUMA and whatnot, flavors differ slightly. We want a user to be able to deploy a "2 TB instance" flavor, even if that means 1.8 on one node and 2.1 on the other.
08:53 <sean-k-mooney> jkulik: that is how it works by default
08:54 <sean-k-mooney> unless you mean 1.8 TB on one and 2.1 TB on the other? I'm not sure what those numbers refer to
08:54 <jkulik> sean-k-mooney: the flavor has to map to 1.8 TB on one hardware version and 2.1 TB on the other
08:55 <sean-k-mooney> jkulik: ya, that is not allowed
08:55 <jkulik> sean-k-mooney: we just want our users to be able to specify "give me that 2 TB thingy"
08:55 <sean-k-mooney> right, but it's a 1.8TB thing and a 2.1TB thing
08:55 <sean-k-mooney> it's not a 2TB thing in either case
08:55 <jkulik> because it's hard for us to keep hypervisors empty enough to support that size, the user can't really know up-front what hardware is currently free
08:55 <sean-k-mooney> you could use boot from volume and not provide any default root disk
08:56 <jkulik> we're talking RAM here :/
08:56 <sean-k-mooney> oh, I thought you meant disk
08:56 <sean-k-mooney> and the user should not know up front how much RAM is free
08:57 <sean-k-mooney> jkulik: it sounds like you want to reconfigure the scheduler to pack
08:57 <openstackgerrit> Tony Su proposed openstack/nova-specs master: Re-propose provider-config-file spec for Victoria  https://review.opendev.org/725788
08:57 <jkulik> I don't see how we could tell the user what flavors could currently get deployed via nova's API
08:57 <sean-k-mooney> if you configure the scheduler to pack based on RAM usage it will try to keep hosts free if it will fit on other hosts
08:57 <sean-k-mooney> jkulik: well, you can't, and that is kind of intentional
08:57 <jkulik> sean-k-mooney: the problem is even bigger for us: we use vmware .. so we have a cluster in the compute node and have to do some magic behind the scenes to convince vmware to spawn this monster
08:58 <sean-k-mooney> it breaks the cloud model if the tenant has a view of the available capacity
08:58 <jkulik> yeah, that's why I wanted a flavor that can match any of those free hosts, by having sub-flavors ;)
08:59 <sean-k-mooney> jkulik: yep, which is not going to happen
08:59 <sean-k-mooney> we kind of had something like that at one point
08:59 <sean-k-mooney> for ironic
08:59 <sean-k-mooney> but that has been gone for a long time
08:59 <jkulik> we'll have to work something out downstream then, thanks for clarifying :)
09:00 <sean-k-mooney> I'm trying to think if we could use placement and custom resource classes for this
09:00 <sean-k-mooney> but what you are trying to do is going against the grain of how nova is intended to be used, to a degree
09:00 <bauzas> gibi: ack thanks
09:03 <sean-k-mooney> jkulik: I'm not sure how configurable the vmware driver is
09:03 <sean-k-mooney> but do you want to reserve space for these giant VMs or no?
09:04 <bauzas> gibi: agreed with Wontfix https://bugs.launchpad.net/nova/+bug/1838309 ?
09:04 <openstack> Launchpad bug 1838309 in OpenStack Compute (nova) "Live migration might fail when run after revert of previous live migration" [Undecided,Won't fix] - Assigned to Vladyslav Drok (vdrok)
09:04 <sean-k-mooney> what you could do would be to reserve RAM on the specific hosts where you intend to spawn these instances, then create a custom resource class for them and a flavor that requests 2TB of RAM but sets resources:MEMORY_MB=0 resources:CUSTOM_GIANT_VM=1
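(A sketch of that suggestion using the flavor extra-spec resource overrides; the flavor name and numbers are hypothetical, and the resource class name has to be uppercase in the extra spec. The CUSTOM_GIANT_VM inventory would be added to the chosen hosts' resource providers out of band.)

    # flavor.ram stays 2TB for quota/billing, but placement is asked
    # only for the custom resource class instead of MEMORY_MB
    openstack flavor create --ram 2097152 --vcpus 48 --disk 20 giant-vm
    openstack flavor set giant-vm \
      --property resources:MEMORY_MB=0 \
      --property resources:CUSTOM_GIANT_VM=1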
09:05 <jkulik> sean-k-mooney: not in general. we currently run something that frees up enough space on one node - so we can spawn one.
09:05 <sean-k-mooney> ok, so that won't work if you want it to be dynamic then
09:05 <jkulik> yeah, we have something like that. but it has to fit into normal quota for billing
09:05 <tony_su> gibi: This is Tony Su and I am doing the provider-config-file re-propose thing. Could you kindly spare some time to review its spec (only minor changes in History and Assignee sections vs. the Ussuri version). https://review.opendev.org/#/c/725788/
09:06 <sean-k-mooney> jkulik: well, the flavor.ram value would still be 2TB
09:06 <jkulik> sean-k-mooney: we dynamically create a sharing child-resource-provider for a node, that has the CUSTOM_GIANT_VM resource
09:07 <sean-k-mooney> ok, but the issue is you still need to align the flavor.ram to what that means for that host
09:07 <jkulik> sean-k-mooney: so currently, it works for us. but the flavor's RAM doesn't align with NUMA on all nodes. meaning, we would have to create more flavors.
09:07 <sean-k-mooney> based on the hardware version
09:07 <jkulik> yes
09:08 <sean-k-mooney> jkulik: may I ask why you actually use VMs instead of ironic nodes at that scale
09:08 <jkulik> but given that we only free up one node, that could fit 2 TB either because it's 2 TB in size or because it's bigger, the 2 TB is more a rounded value the customer might request
09:08 <tony_su> gibi: Really appreciate your support and help.
09:08 <jkulik> sean-k-mooney: sure. we were on ironic, but our customers would like to have the automatic failover when a node goes down, which vmware provides. also TCO.
09:10 <sean-k-mooney> I see, so they are relying on the platform for failover rather than using an orchestration layer like k8s to manage their application
09:10 <ikla> the controller and node don't need to have the same device in them for passthrough?
09:10 <sean-k-mooney> so this is very much a pets-not-cattle use case
09:10 <jkulik> since the customer can't find out which precise flavor she can currently deploy via the API, it would have been nice to have specific sub-flavors for ~ 2 TB
09:10 <gibi> bauzas, stephenfin: easy +2 https://review.opendev.org/#/c/721278 (ussuri spec move)
09:10 <ikla> my flavor keeps failing when trying to turn on an instance
09:11 <sean-k-mooney> ikla: the alias needs to be on the computes and controller
09:11 <jkulik> sean-k-mooney: yes. definitely pets
09:11 <sean-k-mooney> ikla: the pci whitelist is only needed on the compute
09:11 <ikla> alias yes, device only on compute..
09:11 <ikla> k
09:11 <sean-k-mooney> ikla: so the controller just uses it to translate the value in the flavor into a PCI request for scheduling
09:12 <gibi> bauzas: I agree with WontFix https://bugs.launchpad.net/nova/+bug/1838309
09:12 <openstack> Launchpad bug 1838309 in OpenStack Compute (nova) "Live migration might fail when run after revert of previous live migration" [Undecided,Won't fix] - Assigned to Vladyslav Drok (vdrok)
09:12 <bauzas> cool, moving on
09:13 <gibi> tony_su: thanks for taking that feature over. I will look at the spec soon
09:15 <gibi> bauzas: thanks for looking at the new bugs. You have the bug lock, I have yet another downstream-originated issue to look at https://bugs.launchpad.net/nova/+bug/1878024
09:15 <openstack> Launchpad bug 1878024 in OpenStack Compute (nova) "disk usage of the nova image cache is not counted as used disk space" [Undecided,Confirmed]
09:16 <sean-k-mooney> gibi: that's a tricky one
09:16 <sean-k-mooney> in that really we should never fail to boot a VM because of a cached image taking up space
09:17 <sean-k-mooney> i.e. I would expect as an operator for the cache to be purged first
09:17 <sean-k-mooney> so I'm not sure I would want it to be counted as used
09:17 <sean-k-mooney> that said, we don't purge the cache today as far as I know, at least not while the image is used on the host
09:17 <sean-k-mooney> so I can see this causing issues too
09:18 <sean-k-mooney> could you just work around this using the host reserved disk parameter
09:18 <sean-k-mooney> or perhaps we need a config option to limit the size of the cache
09:19 <gibi> sean-k-mooney: the cache is used as a backing file for the qcow root fs so it cannot really be purged
09:19 <sean-k-mooney> the easy option is just to add the size of the cache to the reserved value in placement, but that will reduce the number of VMs that can be spawned
09:19 <sean-k-mooney> gibi: I thought we had two copies
09:20 <sean-k-mooney> one in the cache and a second for the backing file
09:20 <sean-k-mooney> isn't the cache module independent of the image backend
09:20 <gibi> sean-k-mooney: reserved_host_disk_mb would only be useful if we could limit the size of the cache
09:21 <sean-k-mooney> yep, the two line "fix" is to just count the cache size and add it to the reserved value in placement
09:21 <sean-k-mooney> but I don't think that is a reasonable long term fix
09:22 <gibi> simply counting it as reserved does not help when a VM is booted with a new image on a compute. There we would need to make sure that both the new cached image and the VM root fs fit the compute
09:22 <gibi> we discussed these options with dansmith yesterday
09:23 <gibi> see the summary in the bug and the link to the IRC discussion
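(For context, the "two line fix" above would amount to folding the measured image cache size into the reserved field of the compute's DISK_GB inventory in placement, roughly like this; the UUID, generation, and sizes are made-up values.)

    PUT /resource_providers/{compute_rp_uuid}/inventories/DISK_GB

    {
        "resource_provider_generation": 5,
        "total": 500,
        "reserved": 58,
        "allocation_ratio": 1.0
    }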
09:24 <openstackgerrit> Merged openstack/nova-specs master: move implemented specs in ussuri  https://review.opendev.org/721278
09:25 <sean-k-mooney> gibi: are we sure we use the cached image directly for the backing file, by the way
09:26 <sean-k-mooney> gibi: I thought that was how it worked but I remember someone telling me it's not how it works and that we have two copies, one in the cache and one for the backing file
09:26 <gibi> let me double check
09:27 <ikla> any info on gpu passthrough w/ rtx 8000's?
09:27 <huaqiang> stephenfin: I am not sure how to keep the existing +2, will that be kept if I do not change the
09:27 <sean-k-mooney> ikla: are you trying to do full passthrough or are you trying to use its vGPU capability
09:27 <aarents> I think it is still the same directory
09:28 <huaqiang> will the +2 be kept if I do not change the 'Change-Id'?
09:28 <gibi> sean-k-mooney: http://paste.openstack.org/show/793424/
09:29 <gibi> sean-k-mooney: based on this, the image in the _base dir is the backing store for the instance's image
09:29 <gibi> I have two instances from the same image, and have one backing image in the _base dir
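(gibi's check can be reproduced on any libvirt compute using the qcow2 image backend; the path below is a typical devstack layout and the output is trimmed to the relevant fields.)

    $ qemu-img info /opt/stack/data/nova/instances/<instance-uuid>/disk
    file format: qcow2
    backing file: /opt/stack/data/nova/instances/_base/<sha1-of-image-id>

Both instances report the same backing file, so the _base copy cannot be removed while either VM exists.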
09:30 <sean-k-mooney> gibi: yep, but is that the cached image or is it a second copy
09:30 <sean-k-mooney> depending on how we set the force cow and force raw values this might change
09:31 <sean-k-mooney> in that case we are using a raw backing file
09:31 <sean-k-mooney> but what format was the image originally?
09:31 <sean-k-mooney> I assume qcow?
09:31 <gibi> sean-k-mooney: a) it doesn't matter. I can reword the bug to say the backing file size is not counted as used b) I think this is the cache, as if I delete the instances it is not deleted immediately
09:31 <bauzas> ikla: not sure I understand your question about gpu passthrough
09:33 <gibi> sean-k-mooney: I will try to look into the case when the VM uses a raw file to see whether the image in _base is deleted then, or if I can make a small change so it is deleted
09:34 <sean-k-mooney> gibi: there is a periodic that deletes the image when no VMs are using it on the host, as far as I am aware
09:35 <gibi> sean-k-mooney: yes, that is the cache manager :)
09:35 <sean-k-mooney> so until that runs the cached copy will be there
09:35 <gibi> sean-k-mooney: anyhow the core of the problem is that nova uses more disk than it counts as used, and today there is no way to avoid that except putting the _base dir on a separate partition
09:36 <gibi> when we had the DiskFilter it had the disk_available_least information from the compute to prevent overallocating the disks, but placement does not have such info
09:37 <sean-k-mooney> well, not entirely
09:37 <ikla> grid vgpu
09:37 <sean-k-mooney> the disk_available_least behavior was still affected by the disk allocation ratio
09:38 <bauzas> ikla: gpu passthrough != grid vgpu, you know?
09:38 <sean-k-mooney> gibi: I still go back to https://gist.github.com/JCallicoat/43505cab0535057ca4fb every time I want to figure that out
09:39 <bauzas> in one case, you're literally giving up the gpu device to the guest
09:39 <ikla> sorry I didn't explain that correctly
09:39 <ikla> you could do a full passthrough
09:39 <bauzas> in the other case, you're asking the nvidia driver to slice your gpu into pieces that can be provided to the guests
09:39 <ikla> or do vgpu
09:39 <ikla> correct
09:39 <sean-k-mooney> ikla: yes, both are supported
09:39 <bauzas> so, again, what's your question?
09:40 <ikla> are there docs on it
09:40 <sean-k-mooney> you can use mdev based vGPUs or you can do direct passthrough of the GPU
09:40 <bauzas> ikla: indeed
09:40 <bauzas> https://docs.openstack.org/nova/latest/admin/virtual-gpu.html
09:40 <ikla> I just found it
09:40 <ikla> oh... thanks. :)
09:40 <bauzas> https://docs.openstack.org/nova/latest/admin/pci-passthrough.html
09:40 <bauzas> we don't provide the special bits of nvidia installation and OS preparation
09:41 <bauzas> but you can get 'em from the nvidia grid docs
09:42 <bauzas> e.g. https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#red-hat-el-kvm-install-configure-vgpu
09:42 <gibi> sean-k-mooney: I can reword. Assuming disk_allocation_ratio = 1.0, the DiskFilter detected if you overallocated the disk, and nova does that with the image cache. placement does not know if you overallocate the disk
09:43 <sean-k-mooney> gibi: placement has the allocation ratio in the inventory record too, fyi
09:43 <sean-k-mooney> gibi: I'm not that worried about the wording to be honest
09:43 <sean-k-mooney> gibi: I'm more interested in how you plan to address it
09:43 <gibi> sean-k-mooney: this is not the intentional disk allocation ratio, see my assumption above. It is the unintentional disk overallocation due to the image cache
09:44 <gibi> sean-k-mooney: have you seen the options in the bug comments?
09:44 <sean-k-mooney> yes
09:44 <sean-k-mooney> I have
09:45 <sean-k-mooney> options A and B we have said won't work, for different reasons
09:45 <sean-k-mooney> so the only viable one would be disabling the cache, or one of the workarounds
09:45 <sean-k-mooney> I guess B could work if, instead of purging the image when the cache is full, we just don't cache new images
09:46 <sean-k-mooney> A won't work because we could race with concurrent boot requests
09:46 <sean-k-mooney> gibi: I have an option D
09:46 <gibi> shoot
09:47 <sean-k-mooney> gibi: if we have consumer types we could create allocations for cached images against the RP
09:47 <sean-k-mooney> using a nova consumer type
09:47 <sean-k-mooney> if we can't create the allocation because there is not enough space then we don't cache it
09:48 <sean-k-mooney> if we can, then it will prevent the issue as all usage will be tracked in placement
09:49 <gibi> we might not need consumer types, we can simply create an allocation where the consumer_id is not an instance or a migration but the cache itself.
09:49 <sean-k-mooney> ya, I was thinking we could use the image uuid as the consumer uuid and a cache consumer type
09:50 <sean-k-mooney> that way we would know which images are cached on each node
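(A rough sketch of what such a cache allocation could look like against the placement API, if the idea were pursued; consumer types only exist from placement microversion 1.38, the consumer type string is invented here, and every UUID is hypothetical.)

    PUT /allocations/{image_uuid}
    OpenStack-API-Version: placement 1.38

    {
        "allocations": {
            "{compute_node_rp_uuid}": {"resources": {"DISK_GB": 20}}
        },
        "project_id": "{project_uuid}",
        "user_id": "{user_uuid}",
        "consumer_generation": null,
        "consumer_type": "IMAGE_CACHE"
    }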
09:50 <gibi> in the allocation candidate query we have to either include 1x disk space if the host is already caching the image or 2x disk space if the host will cache the image due to the current request
09:51 <sean-k-mooney> gibi: not if we change the logic so that the caching is optional
09:51 <sean-k-mooney> e.g. have the compute node make the allocation after the fact, when the VM is about to be spawned
09:52 <sean-k-mooney> so if the image is not already cached, check if you can create an allocation for it; if so, cache it and proceed with the boot, if not, don't cache it and just create a copy
09:52 <ikla> can I set multiple names in pci_passthrough for the flavor?
09:52 <sean-k-mooney> for the VM
09:53 <sean-k-mooney> gibi: the only time that would not cache is when the disk is almost full
09:53 <ikla> what would be the syntax, or is it multiple pci_passthrough lines for the flavor?
09:54 <gibi> sean-k-mooney: do you see the complexity of making the cache optional in the nova code? yesterday we concluded with dansmith that it could be a pretty hairy change
09:54 <gibi> but I'm not an expert on the image backends
09:55 <sean-k-mooney> gibi: I know that part of the code makes heavy use of functools.partial and is really hard to follow
09:55 <sean-k-mooney> so I would guess it would be non-trivial
09:55 <sean-k-mooney> but mainly due to tech debt
09:55 <openstackgerrit> Jiri Suchomel proposed openstack/nova-specs master: Add spec for downloading images via RBD  https://review.opendev.org/572805
09:56 <ikla> something like: {"pci_passthrough:alias"="name1:1, name2:1, name3:1"}
09:57 <sean-k-mooney> ikla: yes, it's a comma separated list
09:57 <sean-k-mooney> https://github.com/openstack/nova/blob/master/nova/pci/request.py#L234-L237
09:58 <gibi> sean-k-mooney: thanks. I will look into this direction as well
09:58 <sean-k-mooney> so openstack flavor set --property "pci_passthrough:alias"="name1:1,name2:1,name3:1" my-flavor
10:01 <ikla> Insufficient compute resources: Claim pci failed.
10:01 <ikla> hmm
10:02 <sean-k-mooney> ikla: you might need to weaken the default NUMA affinity policy
10:02 <sean-k-mooney> e.g. if you don't have all the devices on the same NUMA node
10:03 <sean-k-mooney> ikla: you can set numa_policy=prefer in the alias
10:03 <sean-k-mooney> I just need to check that value is correct, but I think it's prefer or preferred
10:05 <openstackgerrit> Stephen Finucane proposed openstack/nova master: trivial: Address issues with flake8 3.8.0  https://review.opendev.org/727140
10:05 <sean-k-mooney> ikla: it's "preferred"
10:05 <stephenfin> gibi, bauzas: if that isn't failing our gate yet, it's going to start soon ^
10:05 <sean-k-mooney> ikla: https://github.com/openstack/nova/blob/master/nova/objects/fields.py#L734
10:05 <sean-k-mooney> ikla: it could have failed for other reasons too
10:05 <stephenfin> https://review.opendev.org/727133 will fix it but I don't know how long it will be until that's released
10:06 <bauzas> wait
10:06 <bauzas> stephenfin: doesn't that provide a new exception?
10:07 <bauzas> stephenfin: do you have more context?
10:07 <stephenfin> yes, E741 and F522 seem to be new
10:07 <bauzas> so we could ignore them first?
10:07 <stephenfin> but...why?
10:07 <ikla> set in flavor?
10:08 <ikla> oh, in the alias?
10:08 <ikla> I get the same issue with one device
10:08 <stephenfin> the correct fix is for hacking to limit flake8 to a given minor version instead of the major version range it's using, but we need a new release of hacking for that
10:09 <sean-k-mooney> ikla: are you testing with a gpu
10:10 <sean-k-mooney> ikla: or do you have any specific errors in the nova compute agent log
10:11 <ikla> no, these are network cards
10:12 <ikla> nothing in logs
10:12 <sean-k-mooney> ok then, if it's not the NUMA issue the next thing to check would be the PCI device type
10:12 <sean-k-mooney> do the NICs support SR-IOV?
10:13 <ikla> yes
10:13 <ikla> they are set up and I can see them in the pci list with lspci -nnn
10:13 <sean-k-mooney> if they don't have the SR-IOV capability then the type will be type-PCI
10:13 <ikla> Virtual x 4
10:13 <sean-k-mooney> if they do, then the PF will be type-PF and the VFs will be type-VF
10:13 <sean-k-mooney> you need to match the alias to the type
10:14 <ikla> that's what I did
10:14 <sean-k-mooney> ikla: by the way, you only use that pci alias for NICs if you don't want them to be used with neutron
10:15 <ikla> yup
10:18 <sean-k-mooney> ok so your alias is something like this
10:19 <sean-k-mooney> [pci]
10:19 <sean-k-mooney> alias = '{
10:19 <sean-k-mooney>   "name": "QuickAssist",
10:19 <sean-k-mooney>   "product_id": "0443",
10:19 <sean-k-mooney>   "vendor_id": "8086",
10:19 <sean-k-mooney>   "device_type": "type-vf",
10:19 <sean-k-mooney>   "numa_policy": "preferred"
10:19 <sean-k-mooney>   }'
10:20 <sean-k-mooney> actually it should be type-VF
10:20 <gibi> tony_su: could you make a small fix in https://review.opendev.org/#/c/725788 then I will +2 it
10:21 <bauzas> stephenfin: I mean, your change is a bit unclear
10:22 <openstackgerrit> Lee Yarwood proposed openstack/nova stable/stein: stable-only: skip volume backup tests in cellsv1 job  https://review.opendev.org/727147
10:22 <openstackgerrit> Lee Yarwood proposed openstack/nova stable/rocky: stable-only: skip volume backup tests in cellsv1 job  https://review.opendev.org/727148
10:23 <openstackgerrit> Lee Yarwood proposed openstack/nova stable/queens: stable-only: skip volume backup tests in cellsv1 job  https://review.opendev.org/727150
10:24 <bauzas> stephenfin: so, IMO, we should add a new change ignoring the new errors, then have your own change modifying what was needed and then also removing the error-ignore line
10:25 <bauzas> stephenfin: so in case a new patch would be merged, we would still make sure we ignore the new errors until you are sure that all of them are fixed
10:30 <sean-k-mooney> do we even need to do that
10:30 <sean-k-mooney> let's just cap it
10:30 <sean-k-mooney> then propose a patch to uncap it and a patch that depends on that in nova
10:31 <sean-k-mooney> it should fail and we can iterate on that patch until it passes
10:31 <stephenfin> or, you know, fix the five things that have changed and move on with our lives
10:31 <stephenfin> these are the only changes necessary. I checked. I'll update the commit message shortly
10:31 <sean-k-mooney> stephenfin: you didn't explain why any of the five things needed to be fixed
10:31 <stephenfin> I'll update the commit message shortly
10:31 <sean-k-mooney> and several of them had #noqa on them
10:31 <sean-k-mooney> so they should have been ignored
10:31 <stephenfin> they already had noqa
10:31 <sean-k-mooney> yep
10:31 <stephenfin> check my replies
10:31 <sean-k-mooney> so it should not have been checking them
10:33 <sean-k-mooney> stephenfin: https://www.flake8rules.com/rules/E741.html does not feel like we should have it on by default
10:34 * bauzas goes on lunch
10:35 <stephenfin> sean-k-mooney: I'd really rather avoid that argument because those tend to be ratholes in nova. I'd much, much rather we just took what flake8 and hacking gave us and dealt with it.
10:36 <sean-k-mooney> arbitrary rules like that one make code worse and less readable
10:36 <stephenfin> arbitrary rules like you're *never* allowed to exceed 80 characters?
10:37 <sean-k-mooney> I agree it can be confusing at times but it's not any worse than using any 1-letter variable
10:37 <stephenfin> one man's arbitrary rule is another's good idea
10:37 <sean-k-mooney> stephenfin: yes, that has been demonstrated to make code less readable
10:37 <sean-k-mooney> and pep8 enforces 79
10:37 <sean-k-mooney> not 80
10:37 * gibi left a comment in the flake8 patch and goes for lunch
10:37 <stephenfin> there is evidence to suggest otherwise https://www.youtube.com/watch?v=wf-BqAjZb8M&t=260
10:38 <stephenfin> https://black.readthedocs.io/en/stable/the_black_code_style.html#line-length would be an informative read
10:38 <stephenfin> but this is exactly where I don't want to end up :D damn it
10:38 <stephenfin> gibi: ta
10:39 <sean-k-mooney> stephenfin: I have read the black style guide; it argues against the 80 column limit
10:40 <sean-k-mooney> stephenfin: it's why it used an 80-ish limit rather than a fixed value.
10:40 <stephenfin> yup
10:41 <stephenfin> to be clear, my argument is against the strict limit
10:41 <stephenfin> 80ish is fine
10:41 <stephenfin> hence the emphasis on *never* above
10:45 <openstackgerrit> Brin Zhang proposed openstack/nova master: Optimize _create_and_bind_arqs logic in conducor  https://review.opendev.org/726564
10:47 <sean-k-mooney> stephenfin: I left some more comments. some of the noqas can be removed with light tweaks or we need to document why.
10:49 * sean-k-mooney my typing is worse than usual today
11:19 <openstackgerrit> Qiu Fossen proposed openstack/nova-specs master: specify mac for creating instance  https://review.opendev.org/700429
11:40 <openstackgerrit> Nalini Varshney proposed openstack/nova master: Add migration to make key field type VARBINARY in aggregate_metadata table,  https://review.opendev.org/725522
11:45 <openstackgerrit> sean mooney proposed openstack/nova master: add workaround to disable multiple port bindings  https://review.opendev.org/724386
11:45 <openstackgerrit> sean mooney proposed openstack/nova master: [DNM] testing with force_legacy_port_binding workaround  https://review.opendev.org/724387
11:57 <openstackgerrit> Takashi Natsume proposed openstack/nova master: Remove six.reraise  https://review.opendev.org/726898
12:06 <huaqiang> stephenfin: Maybe I should drop my thought of reordering your bp/use-pcpu-and-vcpu-in-one-instance patches. My original thought was the following patches could be accelerated since my patches do not depend on all the features introduced in the preceding patches.
12:06 <huaqiang> But the reorder will bring a lot of extra work
12:07 <huaqiang> I'll rebase my patches by following your patches.
12:10 <openstackgerrit> Takashi Natsume proposed openstack/os-vif master: Remove egg_info in setup.cfg  https://review.opendev.org/727173
13:05 <openstackgerrit> Takashi Natsume proposed openstack/nova master: Remove six.reraise  https://review.opendev.org/726898
13:46 <openstackgerrit> Lee Yarwood proposed openstack/nova stable/stein: stable-only: skip volume backup tests in cellsv1 job  https://review.opendev.org/727147
13:47 <openstackgerrit> Lee Yarwood proposed openstack/nova stable/rocky: stable-only: skip volume backup tests in cellsv1 job  https://review.opendev.org/727148
13:48 <dansmith> jsuchome: hey, I +2d that spec and then realized you forgot to address one critical piece of feedback from the last version
13:48 <dansmith> jsuchome: *other* than that, I was going to be happy with it :)
13:48 <openstackgerrit> Lee Yarwood proposed openstack/nova stable/queens: stable-only: skip volume backup tests in cellsv1 job  https://review.opendev.org/727150
13:59 <jsuchome> dansmith: I see, I messed up the test part. And the other part I did not address intentionally, I'm not sure it deserves a detailed explanation in the spec
13:59 <jsuchome> dansmith: the 'significant changes' in glance.py are really only about executing an extra download method under given conditions. And I think this is already covered
14:01 <dansmith> jsuchome: you're changing the base existing glance code more than just calling the rbd function and returning the way the plug point used to work, so I think it's pretty relevant
14:02 <dansmith> jsuchome: there's also no documentation about it in the code change, so I don't even know what the goal of the change is
14:03 <dansmith> jsuchome: it doesn't need to be detailed, but it should be something, IMHO.
14:04 <jsuchome> I tried to explain it already but now I see I forgot to post my changes ...
14:05 <jsuchome> dansmith: I actually added a comment to that part you are (probably?) mentioning, I assume you mean the "Load chunks from the downloaded image file... " bit
14:05 <jsuchome> I've added some comments to the spec
14:05 <dansmith> yeah, so re-reading the code this morning, it looks like we used to not do the image signature verification for things downloaded with the per-scheme handler,
14:06 <jsuchome> (I mean: I've added some comments 1. to the code and 2. now I've also commented on the spec)
14:06 <dansmith> and this change is lining it up so we do, right?
14:06 <jsuchome> yeah, in the previous version this signature verification was just skipped if there was anything downloaded by the download handler
14:07 <jsuchome> So you think it still should be mentioned in the spec?
14:08 <jsuchome> Maybe I could just mention it in the commit message of the code change
14:08 <dansmith> yeah, so just a line in the spec under proposed change like this is fine: "The glance module also never used to perform image signature verification when the per-scheme module was used. Since we are moving this into core code, we will also fix this so that per-scheme images are verified like all the rest."
14:08 <dansmith> jsuchome: please just add that one line (or something like it) to the spec when you fix the test thing and we can move on
14:09 <dansmith> it should also go into the code change commit message, btw
14:09 <jsuchome> OK, both places then. In a minute
14:10 <dansmith> honestly, if I was doing it, I would break the code into two pieces, one for that fix and one for the rbd module being added
14:10 <dansmith> but, it's already done, so.
14:12 <dansmith> (biab)
14:12 <jsuchome> yeah, it's not exactly part of the feature. Seems like the original author realized it during testing, it appeared in some later PS
14:18 <openstackgerrit> Jiri Suchomel proposed openstack/nova-specs master: Add spec for downloading images via RBD  https://review.opendev.org/572805
14:18 <openstackgerrit> Jiri Suchomel proposed openstack/nova master: Add ability to download Glance images into the libvirt image cache via RBD  https://review.opendev.org/574301
14:32 <dansmith> jsuchome: +2 on the spec, thanks
14:32 <jsuchome> cool
14:48 <jsuchome> dansmith: two questions: 1. should I add that admin guide change into the same PS as the one with the code, or rather a new one? 2. where is that ceph CI job I should try to copy & adapt? (I've never done this part before)
14:48 <dansmith> jsuchome: no, do it in a separate patch please
14:49 <dansmith> jsuchome: I'm not super up on the state of the job (i.e. whether it's a legacy or converted job), nor where those bits live depending
14:49 <dansmith> but I bet sean-k-mooney knows
14:50 <sean-k-mooney> which job
14:50 <sean-k-mooney> ceph
14:50 <sean-k-mooney> I think it's converted but I'll check
14:51 <openstackgerrit> Ghanshyam Mann proposed openstack/python-novaclient master: Bump hacking min version to 3.0.1  https://review.opendev.org/727214
14:52 <sean-k-mooney> devstack-plugin-ceph-tempest-py3 I think is a zuulv3 job. we are not defining any legacy playbooks for it, but it comes from devstack not zuul config
14:52 <dansmith> sean-k-mooney: jsuchome needs to take that job, tweak the config on the compute node slightly, and get a one-off run of it at least
14:52 <sean-k-mooney> dansmith: ah ok, I'll just triple check that it's zuulv3; if so that is simple to do
14:54 <gmann> that is zuulv3, derived from tempest-full-py3.
14:54 <lyarwood> https://review.opendev.org/#/c/708038/ is an example of me messing around with the ceph job recently
14:54 <sean-k-mooney> yep, it's zuulv3 https://github.com/openstack/devstack-plugin-ceph/blob/master/.zuul.yaml#L57-L130
14:55 <sean-k-mooney> jsuchome: what specifically do you need to add
14:55 <dansmith> lyarwood: ah nice
14:56 <dansmith> might need a DNM change against devstack to hack the half behavior into place and then control it with something like lyarwood's example
14:56 <sean-k-mooney> dansmith: you should not need to; you can create a job that uses it as a parent and then add your changes
14:56 <dansmith> sean-k-mooney: he needs to set up all the ceph stuff, but configure the compute to *not* use the rbd backend, and have a new conf option set to enable direct-from-ceph download
14:57 <sean-k-mooney> so jsuchome just needs to override the imagebackend in the nova.conf to be qcow2
14:57 <dansmith> sean-k-mooney: depends on how the ceph bit works in devstack right? in lyarwood's example above, he had https://review.opendev.org/#/c/708035/ for that reason I think
14:58 <sean-k-mooney> e.g. let devstack and the plugin do their thing but just tell nova not to use it
14:58 <dansmith> sean-k-mooney: probably enough, as long as that sticks and devstack or something downstream doesn't override
14:58 <sean-k-mooney> ceph is set up by https://github.com/openstack/devstack-plugin-ceph
14:58 <sean-k-mooney> but if jsuchome does something like https://review.opendev.org/#/c/724387/3/.zuul.yaml
14:59 <sean-k-mooney> which is using the local.conf [[post-config|/etc/nova/nova.conf]] mechanism to set config options that will run after the plugin
14:59 <dansmith> sean-k-mooney: will that override what the plugin does?
15:00 <sean-k-mooney> ya, I believe the order is in-tree modules, then plugins, then post-config from local.conf
15:00 <lyarwood> wait, to disable ceph on Nova that plugin has a few variables you can set in the job
15:00 <lyarwood> ENABLE_CEPH_NOVA=false etc iirc
15:00 <sean-k-mooney> that would deploy it for cinder only then?
15:00 <lyarwood> glance etc
15:00 <sean-k-mooney> or glance I guess
15:00 <lyarwood> I think that's the point of this test
15:01 <dansmith> needs to be set for glance, yeah
15:01 <lyarwood> but that might be missing the required creds to download over RBD that I assume is the point here
15:01 <sean-k-mooney> ya, so that sounds like it would work
15:01 <dansmith> right
15:01 <dansmith> he still needs all the regular rbd config, just not the imagebackend part
15:02 <sean-k-mooney> ya, so basically what lyarwood's example is doing is exactly what needs to be done.
15:02 <dansmith> yup
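(Assembled from the pieces above, the job variant being discussed might look roughly like this; the job name is made up, ENABLE_CEPH_NOVA is the devstack-plugin-ceph variable lyarwood mentions, and the post-config section carries the nova option jsuchome's change reads.)

    - job:
        name: devstack-plugin-ceph-tempest-py3-direct-download
        parent: devstack-plugin-ceph-tempest-py3
        vars:
          devstack_localrc:
            # ceph stays the backend for glance/cinder, but nova keeps
            # its default (non-rbd) image backend
            ENABLE_CEPH_NOVA: false
          devstack_local_conf:
            post-config:
              $NOVA_CONF:
                glance:
                  allowed_direct_url_schemes: rbd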
15:02 <sean-k-mooney> is jsuchome about
15:02 <sean-k-mooney> if not I can submit a patch that does that, but I don't know what to put it on top of
15:02 <dansmith> hang on
15:03 <jsuchome> sean-k-mooney: tweak a few options, mostly for nova and one for glance
15:03 <dansmith> sean-k-mooney: https://review.opendev.org/#/c/574301/
15:03 <jsuchome> (sorry I wasn't following for a while)
15:04 <sean-k-mooney> cliff notes: the devstack-ceph plugin has an env variable that you can set to disable just the nova change it does
15:04 <sean-k-mooney> so you can have it deploy ceph for just glance and cinder
15:05 <sean-k-mooney> so that should be easy to do; I'm not sure what else you need to set, but there is a simple way to set config options we can use
15:05 <sean-k-mooney> if you need to do something more involved then you need to use a pre playbook to configure the jobs properly or a local.sh script
15:06 <jsuchome> I think I need to test if nova correctly downloads the image and spawns a VM, and I need ceph as the glance backend, and a few options for glance and nova set for that
15:07 <dansmith> jsuchome: what I'm asking for is to just configure for direct download, make sure tempest runs normally, and then we can look at the logs to convince ourselves that it's working
15:07 <sean-k-mooney> ok, I'll put up a DNM patch that disables the unneeded jobs and just disables using ceph for nova
15:07 <sean-k-mooney> then we can tweak that
15:07 <dansmith> jsuchome: not a direct test that confirms it was somehow downloaded direct -- that would be very hard
15:08 <jsuchome> true, a tempest run should indeed be enough
15:08 <dansmith> (and logs)
15:08 <dansmith> jsuchome: if you think that you need to add some logs to be able to validate it from the outside, you should go ahead and do that
15:09 <dansmith> jsuchome: perhaps a LOG.debug('Downloading %(image)s from per-scheme handler %(handler)s') that we can look for
15:11 <jsuchome> There's already a "Successfully transferred using rbd" log line
15:11 <dansmith> okay cool
15:11 <jsuchome> LOG.info I believe
15:12 <dansmith> jsuchome: ah, you mean the "attempting to export" one
15:12 <dansmith> oh, no,
15:12 <dansmith> I see that one too
15:12 <dansmith> okay cool, should be covered and easy to validate
15:17 <openstackgerrit> Takashi Natsume proposed openstack/nova master: Remove six.moves  https://review.opendev.org/727224
15:18 <lyarwood> jsuchome / sean-k-mooney ; https://review.opendev.org/727225 might work
15:18 <sean-k-mooney> dansmith: so before I push this
15:18 <lyarwood> ah sorry, you're also working on it
15:18 <lyarwood> I didn't think you were
15:18 <sean-k-mooney> that is updating the plugin
15:18 <sean-k-mooney> I'm working on a patch to do it for nova
15:19 <lyarwood> it's the same either way
15:19 <sean-k-mooney> did you need to make those nova conf changes too
15:19 <lyarwood> well no, you'd just use the job in Nova
15:19 <lyarwood> this change already pulls in the Nova changes
15:19 <dansmith> jsuchome: I dunno where you got so much karma, but enjoy this overly exuberant clamor to help while you can :D
15:20 <lyarwood> I'm just trying to hack around with zuul as much as I can at the moment
15:20 <lyarwood> I've still got the live migration job migration to zuulv3 in my backlog
15:20 <tosky> lyarwood: is there something specific in that job compared to a "usual" devstack job? Maybe I can help
15:21 <lyarwood> tosky: the LM jobs?
15:21 <tosky> lyarwood: that one (and in general any job that requires moving from legacy to native zuulv3)
15:23 <lyarwood> tosky: http://lists.openstack.org/pipermail/openstack-discuss/2020-March/013207.html covers some of it, it's currently three jobs in one
15:23 <lyarwood> tosky: I ran out of time in U to break things up, but feel free to poke the changes early in V if you have time before I get around to it
15:24 <tosky> lyarwood: sure - I will ping everyone with pending legacy jobs as part of the work on the community goal
15:25 <openstackgerrit> sean mooney proposed openstack/nova master: [DNM] ceph direct download testing  https://review.opendev.org/727228
15:25 <sean-k-mooney> ok, well that is just pushed to have it there. it will do the exact same thing as lyarwood's version
15:25 <sean-k-mooney> it's just in nova rather than the devstack plugin
15:26 <lyarwood> tosky: ack, understood, thanks :)
15:31 <jsuchome> lyarwood: sean-k-mooney: thanks a lot ... so, where exactly do I set those nova+glance config values? under $NOVA_CPU_CONF for nova? And the one for glance?
15:32 <openstackgerrit> Jiri Suchomel proposed openstack/nova-specs master: Add spec for downloading images via RBD  https://review.opendev.org/572805
15:35 <lyarwood> jsuchome: glance should be configured automatically, we only had to add the two specific configurables your change is using in Nova as we disabled the configuration of Nova by the ceph devstack plugin.
15:36 <lyarwood> just corrected a mistake in my change, I'll check back in on it later.
15:38 <jsuchome> lyarwood: for glance I need DEFAULT.show_image_direct_url=true, are you telling me this is set by default?
15:38 <sean-k-mooney> I'm not sure it's in the base job
15:38 <sean-k-mooney> so we might need to set that
15:38 <jsuchome> that's what I would expect
15:38 <dansmith> jsuchome: I think it must be, otherwise nova wouldn't be able to tell that the image is in the same rbd it is configured for, with the rbd backend
15:39 <jsuchome> I see, so you mean it already _is_ default for ceph jobs?
15:39 <dansmith> i.e. we already look at the locations field of the image, that's what you need right?
15:39 <dansmith> jsuchome: I expect
15:39 <sean-k-mooney> dansmith: we only have one cluster set up by the plugin so that would not be an issue we would see in the gate
15:39 <dansmith> sean-k-mooney: if we look at the locations and require one that matches, it would
15:39 <jsuchome> yes, that's for location
15:39 <sean-k-mooney> i.e. it will always be the same
15:40 <dansmith> but I'm looking
15:40 <sean-k-mooney> I mean it's not hard to add, but I'll check the parent jobs and see if we are doing it or not
15:41 <sean-k-mooney> or actually the plugin is likely where that would be set
15:41 <dansmith> https://zuul.opendev.org/t/openstack/build/fbd122f546684ffebdca5f2f73b6167c/log/controller/logs/etc/glance/glance-api_conf.txt
15:41 <jsuchome> lyarwood: sean-k-mooney: also this for the nova conf: glance.allowed_direct_url_schemes = rbd ... as this is (was! and should not be anymore) a deprecated option, I would not expect it to be set
15:41 <dansmith> show_multiple_locations = True
15:41 <sean-k-mooney> https://github.com/openstack/devstack-plugin-ceph/commit/62ea04c8d180c5419300ddc7784c5c46f9fcbdad
15:42 <dansmith> I think ^ is what we need for the check we do
15:42 <dansmith> not sure the difference between that and the show-direct-url one
15:42 <dansmith> because, AFAIK, the locations are the direct url
15:43 <dansmith> ah,
15:43 <jsuchome> so show_multiple_locations also implies the location info is present?
15:43 <dansmith> that commit implies that one implies the other
15:43 <dansmith> yeah
15:45 <sean-k-mooney> lyarwood's job is currently running https://zuul.openstack.org/stream/9b459dcfc0cd46d0a13ce5a5a1be2afe?logfile=console.log so we will know one way or another in about an hour
15:46 <sean-k-mooney> maybe two
15:46 <sean-k-mooney> but if we need to add anything else we can do that quickly once it's done
15:47 <jsuchome> well, that allowed_direct_url_schemes is a must, we only fire the new download handler if it is set
15:47 <sean-k-mooney> so we need that in nova; I can see if that is set by the plugin
15:48 <dansmith> it won't be
15:48 <dansmith> I thought lyarwood did it in his change
15:48 <lyarwood> sorry, I missed that
15:48 <lyarwood> [glance]/allowed_direct_url_schemes=['rbd']?
15:49 <sean-k-mooney> ya, I was about to ask the same https://opendev.org/openstack/nova/src/branch/master/nova/conf/glance.py#L64
15:49 <dansmith> lyarwood: not sure what group, but yes, hang on
15:49 <sean-k-mooney> it's in the glance group
15:49 <jsuchome> yep
15:50 <dansmith> if 'rbd' in CONF.glance.allowed_direct_url_schemes:
15:50 <sean-k-mooney> do we plan to turn this on by default
15:50 <dansmith> no
15:50 <jsuchome> it should be documented
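(Per the discussion above, that documentation would boil down to one option on each side, assuming the compute node already has the ceph credentials in place:)

    # glance-api.conf
    [DEFAULT]
    show_image_direct_url = True

    # nova.conf on the compute nodes
    [glance]
    allowed_direct_url_schemes = rbd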
15:50 <sean-k-mooney> ok, we probably should remove the deprecation of that option in jsuchome's patch
15:51 <sean-k-mooney> I assume it already does that
15:51 <dansmith> the spec says we will undeprecate it, yes
15:51 <sean-k-mooney> k
15:51 <jsuchome> does it need a release note?
15:51 <dansmith> yes
15:51 <sean-k-mooney> the feature would even without the undeprecation
15:52 <sean-k-mooney> the same one can cover both
15:52 <jsuchome> ok, then it's another change for 574301
15:53 <dansmith> jsuchome: I probably wouldn't pile that in there personally
15:56 <jsuchome> OK, another patch, no problem
15:57 <dansmith> jsuchome: at least for the moment, we can always squash
15:57 <dansmith> I think that by the time you get all the test stuff in this patch it will be plenty meaty
15:57 <dansmith> one could argue that it could go in my first patch to remove the plug point, but that just means it has two semi-related atomic changes
15:57 <dansmith> patches are cheap
15:58 <jsuchome> yeah, and for tomorrow I've got to work on the tests, I can see they are not enough
15:58 <dansmith> cool
16:09 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: WIP: allow disabling image cache for raw images  https://review.opendev.org/727261
16:09 <gibi> dansmith, sean-k-mooney: I looked at how easy it is to disable the image cache and this is my first stab at it (seems to work in devstack) ^^
16:10 <gibi> I will have to disappear now but feedback is appreciated
16:11 <dansmith> gibi: I don't understand that
16:13 <sean-k-mooney> dansmith: I assumed it was just me :)
16:14 <sean-k-mooney> also I expected to not understand that when looking at that code
16:18 <dansmith> yeah
16:19 <dansmith> I'll have to get all dug-in to that code again to really be able to speak intelligently about it,
16:19 <dansmith> but that surely seems to be basically doing the same thing as above at first glance
16:22 <dansmith> sean-k-mooney: ah, maybe because this is the Flat implementation
16:22 <dansmith> but I really thought that it still cached even though it flattened the image before giving it to the instance, which this doesn't seem to change
16:23 <sean-k-mooney> dansmith: it is cached
16:23 <sean-k-mooney> we copy it
16:23 <sean-k-mooney> on line 598
16:24 <sean-k-mooney> https://review.opendev.org/#/c/727261/1/nova/virt/libvirt/imagebackend.py@598
16:24 <dansmith> yeah
16:24 <dansmith> I wonder if gibi tested this and we're missing something, or he's assuming something else
16:24 <sean-k-mooney> well, this would also have to be done for qcow right
16:24 <dansmith> sean-k-mooney: we only copy if it doesn't exist, and he's passing self.path as the target
16:24 <dansmith> but, so does the "if generating" case above
16:25 <sean-k-mooney> right, so if self.path, which should be the instance disk path, does not exist
16:25 <dansmith> so I wonder if we're normally in Flat to grab the base image, and this makes for another case where we just download the image to our target when not caching and that somehow bypasses,
16:25 <sean-k-mooney> before we would take the else path and create a copy from the base path
16:25 <dansmith> but I think the call path to the caching is too loopy to tell that
16:26 <sean-k-mooney> I think this will actually work, but only for the flat backend
16:26 <dansmith> disabling the image cache entirely is also somewhat of a way-too-big hammer to solve this problem
16:26 <dansmith> it's a workaround maybe, but it's really a terrible one
16:26 <sean-k-mooney> ya
16:26 <sean-k-mooney> did you see my conversation with gibi this morning
16:26 <dansmith> I did
16:26 <dansmith> well, I skimmed it
16:27 <sean-k-mooney> cool, what do you think of the idea of creating allocations for the cached images in placement
16:27 <dansmith> sean-k-mooney: I suggested that yesterday
16:27 <sean-k-mooney> so we can track how much is being used
16:27 <sean-k-mooney> ok cool
16:27 <dansmith> we have to be able to do it separately if the cache is on a different filesystem though,
16:27 <dansmith> which will get messy, especially if you move it later
16:27 <sean-k-mooney> ya
16:27 <dansmith> and we'll need new healing stuff to make sure we can recover from getting out of sync
16:28 <sean-k-mooney> and if people put the cache on nfs or something it will be even worse
16:28 <dansmith> yes
dansmithso yeah, it's a thing, but it's not trivial for sure16:28
sean-k-mooneywhen you say self heling are you thinking nova audit or a periodic taks in the agent16:28
dansmithyeah, just anything that causes us to leak some allocations for images that are no longer cached,16:29
dansmithand also notice that new images are on disk that aren't allocated16:29
dansmithbecause we already tell customers they can sideload images into the disk cache and nova will (rightly) use them properly16:29
dansmithbut wouldn't have allocations for them16:29
sean-k-mooneydansmith: ideally i was thinking as well that it would make the caching best effort. e.g. it would check if the image was in the cache; if not, check if it can create an allocation for the image, cache it if it can, and don't cache if it can't16:29
dansmithyes, but if it can't then it has to delete that allocation of course,16:30
dansmithand if everything goes nuts during that, we have to be able to heal away those stale ones when we reboot16:30
sean-k-mooneydansmith: do we actually support the sideloading or does it just happen to work16:30
dansmithand if images get sideloaded, we have to notice, and if images get locally deleted, we have to notice16:30
dansmithsean-k-mooney: it does work and we (redhat) have prescribed it in a few cases :)16:30
sean-k-mooney.... im glad you have added the image caching feature in the api so there's an alternative now16:31
dansmithand I know people have in the past manually purged images from that cache before the timer fires16:31
*** ccamacho has quit IRC16:31
dansmithsean-k-mooney: yep, but that only works (by design) per aggregate, and doesn't let you purge,16:31
sean-k-mooneyya16:31
sean-k-mooneystill an improvement16:32
dansmithso I think it would be foolish and fragile to not be able to reconcile the state of the disk with the other system16:32
dansmithfor sure16:32
sean-k-mooneywould you be ok with the compute agent calling placement to make those allocations and clean them up? i think it should be fine since it's already updating placement in the update_available_resource function, just not sure if i missed anything16:34
sean-k-mooneythe compute node already needs to be able to reach the placement api so it really should not be much of a change in that regard16:35
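(For reference, a sketch of what "the compute agent calling placement" might look like at the API level: one DISK_GB allocation per cached image, written via PUT /allocations/{consumer_uuid} at placement microversion 1.28+. The per-image consumer UUID scheme is an assumption, not something nova does today:)

    import uuid

    def image_cache_allocation(rp_uuid, image_id, image_size_gb,
                               project_id, user_id):
        """Build the body for PUT /allocations/{consumer_uuid} that would
        reserve disk for one cached image (illustrative sketch only)."""
        # Derive a stable consumer UUID from the image id (assumption).
        consumer_uuid = uuid.uuid5(uuid.NAMESPACE_URL,
                                   'nova-image-cache:' + image_id)
        payload = {
            'allocations': {
                rp_uuid: {'resources': {'DISK_GB': image_size_gb}},
            },
            'project_id': project_id,
            'user_id': user_id,
            # None/null means "new consumer" to the placement API.
            'consumer_generation': None,
        }
        return consumer_uuid, payload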
*** ttsiouts has joined #openstack-nova16:35
*** evrardjp has quit IRC16:36
*** evrardjp has joined #openstack-nova16:36
dansmiththe compute node is the _only_ thing that _could_ do it...16:37
sean-k-mooneywell normally we do the allocation candidate request in the conductor, right, and we claim the allocation before we get to the compute16:39
sean-k-mooneyso the second allocation for the cached image, and the claim, would have to be done on the compute node16:40
*** dpawlik has quit IRC16:40
sean-k-mooneybut ya the compute node is really the only thing that could keep them in sync16:40
dansmithbut nothing outside the compute node knows about the state of the cache16:41
*** udesale_ has quit IRC16:41
dansmithso nothing else would have any idea if an image is cached or could be cached16:41
sean-k-mooneyyep16:41
sean-k-mooneyand even if they did it would be racy16:41
dansmithyou mean "even if they tried to guess" :)16:41
sean-k-mooneyso that means the healing task could not be part of nova audit16:41
dansmithno16:42
sean-k-mooneyit would have to be in the compute manager which is fine16:42
dansmithimage cache management is kinda weirdly split between the compute manager and virt driver16:43
dansmithwhich means we probably need some extra stuff between them I think16:43
dansmithI'd have to go look16:43
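(A sketch of the healing pass being described, with every name hypothetical: compare the cache directory with what placement thinks is allocated, and fix drift in both directions:)

    import os

    def reconcile_image_cache(cache_dir, allocated_image_ids,
                              create_allocation, delete_allocation):
        """Heal drift between cached images on disk and their placement
        allocations (illustrative only; the callables are assumed)."""
        on_disk = set(os.listdir(cache_dir))
        allocated = set(allocated_image_ids)
        for image_id in allocated - on_disk:
            # Image was purged or manually deleted: drop the stale allocation.
            delete_allocation(image_id)
        for image_id in on_disk - allocated:
            # Image appeared (e.g. sideloaded): account for it.
            create_allocation(image_id)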
dansmiththis *would* be a pretty heavyweight new addition, to be clear16:43
dansmithit's unfortunate that we'd need to do this, IMHO, given the complexity required16:44
sean-k-mooneyya16:44
sean-k-mooneydo you see another path forward beyond just disabling the cache16:44
dansmithso we need to make sure we think this is really worth all of that, other than kinda talking our way out of it16:44
sean-k-mooneyor putting a size limit on the cache and conditionally disabling16:45
dansmithI don't really have any better ideas, no, I just don't like this one enough to be excited about it16:45
dansmithwe should consider some other ideas before we pull the trigger on this I mean16:46
dansmithlike, we *could* look for a-cs that have $imgsize+$flavor.root available disk16:47
dansmithwhich may generate some operator confusion, and will definitely prevent scheduling the last byte of disk space16:47
dansmithbut also makes some sense if you explain it to someone: there has to be enough disk for the image and the root, even if the image might be cached16:48
dansmithand that could be a behavior you enable with a pre-filter, which tries to avoid situations like bfv16:49
sean-k-mooneydansmith: we have 5 slightly different implementations of caching in that module16:49
sean-k-mooneyeach image backend is slightly different but mostly the same16:49
sean-k-mooneydansmith: could we do that and then shrink the allocation16:50
*** ttsiouts has quit IRC16:50
*** ttsiouts has joined #openstack-nova16:51
dansmithsean-k-mooney: right, we'd not allocate that much, just look for hosts with enough to cover it16:51
sean-k-mooneyso request $imgsize+$flavor.root, then boot the instance and shrink the DISK_GB to $flavor.root16:52
dansmithexcept in the most pathological cases, we'd be fine.. you could come up with a race scenario, but it'd be very very targeted16:52
dansmithit's also something we could try and roll back without having to change or migrate data,16:52
sean-k-mooneyya its worth a try16:53
dansmithwhereas the new allocations-per-image thing would be something we have to live with and migrate for a while if it doesn't pan out16:53
sean-k-mooneyor at least considering16:53
sean-k-mooneymost of the time $imgsize+$flavor.root is not going to cause boot failure either16:54
sean-k-mooneyas it will only be an issue if the cloud is very full16:54
dansmithright, and if it was, it's because you're trying to schedule the last byte of disk, which is something we say is not in our project scope16:54
sean-k-mooneyif it's a configurable prefilter then those that want every last GB could opt out16:54
sean-k-mooneywell ya that too16:54
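(The arithmetic of the prefilter idea, as a sketch: schedule against image size plus root disk, then shrink the claim back to flavor.root_gb once a host is picked. Names here are illustrative:)

    import math

    GiB = 1024 ** 3

    def requested_disk_gb(flavor_root_gb, image_size_bytes):
        """DISK_GB to request from placement under the proposed prefilter."""
        return flavor_root_gb + math.ceil(image_size_bytes / GiB)

    # e.g. a 3.2 GiB image with a 20 GB root disk schedules against 24 GB;
    # after the claim succeeds, the allocation would be shrunk back to 20.
    assert requested_disk_gb(20, int(3.2 * GiB)) == 24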
*** ttsiouts has quit IRC16:55
sean-k-mooneywell it sounds like we have two lightweight things: a) disable the cache, b) look for $imgsize+$flavor.root in the placement query and shrink to $flavor.root16:56
dansmithyeah, so maybe if gibi really has a do-not-cache fix here, and we provide that prefilter, maybe that's good enough for the moment16:56
dansmithyep :)16:56
sean-k-mooneyand then we could look at the allocation per image later if we needed to16:56
sean-k-mooneyya i think he needs to copy-paste it to the 4 other locations i commented on but it might work16:57
dansmithyes, I'd be much happier punting that out to a last-resort type of situation16:57
*** vishalmanchanda has quit IRC16:57
sean-k-mooneythis is a part of the code i would normally ping mdbooth or lyarwood to look at16:57
sean-k-mooneyok well im going to grab food o/16:58
dansmithaye dee ohs16:59
* lyarwood catches the hospital pass from sean-k-mooney16:59
lyarwoodoh joy the cache manager16:59
sean-k-mooneylyarwood: the intent of that patch is to have a config option to turn it off17:00
sean-k-mooneywell, to work around the larger bug17:01
sean-k-mooneyi think i commented on the other places where it also needs to be done but i probably missed one, and i also don't really understand the calling code paths so it's just a guess17:01
dansmithhonestly, as much as I trust you guys, I'd have to test it myself before I believed you17:03
dansmithjust because I know how loopy all that code is17:03
*** derekh has quit IRC17:03
dansmithit's partial'd up the butthole17:03
*** alistarle has joined #openstack-nova17:04
lyarwoodyup same I'd have to play around with this, at first glance I'd be worried about this racing with multiple requests to spawn from the same image tbh17:05
lyarwoodbut I'm likely missing locking somewhere in the imagebackend or driver that stops this17:05
dansmithwell, this is trying to avoid ever downloading the image to the base location,17:06
dansmithso it really shouldn't be able to race because only one instance is booting per instance uuid at a time obviously17:06
dansmithbut the call path to know what self.path is here is the critical bit,17:06
lyarwoodah so it's not caching and then copying?17:06
lyarwoodmy bad17:06
dansmithbecause normally that is the base image path17:06
lyarwoodright understood17:07
dansmithlyarwood: that's the assertion, I just don't know how it got to that point here17:07
*** salmankhan has quit IRC17:17
*** priteau has quit IRC17:17
*** ralonsoh has joined #openstack-nova17:19
*** songwenping__ has joined #openstack-nova17:20
*** songwenping_ has quit IRC17:24
*** iurygregory has quit IRC17:27
*** ralonsoh has quit IRC17:28
*** ociuhandu has quit IRC17:35
*** ociuhandu has joined #openstack-nova17:37
*** ociuhandu has quit IRC17:37
*** ociuhandu has joined #openstack-nova17:37
*** ociuhandu has quit IRC17:48
*** ralonsoh has joined #openstack-nova17:48
*** ociuhandu has joined #openstack-nova17:51
*** nightmare_unreal has quit IRC17:54
*** ociuhandu has quit IRC17:55
*** sapd1_x has quit IRC18:03
*** vishalmanchanda has joined #openstack-nova18:04
*** dustinc has joined #openstack-nova18:05
*** JamesBen_ has quit IRC18:06
openstackgerritGhanshyam Mann proposed openstack/nova master: Bump hacking min version to 3.0.1  https://review.opendev.org/72734718:08
openstackgerritLee Yarwood proposed openstack/nova master: Add functional test for bug 1550919  https://review.opendev.org/63129418:15
openstackbug 1550919 in OpenStack Compute (nova) "[Libvirt]Evacuate fail may cause disk image be deleted" [Medium,In progress] https://launchpad.net/bugs/1550919 - Assigned to Matthew Booth (mbooth-9)18:15
openstackgerritLee Yarwood proposed openstack/nova master: libvirt: Don't delete disks on shared storage during evacuate  https://review.opendev.org/57884618:15
*** happyhemant has quit IRC18:15
*** alistarle has quit IRC18:16
*** ralonsoh has quit IRC18:17
*** ociuhandu has joined #openstack-nova18:18
*** ralonsoh has joined #openstack-nova18:22
*** jamesden_ has joined #openstack-nova18:26
*** jamesdenton has quit IRC18:27
*** jamesden_ is now known as jamesdenton18:28
*** iurygregory has joined #openstack-nova18:31
*** ralonsoh has quit IRC18:43
melwittgmann: I've got a queens backport that consistently fails the releasenotes job bc of a job timeout. we are wondering whether, with the addition of ussuri releasenotes, it's just too much for the job to get done in the limited time? https://review.opendev.org/72282218:52
gmannmelwitt: it takes a lot of time to build all the renos but it should not time out. it passes on master.18:54
gmannlet me check job timeout for that18:54
melwittyeah... I just don't get how my backport could cause this and only cause it on queens? confused18:54
melwittit got through rocky, stein, train without issue18:55
elodmelwitt: when those got merged there wasn't a stable/ussuri yet, was there?19:00
elodthis shows some master fails too: https://zuul.opendev.org/t/openstack/builds?job_name=build-openstack-releasenotes&project=openstack/nova19:00
melwittelod: no there wasn't, was merged before branching19:01
melwittoh you mean the stable merges. let me check19:01
melwitttrain change merged on Apr 18 and I assume stable/ussuri was cut on Apr 2419:03
gmannreno job timeout is 60 min which should be more than enough  https://github.com/openstack/openstack-zuul-jobs/blob/13ef0adb415e6296fe5c73d9ff9d1ca557843c54/zuul.d/jobs.yaml#L63819:03
melwittstein change merged on Apr 2419:03
melwittrocky change merged on May 1119:03
melwittthe rocky change build does _not_ show processing of stable/ussuri notes https://zuul.opendev.org/t/openstack/build/bd1684e2f6d4460a985d822dfe773b81/log/job-output.txt#580-58319:06
melwittwhereas the failing queens change build _does_ show processing of stable/ussuri notes https://zuul.opendev.org/t/openstack/build/e63085cb0ebf4e0ea8d91e07c94557ae/log/job-output.txt#536-542 so I think elod is right19:06
*** ociuhandu has quit IRC19:12
*** ociuhandu has joined #openstack-nova19:12
*** ociuhandu has quit IRC19:13
*** ociuhandu has joined #openstack-nova19:13
gmannmelwitt: elod it seems the job runs close to the edge most of the time even when it passes. >50 min for master reno too, and 59m 52s in the case of https://review.opendev.org/#/c/725146/19:15
melwittgmann: livin on the edge. I'm wondering the obvious question, is there a nice way we could run the job for only the branches relevant to the branch it's running on? eventually we're gonna have too many reno branches and the job will take longer and longer19:16
gmannmelwitt: you mean only build the reno for the proposed branch?19:17
melwittgmann: something like that yeah. or some other way to avoid wasting resources on building irrelevant renos19:17
melwittjust thinking out loud19:18
melwittwe could increase the job timeout obvs but aside from that I'm wondering about the future when we have more and more reno branches19:19
*** ociuhandu has quit IRC19:23
*** ociuhandu has joined #openstack-nova19:24
gmannmelwitt: yeah, i remember we were facing the same issue on tempest, which has 10-15 release notes branches but might be fewer than nova. it was long ago so i cannot remember whether indexing or a per-release directory fixed it19:25
melwittah... looks like nova has "only" 11, liberty-ussuri + unreleased19:27
gmannmaybe doug can help with some trick, he is not here but we can find him on openstack-dev19:29
*** ociuhandu has quit IRC19:29
melwittok, I just sent him a question over there19:38
melwittthanks for suggesting19:38
*** slaweq has quit IRC19:57
*** slaweq has joined #openstack-nova20:01
gmannmelwitt: along with the parallel run option, i think we can remove all these pre-newton-eol renos? anyway, those will still be present in the older tag - https://github.com/openstack/nova/tree/newton-eol/releasenotes/notes20:02
melwittgmann: oh you mean treat them as static? yeah I would think so. but keep in mind I don't understand this super well so I might be misunderstanding what you're saying :P20:06
*** slaweq has quit IRC20:06
gmannmelwitt: i was thinking to remove them as i'm not sure how to make them static. but removing will remove those from this site too, which is an issue, or might be ok  - https://docs.openstack.org/releasenotes/nova20:08
melwittoh, yeah I wouldn't want to remove them from the site20:08
melwittpersonally I'm leaning toward the parallel idea because that would be easier. but yes it's unreleased. I wonder if we could use LIBS_FROM_GIT to do it in the meantime20:10
melwittand also note that sean said he tried making old stuff static but it didn't help bc most of the time looked to be spent in a rst => html conversion20:11
melwitt(I had to re-read it to pick that out)20:11
gmannmelwitt: LIBS_FROM_GIT or required-projects need a job update but this should also work, as devstack should check out the change in the Depends-On: https://review.opendev.org/#/c/724666/20:12
*** nweinber has quit IRC20:14
melwittgmann: for a temporary test yes, but not for a mergeable fix, right?20:14
smcginnisThe suggestion for making those older branches static would be to call reno to emit the generated rst directly and use that to replace the current page that has the reno sphinx directive. That way it only needs to convert rst to HTML, and not have to generate the rst via the reno directive first.20:16
smcginnisThat should save a little time, even if there were not a lot of release notes in those older series.20:17
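(Roughly, the freeze smcginnis describes could look like the following, assuming reno's report command writes rst to stdout; the exact flags and paths are illustrative:)

    # Emit the already-generated notes for an old series as plain rst,
    # then use that to replace the page containing the
    # ".. release-notes::" sphinx directive:
    reno report . --branch origin/stable/newton > newton-frozen.rst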
*** dpawlik has joined #openstack-nova20:17
gmannmelwitt: yeah, for a mergeable fix, if that improves things, we can ask for a new reno release. if we do LIBS_FROM_GIT it might need more testing on the reno master gate.20:17
melwittgmann: yeah good point20:17
openstackgerritmelanie witt proposed openstack/nova master: DNM Try out running sphinx-build in parallel for releasenotes  https://review.opendev.org/72742920:21
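(The gist of that DNM experiment, assuming it relies on sphinx's own parallel build support; the exact invocation in the patch may differ:)

    # -j auto lets sphinx-build fan the html writing out across CPUs:
    sphinx-build -W -j auto -b html releasenotes/source releasenotes/build/html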
*** slaweq has joined #openstack-nova20:35
*** vishalmanchanda has quit IRC20:37
*** awalende_ has quit IRC20:40
*** ociuhandu has joined #openstack-nova20:40
*** jsuchome has quit IRC20:41
*** awalende has joined #openstack-nova20:42
*** ociuhandu has quit IRC20:53
*** ociuhandu has joined #openstack-nova20:54
*** ociuhandu has quit IRC20:54
*** ociuhandu has joined #openstack-nova20:54
*** jamesdenton has quit IRC20:59
melwittguh, 205 changes in the gate ...21:02
*** jamesdenton has joined #openstack-nova21:03
gmannyeah, the pep8 fix attempt is also stuck for a long time21:04
*** slaweq has quit IRC21:13
*** slaweq has joined #openstack-nova21:25
*** slaweq has quit IRC21:30
*** awalende has quit IRC21:36
*** slaweq has joined #openstack-nova21:40
*** ociuhandu has quit IRC21:41
*** ociuhandu has joined #openstack-nova21:42
*** ociuhandu has quit IRC21:42
*** ociuhandu has joined #openstack-nova21:43
*** slaweq has quit IRC21:44
*** ociuhandu has quit IRC21:52
openstackgerritmelanie witt proposed openstack/nova master: DNM Try out running sphinx-build in parallel for releasenotes  https://review.opendev.org/72742921:53
*** ociuhandu has joined #openstack-nova21:54
*** ociuhandu has quit IRC21:54
*** ociuhandu has joined #openstack-nova21:54
*** ociuhandu has quit IRC22:05
*** ociuhandu has joined #openstack-nova22:05
*** ociuhandu has quit IRC22:06
*** ociuhandu has joined #openstack-nova22:06
*** ociuhandu has quit IRC22:16
*** ociuhandu has joined #openstack-nova22:17
*** ociuhandu has quit IRC22:17
*** ociuhandu has joined #openstack-nova22:18
*** ociuhandu has quit IRC22:28
*** ociuhandu has joined #openstack-nova22:29
*** ociuhandu has quit IRC22:29
*** ociuhandu has joined #openstack-nova22:29
*** avolkov has quit IRC22:29
*** martinkennelly has quit IRC22:37
*** ociuhandu has quit IRC22:39
*** mriedem has left #openstack-nova22:39
*** ociuhandu has joined #openstack-nova22:41
*** ociuhandu has quit IRC22:51
*** ociuhandu has joined #openstack-nova22:52
*** ociuhandu has quit IRC22:52
*** ociuhandu has joined #openstack-nova22:53
*** tkajinam has joined #openstack-nova22:54
*** raildo_ has joined #openstack-nova22:57
*** raildo has quit IRC23:00
*** ociuhandu has quit IRC23:03
*** ociuhandu has joined #openstack-nova23:04
openstackgerritTakashi Natsume proposed openstack/nova master: Remove six.moves  https://review.opendev.org/72722423:13
*** tosky has quit IRC23:14
*** ociuhandu has quit IRC23:15
*** ociuhandu has joined #openstack-nova23:16
*** rcernin has joined #openstack-nova23:19
*** ociuhandu has quit IRC23:21
*** raildo_ has quit IRC23:33
*** awalende has joined #openstack-nova23:36
*** awalende has quit IRC23:41
*** mlavalle has quit IRC23:53
openstackgerritTony Su proposed openstack/nova-specs master: Re-propose provider-config-file spec for Victoria  https://review.opendev.org/72578823:56
