Thursday, 2021-02-04

*** hamalq has quit IRC00:12
*** hamalq has joined #openstack-nova00:12
*** rchurch has quit IRC00:23
*** ociuhandu has joined #openstack-nova00:26
*** mlavalle has quit IRC00:26
*** rchurch has joined #openstack-nova00:26
*** mlavalle has joined #openstack-nova00:27
*** ociuhandu has quit IRC00:30
*** LinPeiWen has joined #openstack-nova00:33
prometheanfiremelwitt: thanks :D00:45
*** bbowen has joined #openstack-nova01:02
*** k_mouza has joined #openstack-nova01:30
*** k_mouza has quit IRC01:35
*** aarents has quit IRC01:36
*** aarents has joined #openstack-nova01:36
*** sapd1 has joined #openstack-nova01:39
*** martinkennelly has joined #openstack-nova01:58
*** xinranwang has joined #openstack-nova01:59
*** martinkennelly has quit IRC02:00
*** martinkennelly has joined #openstack-nova02:00
*** tbachman has quit IRC02:02
*** dviroel has quit IRC02:13
*** macz_ has quit IRC02:15
*** rcernin has joined #openstack-nova02:39
*** hamalq has quit IRC02:45
*** spatel has joined #openstack-nova02:47
*** LinPeiWen has quit IRC03:02
*** khomesh24 has joined #openstack-nova03:26
*** martinkennelly has quit IRC03:26
*** macz_ has joined #openstack-nova03:33
*** macz_ has quit IRC03:38
*** mkrai has joined #openstack-nova03:43
*** dklyle has quit IRC04:06
*** LinPeiWen47 has joined #openstack-nova04:07
*** LinPeiWen47 has quit IRC04:17
*** spatel has quit IRC04:25
*** links has joined #openstack-nova04:36
*** nweinber has joined #openstack-nova04:37
*** nweinber has quit IRC04:43
*** hemanth_n has joined #openstack-nova05:16
*** ratailor has joined #openstack-nova05:16
*** vishalmanchanda has joined #openstack-nova05:19
*** whoami-rajat__ has joined #openstack-nova05:23
*** LinPeiWen has joined #openstack-nova05:25
*** xinranwang has quit IRC05:39
*** k_mouza has joined #openstack-nova05:54
*** yonglihe has quit IRC05:57
*** k_mouza has quit IRC05:58
*** pmannidi has joined #openstack-nova06:32
openstackgerritsean mooney proposed openstack/nova master: [WIP] harden image metadata storage  https://review.opendev.org/c/openstack/nova/+/77404406:34
*** mkrai has quit IRC07:01
*** mkrai_ has joined #openstack-nova07:01
*** zzzeek has quit IRC07:07
*** khomesh24 has quit IRC07:07
*** zzzeek has joined #openstack-nova07:08
*** lpetrut has joined #openstack-nova07:11
*** slaweq has joined #openstack-nova07:15
*** mkrai_ has quit IRC07:20
*** khomesh24 has joined #openstack-nova07:28
*** pmannidi has quit IRC07:31
*** pmannidi has joined #openstack-nova07:34
*** ralonsoh has joined #openstack-nova07:36
*** rpittau|afk is now known as rpittau07:39
*** xek has joined #openstack-nova07:51
gibigood morning08:02
*** mkrai_ has joined #openstack-nova08:16
*** tesseract has joined #openstack-nova08:17
*** mkrai_ has quit IRC08:18
*** mkrai_ has joined #openstack-nova08:18
openstackgerritYongli He proposed openstack/nova master: smartnic support  https://review.opendev.org/c/openstack/nova/+/75894408:21
*** mkrai_ has quit IRC08:23
*** mkrai_ has joined #openstack-nova08:29
*** cgoncalves has quit IRC08:31
*** cgoncalves has joined #openstack-nova08:33
*** haleyb has quit IRC08:34
*** haleyb has joined #openstack-nova08:37
*** pmannidi has quit IRC08:43
*** pmannidi has joined #openstack-nova08:45
*** tosky has joined #openstack-nova08:46
*** andrewbonney has joined #openstack-nova08:50
*** LinPeiWen has quit IRC08:53
*** tesseract has quit IRC08:59
*** tesseract has joined #openstack-nova09:01
*** tobias-urdin has joined #openstack-nova09:05
*** LinPeiWen has joined #openstack-nova09:05
*** rcernin has quit IRC09:07
*** rcernin has joined #openstack-nova09:17
*** derekh has joined #openstack-nova09:22
*** rcernin has quit IRC09:25
*** rcernin has joined #openstack-nova09:43
*** pmannidi has quit IRC10:00
*** pmannidi has joined #openstack-nova10:03
*** sapd1 has quit IRC10:09
*** ociuhandu has joined #openstack-nova10:37
*** mkrai_ has quit IRC10:43
*** mkrai_ has joined #openstack-nova10:43
*** dviroel has joined #openstack-nova10:47
*** ociuhandu has quit IRC10:48
*** rcernin has quit IRC10:48
*** ociuhandu has joined #openstack-nova10:49
*** rcernin has joined #openstack-nova10:53
*** ociuhandu has quit IRC10:55
*** ociuhandu has joined #openstack-nova10:57
*** rcernin has quit IRC10:57
*** dtantsur|afk is now known as dtantsur11:01
*** pmannidi has quit IRC11:14
*** pmannidi has joined #openstack-nova11:18
*** k_mouza has joined #openstack-nova11:18
*** mkrai_ has quit IRC11:21
lyarwoodstephenfin: morning, do you know if there's a standard way of checking the config of an instance in the libvirt func tests before I go looking or hacking around?11:39
* lyarwood assumes a pass-through mock saving the config object somewhere would do11:39
stephenfinwdym the "config"?11:39
stephenfinAs in the XML generated by nova or attributes of the instance?11:40
lyarwoodstephenfin: well that or the GuestConfig (?) objects we generate that in turn create the XML11:40
lyarwoodstephenfin: I don't really want to assert things in the XML if we have objects that work just as well tbh11:40
lyarwoodLibvirtConfigObjects is what I mean sorry11:41
lyarwoodnp if there's no prior art, I'll hack something up now11:41
stephenfinHmm, I don't think so. The only example I can think of is the NUMA live migration tests, but that intercepts the live migration API call to validate the XML11:41
lyarwoodcool yeah I think that's the one I saw yesterday, I'll hack something up now11:42
lyarwood_get_guest_config should work tbh11:43
*** ociuhandu_ has joined #openstack-nova11:46
*** rcernin has joined #openstack-nova11:49
*** ociuhandu has quit IRC11:50
*** ociuhandu_ has quit IRC11:50
*** ociuhandu has joined #openstack-nova11:57
*** ociuhandu has quit IRC12:02
sean-k-mooneylyarwood: there isnt really one12:14
sean-k-mooneyunless you have a reference to the driver12:15
sean-k-mooneynormally the func tests start the services12:15
sean-k-mooneyand there is no rpc call that you can make to the compute manager to get it, so you need to use the service instance to get the driver and then call functions on it.12:15
sean-k-mooneybut what you can do is use the logs12:16
sean-k-mooneyand you can technically get the domain xmls from that if you had to12:16
lyarwoodyeah I've got it working12:19
lyarwoodwe keep a reference to the driver so it's a normal passthrough mock12:19
sean-k-mooneycool12:20
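
A minimal sketch of the pass-through mock approach discussed above for capturing the generated guest config in a libvirt functional test. It assumes the test class keeps a reference to the compute service's libvirt driver; the attribute path self.computes['compute1'].driver and the os_mach_type assertion are illustrative, not nova's exact test plumbing.

    from unittest import mock

    def _capture_guest_configs(self, driver):
        """Wrap the driver's _get_guest_config so every generated
        LibvirtConfigGuest object is kept for later assertions."""
        captured = []
        real = driver._get_guest_config  # keep the unpatched bound method

        def _spy(*args, **kwargs):
            conf = real(*args, **kwargs)  # call through to the real code
            captured.append(conf)         # keep the config object for asserts
            return conf

        patcher = mock.patch.object(driver, '_get_guest_config', side_effect=_spy)
        patcher.start()
        self.addCleanup(patcher.stop)
        return captured

    # usage inside a test, after the compute service is started:
    #   configs = self._capture_guest_configs(self.computes['compute1'].driver)
    #   self._create_server(...)
    #   self.assertEqual('q35', configs[-1].os_mach_type)
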
*** hemanth_n has quit IRC12:22
*** pmannidi has quit IRC12:24
*** pmannidi has joined #openstack-nova12:27
*** k_mouza has quit IRC12:31
*** k_mouza has joined #openstack-nova12:38
*** ratailor has quit IRC12:42
lyarwoodsean-k-mooney: so https://review.opendev.org/c/openstack/nova/+/774044/1/nova/utils.py would still let hw_machine_type through right12:46
* lyarwood pushes his WIP stuff to show the func tests12:46
sean-k-mooneyyes12:47
sean-k-mooneybut i know how to fix that im debating if i want to go with a simpler fix before this one12:47
*** gryf is now known as _gryf12:47
sean-k-mooneybasically i notice we never included ramdisk_id and kernel_id when we converted image metadata to ovos12:48
*** _gryf is now known as gryf12:48
sean-k-mooneyso this became a much more involved change, involving object changes12:48
lyarwoodkk12:48
sean-k-mooneybut it was quite late/early this morning when i figured that out and didnt want to start again12:48
*** gryf is now known as niech_co_krwawy_12:49
*** niech_co_krwawy_ is now known as gryf12:49
sean-k-mooneyit will include hw_machine_type because you are using the same key as the image property - as a suffix12:49
sean-k-mooneylyarwood: if you add a prefix it would not be included12:49
lyarwoodkk, well I think I might leave it, the actual behaviour means I don't need to make any other changes to the util method fetching the machine type12:50
sean-k-mooneywhat im going to do in the simpler version is remove key = key[len(SM_IMAGE_PROP_PREFIX):]12:50
lyarwoodkk12:50
sean-k-mooneyand build the properties differently so it only contains the ones with image_12:50
sean-k-mooneylyarwood:hehe12:51
*** songwenping_ has joined #openstack-nova12:51
lyarwoodso that would cause me to also add a system_metadata lookup in that util method12:51
lyarwoodthat isn't the end of the world, it's what I had originally12:51
lyarwoodin a seperate change12:52
sean-k-mooneywell i wanted you to use image_ originally as a prefix to avoid the extra lookup12:52
sean-k-mooneywe can do it either way12:52
sean-k-mooneyi was going to look at writing the other patch to have something backportable12:52
sean-k-mooneyassuming we wanted to12:52
sean-k-mooneyif we dont then i can just finish fixing this patch12:53
lyarwoodthat's also something I could do, always storing it as SM_IMAGE_PROP_PREFIX_hw_machine_type12:53
lyarwoodand that wouldn't bork me12:53
lyarwoodwith this fix in place12:53
sean-k-mooneyya12:53
*** swp20 has quit IRC12:54
sean-k-mooneydo you have an opinion on what way i should go with the fix. should i also do the version without the ovo change12:54
lyarwoodI don't at the moment tbh12:59
lyarwoodLet me post this and grab something to eat and then I'll think it through once I'm back12:59
sean-k-mooneycool its more do we need a backportable fix or not12:59
sean-k-mooneymy current fix uses the ovo field but we forgot to add at least 2 when we created the object13:00
sean-k-mooneyso it cant be backported but i can fix it without relying on the fields too13:00
sean-k-mooneywhich would be13:00
sean-k-mooneyi can do both too so its not either or13:01
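
A toy illustration of the simpler approach sean-k-mooney describes above (only sweep system_metadata keys that carry the image_ prefix) and the consequence lyarwood notes: a bare hw_machine_type key is then not picked up and needs its own system_metadata lookup. This is illustrative Python, not nova's actual code; SM_IMAGE_PROP_PREFIX is the constant named in the linked diff, and the sample values are made up.

    SM_IMAGE_PROP_PREFIX = 'image_'

    def image_props_from_system_metadata(system_metadata):
        """Return only the entries stored with the image_ prefix,
        with the prefix stripped (the key[len(...):] step discussed above)."""
        return {
            key[len(SM_IMAGE_PROP_PREFIX):]: value
            for key, value in system_metadata.items()
            if key.startswith(SM_IMAGE_PROP_PREFIX)
        }

    sm = {
        'image_hw_disk_bus': 'virtio',          # stored from the image at boot
        'hw_machine_type': 'pc-q35-rhel8.2.0',  # stored bare, no prefix
    }
    print(image_props_from_system_metadata(sm))  # {'hw_disk_bus': 'virtio'}
    # The bare key is not swept in here, so callers would need the extra
    # system_metadata lookup mentioned above, e.g. sm.get('hw_machine_type').
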
openstackgerritLee Yarwood proposed openstack/nova master: WIP libvirt: Record the machine_type of instances in system_metadata  https://review.opendev.org/c/openstack/nova/+/76753313:02
openstackgerritLee Yarwood proposed openstack/nova master: WIP nova-manage: Add commands for managing instance machine type  https://review.opendev.org/c/openstack/nova/+/76954813:02
openstackgerritLee Yarwood proposed openstack/nova master: WIP nova-status: Add hw_machine_type check for libvirt instances  https://review.opendev.org/c/openstack/nova/+/77064313:02
lyarwoodsean-k-mooney: https://review.opendev.org/c/openstack/nova/+/767533/5/nova/tests/functional/libvirt/test_machine_type.py - comments on the tests here would be appreciated btw13:04
*** nweinber has joined #openstack-nova13:04
sean-k-mooneysure. i think i owe bauzas a review of his routed stuff first but i now have your open in front of me :)13:05
*** sapd1 has joined #openstack-nova13:06
*** mtreinish has joined #openstack-nova13:06
lyarwoodthanks13:08
* lyarwood -> lunch13:08
bauzassean-k-mooney: I'll upload a new revision https://review.opendev.org/c/openstack/nova/+/773976 today later13:09
sean-k-mooneybauzas: ok ill review lyarwood patch then now and ill look at yours when you push it13:10
bauzasthanks13:11
*** dtantsur is now known as dtantsur|brb13:14
openstackgerritBalazs Gibizer proposed openstack/nova master: Remove __unicode__() from nova unit test Exception  https://review.opendev.org/c/openstack/nova/+/76989413:26
*** rcernin has quit IRC13:43
*** ociuhandu has joined #openstack-nova13:45
*** sapd1 has quit IRC13:45
*** ociuhandu has quit IRC13:48
*** spatel has joined #openstack-nova13:49
*** ociuhandu has joined #openstack-nova13:49
*** pmannidi has quit IRC13:49
*** pmannidi has joined #openstack-nova13:51
*** ociuhandu has quit IRC13:54
*** ociuhandu has joined #openstack-nova13:55
*** songwenping__ has joined #openstack-nova14:01
*** songwenping_ has quit IRC14:05
openstackgerritLee Yarwood proposed openstack/nova master: Add regression test for bug #1908075  https://review.opendev.org/c/openstack/nova/+/76697614:05
openstackbug 1908075 in OpenStack Compute (nova) "Nova allows a non-multiattach volume to be attached to multiple instances *if* its volume state is reset by an admin" [Undecided,In progress] https://launchpad.net/bugs/1908075 - Assigned to Lee Yarwood (lyarwood)14:05
openstackgerritLee Yarwood proposed openstack/nova master: api: Reject volume attach requests when an active bdm exists  https://review.opendev.org/c/openstack/nova/+/76847214:05
openstackgerritLee Yarwood proposed openstack/nova master: fup: Merge duplicate volume attachment checks  https://review.opendev.org/c/openstack/nova/+/77338014:05
lyarwoodstephenfin: ^ updated, would you mind hitting the changes below that fup as well?14:05
*** khomesh24 has quit IRC14:08
*** ociuhandu has quit IRC14:09
*** ociuhandu has joined #openstack-nova14:09
*** dtantsur|brb is now known as dtantsur14:16
*** ociuhandu has quit IRC14:16
*** Underknowledge has quit IRC14:17
*** Underknowledge1 has joined #openstack-nova14:17
*** Underknowledge1 is now known as Underknowledge14:18
*** ociuhandu has joined #openstack-nova14:18
openstackgerritGhanshyam proposed openstack/placement master: DNM: testing direct l-c  https://review.opendev.org/c/openstack/placement/+/77381314:37
lyarwoodsean-k-mooney: ^ sorry forgot to update the unit tests in that first change14:44
* lyarwood works on docs now14:44
lyarwoodargh git-review is still slow14:44
openstackgerritLee Yarwood proposed openstack/nova master: WIP libvirt: Record the machine_type of instances in system_metadata  https://review.opendev.org/c/openstack/nova/+/76753314:45
openstackgerritLee Yarwood proposed openstack/nova master: WIP nova-manage: Add commands for managing instance machine type  https://review.opendev.org/c/openstack/nova/+/76954814:45
openstackgerritLee Yarwood proposed openstack/nova master: WIP nova-status: Add hw_machine_type check for libvirt instances  https://review.opendev.org/c/openstack/nova/+/77064314:45
*** ociuhandu has quit IRC14:45
gibilyarwood, stephenfin: do we need both? https://review.opendev.org/c/openstack/nova/+/773727 https://review.opendev.org/c/openstack/nova/+/76992014:55
stephenfinYes, I think so. The fixtures proves the stubbing isn't complete and would be useful even when it is to prevent regressions14:56
lyarwoodyeah what stephenfin said, already has shown a few things we missed AFAICT14:56
*** belmoreira has joined #openstack-nova15:00
*** tesseract has quit IRC15:02
gibithanks15:04
*** tesseract has joined #openstack-nova15:05
*** derekh has quit IRC15:05
*** ociuhandu has joined #openstack-nova15:11
*** ociuhandu has quit IRC15:21
*** pmannidi has quit IRC15:21
*** ociuhandu has joined #openstack-nova15:22
*** lpetrut has quit IRC15:23
*** mkrai has joined #openstack-nova15:23
gibiI'm +2 on the fairly simple libvirt metadata feature https://review.opendev.org/c/openstack/nova/+/75055215:23
*** pmannidi has joined #openstack-nova15:24
gibiso if some core has time then it is an easy win15:24
openstackgerritSylvain Bauza proposed openstack/nova master: Add network and utils methods for getting routed networks and segments  https://review.opendev.org/c/openstack/nova/+/77397615:26
openstackgerritSylvain Bauza proposed openstack/nova master: WIP: Add a routed networks scheduler pre-filter  https://review.opendev.org/c/openstack/nova/+/74906815:26
*** ociuhandu has quit IRC15:27
bauzasgibi: lemme look15:27
bauzasgibi: btw. thanks for continuing to review the routed networks series15:28
gibibauzas: thanks15:28
bauzasfwiw, I'm pretty done, just the last change still needs UTs and docs15:28
gibibauzas: ack, I will continue looking at it, actually the self -1 made me stop so it is good that you stated now that it is basically ready15:33
*** LinPeiWen has quit IRC15:33
bauzasgibi: yeah I needed to add UTs15:33
bauzasnow it's done15:33
bauzasthose are easy peasy15:33
*** derekh has joined #openstack-nova15:36
gibi:)15:37
*** tbachman has joined #openstack-nova15:37
bauzasgibi: concerns with reliability of the guest metadata information in https://review.opendev.org/c/openstack/nova/+/75055215:44
sean-k-mooneybauzas: reliability?15:45
sean-k-mooneythis is an internal debug info15:46
sean-k-mooneyso if its a little out of sync i think its ok15:46
bauzassean-k-mooney: well, if so, we don't need it15:47
bauzasoperators could get their infos by other means, right?15:47
sean-k-mooneywe dont need it but it does make debugging from logs simpler15:47
sean-k-mooneythey could but this would be useful for us reading sosreports15:48
bauzassean-k-mooney: right, but then we need it to be reliable15:48
sean-k-mooneywhere we cant15:48
sean-k-mooneyya15:48
sean-k-mooneywell15:48
sean-k-mooneyit would be preferable15:48
bauzassean-k-mooney: I personnally voted on the spec because I do agree with the usecase15:48
bauzasbut if we go down the road, we need this information to be correct15:48
sean-k-mooneyi have not read your concern in context in the review15:48
sean-k-mooneyyou believe there is a race in the code ?15:49
bauzasright, when detaching15:49
sean-k-mooneyi see15:49
sean-k-mooneyif that can be fixed then i agree it should be.15:49
bauzasthe proposer wrote to delete the info without waiting the neutron event15:49
*** mkrai_ has joined #openstack-nova15:49
bauzaswhich could fail15:49
bauzasand for most of the cases where operators would want to see the IPs, those would be for networking debugging15:50
sean-k-mooneywhich neutron event? network-vif-unplugged?15:50
bauzasyeah15:50
sean-k-mooneywe dont need to wait for that15:50
bauzassean-k-mooney: see the patch https://review.opendev.org/c/openstack/nova/+/75055215:50
sean-k-mooneywe can but we dont need to.15:50
sean-k-mooneyonce we detach it from libvirt its detached from the vm15:51
sean-k-mooneywhat could fail is removing the device owner (vm uuid) from the port15:51
*** ociuhandu has joined #openstack-nova15:52
*** mkrai has quit IRC15:52
bauzassean-k-mooney: but then the IP would still be assigned to the instance, right?15:53
sean-k-mooneythis is the only place we use network-vif-unplugged i believe https://opendev.org/openstack/nova/src/branch/master/nova/compute/manager.py#L10079-L1008315:53
sean-k-mooneybauzas: the ip is assigned to the port15:54
sean-k-mooneyif the port is not attached to the vm anymore then even if neutron still thinks the port has the ip, packets wont get to the vm15:54
bauzassean-k-mooney: the comment is confusing here https://review.opendev.org/c/openstack/nova/+/750552/8/nova/virt/libvirt/driver.py#232915:54
bauzaswe have some internal object that awaits a neutron callback15:55
gibibauzas: ack, I will check15:55
*** ociuhandu has quit IRC15:55
sean-k-mooneybauzas: the network info cache wont be updated until neutron sees the port is removed15:55
gibibut nova meeting starts in 4 minutes on #openstack-meeting-315:55
sean-k-mooneyi believe that is what it is referring to15:55
sean-k-mooneythe filter however will remove it from the network info when generating the metadata15:56
sean-k-mooney            network_info = list(filter(lambda info: info['id'] != vif['id'],15:56
sean-k-mooney                                       instance.get_network_info()))15:56
sean-k-mooneyso regardless of whether neutron has sent the event to cause the info cache to be refreshed, the copy we pass to generate the data has it removed15:56
*** ociuhandu has joined #openstack-nova15:57
bauzassean-k-mooney: my concern is not the fact it filters15:58
bauzashe wrote the filter for a good reason15:58
bauzasmy concern is that we remove this information from the metadata while we could still need it15:59
bauzasactually, the question is more, who is the source of truth ? nova or neutron ?15:59
bauzasthe IP address is bound to a port, which itself is attached to an instance16:00
bauzaswhat if the detach event fails in the meantime ?16:00
sean-k-mooneywe remove it after libvirt has finished detaching the interface so why would we need it16:00
gibibauzas: if this info is in the domain xml then I would say that what matters is what the VM sees. so if the vif was removed from the VM then we can remove the metadata too16:01
bauzasgibi: in this case, I could understand this16:01
sean-k-mooneythe sequencing is we remove the interface from the domain16:01
sean-k-mooneythen we unplug the vif from the backend16:02
sean-k-mooneythen we remove it from the metadata16:02
gibithat sequence is OK to me16:02
sean-k-mooneythen after that i believe the compute manager updates the neutron port and removes the device owner16:02
*** macz_ has joined #openstack-nova16:03
*** macz_ has quit IRC16:03
sean-k-mooneyby the way we cannot unconditionally wait for network-vif-unplugged here as not all backends will send it if im not mistaken16:04
sean-k-mooneyml2/ovs will16:04
sean-k-mooneyafter we do  self.vif_driver.unplug(instance, vif)16:04
sean-k-mooneybut i dont think all backends will16:04
sean-k-mooneyyep https://github.com/openstack/nova/blob/788035add9b32fa841389d906a0e307c231456ba/nova/compute/manager.py#L7779-L779416:07
sean-k-mooneywe tell the driver to detach which is what is being modified16:07
sean-k-mooneyand then if we dont raise an exception we do _deallocate_port_for_instance16:07
sean-k-mooneythat is what does the neutron port update/delete https://github.com/openstack/nova/blob/788035add9b32fa841389d906a0e307c231456ba/nova/network/neutron.py#L1710-L171416:09
*** mlavalle has quit IRC16:09
sean-k-mooneybauzas: hopefully ^ that makes sense16:11
bauzassean-k-mooney: on the nova meeting, catching up16:11
*** artom has quit IRC16:12
bauzassean-k-mooney: well, gibi's point sounds reasonable to me16:13
bauzasfrom a VM perspective, the nic is detached16:14
sean-k-mooneyyep before we ever touch the xml to update the metadata16:14
sean-k-mooneyso its consistent with nova's/libvirt's view16:14
bauzasok, so I'll comment but I'll leave my -1 for other nits16:14
sean-k-mooneycool16:14
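
A self-contained toy version of the filtering step quoted above: the detached vif is dropped from the copy of network_info used to regenerate the domain metadata, independent of when the neutron-driven info cache refresh lands. Plain dicts stand in for nova's network model objects.

    instance_network_info = [
        {'id': 'port-a', 'address': 'fa:16:3e:00:00:01'},
        {'id': 'port-b', 'address': 'fa:16:3e:00:00:02'},
    ]
    vif = {'id': 'port-b'}  # the interface being detached

    # same shape as the snippet sean-k-mooney pasted at 15:56 above
    network_info = list(filter(lambda info: info['id'] != vif['id'],
                               instance_network_info))
    print([entry['id'] for entry in network_info])  # ['port-a']
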
*** mlavalle has joined #openstack-nova16:15
bauzashumm, eavesdrop is lagging 15 mins behind, can't just provide a link yet16:16
*** hoonetorg has joined #openstack-nova16:20
sean-k-mooneyya it can16:25
sean-k-mooneybauzas: its up to date now16:26
bauzasyup, commented 5 mins before16:26
sean-k-mooneyso you did16:26
bauzasit just updated straight while it was lagging16:26
bauzasI guess there are crons behind eavesdrop16:27
bauzasunless it's event-based, which would surprise me16:27
sean-k-mooneyi think its a cron/periodic sync ya16:27
sean-k-mooneyits rare that it gets more than a few minutes out of date16:28
*** dklyle has joined #openstack-nova16:39
*** pmannidi has quit IRC16:47
*** pmannidi has joined #openstack-nova16:51
*** mkrai_ has quit IRC16:59
*** ociuhandu has quit IRC17:05
lyarwoodstephenfin: mind if I address a nit in https://review.opendev.org/c/openstack/nova/+/751367/2 and rebase the series for you?17:06
*** ociuhandu has joined #openstack-nova17:06
stephenfinlyarwood: for sure17:06
stephenfingo for it17:06
stephenfinI missed the AR17:07
lyarwoodstephenfin: np and apologies for missing this series in train for so long17:08
*** ociuhandu has quit IRC17:12
openstackgerritLee Yarwood proposed openstack/nova stable/train: Only allow one scheduler service in tests  https://review.opendev.org/c/openstack/nova/+/75136217:14
openstackgerritLee Yarwood proposed openstack/nova stable/train: func tests: move _run_periodics() into base class  https://review.opendev.org/c/openstack/nova/+/75136317:14
openstackgerritLee Yarwood proposed openstack/nova stable/train: Helper to start computes with different HostInfos  https://review.opendev.org/c/openstack/nova/+/75136417:14
openstackgerritLee Yarwood proposed openstack/nova stable/train: tests: Add reproducer for bug #1879878  https://review.opendev.org/c/openstack/nova/+/75136517:14
openstackgerritLee Yarwood proposed openstack/nova stable/train: Add generic reproducer for bug #1879878  https://review.opendev.org/c/openstack/nova/+/75136617:14
openstackgerritLee Yarwood proposed openstack/nova stable/train: Don't unset Instance.old_flavor, new_flavor until necessary  https://review.opendev.org/c/openstack/nova/+/75136717:14
openstackbug 1879878 in OpenStack Compute (nova) train "VM become Error after confirming resize with Error info CPUUnpinningInvalid on source node " [Undecided,In progress] https://launchpad.net/bugs/1879878 - Assigned to Stephen Finucane (stephenfinucane)17:14
openstackgerritLee Yarwood proposed openstack/nova stable/train: Move confirm resize under semaphore  https://review.opendev.org/c/openstack/nova/+/75136817:14
openstackgerritLee Yarwood proposed openstack/nova stable/train: Move revert resize under semaphore  https://review.opendev.org/c/openstack/nova/+/75136917:14
*** tesseract has quit IRC17:15
*** ociuhandu has joined #openstack-nova17:19
*** openstack has joined #openstack-nova17:23
*** ChanServ sets mode: +o openstack17:23
*** ociuhandu has quit IRC17:26
*** ociuhandu has joined #openstack-nova17:32
*** vishalmanchanda has quit IRC17:38
*** zzzeek has quit IRC17:42
*** artom has joined #openstack-nova17:43
*** zzzeek has joined #openstack-nova17:44
*** ociuhandu has quit IRC17:45
*** ociuhandu has joined #openstack-nova17:47
*** zul has quit IRC17:49
*** ociuhandu has quit IRC17:49
*** ociuhandu has joined #openstack-nova17:49
*** ociuhandu_ has joined #openstack-nova17:53
*** ociuhandu has quit IRC17:57
*** ociuhandu_ has quit IRC17:58
*** pmannidi has quit IRC17:59
*** derekh has quit IRC18:00
*** pmannidi has joined #openstack-nova18:02
*** rpittau is now known as rpittau|afk18:10
sean-k-mooneydansmith: regarding ci resources. one thing that i have thought about from time to time was splitting check into check and fast check with check dependent on fast check18:11
sean-k-mooneynow we said we dont want to make them dependent because we want all the results at once18:11
*** hemna has quit IRC18:11
*** songwenping_ has joined #openstack-nova18:12
sean-k-mooneybut if we had two pipelines like that we might be able to run only the longer jobs if the patch did not have -w18:12
sean-k-mooneyim not sure if that would save many resources18:12
sean-k-mooneybut sometimes im torn between pushing code to gerrit to have a backup or make sharing it between multiple servers simpler18:12
*** dtantsur is now known as dtantsur|afk18:12
sean-k-mooneyand wasting gate resources18:12
sean-k-mooneyif we had a way to say dont run the jobs yet that might help with early versions18:13
*** hemna has joined #openstack-nova18:13
*** songwenping__ has quit IRC18:14
*** hamalq has joined #openstack-nova18:14
sean-k-mooneyin the grand scheme of things its probably not going to be large but it might be worth exploring having a ready-for-ci label or something18:14
sean-k-mooneyi did that for my third party ci https://github.com/SeanMooney/ci-sean-mooney/blob/main/zuul.d/pipelines.yaml#L53-L5618:16
sean-k-mooneyand its how the intel nfv ci used to run to save capacity18:16
sean-k-mooneyalthough it was not night and day or anything18:16
*** k_mouza has quit IRC18:17
*** ralonsoh has quit IRC18:18
*** hoonetorg has quit IRC18:24
*** tesseract has joined #openstack-nova18:25
dansmithsean-k-mooney: yeah I'm not sure if that's really doable, but I would love to get *some* results before others, that would help a lot18:29
sean-k-mooneywhat i suggested before was all the non tempest ones first then the rest18:29
sean-k-mooneygranted you can run those simply locally18:29
sean-k-mooneyits too bad zuul cant report back as each job finishes but when i really want that i do go to zuul.openstack.org18:31
sean-k-mooneyand just get the results from there instead of waiting18:31
sean-k-mooneythe results are available in zuul once the individual job finishes just not in gerrit18:32
*** lyarwood has quit IRC18:34
dansmithsean-k-mooney: yeah I'd like to have one tempest job and the easy ones in the first go I think18:46
dansmithworker counts being lower would make that still go faster I think18:47
sean-k-mooneyyep you could do that18:47
sean-k-mooneychoose one of the faster ones18:47
dansmithbut I think zuul lacks some persistence required to split up the job and still know when it can gate18:47
dansmithso not sure that's really an option18:47
sean-k-mooneyim not sure about that18:47
sean-k-mooneyit currently does it based on labels18:47
dansmithusing something like experimental and requiring a +1 experimental run before gate would be a hack around that maybe18:48
dansmithsean-k-mooney: well, talk to the infra folks, but my understanding is it's hard18:48
sean-k-mooneywe would just need a requires clause in the gate pipeline to look for verified and fast-verified +1 from zuul18:48
dansmithin addition to solving this by dividing up the problem or saying "zuul should have a feature" I think there is a LOT of work we all can do to make things faster, duplicate less, and be more targetd18:49
sean-k-mooneycurrently its looking for just verified and workflow https://github.com/openstack/project-config/blob/master/zuul.d/pipelines.yaml#L80-L8218:49
sean-k-mooneybut you could add a 3rd label to that e.g. fast-verified and still require a +1 from both from zuul18:50
dansmithokay but without a prioritization, we'd still be hours and hours before running18:50
sean-k-mooneydansmith: it would require us to update the gerrit config and add the feature however18:50
dansmithI have stuff that has been in the check queue for three hours and it hasn't started to run a single thing18:51
sean-k-mooneywe have  precedence: normal18:51
sean-k-mooneyfor prioritisation between pipelines18:51
dansmithI've already talked to infra about this,18:51
dansmithand the other precedences are used for things18:51
dansmithcheck and experimental are the same even18:51
sean-k-mooneyyep they are18:51
sean-k-mooneyyep both low18:51
sean-k-mooneyalthough experimental will report back first18:52
dansmithagain, I think we can do a lot without making this an infra problem18:52
sean-k-mooneythey have the same precedence but are in different queues18:52
dansmithand just making it so we get fast check in two hours and slow check in 24 hours isn't really going to help18:52
sean-k-mooneydansmith: oh ya i know18:52
sean-k-mooneyits more if we run out of room with your current effort18:52
sean-k-mooneythere are other things we can do with infra18:52
sean-k-mooneybut its more involved18:52
sean-k-mooneyim not suggesting we start with infra changes just pointing out we can do things via infra changes if its still a problem18:53
dansmiththere's lots we could ask infra to do, but relative to the staffing of the top five projects, I mean.. :)18:53
sean-k-mooneythe other thing too is you were just looking at 1st party ci18:54
dansmithI get the impression the things we "could do with infra" require a very wide-scope of potential considerations, more than we can just do in our job defs, and likely would need zuul changes18:54
sean-k-mooneydansmith: yes the infra changes are openstack wide18:54
sean-k-mooneyrequiring both gerrit and zuul configuration changes18:55
sean-k-mooneyso very big/wide reaching hammer18:55
* dansmith nods18:55
sean-k-mooneynot running 8 almost identical jobs is relatively local in contrast18:55
dansmithyeah18:56
dansmithI wish I could help accelerate us not running the two grenades because those are fairly heavy and really duplicative18:56
sean-k-mooneywell we can stop that in nova18:56
sean-k-mooneyright now if we want too18:57
dansmithI know, but I think we agreed to wait until the ceph and zuulv3 thing was resolved18:57
dansmithI already proposed it with -W to wait on that18:57
*** spatel has quit IRC18:57
sean-k-mooneywell more i meant we can drop the integrated-gate-compute template then control which grenade jobs run from the check and gate pipelines18:58
sean-k-mooneyhttps://github.com/openstack/nova/blob/master/.zuul.yaml#L42118:58
sean-k-mooneywhich means we could just run         - nova-grenade-multinode18:59
dansmithhttps://review.opendev.org/c/openstack/tempest/+/77149918:59
dansmithwe agreed we would wait to do that until the ceph multinode zuulv3 thing was resolved18:59
sean-k-mooneyoh i know i just was pointing out we could do it via nova if we had resolved it and not need a tempest patch19:00
*** lemko7 has quit IRC19:00
*** alexe9191 has joined #openstack-nova19:00
*** lemko has joined #openstack-nova19:00
dansmithand when we discussed, gmann wanted it changed there ^, but yes, the mechanics aren't hard, it's the agreement required, and in this case, blocking on the zuulv3 conversion19:01
alexe9191good day everyone:)19:01
sean-k-mooneyalexe9191: o/19:01
alexe9191I have a question about the retry filter in nova19:01
alexe9191I am wondering where does it get it's spec_obj from ? specefically this piece of code here:19:01
alexe9191    def host_passes(self, host_state, spec_obj):19:01
alexe9191        """Skip nodes that have already been attempted."""19:01
alexe9191        retry = spec_obj.retry19:01
sean-k-mooneyalexe9191: it has not been required for quite some time19:01
sean-k-mooneyalexe9191: its passed in by the filter scheduler19:02
alexe9191but where is it stored? memory or db?19:02
sean-k-mooneyits the request spec19:02
sean-k-mooneyits built in the api19:03
sean-k-mooneythen passed to the conductor and scheduler19:03
sean-k-mooneyi believe we might have it in the api db19:03
alexe9191Interesting, let me check19:03
dansmithyes, api_db19:03
*** tesseract has quit IRC19:04
alexe9191ok, so if a host fails, it will register its state here in that table.19:04
alexe9191request_specs19:04
sean-k-mooneyno19:04
sean-k-mooneywe dont commit that to the db19:04
sean-k-mooneywe only track that in memory i believe19:05
alexe9191that table is quite loaded though.19:05
sean-k-mooneythe request_spec is used for other things19:05
sean-k-mooneyso it is saved in the db19:05
sean-k-mooneybut we dont save the failed hosts in the request spec during scheduling and commit that back19:06
alexe9191so that part right here:19:06
alexe9191        return self.filter_handler.get_filtered_objects(self.enabled_filters,19:06
alexe9191                hosts, spec_obj, index)19:06
alexe9191i see it's passing hosts but the retry filter is kicking out all of the hosts in the aggregate I am trying to schedule in. They are all healthy and they are hosting VMs, they probably had issues at some time.19:07
alexe9191The interesting thing, this happens only with one flavor, other flavors are returning a different results for the retryfilter.19:07
alexe9191I restarted nova-compute on all of those hosts but that did not change the result of the scheduling:)  so I am wondering to be honest where does it saves the host state.19:08
sean-k-mooneyalexe9191: what release of nova are you using by the way19:08
alexe9191rocky:)19:08
sean-k-mooneyso you have placement19:08
alexe9191yes19:08
sean-k-mooneythen you can disable the retry filter entirely19:09
sean-k-mooneyi believe rocky is the release we stopped using it19:09
sean-k-mooneythats what im checking now19:09
sean-k-mooneyah it was queens https://github.com/openstack/nova/blob/master/releasenotes/notes/deprecate-retry-filter-4d1dba39a2c21836.yaml19:10
sean-k-mooneyas part of  https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/return-alternate-hosts.html19:10
*** spatel has joined #openstack-nova19:10
sean-k-mooneyalexe9191: so on rocky you can and should disable the retry filter19:10
alexe9191Ok! that's good to know19:11
alexe9191Is there a way to mitigate the effect of the retry filter right now? restarting the nova scheduler right now is probably something that's gonna cause a lot of grief19:12
alexe9191We have about 800~ hosts19:12
alexe91919 schedulers19:12
sean-k-mooneyreally why so many?19:12
alexe9191we're thinking about cells but this is in the future plans19:12
alexe9191it's a big infrastructure19:13
alexe9191that's why I was wondering if I can empty that spec_obj from a cache/db table19:13
sean-k-mooneystill scheduling is typically not the largest part of a boot19:13
*** belmoreira has quit IRC19:13
sean-k-mooneyin fact its typically quite a small amount19:13
sean-k-mooney9 schedulers is quite a lot19:13
sean-k-mooneyalexe9191: no the previously tried hosts are only updated in memory19:14
alexe9191in the nova-scheduler I am guessing19:14
sean-k-mooneyyep19:15
openstackgerritMerged openstack/nova master: Fix invalid argument formatting in exception messages  https://review.opendev.org/c/openstack/nova/+/76351119:15
sean-k-mooneythey are only tracked during the single scheduling request19:15
alexe9191one more question, is that spec_obj also tied to the flavor ?19:15
sean-k-mooneykind of19:15
dansmithsean-k-mooney: should the retry filter even be used anymore? don't we pass selected_hosts down and have the conductor just iterate over them and then declare it dead?19:16
sean-k-mooneydansmith: we do from queens19:16
sean-k-mooneydansmith: thats why i said to remove it19:16
dansmithoh I see you said that above19:16
sean-k-mooneydansmith: also i deleted it on master a few releases ago19:16
sean-k-mooneyalexe9191: this is the request_spec https://github.com/openstack/nova/blob/stable/rocky/nova/objects/request_spec.py#L50-L8719:17
sean-k-mooneyit has the flavor and image available which the filters can use19:17
alexe9191:)  So the flavor is a part of it19:17
alexe9191a combination of things then19:17
sean-k-mooneyits modeling all the requirements for scheduling an instance more or less19:18
alexe9191that's the reason why it's giving different results for different flavors19:18
sean-k-mooneyyes19:18
alexe9191:)  Many thanks! I am gonna go and schedule a remove of the retry filter since it's not needed19:19
sean-k-mooneyalthough on the first iteration through the scheduler it should do nothing19:19
alexe9191it's doing that on the retryfilter19:19
sean-k-mooneyhttps://github.com/openstack/nova/blob/stable/rocky/nova/scheduler/filters/retry_filter.py#L4319:20
*** andrewbonney has quit IRC19:20
sean-k-mooneyretry.hosts should be empty19:20
*** spatel has quit IRC19:20
alexe9191I am not sure I understand why?19:21
alexe9191https://github.com/openstack/nova/blob/stable/rocky/nova/scheduler/filters/retry_filter.py#L3419:21
sean-k-mooneywell spec_obj.retry will not be set19:22
sean-k-mooneyso it will return true on line 3619:22
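
A condensed sketch of what the Rocky-era RetryFilter does, based on the snippet alexe9191 pasted above rather than a verbatim copy of nova's file: on the first pass spec_obj.retry is unset so every host passes; on a reschedule, hosts that were already attempted are filtered out.

    def host_passes(host_state, spec_obj):
        retry = spec_obj.retry
        if not retry:
            # first scheduling attempt: nothing has been tried yet
            return True
        attempted = retry.hosts  # [host, node] pairs tried so far
        return [host_state.host, host_state.nodename] not in attempted
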
alexe9191I am not quite sure how am I ending up with this number then:19:22
alexe91912021-02-04 19:16:58.981 112479 DEBUG nova.filters [req-7daf5214-19ca-48bc-9240-7db0be15c304 03169685e1924a6fa4eee2da46335331 a0b22ddc828140beaf13bc3daaba4a93 - default default] Starting with 148 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:7019:22
alexe91912021-02-04 19:16:58.982 112479 DEBUG nova.filters [req-7daf5214-19ca-48bc-9240-7db0be15c304 03169685e1924a6fa4eee2da46335331 a0b22ddc828140beaf13bc3daaba4a93 - default default] Filter RetryFilter returned 148 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:10419:22
alexe9191For that specefic flavor, though other flavors are returning different numbers19:23
sean-k-mooneywell in that case the RetryFilter returned 148 hosts19:23
sean-k-mooneyand it was given 148 hosts19:23
sean-k-mooneyso they all were passed19:23
alexe9191where was it given 148 hosts from ?19:24
sean-k-mooneyfrom placement19:24
alexe9191does placement register the failed hosts?19:24
sean-k-mooneyyou started with 148 hosts from placement19:24
alexe9191ow19:24
sean-k-mooneyno19:24
sean-k-mooneyplacement basically says (based on our request) here is the set of hosts that could fit the vm19:25
sean-k-mooneythen the filters refine that19:25
*** belmoreira has joined #openstack-nova19:25
sean-k-mooneyplacement in rocky is basically only looking at ram, disk and cpus19:25
openstackgerritArtom Lifshitz proposed openstack/nova master: WIP: libvirt: start tracking NUMACell.socket for hosts  https://review.opendev.org/c/openstack/nova/+/76681619:26
openstackgerritArtom Lifshitz proposed openstack/nova master: WIP: extra specs/image pros: add `socket` PCI NUMA affinity  https://review.opendev.org/c/openstack/nova/+/77274819:26
openstackgerritArtom Lifshitz proposed openstack/nova master: WIP: Add `socket` PCI NUMA affinity policy request prefilter  https://review.opendev.org/c/openstack/nova/+/77274919:26
openstackgerritArtom Lifshitz proposed openstack/nova master: WIP: pci: implement the `socket` NUMA affinity policy  https://review.opendev.org/c/openstack/nova/+/77277919:26
openstackgerritArtom Lifshitz proposed openstack/nova master: WIP: Track host NUMA topology in PCI manager  https://review.opendev.org/c/openstack/nova/+/77414919:26
alexe9191not the availability zone or such then ?19:26
sean-k-mooneyit does a bit more but basically of your 800 compute nodes it said here are the 148 that could fit your vm19:26
sean-k-mooneynot in rocky by default at least19:26
sean-k-mooneywe did add the az later19:26
alexe9191ok... but the interesting thing then is, when i schedule the virtual machine directly on the host it works just fine19:26
alexe9191so from resources point of view there are plenty19:27
sean-k-mooneyactually it can do the az in rocky https://github.com/openstack/nova/blob/stable/rocky/nova/scheduler/request_filter.py#L6319:27
sean-k-mooneybut i think that is off by default and you use the az filter19:27
alexe9191I am using that19:28
alexe9191but I end up with hosts that are not usable for that specific az19:28
alexe9191they are all from zone 2,3,4 for instance and the one i want is zone119:28
alexe9191But this is happening only on this flavor.19:29
sean-k-mooneythe flavor wont change the az interaction19:29
alexe9191indeed19:29
alexe9191if I drop the az I get more hosts to start with on the retry filter though.19:29
*** artom has quit IRC19:30
sean-k-mooneyyes so that is placement limiting the hosts19:30
sean-k-mooneyto only those in the requested az19:30
sean-k-mooneyits likely that one of the later filters is failing19:30
*** spatel has joined #openstack-nova19:30
sean-k-mooneycan you paste the full filter logs for the spawn to http://paste.openstack.org/19:31
alexe9191actually what I said was just wrong.. i end up with the same number of hosts 148 if I drop the az, the scheduling happen though cause az filter is not filtering anything out19:31
alexe9191yes one moment let me sanitise it19:31
sean-k-mooneyya if you have enabled the placement version you can also disable the az filter19:32
alexe9191indeed that will also be done since it can be used19:32
alexe9191so here is the version that works:19:32
alexe9191http://paste.openstack.org/show/802341/19:32
sean-k-mooneyover the releases we have slowly been moving things to placement where it makes sense19:32
sean-k-mooneyyep so you went from 148 down to 10419:33
sean-k-mooneythen those would get weighed19:33
alexe9191http://paste.openstack.org/show/802342/ this is the one that does not (COmpute is reporting 0 cause those are disabled)19:33
alexe9191Filter AvailabilityZoneFilter returned 8 because 8 are only in zone119:34
alexe9191so I am starting with less than I should19:34
*** belmoreira has quit IRC19:34
sean-k-mooneyso first, the following filters can be removed: RetryFilter, AvailabilityZoneFilter, AggregateDiskFilter, AggregateCoreFilter and AggregateRamFilter19:34
sean-k-mooneyand NUMATopologyFilter should come last19:35
sean-k-mooneyso the one that failed had most of the hosts eliminated by the AvailabilityZoneFilter19:36
alexe9191indeed, because none of the hosts in zone1 reported itself to placement to be a good match for that flavor19:36
alexe9191though there are plenty of space on those hosts to cover the needed resources19:36
alexe9191and I have no max placement in the config19:36
sean-k-mooneyit sounds like you have stale allocations in placement then19:37
sean-k-mooneyalthough what i dont understand is why the az filter removed any hosts19:37
alexe9191anyway to make sure that this is the case?19:37
sean-k-mooneyyou said you enabled the placement az filtering19:37
alexe9191no I meant on the api request19:38
sean-k-mooneyoh ok19:38
alexe9191apologies for the confusion:)19:38
sean-k-mooneyno worries19:38
sean-k-mooneyhmm, we have a heal allocations command19:38
sean-k-mooneydansmith: do you know if that is in rocky19:38
dansmithnot off hand19:39
sean-k-mooneyso we have https://github.com/openstack/nova/blob/7b5ac717bd338be32414ae25f60a4bfe4c94c0f4/nova/cmd/manage.py#L212119:40
sean-k-mooneyya that is on rocky19:40
*** rcernin has joined #openstack-nova19:40
sean-k-mooneyso you can do nova-manage --heal-allocations i think but before you do that19:41
sean-k-mooneyalexe9191: you have the aggregate ram, disk, core filters enabled19:41
alexe9191indeed19:41
sean-k-mooneyalexe9191: do you manage allocation ratios by aggregate19:41
alexe9191more or less yes19:42
sean-k-mooneyok that is probably the issue19:42
sean-k-mooneyhttp://lists.openstack.org/pipermail/openstack-dev/2018-January/126283.html19:42
sean-k-mooneywe deprecated those in ocata because once you use placement you can no longer do that19:43
sean-k-mooneyalexe9191: you have to set the allocation ratios per host19:43
alexe9191let me check the code I think we have that in place now19:43
sean-k-mooneyok here are the docs on that topic if you have not got them set on each compute host19:44
sean-k-mooneyhttps://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#allocation-ratios19:44
*** rcernin has quit IRC19:45
alexe9191just checked right now and we have that on rocky19:45
alexe9191older versions are a different story but we are getting rid of those19:45
sean-k-mooneyok so its in the nova.conf on each of your compute nodes19:45
alexe9191yes19:45
alexe9191docker exec -it nova_compute grep cpu /etc/nova/nova.conf19:46
alexe9191cpu_allocation_ratio=1.019:46
sean-k-mooneyok then what you basically need to do is compare the available resources in placement to those reported in the hypervisors api19:46
sean-k-mooneyif there is a mismatch due to stale allocations the usage wont be the same in nova's view and placement's19:47
alexe9191I am actually building now a json file with the resources using openstack hypervisor show19:47
sean-k-mooneywhich would be why placement would have eliminated the hosts before it got to the az filter19:47
sean-k-mooneyunfortunately im more or less done for the day so i wont be able to help you continue debugging but my best guess is placement and nova are out of sync19:49
sean-k-mooneyso placement is filtering out the hosts in that az first, before it gets to the scheduler19:49
alexe9191You've already set me on the right path:)  I am going to check this and check the heal command if that is the case19:50
alexe9191one more question though, can I query the resources per node? or is that per class only?19:50
sean-k-mooneyin placement?19:50
alexe9191yes19:50
sean-k-mooneyyes you can19:50
sean-k-mooneyyou can use the resource providers endpoint to list all the inventories and the usage per host19:50
sean-k-mooneythere is an osc plugin for that too19:51
sean-k-mooneycalled osc-placement if you dont have it installed19:51
sean-k-mooneyit will give you some placement commands for the openstack client19:51
alexe9191installing now:)  thanks alot19:52
sean-k-mooneyhere are the docs https://docs.openstack.org/osc-placement/latest/cli/index.html#resource-provider-inventory-list19:53
sean-k-mooneyopenstack resource provider show [--allocations] <uuid> will also be useful19:53
sean-k-mooneyor openstack resource provider usage show <uuid>19:53
alexe9191Testing it now :)19:54
alexe9191yup, got the numbers19:54
*** artom has joined #openstack-nova19:57
*** hoonetorg has joined #openstack-nova20:01
*** artom has quit IRC20:02
*** pmannidi has quit IRC20:06
*** pmannidi has joined #openstack-nova20:10
alexe9191yup, I see differences between what nova is reporting vs what placement is reporting20:10
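
A rough sketch of the comparison sean-k-mooney suggests, shelling out to the two commands mentioned above and diffing VCPU usage. The -f json output format and the vcpus/vcpus_used field names are assumptions about the CLI output rather than guaranteed column names, so adjust them to what your client actually prints.

    import json
    import subprocess

    def cli_json(*cmd):
        # run an openstack CLI command and parse its JSON output
        return json.loads(subprocess.check_output(cmd))

    def compare_vcpus(hypervisor_name, resource_provider_uuid):
        hv = cli_json('openstack', 'hypervisor', 'show', hypervisor_name,
                      '-f', 'json')
        usages = cli_json('openstack', 'resource', 'provider', 'usage', 'show',
                          resource_provider_uuid, '-f', 'json')
        placement_vcpu = next(u['usage'] for u in usages
                              if u['resource_class'] == 'VCPU')
        print('nova: %s/%s vcpus used, placement: %s' %
              (hv['vcpus_used'], hv['vcpus'], placement_vcpu))

    # compare_vcpus('compute-01.example.org', '<resource provider uuid>')
    # the uuid can be found with: openstack resource provider list
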
*** stand has quit IRC20:12
*** hoonetorg has quit IRC20:13
*** rcernin has joined #openstack-nova20:17
*** k_mouza has joined #openstack-nova20:17
*** k_mouza has quit IRC20:21
*** k_mouza has joined #openstack-nova20:21
*** k_mouza has quit IRC20:26
*** hoonetorg has joined #openstack-nova20:30
*** rcernin has quit IRC20:31
openstackgerritMerged openstack/nova master: functional: Add tests for mixed CPU policy  https://review.opendev.org/c/openstack/nova/+/75585220:44
*** ociuhandu has joined #openstack-nova20:45
*** rcernin has joined #openstack-nova21:00
openstackgerritMerged openstack/nova master: Remove __unicode__() from nova unit test Exception  https://review.opendev.org/c/openstack/nova/+/76989421:03
*** whoami-rajat__ has quit IRC21:10
*** pmannidi has quit IRC21:15
*** pmannidi has joined #openstack-nova21:17
*** ociuhandu has quit IRC21:20
*** ociuhandu has joined #openstack-nova21:21
*** ociuhandu has quit IRC21:22
*** ociuhandu has joined #openstack-nova21:22
*** ociuhandu has quit IRC21:29
*** alexe9191 has quit IRC21:30
*** nweinber has quit IRC21:33
*** ociuhandu has joined #openstack-nova21:38
*** rcernin has quit IRC21:39
*** rcernin has joined #openstack-nova21:56
*** cap has quit IRC22:00
*** xek has quit IRC22:00
*** ociuhandu has quit IRC22:07
*** ociuhandu has joined #openstack-nova22:08
*** ociuhandu has quit IRC22:15
*** lbragstad_ has joined #openstack-nova22:22
*** pmannidi has quit IRC22:24
*** lbragstad has quit IRC22:24
*** rcernin has quit IRC22:26
*** rcernin has joined #openstack-nova22:26
*** pmannidi has joined #openstack-nova22:27
openstackgerritMerged openstack/nova stable/ussuri: Warn when starting services with older than N-1 computes  https://review.opendev.org/c/openstack/nova/+/77076422:39
openstackgerritMerged openstack/nova stable/ussuri: Reproduce bug 1896463 in func env  https://review.opendev.org/c/openstack/nova/+/77076822:40
openstackbug 1896463 in OpenStack Compute (nova) ussuri "evacuation failed: Port update failed : Unable to correlate PCI slot " [Low,In progress] https://launchpad.net/bugs/1896463 - Assigned to Balazs Gibizer (balazs-gibizer)22:40
openstackgerritMerged openstack/nova stable/ussuri: [doc]: Fix glance image_metadata link  https://review.opendev.org/c/openstack/nova/+/76197722:40
*** spatel has quit IRC22:45
openstackgerritMerged openstack/nova stable/train: Fix a hacking test  https://review.opendev.org/c/openstack/nova/+/76779322:55
*** lemko has quit IRC22:57
*** lemko has joined #openstack-nova22:57
*** tkajinam has quit IRC22:59
*** tkajinam has joined #openstack-nova22:59
*** slaweq has quit IRC23:03
*** lbragstad_ is now known as lbragstad23:03
*** lbragstad has quit IRC23:21
*** lbragstad has joined #openstack-nova23:23
*** pmannidi has quit IRC23:33
*** pmannidi has joined #openstack-nova23:35
*** efried has quit IRC23:38
*** efried has joined #openstack-nova23:38
