Thursday, 2022-02-10

02:06 <spatel> Any idea what is wrong with this error - ERROR oslo_service.service oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID 32738247b2bc4024bd0a4e65d900729d
02:06 <spatel> My nova-compute is throwing that error.
02:06 <spatel> I even rebuilt my RabbitMQ from scratch.
02:33 <spatel> To me it looks like a bug.
02:33 <spatel> https://paste.opendev.org/show/bFtAhaCLfed1R3ptNr1F/
02:37 *** ministry is now known as __ministry
02:39 <spatel> What options do I have here?
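A MessagingTimeout like the one above generally means the RPC reply never made it back through RabbitMQ. A minimal first-pass check, assuming a single-node broker and the usual packaged service names, might look like this:

    # Is the broker healthy, and are reply/compute queues piling up?
    rabbitmqctl status
    rabbitmqctl list_queues name messages consumers | grep -E 'reply|compute'

    # Does nova point at the rebuilt broker (credentials and vhost included)?
    grep transport_url /etc/nova/nova.conf

    # Are the compute services reporting in at all?
    openstack compute service list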
03:01 <opendevreview> sean mooney proposed openstack/nova master: [WIP] add healthcheck manager to manager base  https://review.opendev.org/c/openstack/nova/+/827844
03:06 <spatel> sean-k-mooney: are you around?
03:08 *** dasm is now known as dasm|off
05:37 <opendevreview> Ghanshyam proposed openstack/nova master: Make more project level APIs scoped to project only  https://review.opendev.org/c/openstack/nova/+/828670
07:21 *** amoralej|off is now known as amoralej
07:41 <frickler> gibi: gmann: wouldn't https://bugs.launchpad.net/devstack/+bug/1960346 rather be a nova issue than a devstack one?
07:48 <__ministry> Hi all, I have a question: can we plug multiple vGPUs into a compute node at the same time?
07:49 <__ministry> Please, help me.
07:53 <gibi> frickler: it feels more like a libvirt regression between 6.0.0 and 8.0.0, but sure, it is probably not a devstack issue
07:55 <gibi> __ministry: you probably want to plug vGPUs into the guest VMs, as compute nodes are physical things, so they have physical GPUs plugged in
07:56 <gibi> __ministry: I think nova supports multiple pGPUs per compute as well as multiple vGPUs per guest
07:57 <__ministry> Yep. I want to know so I can estimate the number of devices to order.
07:58 <__ministry> Thank you.
08:16 <frickler> gibi: how confident are you that it is actually a regression and not an intentional change that nova would have to adapt to? Anyway, I'll assign it to nova then to find out more details.
08:16 <gibi> frickler: yeah, you are right; as there is a major version bump, it can be an intended change too
08:19 <__ministry> gibi: can we have multiple physical GPUs that provide vGPUs in a single compute node?
08:20 <gibi> __ministry: I think so. bauzas, do we have limitations around that ^^ ?
08:21 <bauzas> __ministry: gibi: which exact question do you have about vGPUs?
08:21 <gibi> bauzas: as far as I understand, __ministry is asking whether nova supports multiple pGPUs providing vGPUs in a single compute
08:21 <bauzas> most of the limitations are written here: https://docs.openstack.org/nova/latest/admin/virtual-gpu.html#caveats
08:22 <bauzas> gibi: oh, this totally works
08:22 <bauzas> sometimes with some GPU boards (like a Tesla T-something) you actually get more than 1 pGPU per board :)
08:23 <bauzas> since Stein, we just create a resource provider per physical PCI device, that's it
08:23 <__ministry> I want to plug multiple vGPUs into a nova instance.
08:23 <bauzas> oh, then it's the other way around
08:23 <bauzas> __ministry: so you're asking about resources:VGPU=X where X>1
08:24 <bauzas> I'm afraid to say there is a limitation from libvirt due to a nasty bug
08:24 <__ministry> Yep, like how we can attach multiple volumes to a nova instance.
08:24 <__ministry> ok
08:27 <bauzas> __ministry: https://bugs.launchpad.net/nova/+bug/1758086
08:33 <__ministry> bauzas: thank you. ^.^
08:35 <bauzas> __ministry: that being said, this is an NVIDIA driver limitation; you're welcome to test again with a newer GRID release version
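For context, a multi-vGPU request is expressed purely through flavor extra specs; a sketch with hypothetical flavor and server names (the libvirt/NVIDIA caveat from the bug above still applies when X>1):

    # flavor requesting two vGPUs from placement
    openstack flavor create --ram 8192 --disk 40 --vcpus 4 vgpu-flavor
    openstack flavor set vgpu-flavor --property resources:VGPU=2

    # boot a guest with that flavor
    openstack server create --flavor vgpu-flavor --image <image> --network <net> multi-vgpu-test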
08:52 <opendevreview> Manuel Bentele proposed openstack/nova master: libvirt: Add properties to set advanced QXL video RAM settings  https://review.opendev.org/c/openstack/nova/+/828674
08:59 <opendevreview> Manuel Bentele proposed openstack/nova master: libvirt: Add configuration options to set SPICE compression settings  https://review.opendev.org/c/openstack/nova/+/828675
09:09 <opendevreview> Manuel Bentele proposed openstack/nova master: libvirt: Add property to set number of screens per video adapter  https://review.opendev.org/c/openstack/nova/+/828676
10:09 <opendevreview> Balazs Gibizer proposed openstack/placement master: Add any-traits support for listing resource providers  https://review.opendev.org/c/openstack/placement/+/826491
10:09 <opendevreview> Balazs Gibizer proposed openstack/placement master: Add any-traits support for allocation candidates  https://review.opendev.org/c/openstack/placement/+/826492
10:09 <opendevreview> Balazs Gibizer proposed openstack/placement master: Remove unused compatibility code  https://review.opendev.org/c/openstack/placement/+/826493
10:10 <opendevreview> Balazs Gibizer proposed openstack/placement master: Add microversion 1.39 to support any-trait queries  https://review.opendev.org/c/openstack/placement/+/826719
10:11 <gibi> gmann: thanks for the comment in the any-traits series. I restored the legacy behavior in https://review.opendev.org/c/openstack/placement/+/826491/8
10:11 <gibi> melwitt: I had to respin the any-traits series due to ^^
10:27 <opendevreview> Manuel Bentele proposed openstack/nova master: libvirt: Add configuration options to set SPICE compression settings  https://review.opendev.org/c/openstack/nova/+/828675
10:58 <sean-k-mooney> chateaulav: you have one local branch for all commits in a feature
10:58 <sean-k-mooney> chateaulav: you should be able to check out that branch locally and run all the code from the top of the branch
10:58 <sean-k-mooney> e.g. run the unit/func tests and/or deploy nova and execute the code
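Concretely, for nova that usually means something like the following from the repo root; a sketch assuming the standard tox targets (exact env names vary by release):

    git checkout my-feature-branch              # hypothetical branch name
    tox -e py39 -- nova.tests.unit.compute      # run a subset of the unit tests
    tox -e functional-py39                      # functional tests
    tox -e pep8                                 # style checks before pushing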
11:02 <sean-k-mooney> frickler: I'm pretty sure any change in the detach behavior is a libvirt/qemu regression, likely related to the rework they are doing in this area for PCI passthrough
11:02 <sean-k-mooney> regarding https://bugs.launchpad.net/nova/+bug/1960346
11:05 <bauzas> sean-k-mooney: yup, we'd appreciate a qemu expert on this one
11:05 <bauzas> kashyap: maybe you can help?
11:05 <kashyap> bauzas: Hey, reading back
11:05 <bauzas> kashyap: sean-k-mooney: context is some TripleO CI being blocked: https://bugs.launchpad.net/tripleo/+bug/1960310
11:05 <kashyap> sean-k-mooney: I wouldn't be so confident about regressions in libvirt/QEMU without evidence.
11:06 <bauzas> which looks to be because of https://bugs.launchpad.net/nova/+bug/1960346
11:06 <kashyap> When it comes to bugs, I follow "seeing is believing"
11:06 <bauzas> kashyap: if you look at the last bug, you'll see gibi saying we use a new libvirt/qemu version
11:06 <bauzas> but it seems to be a regression
11:06 <kashyap> bauzas: Regression where? /me looks
11:07 <frickler> oh, it's not only devstack, but also tripleo; then devstack is even more off the hook ;)
11:08 <bauzas> kashyap: see https://zuul.openstack.org/build/3e24d977991d4536b6279afd7f3b5d56/log/controller/logs/screen-n-cpu.txt?severity=4#49433
11:09 <kashyap> bauzas: Yep, already noticed it
11:09 <kashyap> bauzas: Looking for libvirtd logs w/ QEMU filters
11:09 <kashyap> bauzas: Have you got the affected instance ID from the logs?
11:09 <bauzas> lemme try to find one
11:11 <kashyap> 1074c6fa-12fe-40a8-b1d5-a47a49018d9f
11:11 <kashyap> ?
11:12 <bauzas> for this job, yes
11:15 <chateaulav> sean-k-mooney: alright, then I'm on the right track; got that all set up and in line.
11:26 *** bhagyashris__ is now known as bhagyashris
11:29 <chateaulav> sean-k-mooney: and then with any further changes, I would use an interactive rebase to edit and add any files to each specific commit, and then submit the topic for review.
11:29 <kashyap> bauzas: Do you know why I'm unable to fetch all the instance log files with a simple `wget`?
11:29 <kashyap> I'm trying this:
11:29 <kashyap> $> wget -r -nH -nd -np -R "index.html*" https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_3e2/828280/1/check/devstack-platform-centos-9-stream/3e24d97/controller/logs/libvirt/libvirt/qemu/
11:42 <kashyap> Anyway, it's not required; ignore the above
11:49 *** mdbooth2 is now known as mdbooth
11:56 <kashyap> Duh, I wish Launchpad didn't completely break the formatting of text messages by wrapping text
12:04 <sean-k-mooney> chateaulav: yes. And you can always author new commits at the end of the chain and move them with an interactive rebase if needed
12:05 <chateaulav> thanks for that last confirmation! Your mentorship is much appreciated.
12:05 <sean-k-mooney> but basically gerrit is intended to work with feature branches, and it maps each change to a review via the Change-Id in the commit message
12:05 <sean-k-mooney> chateaulav: no worries, glad to help
12:06 <sean-k-mooney> so rebases or changes to a commit will not create a new review as long as the Change-Id does not change; it will just update the existing review with a new revision
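In practice that Gerrit workflow looks roughly like this; a sketch with a hypothetical branch name and file path:

    git checkout my-feature-branch
    git rebase -i HEAD~3              # mark the commit that needs extra changes as 'edit'
    # at the 'edit' stop:
    git add nova/compute/manager.py   # hypothetical file being folded into that commit
    git commit --amend --no-edit      # the Change-Id footer is preserved
    git rebase --continue
    git review                        # re-uploads each commit as a new patch set of its existing review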
12:15 <gibi> kashyap: so what do you think about the "Device virtio-disk1 is already in the process of unplug" error? Should we increase the amount of time we wait before we retry the detach?
12:17 <gibi> it is configurable with CONF.libvirt.device_detach_timeout
12:17 <gibi> it is 20 sec by default
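Those knobs live in the [libvirt] section of nova.conf; a sketch of bumping them on a compute node (service name varies by distro/devstack):

    # wait up to 60s per attempt for the libvirt device-removed event
    crudini --set /etc/nova/nova.conf libvirt device_detach_timeout 60
    # companion option: how many times the detach is attempted before giving up (8 is the default)
    crudini --set /etc/nova/nova.conf libvirt device_detach_attempts 8
    sudo systemctl restart devstack@n-cpu    # or openstack-nova-compute / nova-compute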
12:22 <opendevreview> Erlon R. Cruz proposed openstack/nova master: Adds regression test for bug LP#1944619  https://review.opendev.org/c/openstack/nova/+/821840
12:22 <opendevreview> Erlon R. Cruz proposed openstack/nova master: Fix pre_live_migration rollback  https://review.opendev.org/c/openstack/nova/+/815324
12:23 <bauzas> kashyap: sorry, was at lunch (isolated, but in the kitchen though)
12:24 <gibi> kashyap: I've pushed https://review.opendev.org/c/openstack/devstack/+/828705 to see if a longer timeout helps or not
12:29 <sean-k-mooney> gibi: I thought you implemented an event-based retry
12:29 <gibi> sean-k-mooney: it is event based, but with a timeout
12:29 <gibi> so if the event comes, then we stop waiting
12:29 <sean-k-mooney> and when it times out we give up
12:29 <gibi> but if the event never comes we give up
12:30 <sean-k-mooney> ya, ok
12:30 <gibi> more precisely, we retry
12:30 <sean-k-mooney> well, no
12:30 <sean-k-mooney> retrying would be wrong
12:30 <sean-k-mooney> since we know that is an error in qemu and it will abort the detach
12:30 <sean-k-mooney> at least in current versions; in old versions it was undefined behavior
12:30 <gibi> I remember that even with libvirt 6.0.0 a retry was needed in some cases
12:31 <kashyap> bauzas: Don't worry
12:31 <gibi> but maybe that was just a case of not waiting long enough
12:31 <sean-k-mooney> it's qemu rather than libvirt that I think is important here
12:31 <kashyap> gibi: Reading back; went for some air
12:31 <gibi> sean-k-mooney: ack, then qemu 4.2.0 vs 6.2.0
12:31 <sean-k-mooney> gibi: the original behavior change was qemu considering a second detach request to be an error and aborting
12:32 <sean-k-mooney> yes
12:32 <kashyap> gibi: What I'm wondering is why the unplug is still in progress - what is holding up the unplug? Let me chat w/ the libvirt block dev
12:32 <gibi> kashyap: thanks
12:32 <gibi> kashyap: could it be the guest keeping the device busy?
12:32 <sean-k-mooney> kashyap: unplug requires the guest kernel to cooperate
12:32 <kashyap> gibi: Right, this is a negative test of server rescue, right? I'm trying to look at the exact test
12:32 <gibi> sean-k-mooney: do you have a qemu version number from which we should never retry?
12:32 <sean-k-mooney> so if the guest is not fully booted, or is busy, that can delay it
12:32 <kashyap> sean-k-mooney: But note: there's no QEMU guest agent installed here
12:33 <sean-k-mooney> kashyap: it is not related to the guest agent
12:33 <sean-k-mooney> it's related to hardware interrupts that are sent by qemu and that the guest must process
12:34 <sean-k-mooney> either via ACPI or the PCI native hotplug mechanism, depending on your machine type and qemu version
12:34 <gibi> kashyap: we have a positive test test_stable_device_rescue_disk_virtio_with_volume_attached and a negative test_stable_device_rescue_disk_virtio_with_volume_attached both failing
12:34 <gibi> ahh
12:34 <gibi> this is the negative test_rescued_vm_detach_volume
12:35 <kashyap> Yeah, this is the negative one that's failing
12:35 <kashyap> What exactly is the negative test doing? /me looks...
12:35 <sean-k-mooney> gibi: in terms of the exact version, I think it's in the release notes, but I'll see if I can find it
12:35 <kashyap> sean-k-mooney: Are you confident it is "related to hardware interrupts that are sent by QEMU"? What evidence is there for it?
12:36 <gibi> sean-k-mooney: thanks. If we know the version number then I can craft a patch that conditionally sets the detach attempts to 1 if the qemu is new enough
12:37 <sean-k-mooney> kashyap: we don't know that for certain, and in fact I think https://bugzilla.redhat.com/show_bug.cgi?id=2007129 is a large part of the problem
12:37 <sean-k-mooney> unless you use virtio-scsi (which we don't by default), each volume attach and detach is a PCI hotplug from the guest perspective, as we add a separate PCI device for each virtio-blk device
12:38 <frickler> kashyap: for downloading logs, I think wget may have issues because the source is swift and not a "normal" webserver. You may want to look at https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_3e2/828280/1/check/devstack-platform-centos-9-stream/3e24d97/download-logs.sh and maybe filter if you only need some subdirs
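The usual pattern with those Zuul log bundles is to grab the generated helper script rather than crawl the Swift container directly; a sketch, assuming the script's default behavior of fetching everything into a temporary directory:

    curl -sO https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_3e2/828280/1/check/devstack-platform-centos-9-stream/3e24d97/download-logs.sh
    bash download-logs.sh    # prints the directory the logs were downloaded into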
12:38 <kashyap> Okay, that bug is about PCI hotplug emulation
12:38 <sean-k-mooney> kashyap: that bug might be unrelated, but artom suggested it in a different context
12:38 <kashyap> sean-k-mooney: Do you know what the negative test is exactly doing?
12:39 <sean-k-mooney> I have not looked explicitly
12:39 <sean-k-mooney> I guess booting into rescue mode and detaching a cinder volume
12:40 <sean-k-mooney> https://github.com/openstack/tempest/blob/7e96c8e854386f43604ad098a6ec7606ee676145/tempest/api/compute/servers/test_server_rescue_negative.py#L136
12:40 <gibi> yep, it does that; I'll try to correlate that with the logs
12:40 <kashyap> So it is trying to rescue a paused instance, and a non-existing instance
12:40 <sean-k-mooney> no
12:41 <sean-k-mooney> it's booting a VM, attaching a volume, then putting it in rescue mode, which reboots it with a new root disk
12:41 <sean-k-mooney> waiting for it to get to RESCUE, meaning it's running
12:41 <sean-k-mooney> then asserting that detach raises a 409 conflict
12:42 <sean-k-mooney> there is no paused instance
12:42 <kashyap> sean-k-mooney: Well. What do you think this is doing, then? - test_rescue_non_existent_server()?
12:43 <kashyap> And test_rescue_paused_instance()
12:43 <sean-k-mooney> it's just asserting that if the server does not exist, the rescue call returns a 404
12:43 <kashyap> And there are also: test_rescued_vm_attach_volume() and test_rescued_vm_detach_volume()
12:43 <sean-k-mooney> the paused one is asserting that you can't call rescue when it's paused
12:43 <sean-k-mooney> yep
12:43 <sean-k-mooney> how is this relevant?
12:44 <sean-k-mooney> the tests all look valid and are asserting what I would expect
12:44 <kashyap> It is relevant in the sense that these are the different tests being run here
12:45 <sean-k-mooney> yes, they are different tests, but they are not using the same VM
12:45 <kashyap> Yep, noted
12:45 <sean-k-mooney> so they should have no impact on each other
12:45 <sean-k-mooney> although I will say, if we get very very unlucky, the random UUID could collide with a real one in the non_existent_instance test
12:46 <sean-k-mooney> statistically, however, we are not going to get UUID collisions
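That test flow can also be reproduced by hand against a devstack, which helps when chasing the libvirt side; a sketch with hypothetical resource names (image, flavor and network are whatever the local cloud provides):

    openstack server create --flavor m1.tiny --image cirros --network private rescue-test --wait
    openstack volume create --size 1 rescue-vol
    openstack server add volume rescue-test rescue-vol
    openstack server rescue rescue-test
    # while the server is in RESCUE, the API is expected to refuse the detach with a 409
    openstack server remove volume rescue-test rescue-vol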
12:48 <gibi> wait a bit, I think we are looking at the wrong test case
12:49 <kashyap> gibi: So you don't think this is the failing test? test_rescued_vm_detach_volume()?
12:49 <gibi> I'm confused
12:49 <gibi> I'm looking at this run: https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_3e2/828280/1/check/devstack-platform-centos-9-stream/3e24d97/controller/logs/index.html
12:50 <kashyap> Same here.
12:52 *** dasm|off is now known as dasm
12:53 <kashyap> gibi: If we can narrow down the exact test action that is causing this, then we can debug it further from the libvirt/QEMU angle.
12:53 <gibi> yeah, I'm trying that
12:53 <gibi> I mean, trying to collect the steps the test took
12:53 <kashyap> Sure, no rush. (I just want to arrive at a reproducer w/ just libvirt - APIs or shell)
12:55 <gibi> https://paste.opendev.org/show/bEtFDBoLqDfotDMPDOq8/
12:56 <gibi> OK, so that test case is failing similarly to the other one, so we can look at either of the two
12:56 <gibi> that paste has the steps from test_rescued_vm_detach_volume
12:56 * kashyap clicks
12:57 <gibi> so yeah, the log correlates with the test case
12:57 <kashyap> gibi: So, it is indeed test_rescued_vm_detach_volume(), then
12:57 <gibi> we boot an instance, then rescue it (basically destroy the domain and start it again with a different disk config), then we attempt to detach a volume while the rescue domain is running
12:57 <gibi> kashyap: yes
12:58 <kashyap> gibi: Please post this info in the bug as a record.
12:58 <gibi> kashyap: but the other test, test_stable_device_rescue_disk_virtio_with_volume_attached, is also failing similarly
12:58 <gibi> kashyap: hence my confusion
12:58 <kashyap> Hm, so both positive and negative are failing similarly
12:58 <gibi> updating the bug...
13:00 <kashyap> gibi: During rescue, by "different disk config" do you mean a fresh, similar config? Or actually different? If so, how is the disk config different before rescue?
13:00 <gibi> we create a domain in a way that it boots from a rescue image but also has the original root fs attached
13:02 <kashyap> I see, noted.
13:05 <gibi> kashyap: https://paste.opendev.org/show/bf9JaJYMYDOX9onjFFy0/ here are the domain XMLs for that nova instance
13:07 <kashyap> gibi: Excellent; I just asked Peter Krempa (he meditates on the libvirt block layer) on #virt (OFTC)
13:09 *** amoralej is now known as amoralej|lunch
13:10 <kashyap> Corresponding QEMU log:
13:10 <kashyap> https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_3e2/828280/1/check/devstack-platform-centos-9-stream/3e24d97/controller/logs/libvirt/libvirt/qemu/instance-0000004d_log.txt
13:10 <sean-k-mooney> kashyap: lyarwood implemented a stable rescue feature; basically we add a new disk on the hw_rescue_bus (typically usb) as the boot disk
13:11 <sean-k-mooney> the default rescue disk is the same image from glance that the VM booted with, but it will be a clean copy of it
13:11 <sean-k-mooney> you can specify an alternative image to use via config or the rescue action, but these tests do not
13:12 <kashyap> I see, noted.
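The stable device rescue behavior mentioned above is driven by image properties; a sketch of how an image is typically marked for it, per the nova docs (image and server names are placeholders):

    openstack image set --property hw_rescue_bus=usb --property hw_rescue_device=disk <image-uuid>
    # an alternative rescue image can also be passed explicitly at rescue time
    openstack server rescue --image <rescue-image-uuid> <server>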
13:14 *** artom__ is now known as artom
13:26 <gibi> kashyap: these are the nova request IDs from the compute log correlated with actions: https://paste.opendev.org/show/bS5Jmmw4PbLirMXtAUyj/
13:27 <gibi> there are two rescue/unrescue pairs
13:27 <kashyap> gibi: Noted; meanwhile, the libvirt dev says:
13:27 <kashyap> "weird, we [libvirt] indeed try to detach a blockdev node that wasn't ever attached"
13:27 <gibi> that is weird indeed :)
13:27 <kashyap> gibi: I sent him an email with you in Cc. He asked for it, as he's in a hurry
13:28 <gibi> thanks for the Cc
13:28 <gibi> I have to jump on a call; I will be back in an hour
13:28 <kashyap> No rush; we can deal with this async.
13:29 <kashyap> gibi: Oh, I don't think we have this output captured anywhere, right? The dev was asking me:
13:29 <kashyap> "please also get me the output of 'qemu-img info' of the copy destination image if it wasn't removed"
13:30 <gibi> copy destination?
13:31 <kashyap> [quote] What breaks is a block copy into an image with "--reuse-external" [this is what Nova uses - i.e. reuse an external file], so we try to obey the metadata. [/quote]
13:31 <kashyap> gibi: Here, an image copy is involved under the hood
13:31 <gibi> interesting..
13:32 <kashyap> Okay, Peter says: "it's okay to just point me to the place formatting the image, I don't need an actual example, just what's put into the metadata"
13:33 * kashyap taps on the table and thinks
13:33 <kashyap> gibi: He does admit that there's a potential libvirt bug here.
13:34 * kashyap --> bbiab
13:43 <gibi> bahh, there are multiple test cases reusing the same nova instance during the testing: https://paste.opendev.org/show/b0OYtRRb5FyZ5pyWgBCu/
13:44 <gibi> hence there is more action on the instance in the compute log than what is in the failing test case itself
13:48 <gibi> OK, I see a bit more now.
13:50 <gibi> The actual tempest test case passes. The detach returns a conflict when the instance is in RESCUE state. That is the end of the test case. _THEN_ tempest starts cleaning up the pieces, and during that it first unrescues the VM and then detaches the volume. This detach should work and remove the volume, but it fails with the error in libvirt.
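Continuing the manual reproduction sketched earlier, the cleanup path tempest takes is simply the reverse of the setup; reusing the hypothetical names, the failing step corresponds to:

    openstack server unrescue rescue-test
    openstack server remove volume rescue-test rescue-vol   # the detach that times out in the CI run
    # on the compute node, check whether the disk really left the live domain
    sudo virsh domblklist instance-0000004d                  # domain name as seen in the CI logs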
13:58 *** amoralej|lunch is now known as amoralej
14:07 <gibi> btw, increasing the timeout from 20 to 60 did not help; the detach still times out
14:08 <gibi> https://zuul.opendev.org/t/openstack/build/61f733fc73834ff0924284dac61c9a4b/log/controller/logs/screen-n-cpu.txt?severity=3
14:13 * kashyap reads back
14:17 <gibi> I don't know what block copy action we do during these sequences
14:23 <gibi> hopefully with this patch I can run only the single test case we want to troubleshoot, so the logs will be smaller and cleaner: https://review.opendev.org/c/openstack/devstack/+/828705
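Locally, running just that one test is a matter of a tempest regex; a sketch from a tempest workspace on the devstack host:

    tempest run --regex 'test_rescued_vm_detach_volume' --concurrency 1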
14:24 <opendevreview> Erlon R. Cruz proposed openstack/nova master: Adds regression test for bug LP#1944619  https://review.opendev.org/c/openstack/nova/+/821840
14:24 <opendevreview> Erlon R. Cruz proposed openstack/nova master: Fix pre_live_migration rollback  https://review.opendev.org/c/openstack/nova/+/815324
14:31 <kashyap> gibi: So the block copy is definitely there, looking at the commands libvirt has sent to QEMU (from the CI log):
14:31 <kashyap> 2022-02-08 15:24:09.109+0000: 72482: info : qemuMonitorSend:914 : QEMU_MONITOR_SEND_MSG: mon=0x7f74e40cde70 msg={"execute":"blockdev-mirror","arguments":{"job-id":"copy-vda-libvirt-2-format","device":"libvirt-2-format","target":"libvirt-4-format","sync":
14:31 <kashyap> "top","auto-finalize":true,"auto-dismiss":false},"id":"libvirt-408"}
14:31 <kashyap> The QEMU keyword here is "blockdev-mirror"
14:32 <kashyap> ... which is what libvirt calls "block copy".
14:33 <kashyap> gibi: When you get a minute, please post your above observation about how the Tempest test passes, but the detach returns a conflict in RESCUE. It is useful for the record.
14:34 <gibi> kashyap: sure. The conflict is from nova; in rescue we don't allow detach. And that part works.
14:34 <kashyap> Okay, so an expected error there.
14:34 <gibi> yep, that is OK and the test case passes, but after the test case tempest cleans up
14:35 <gibi> basically it does the actions in reverse to move back to the starting state
14:35 <gibi> so as the server is in RESCUE state, it unrescues it
14:35 <gibi> and as a volume was attached to the server before the rescue, it tries to detach the volume after the unrescue
14:35 <gibi> and that detach should remove the volume from the domain, but it fails
14:37 <kashyap> I see. And it looks like it couldn't find that volume?
14:37 <kashyap> Maybe this goes back to Peter's comment earlier about how libvirt tries "to detach a blockdev node that wasn't ever attached"
14:38 <kashyap> These weird tests are making my head spin.
14:39 <gibi> during this detach, nova first detaches the volume from the persistent domain, which succeeds
14:39 <gibi> then nova issues the detach command for the live domain and waits for the event
14:40 <gibi> that event is not received within 20 sec, so it issues the command again
14:40 <kashyap> Ah-ha, that makes sense
14:40 <gibi> that command returns the
14:40 <gibi> error message: internal error: unable to execute QEMU command 'device_del': Device virtio-disk1 is already in the process of unplug
14:40 <gibi> then nova retries 6 more times
14:41 <kashyap> Right, and then times out
14:41 <gibi> always getting the same message
14:41 <gibi> and then gives up
14:41 <gibi> this is the log from the detach attempts: https://paste.opendev.org/show/be647YeC57HREuAfwwru/
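When a detach wedges like that, it can help to ask QEMU directly whether the device is still wired in; a sketch using virsh's QMP passthrough, with the domain name taken from the CI logs:

    # is virtio-disk1 still present from QEMU's point of view?
    sudo virsh qemu-monitor-command instance-0000004d --pretty '{"execute":"query-block"}'
    # and is the corresponding PCI device still plugged, i.e. did the guest ever ack the unplug?
    sudo virsh qemu-monitor-command instance-0000004d --pretty '{"execute":"query-pci"}'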
14:41 <kashyap> gibi: Your last 10-ish messages are a great summary of the problem at hand. Can I take and rephrase them into a paragraph on that mail thread?
14:41 <gibi> sure
14:48 <kashyap> gibi: I've also posted the polished-up version here: https://bugs.launchpad.net/nova/+bug/1960346/comments/8
14:48 <gibi> thanks
14:52 <gmann> gibi: ack. thanks
14:52 <opendevreview> Andre Aranha proposed openstack/nova stable/xena: Add check job for FIPS  https://review.opendev.org/c/openstack/nova/+/827895
14:53 <gmann> frickler: I opened it for devstack due to the centos9 libvirt version bump, but it is ok to add nova too
14:53 *** hemna8 is now known as hemna
14:55 <kashyap> frickler: Forgot to respond in the scrollback; yeah, that explains why my `wget` didn't work :) Thank you.
14:55 <gibi> kashyap: these are the libvirtd logs from the first and second detach: https://paste.opendev.org/show/bKANW2WAzfAzEIGcgJX8/ - probably nothing new here, I am just working through the logs...
14:59 <kashyap> gibi: Nice work narrowing it down
14:59 <kashyap> A side tip is:
14:59 <kashyap>     $> grep -Ei '(MONITOR_SEND_MSG|QEMU_MONITOR_RECV_)' libvirtd_log.txt
14:59 <kashyap> (That gets you all the commands libvirt sends to QEMU, and the responses.)
14:59 * kashyap --> a call; back later
15:43 <spatel> folks, I need urgent help to understand what is wrong with nova and RabbitMQ :(
15:44 <spatel> I have rebuilt RabbitMQ but now I am not able to spin up a VM
15:44 <spatel> the VM gets stuck in BUILD
15:45 <spatel> nova-conductor is throwing these errors - https://paste.opendev.org/show/bbSVnr5zGdCCOPdL5tQF/
15:48 <kashyap> No answer, but please don't count on the community to provide "urgent help". That's what vendors are for
15:49 <spatel> kashyap: I understand, just looking for a clue to see what is going on
16:04 <melwitt> gibi: ack, will look
16:05 <gibi> melwitt: thanks
16:11 *** priteau_ is now known as priteau
16:14 <opendevreview> Erlon R. Cruz proposed openstack/nova master: Fix pre_live_migration rollback  https://review.opendev.org/c/openstack/nova/+/815324
16:24 <opendevreview> Andre Aranha proposed openstack/nova stable/xena: Add check job for FIPS  https://review.opendev.org/c/openstack/nova/+/827895
16:29 <opendevreview> Andre Aranha proposed openstack/nova stable/wallaby: Add check job for FIPS  https://review.opendev.org/c/openstack/nova/+/827896
16:30 *** tkajinam is now known as Guest210
16:48 *** priteau is now known as priteau_
16:48 *** priteau_ is now known as priteau
17:02 <opendevreview> Lior Friedman proposed openstack/nova master: Support use_multipath for NVME driver  https://review.opendev.org/c/openstack/nova/+/823941
17:05 <opendevreview> Dmitrii Shcherbakov proposed openstack/nova master: Document remote-managed port usage considerations  https://review.opendev.org/c/openstack/nova/+/827513
17:12 <opendevreview> Lior Friedman proposed openstack/nova master: Support use_multipath for NVME driver  https://review.opendev.org/c/openstack/nova/+/823941
17:40 <gibi> kashyap: fyi, there is a smaller reproduction in https://bugs.launchpad.net/nova/+bug/1960346/comments/10
17:40 <gibi> but I have to drop off now
18:01 <opendevreview> melanie witt proposed openstack/placement master: Make perfload jobs fail if write allocation fails  https://review.opendev.org/c/openstack/placement/+/828438
18:21 *** amoralej is now known as amoralej|off
18:32 <opendevreview> Ghanshyam proposed openstack/nova master: Make more project level APIs scoped to project only  https://review.opendev.org/c/openstack/nova/+/828670
18:54 <chateaulav> gibi: can I get a little more detail on the backporting of 1.3 to 1.2? I have been playing around with it but am not quite sure: is this more about the actual version itself, or about pulling the new values available in 1.3 back to 1.2? This is for https://review.opendev.org/c/openstack/nova/+/828369 and I know my question seems repetitive
19:43 <opendevreview> Merged openstack/nova master: Join quota exception family trees  https://review.opendev.org/c/openstack/nova/+/828185
20:27 <spatel> kashyap: by the way, I found the issue; it was related to the neutron-metadata service, which was causing the problem and holding up the VM build..
21:35 <opendevreview> melanie witt proposed openstack/nova stable/wallaby: libvirt: Add announce-self post live-migration workaround  https://review.opendev.org/c/openstack/nova/+/825178
21:49 *** dasm is now known as dasm|off
23:03 <chateaulav> gibi: found the info I needed. Will add the backports tomorrow.
23:21 <opendevreview> Ghanshyam proposed openstack/nova master: Server actions APIs scoped to project scope  https://review.opendev.org/c/openstack/nova/+/824358
23:21 <opendevreview> Ghanshyam proposed openstack/nova master: Server actions APIs scoped to project scope  https://review.opendev.org/c/openstack/nova/+/824358
23:49 *** prometheanfire is now known as Guest2
23:49 *** osmanlicilegi is now known as Guest0
