Monday, 2021-02-15

*** tosky has quit IRC00:09
*** swp20 has joined #openstack-nova01:33
*** macz_ has joined #openstack-nova02:01
*** macz_ has quit IRC02:06
*** jhesketh has joined #openstack-nova02:34
*** rcernin has quit IRC02:42
*** rcernin has joined #openstack-nova02:54
*** k_mouza has joined #openstack-nova03:08
*** khomesh24 has joined #openstack-nova03:09
*** k_mouza has quit IRC03:12
*** psachin has joined #openstack-nova03:42
*** vishalmanchanda has joined #openstack-nova04:22
*** ratailor has joined #openstack-nova04:38
*** rcernin has quit IRC04:51
*** rcernin has joined #openstack-nova04:56
*** vdrok has quit IRC04:58
*** johnsom has quit IRC04:58
*** flaviof has quit IRC04:58
*** TheJulia has quit IRC04:58
*** masayukig has quit IRC04:58
*** johnsom has joined #openstack-nova04:58
*** vdrok has joined #openstack-nova04:58
*** flaviof has joined #openstack-nova04:58
*** TheJulia has joined #openstack-nova04:58
*** masayukig has joined #openstack-nova04:58
*** sapd1 has quit IRC05:31
*** whoami-rajat__ has joined #openstack-nova05:32
*** k_mouza has joined #openstack-nova05:34
*** k_mouza has quit IRC05:38
*** k_mouza has joined #openstack-nova05:45
*** k_mouza has quit IRC05:50
*** rcernin has quit IRC07:16
*** rcernin has joined #openstack-nova07:19
*** ralonsoh has joined #openstack-nova07:22
*** rcernin has quit IRC07:23
*** rcernin has joined #openstack-nova07:36
*** links has joined #openstack-nova07:47
*** zoharm has joined #openstack-nova07:47
*** k_mouza has joined #openstack-nova07:50
*** k_mouza has quit IRC07:54
*** supamatt has quit IRC08:01
*** luksky has joined #openstack-nova08:07
*** prometheanfire has quit IRC08:10
*** andrewbonney has joined #openstack-nova08:22
*** k_mouza has joined #openstack-nova08:29
*** rcernin has quit IRC08:32
*** k_mouza has quit IRC08:33
*** jraju__ has joined #openstack-nova08:34
*** links has quit IRC08:35
*** ygk_12345 has joined #openstack-nova08:36
*** ygk_12345 has left #openstack-nova08:36
*** prometheanfire has joined #openstack-nova08:42
*** xek has joined #openstack-nova08:43
*** prometheanfire has quit IRC08:47
*** martinkennelly has joined #openstack-nova08:49
*** prometheanfire has joined #openstack-nova08:49
*** prometheanfire has quit IRC09:02
*** rpittau|afk is now known as rpittau09:03
kashyapMorning, folks09:03
kashyapAnyone else hitting these two Nova live migration job failures:09:04
kashyap- tempest.api.compute.admin.test_live_migration.LiveMigrationTest.test_iscsi_volume09:05
kashyap- {0} tearDownClass (tempest.api.compute.admin.test_live_migration.LiveMigrationTest)09:05
kashyapBoth the failures seem to be unrelated to my patch: https://zuul.opendev.org/t/openstack/build/2d510b007eb14485a5492c51efdc731809:06
kashyapLooks like timeouts09:09
lyarwoodit's a detach timeout09:10
kashyapRight:09:11
kashyap    Details: volume d461731f-26eb-419d-8b43-e0fd7a691abd failed to reach available status (current detaching) within the required time (196 s).09:11
lyarwoodhttps://zuul.opendev.org/t/openstack/build/2d510b007eb14485a5492c51efdc7318/log/controller/logs/screen-n-cpu.txt#1106509:11
lyarwoodthe failure to delete the volume is because n-cpu couldn't detach the actual device from the instance09:11
*** prometheanfire has joined #openstack-nova09:11
kashyaplyarwood: I see; what are the options now?  'recheck'?09:12
kashyapMorning, BTW09:12
kashyaplyarwood: gibi: If you get time this week, can you have a gander at the simple patch below it too (a partial revert of an earlier one)? -- https://review.opendev.org/c/openstack/nova/+/775431 (libvirt: Don't drop CPU flags with policy='disable' from guest XML)09:13
*** ociuhandu has joined #openstack-nova09:13
kashyapSorry for the lengthy commit message; it is due to the example.09:14
lyarwoodkashyap: morning :) yeah recheck09:14
kashyaplyarwood: Okay; will do.09:15
lyarwoodkk09:15
kashyapNext week I'm on PTO, trying to see if I can get these across the line this week.09:15
*** hemna has quit IRC09:20
*** mtreinish has quit IRC09:20
*** hemna has joined #openstack-nova09:20
*** zenkuro has joined #openstack-nova09:21
gibikashyap: will look09:21
*** ociuhandu has quit IRC09:22
*** ociuhandu has joined #openstack-nova09:22
kashyapThank you09:23
*** tosky has joined #openstack-nova09:24
*** rcernin has joined #openstack-nova09:28
*** ociuhandu has quit IRC09:34
*** dtantsur|afk is now known as dtantsur09:36
*** derekh has joined #openstack-nova09:56
*** k_mouza has joined #openstack-nova10:00
*** rcernin has quit IRC10:02
*** ociuhandu has joined #openstack-nova10:04
*** ociuhandu has quit IRC10:09
gibilyarwood: thanks for the reply in https://review.opendev.org/c/openstack/nova/+/767533 now it is a lot clearer to me10:12
*** hemanth_n has joined #openstack-nova10:12
lyarwoodgibi: np, just rewrote the commit to make it clear and working through some failures in the nova-next job using q35 at the tip of that series10:13
lyarwoodbrb10:13
hemanth_nlyarwood: sean-k-mooney: elod: zuul build issues are resolved now on nova stable/rocky, can you review this nova backport to rocky when you get time https://review.opendev.org/c/openstack/nova/+/76182410:15
sean-k-mooneyah the pci stat change10:16
sean-k-mooneysure, just finishing an email10:17
*** ociuhandu has joined #openstack-nova10:18
*** derekh has quit IRC10:19
*** derekh has joined #openstack-nova10:24
*** ociuhandu has quit IRC10:28
*** ociuhandu has joined #openstack-nova10:29
*** jangutter_ has quit IRC10:33
openstackgerritLee Yarwood proposed openstack/nova master: nova-next: Drop NOVA_USE_SERVICE_TOKEN as it is now True by default  https://review.opendev.org/c/openstack/nova/+/77557510:33
lyarwoodhemanth_n: ack looking10:34
*** ociuhandu has quit IRC10:34
*** jangutter has joined #openstack-nova10:34
*** ociuhandu has joined #openstack-nova10:38
lyarwoodkashyap: any idea why blkid wouldn't work within an instance using the q35 machine type?10:43
* kashyap thinks10:45
kashyaplyarwood: It rings a faint bell; but my tired brain cannot seem to recall ... let me see if I can rediscover it10:46
sean-k-mooneylyarwood: i have used it before10:46
lyarwoodkashyap: np, the context is https://review.opendev.org/c/openstack/nova/+/708701 where the only failures are due to the IDE bus not being present and these blkid failures to find the config drive10:47
sean-k-mooneylyarwood: it normally works10:47
lyarwoodhmm ah wait I think I know what it is10:47
lyarwoodjust rubber duck'd myself there10:47
lyarwoodthe config drive is normally IDE10:47
sean-k-mooneyide + q3510:47
lyarwoodreally?10:47
kashyaplyarwood: Aaah10:47
sean-k-mooneyya its sata10:47
sean-k-mooneyfor q3510:47
sean-k-mooneyso different name10:47
lyarwoodyeah so it's now SATA so blkid is going to be different10:47
lyarwoodyeah10:47
lyarwoodfun10:47
kashyaplyarwood: Yes, for Q35 it is SATA -- you did the patch yourself :)10:48
lyarwoodyeah I knew the IDE thing10:48
lyarwoodI just didn't connect the dots with the config drive10:48
lyarwoodI thought the label would be the same tbh10:48
lyarwoodhmm10:48
sean-k-mooneythe filesystem label should be10:49
* lyarwood runs lsblk during a test10:51
*** ociuhandu has quit IRC10:55
*** ociuhandu has joined #openstack-nova10:56
*** macz_ has joined #openstack-nova11:01
*** ociuhandu has quit IRC11:01
*** jangutter_ has joined #openstack-nova11:05
*** macz_ has quit IRC11:06
*** dviroel has joined #openstack-nova11:07
*** jangutter has quit IRC11:09
*** zenkuro has quit IRC11:09
*** zenkuro has joined #openstack-nova11:09
lyarwoodhmm can we kill the ovsdbapp.backend.ovs_idl.vlog DEBUG logs in devstack@n-cpu11:20
sean-k-mooney... yes i kind of meant to do that a while ago11:20
sean-k-mooneyit's from the lib we use in os-vif11:21
lyarwoodah11:21
sean-k-mooneythat said it's not that annoying, just a bit noisy11:21
sean-k-mooneynormally i don't notice it at this point but i personally have filtered it out in my head11:21
sean-k-mooneywe can register a log filter in the os-vif init11:22
sean-k-mooneyor do it in nova11:22
lyarwoodyeah it's just killing my devstack env and I assume isn't helpful for our log indexing11:22
sean-k-mooneylyarwood: you are seeing this more recently because we now default to the python library11:23
sean-k-mooneyneutron should also have that log in the l2 agent11:23
sean-k-mooneyif you feel like writing a patch it should probably get set up here https://github.com/openstack/os-vif/blob/master/os_vif/__init__.py#L24-L4911:24
lyarwoodyup I can look at that later today11:25
* lyarwood also has the os-brick patch to write11:25
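The log-filter idea discussed above can be sketched with nothing but the standard library; a minimal example, assuming the ovsdbapp logger name quoted in the conversation (where exactly to register it — os_vif init vs nova — is left open, as it was here):

```python
import logging

# Raise the level on the noisy ovsdbapp IDL logger so its DEBUG chatter
# is dropped while every other logger keeps its own level.  The logger
# name ("ovsdbapp.backend.ovs_idl.vlog") is taken from the discussion.
logging.getLogger("ovsdbapp.backend.ovs_idl.vlog").setLevel(logging.INFO)

# Any DEBUG call through that logger is now a no-op:
noisy = logging.getLogger("ovsdbapp.backend.ovs_idl.vlog")
print(noisy.isEnabledFor(logging.DEBUG))  # False
print(noisy.isEnabledFor(logging.INFO))   # True
```

Because logger levels are hierarchical, this also silences any child loggers under that name without touching the rest of the process.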
lyarwoodkashyap: any idea how I can tell if q35 has a SATA bus?11:27
kashyaplyarwood: 'dmesg' in the guest?11:28
sean-k-mooneywell it will the way we generate the xml by default11:28
lyarwoodoh wait I wonder if support is actually missing in the cirros image11:28
lyarwoodeverything looks fine in the domain11:28
lyarwoodthe device is there and attached as a SATA cdrom etc11:29
lyarwoodit's just not there within the guestOS11:29
kashyaplyarwood: Very good point; it's the CirrOS image11:29
sean-k-mooneywell the cirros image uses an ubuntu 18.04 kernel11:29
sean-k-mooneyso it will have sata support11:29
lyarwoodyou would think11:29
lyarwooddon't they recompile the kernel now?11:30
kashyapYeah, don't assume.11:30
sean-k-mooneyim not11:30
sean-k-mooneyi have used it with sata before11:30
sean-k-mooneyalthough i dont know if i have used it with sata and q3511:31
sean-k-mooneyi have used it with sata and pc11:31
sean-k-mooneylyarwood: are you looking at a gate failure by the way11:32
lyarwoodsean-k-mooney: yeah the tip of my q35 series switches nova-next over to using q35 and some tests that rely on config drive (device tagging etc) are failing11:33
sean-k-mooneyah ok11:33
lyarwoodsean-k-mooney: have a local reproducer anyway and I can't see anything related to SATA in dmesg within the instance11:33
sean-k-mooneyya it's strange11:37
sean-k-mooneyi booted it too11:37
sean-k-mooneyso it boots fine with sata11:37
sean-k-mooneybut lsblk and blkid are empty11:37
* lyarwood tilts head11:38
lyarwoodf33 finds the device fine11:38
lyarwoodweird, I could make the cdrom bus host configurable?11:39
lyarwoodas a workaround11:39
sean-k-mooneywell it's configurable via the image already11:39
sean-k-mooneywhat's odd is it's not in /sys/block11:39
kashyap[root@rawhide-1 ~]# dmesg | grep -i sata | head -111:40
kashyap[    1.724456] ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode11:40
sean-k-mooneyi dont see it in /dev either11:41
sean-k-mooneykashyap empty11:41
lyarwoodI wonder if SCSI would be a better bus for this anyway11:43
kashyapsean-k-mooney: Look for `dmesg | grep -i sata`11:43
kashyapWhat guest did you boot?11:43
lyarwoodah I guess we don't know if we have the controller setup11:43
sean-k-mooneykashyap: i did and it gives nothing11:43
kashyapWhat guest?11:44
lyarwoodyeah same for me with cirros11:44
sean-k-mooneycirros11:44
kashyapRight; my guest is Fedora ("Rawhide" - rolling)11:45
sean-k-mooneyyes which is not what we are trying to debug11:45
sean-k-mooneyit's odd because i'm not seeing anything in /dev11:46
sean-k-mooneyor /sys for the disk11:46
sean-k-mooneyi'm really confused how the root filesystem is currently mounted11:46
kashyapsean-k-mooney: I was posting it for info; not saying Fedora is what we're debugging here11:47
kashyaplyarwood: sean-k-mooney: I see some forums online say CirrOS does not support SATA interface11:47
sean-k-mooneyunless we use the vfat config drive format that would be a problem11:48
sean-k-mooneywe can't really assume scsi will work and virtio-blk is not valid for a cdrom11:49
lyarwoodsean-k-mooney: ah check /etc/modules11:50
sean-k-mooneyjust deleted them to try scsi11:50
lyarwoodscsi should work but I think we can get sata to also work by updating /etc/modules11:50
sean-k-mooneythat would require cirros to be updated11:51
sean-k-mooneygiven it's booting it is able to use sata somehow11:52
sean-k-mooney/etc/modules is just the set of additional modules to load11:52
sean-k-mooneyby the way why i said scsi isn't really an option is windows does not have the virtio-scsi drivers by default11:53
sean-k-mooneyso we could not change the default to scsi11:53
sean-k-mooneywe chose sata because it should work on all operating systems11:53
lyarwoodis scsi == virtio-scsi?11:53
lyarwoodyou have to set the model to virtio-scsi for that to be the case right?11:53
sean-k-mooneyyes you do11:54
sean-k-mooneythe other scsi controllers are all quite old11:54
sean-k-mooneyim not sure we would want to use any of those by default11:54
lyarwoodwell at least Windows would support it then11:54
lyarwoodbut yeah11:54
sean-k-mooneyi'm not sure if it would, it might11:54
sean-k-mooneyusing scsi works by the way for cirros11:55
sean-k-mooneyshows up as normal11:55
lyarwoodnot for me as the config cdrom drive11:56
lyarwoodso I think there's something still missing there11:56
lyarwoodI found https://bugs.launchpad.net/cirros/+bug/1715009 from a while ago11:56
openstackLaunchpad bug 1715009 in CirrOS "Missing modules for scsi cdrom config_drive in ppc64le" [Medium,Fix committed]11:56
lyarwoodso I wonder if there's additional things we need on x86_6411:56
sean-k-mooneyso looking at /lib/modules/...11:57
sean-k-mooneycirros has scsi and virtio drivers11:57
sean-k-mooneybut not sata11:57
sean-k-mooneyit only has the virtio scsi drivers by the way11:57
sean-k-mooneyim guessing the init ramfs has the sata drivers11:58
sean-k-mooneyor it fell back to an in kernel driver11:58
sean-k-mooneyrather than a module11:58
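The device-naming problem being debugged above (the cdrom is IDE on the pc machine type but SATA on q35) is exactly why a config drive is normally located by filesystem label rather than by device path; a rough guest-side sketch, assuming `blkid` is available and the standard "config-2" label:

```shell
# Locate the config drive by its "config-2" filesystem label instead of
# guessing the device name, which changes between IDE (pc) and SATA (q35).
dev=$(blkid -t LABEL=config-2 -o device 2>/dev/null || true)
if [ -n "$dev" ]; then
    echo "config drive at $dev"
else
    echo "no config drive found"
fi
```

On a machine with no config drive (or no blkid) the lookup harmlessly reports nothing found, which is also roughly what the failing cirros guests were seeing here.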
*** nightmare_unreal has joined #openstack-nova11:59
*** ociuhandu has joined #openstack-nova12:00
sean-k-mooneylyarwood: by the way we have a bug with bfv12:02
sean-k-mooneylyarwood: shocking i know but bfv does not select the correct bus when you use sata12:02
sean-k-mooneyit will use scsi instead12:03
sean-k-mooneyi fixed this years ago for non bfv12:03
sean-k-mooneybut it looks like it's back or was never fixed for bfv12:03
kashyapSorry for my choppiness here.  Struggling with something today.12:03
openstackgerritHemanth N proposed openstack/nova stable/rocky: Update pci stat pools based on PCI device changes  https://review.opendev.org/c/openstack/nova/+/76182412:08
*** jraju__ has quit IRC12:09
*** ociuhandu has quit IRC12:22
*** zenkuro has quit IRC12:22
*** ociuhandu has joined #openstack-nova12:23
*** zenkuro has joined #openstack-nova12:23
lyarwoodsean-k-mooney: sorry had to go afk, with hw_disk_bus=sata?12:26
*** ociuhandu has quit IRC12:27
sean-k-mooneylyarwood: ya if you set hw_disk_bus=sata with bfv on ussuri then you get scsi12:30
*** sapd1 has joined #openstack-nova12:34
*** ociuhandu has joined #openstack-nova12:39
*** k_mouza has quit IRC12:39
*** k_mouza has joined #openstack-nova12:39
*** khomesh24 has quit IRC12:44
*** martinkennelly has quit IRC12:45
*** martinkennelly has joined #openstack-nova12:47
*** Luzi has joined #openstack-nova12:55
openstackgerritLee Yarwood proposed openstack/nova master: libvirt: Record the machine_type of instances in system_metadata  https://review.opendev.org/c/openstack/nova/+/76753313:02
openstackgerritLee Yarwood proposed openstack/nova master: nova-manage: Add machine_type get command  https://review.opendev.org/c/openstack/nova/+/76954813:02
openstackgerritLee Yarwood proposed openstack/nova master: nova-manage: Add machine_type update command  https://review.opendev.org/c/openstack/nova/+/77489613:02
openstackgerritLee Yarwood proposed openstack/nova master: WIP nova-manage: Add machine_type list_unset command  https://review.opendev.org/c/openstack/nova/+/77489713:02
openstackgerritLee Yarwood proposed openstack/nova master: nova-status: Add hw_machine_type check for libvirt instances  https://review.opendev.org/c/openstack/nova/+/77064313:02
openstackgerritLee Yarwood proposed openstack/nova master: libvirt: Add a config update workflow test for [libvirt]hw_machine_type  https://review.opendev.org/c/openstack/nova/+/77489813:02
openstackgerritLee Yarwood proposed openstack/nova master: docs: Add admin docs for configuring and updating machine types  https://review.opendev.org/c/openstack/nova/+/77489913:02
*** macz_ has joined #openstack-nova13:02
*** macz_ has quit IRC13:07
*** khomesh24 has joined #openstack-nova13:08
*** xek has quit IRC13:22
*** xek has joined #openstack-nova13:23
*** artom has joined #openstack-nova13:23
*** xek has quit IRC13:26
*** xek_ has joined #openstack-nova13:26
*** hemanth_n has quit IRC13:29
*** khomesh24 has quit IRC13:39
*** khomesh24 has joined #openstack-nova13:55
*** macz_ has joined #openstack-nova13:56
openstackgerritLee Yarwood proposed openstack/nova master: WIP: nova-next: Start testing the 'q35' machine type  https://review.opendev.org/c/openstack/nova/+/70870113:57
*** khomesh24 has quit IRC13:57
*** khomesh24 has joined #openstack-nova13:58
*** macz_ has quit IRC14:00
*** ratailor has quit IRC14:03
*** ociuhandu has quit IRC14:12
*** zenkuro has quit IRC14:14
*** ociuhandu has joined #openstack-nova14:14
*** zenkuro has joined #openstack-nova14:14
*** Luzi has quit IRC14:34
*** tosky has quit IRC14:55
*** zenkuro has quit IRC14:56
*** zenkuro has joined #openstack-nova14:56
*** ociuhandu has quit IRC14:56
*** ociuhandu has joined #openstack-nova14:57
*** efried has joined #openstack-nova15:13
*** psachin has quit IRC15:17
*** vishalmanchanda has quit IRC15:22
*** sapd1 has quit IRC15:27
*** iurygregory_ has joined #openstack-nova15:29
*** iurygregory has quit IRC15:30
*** iurygregory_ is now known as iurygregory15:30
lyarwoodstephenfin: thanks for the review, would you mind if I hold off on that respin until the rest of the series has been looked at? /me doesn't want to hammer CI15:36
stephenfinyup, fine by me. Working through it atm15:36
lyarwoodstephenfin: excellent thanks15:37
openstackgerritLucas Alvares Gomes proposed openstack/nova master: DO NOT REVIEW: Test OVN devstack module  https://review.opendev.org/c/openstack/nova/+/74822615:38
*** ociuhandu has quit IRC15:39
*** ociuhandu has joined #openstack-nova15:40
*** xek_ is now known as xek15:44
*** ociuhandu has quit IRC15:44
*** ociuhandu has joined #openstack-nova15:46
*** khomesh24 has quit IRC15:46
*** macz_ has joined #openstack-nova15:48
*** dklyle has quit IRC15:48
*** ociuhandu has quit IRC15:51
*** ociuhandu has joined #openstack-nova15:52
kukaczhi. running queens, nova disks with images_type=raw; when doing a server resize I'm ending up with an unbootable instance. I noticed that the new (resized up) boot disk is being created as qcow, not raw. Is this a bug or am I rather missing some configuration detail?15:53
lyarwoodkukacz: that sounds like a bug, can you open one and share some logs of the resize?15:57
* lyarwood isn't sure we actually have any coverage in the gate for images_type=raw and resizing root disks15:58
lyarwoodhttps://launchpad.net/nova/+bug for the bug btw if you didn't know15:58
*** tosky has joined #openstack-nova15:59
*** ociuhandu has quit IRC15:59
*** tosky has quit IRC15:59
*** ociuhandu has joined #openstack-nova16:00
*** tosky has joined #openstack-nova16:02
kukaczlyarwood: sure, thanks, I'll collect logs+details into the bug. I'll need to switch the server to a different configuration (lvm backend) for production, so most probably I won't be able to add more details or tests to the bug for some weeks. is it reasonable to file the bug despite such a constraint?16:06
lyarwoodkukacz: yup I might try to reproduce this tomorrow anyway16:08
* lyarwood has downstream customers on queens using images_type=raw16:08
kukaczlyarwood: perfect! I'll do that then16:09
kukaczbtw. what is the current status of LVM backend resize support in Nova? I did a similar test with it and was (correctly) refused, with the logs stating that it's not supported. has that changed in newer releases?16:10
lyarwoodkukacz: I don't think so but let me check16:11
lyarwoodyeah I don't think it is sorry16:19
*** ociuhandu has quit IRC16:29
*** ociuhandu has joined #openstack-nova16:29
kukaczno worries, thanks for checking that!16:34
*** prometheanfire has quit IRC16:34
*** ociuhandu has quit IRC16:34
*** macz_ has quit IRC16:36
*** ociuhandu has joined #openstack-nova16:37
*** prometheanfire has joined #openstack-nova16:39
kukaczlyarwood: is the LVM backend in a good shape otherwise? I am planning to deploy "local disk" compute nodes utilizing LVM as image type after doing own evaluation, comparing it to qcow2/raw options.16:40
lyarwoodkukacz: It doesn't get as much love as the default qcow2 backend and isn't something we support downstream as a result16:41
*** ralonsoh has quit IRC16:44
*** ralonsoh has joined #openstack-nova16:44
kukaczlyarwood: hmm, I see. I've found LVM superior both in performance characteristics (qcow2 on ext4 was badly stealing most of the NVMe random pattern iops) and in availability/operations capabilities though16:53
kukaczlyarwood: when writing "we support ..." you meant a particular vendor or Nova project?16:55
lyarwoodkukacz: Vendor sorry, Red Hat.16:55
kukaczI am considering putting this into production, therefore I am so curious :-)16:56
kukaczok, it might even mean both in that case ;-)16:56
kukaczI might still be missing something. especially I wonder if that performance hit on qcow2 (on default ext4 on 1-2 disk mdraid level 0) is common or there's a factor I've ignored so far16:59
*** ociuhandu has quit IRC17:07
*** ociuhandu has joined #openstack-nova17:08
*** ociuhandu has quit IRC17:12
*** ociuhandu has joined #openstack-nova17:15
*** ociuhandu_ has joined #openstack-nova17:22
*** ociuhandu_ has quit IRC17:23
*** ociuhandu has quit IRC17:24
*** rpittau is now known as rpittau|afk17:47
*** mlavalle has joined #openstack-nova17:52
*** rcernin has joined #openstack-nova17:59
sean-k-mooneykukacz: qcow2 tends not to have a large performance hit if you preallocate the file by the way17:59
sean-k-mooneyat least not on an xfs host filesystem18:00
*** derekh has quit IRC18:00
sean-k-mooneyim not sure you really want to run raid 018:00
sean-k-mooneyraid 10 maybe, but unless you hate your customers' data raid 0 is probably not what you want to use in production18:01
*** johnsom has quit IRC18:01
*** rpittau|afk has quit IRC18:01
*** johnsom has joined #openstack-nova18:02
*** rpittau|afk has joined #openstack-nova18:04
*** rcernin has quit IRC18:04
kukaczsean-k-mooney: I had the preallocation enabled while testing. the performance was good in sequential operations (>3300MiB/s), not in random though. in random, the 750k IOPS seen in lvm or raw are dropping to around 30k. tested on a precreated 80GiB fio file, running 6 instances concurrently18:05
kukaczI was using ext4 without extra parameters except -m0. same as with raw, though18:06
sean-k-mooneykukacz: try turing off the atime? i think18:07
kukaczhmm, I believe fio keeps the single testfile opened for all the time of test. but I might try that anyway18:07
sean-k-mooneyfor xfs i do defaults,noatime,nodiratime,logbufs=8,logbsize=256k,largeio but you will probably want something similar for ext418:08
sean-k-mooneyoh i have /var/lib/docker ext4 defaults,noatime 0 018:08
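The mount options quoted above would normally live in /etc/fstab; a sketch, assuming the nova instances directory sits on its own xfs filesystem (the device label and mount point are illustrative, not from the conversation):

```
# /etc/fstab fragment (illustrative): noatime/nodiratime plus the xfs
# log tuning mentioned above, applied to the nova instances directory
LABEL=nova-data  /var/lib/nova  xfs  defaults,noatime,nodiratime,logbufs=8,logbsize=256k,largeio  0 0
```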
sean-k-mooneykukacz: historically18:09
sean-k-mooneykukacz: lvm had better io performance but it took longer to deploy vms and used more space18:09
sean-k-mooneybasically because even when using thin provisioning it needs to write all the data for the base image every time a vm boots18:10
sean-k-mooneywe don't do any kind of snapshotting in the lvm driver to merge multiple thin provisioned images18:10
sean-k-mooneyfor the qcow backend we share a backing disk and just create a thin snapshot18:10
kukaczI was not enabling thin provisioning with LVM, if you mean that nova.conf parameter18:11
sean-k-mooneyso if you have long running vms lvm will likely perform better even if it's less maintained18:11
sean-k-mooneyif you have short lived vms the qcow is probably going to be better.18:12
kukaczI wanted the extents to be preallocated in a predictable way - prioritizing volumes to be placed on the same disk, if it fits by size18:12
kukaczhmm, the provisioning time was nothing I have even noticed with LVM, but I was using quite smallish but typical ubuntu 20 image (around 5GB)18:14
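For reference, the LVM backend setup being described maps to a handful of nova.conf options on the compute node; a sketch (the volume group name is illustrative, and sparse_logical_volumes=False gives the thick, preallocated volumes described above):

```
[libvirt]
images_type = lvm
images_volume_group = nova-vg    # illustrative VG name, must exist already
sparse_logical_volumes = False   # thick/preallocated LVs, not thin
```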
sean-k-mooneyare you on flash18:15
sean-k-mooneyif you have ssd then you wont notice it too much18:15
sean-k-mooneyif you have HDD and you have multiple vms starting you will18:15
kukaczraid0 was intentional here - focus on performance. if a customer is using local disk, they assume that the single instance might be lost anyway. targeting cloud-native replicated applications18:15
kukaczyes, flash, NVMe SSDs18:16
sean-k-mooneykukacz: well just make sure they are aware of that18:16
sean-k-mooneyi assume you will be providing them with a ha cinder solution? ceph?18:16
sean-k-mooneyso that they can store all their main data elsewhere18:17
kukaczof course, they're having other options for more traditional network storage18:17
kukaczyes, ceph as cinder backend18:17
sean-k-mooneyso local lvm is for low latency local storage18:17
sean-k-mooneyfor root files system/scratch space18:18
sean-k-mooneyit used to work reasonably well for that18:18
sean-k-mooneyit is tempting to just use ceph for everything if you have it18:18
sean-k-mooneypossibly deploying osds on the compute nodes too18:19
sean-k-mooneyif you find the performance of lvm is not worth it the rbd driver would be worth considering18:19
kukaczwe've used the rbd driver so far as the nova backend18:20
kukaczstill, ceph is a shared network storage. if it has an issue, everyone in the cloud has an issue. very binding for the operators18:21
kukaczwhile there's an amount of workload which does not need the features of shared storage. also, they replicate blocks by themselves already, and storing those on ceph, which creates additional copies is counter-productive18:22
kukaczso the idea is to offer a storage layer for these workloads. it would get much better latencies, much higher iops, not being exposed to risk of central storage issue18:24
sean-k-mooneyit's a valid concern and ya that is what many do18:26
sean-k-mooneyif you can live with it being shared storage you can also modify the crush map to create an ssd only pool with no replication and expose that as a separate cinder volume type18:27
sean-k-mooneyanyway if you want the best performance then you should use either lvm or raw18:27
sean-k-mooneyqcow will be slower than both the others but it has some advantages if customers will use things like snapshots18:28
kukaczso far, I've been quite excited with the LVM experience in this usage. only issue not being able to resize18:29
*** macz_ has joined #openstack-nova18:30
kukaczI can even lose a drive from a linear VG, only dropping the volumes stored on that particular drive. the rest of the workload on the same VG keeps running. which is also an advantage compared to raid-018:31
*** dtantsur is now known as dtantsur|afk18:32
openstackgerritStephen Finucane proposed openstack/nova master: libvirt: Parse the 'os' element from domainCapabilities  https://review.opendev.org/c/openstack/nova/+/67379018:34
openstackgerritStephen Finucane proposed openstack/nova master: libvirt: Auto-detect UEFI, secure boot support  https://review.opendev.org/c/openstack/nova/+/68262718:34
openstackgerritStephen Finucane proposed openstack/nova master: WIP: libvirt: Methods to handle request for Secure Boot & non-Q35 machine types  https://review.opendev.org/c/openstack/nova/+/68262818:34
openstackgerritStephen Finucane proposed openstack/nova master: libvirt: Add secure boot loader config support  https://review.opendev.org/c/openstack/nova/+/77568718:34
openstackgerritStephen Finucane proposed openstack/nova master: libvirt: Add missing hints  https://review.opendev.org/c/openstack/nova/+/77568818:34
openstackgerritStephen Finucane proposed openstack/nova master: libvirt: Stop passing around virt_type, caps  https://review.opendev.org/c/openstack/nova/+/77568918:34
openstackgerritStephen Finucane proposed openstack/nova master: libvirt: Report secure boot support to scheduler  https://review.opendev.org/c/openstack/nova/+/77569018:34
openstackgerritStephen Finucane proposed openstack/nova master: WIP: libvirt: Switch to libvirt's firmware auto-selection  https://review.opendev.org/c/openstack/nova/+/77569118:34
openstackgerritStephen Finucane proposed openstack/nova master: WIP: Store UEFI'ness of guest as attribute  https://review.opendev.org/c/openstack/nova/+/77569218:34
openstackgerritStephen Finucane proposed openstack/nova master: WIP: libvirt: Stop checking non-host architectures for SEV  https://review.opendev.org/c/openstack/nova/+/77569318:34
*** andrewbonney has quit IRC18:34
*** macz_ has quit IRC18:35
*** openstackgerrit has quit IRC18:38
*** martinkennelly has quit IRC18:39
kukaczsean-k-mooney: I might retry the perf with xfs. thank you for the parameters. I don't target for top performance comparing lvm-raw-qcow. say 20% difference is perfectly ok if there is better implementation of qcow. just the huge drop (700k -> 30k) I'm seeing is too much already. I have to recheck the formatting/mount options.18:39
sean-k-mooneyraw should basically give you the same performance as lvm with a small amount of overhead for ext4/xfs18:41
sean-k-mooneybut you can resize etc18:41
*** zoharm has quit IRC18:43
kukaczyes, with raw I'm getting good numbers. resize did not work for me today though, perhaps I hit a bug - resizing an instance with raw disk in queens created an unbootable qcow image18:45
sean-k-mooneydid you have the force raw images option set18:45
sean-k-mooneyhttps://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.force_raw_images18:45
kukaczyes, I have that set to True18:46
sean-k-mooneyand https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.use_cow_images set to false18:47
kukaczno, I have use_cow_images unset, which probably renders true18:49
sean-k-mooneytrue is the default18:49
aarentskukacz: seems odd, we are heavy users of raw and resizes are fine, we have use_cow_images set to false as sean suggests, on newton and stein18:50
*** macz_ has joined #openstack-nova18:51
kukaczaarents: interesting. I'll try that configuration18:51
kukaczthough I did not have other issues with raw image deployments and backing files. only issue was this resize18:51
sean-k-mooneyso we removed the raw backend a while ago and folded it into the flat backend https://github.com/openstack/nova/blob/0503bb14409a1a78a82fd8e6f931259fde753df9/nova/virt/libvirt/imagebackend.py#L128618:52
sean-k-mooneyand it gets its resize format dynamically https://github.com/openstack/nova/blob/0503bb14409a1a78a82fd8e6f931259fde753df9/nova/virt/libvirt/imagebackend.py#L593-L59518:53
sean-k-mooneyi think this is what decides https://github.com/openstack/nova/blob/0503bb14409a1a78a82fd8e6f931259fde753df9/nova/virt/libvirt/imagebackend.py#L537-L54518:54
sean-k-mooneykukacz: perhaps qemu-img info was returning something other than raw?18:55
*** macz_ has quit IRC18:55
kukaczsean-k-mooney: yes, file tool identified the disk as "QEMU QCOW Image (v3)"18:57
sean-k-mooneybefore or after the resize18:58
kukacznow I have retried with `use_cow_images = false` and that works18:59
*** rcernin has joined #openstack-nova18:59
*** nightmare_unreal has quit IRC18:59
sean-k-mooneydid you explicitly set the images_type to raw?18:59
kukaczsean-k-mooney: that was after resize. original disk is identified as "DOS/MBR boot sector" = raw18:59
kukaczsean-k-mooney: yes, explicitely `images_type=raw`19:00
sean-k-mooneyi suspect that when we merged raw into flat the behavior changed slightly and now you need to set use_cow_images=false19:00
sean-k-mooneyya it's now an alias for flat, which allows qcow images19:00
sean-k-mooneyit just does not create a backing file when it uses a qcow image like the normal qcow one does19:01
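Summarising the fix that emerged above: with the merged raw/flat backend, keeping raw disks across a resize takes both options together; a nova.conf sketch using the option names linked in the conversation:

```
[DEFAULT]
force_raw_images = True   # convert downloaded images to raw
use_cow_images = False    # defaults to True; left unset, the flat
                          # backend can still produce qcow2 disks

[libvirt]
images_type = raw
```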
*** k_mouza has quit IRC19:03
<sean-k-mooney> stephenfin: bauzas  so i figured out my fixture issue. i was expecting to use the libvirtneutron fixture but i was using the base one.  19:07
<sean-k-mooney> once i fixed that my func tests started passing.  19:07
*** openstackgerrit has joined #openstack-nova19:12
<openstackgerrit> sean mooney proposed openstack/nova master: support per port numa policies with sriov  https://review.opendev.org/c/openstack/nova/+/773792  19:12
*** rcernin has quit IRC19:12
<kukacz> sean-k-mooney, aarents: thank you! the resize issue is resolved. I did a couple more tests and it seems ok now.  19:14
<sean-k-mooney> cool  19:14
<sean-k-mooney> raw should be very very close to lvm, so if that works for you then it will give you the advantage of better testing and more features too  19:15
<kukacz> sean-k-mooney: hmm, that sounds good. so lvm is the least tested option compared to qcow/raw? not being used in production deployments as often?  19:19
<sean-k-mooney> that is my understanding, yes. most people that don't use qcow for local storage use raw; lvm is less common after that  19:25
<kukacz> there's another factor I've noticed: with LVM I could expand capacity instantly just by adding a PV to the VG. with qcow2/raw, I was using mdadm RAID 0, which needs to redistribute blocks when adding a disk. that was such a performance-impacting and lengthy operation that I could not do it online, with production workload running.  19:27
<sean-k-mooney> well you can just put the raw files on an lvm volume  19:28
<kukacz> but I could move the LVM to the OS level here, format it, mount it to /var/lib/nova, and perhaps keep the disk management benefits while utilizing the better integrated qcow or raw backends  19:28
<sean-k-mooney> and expand it the same way  19:28
<kukacz> sean-k-mooney: exactly :-)  19:30
<sean-k-mooney> using raw/qcow does not preclude you from mounting the nova state dir on an lvm volume separately  19:30
<kukacz> sean-k-mooney: you are right, it does not. I had just defined different test scenarios for myself originally, not including this one  19:33
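The layout sean-k-mooney describes can be built and grown with plain LVM tooling underneath the nova state directory. A rough ops sketch (device names like /dev/sdb are placeholders, the VG/LV names are made up, and this is a summary of the discussion, not an official nova procedure):

```console
# one-time setup: put the instances directory on an LV
$ pvcreate /dev/sdb
$ vgcreate nova-vg /dev/sdb
$ lvcreate -n instances -l 100%FREE nova-vg
$ mkfs.ext4 /dev/nova-vg/instances
$ mount /dev/nova-vg/instances /var/lib/nova/instances

# later: grow capacity online by adding another PV, no mdadm reshape needed
$ pvcreate /dev/sdc
$ vgextend nova-vg /dev/sdc
$ lvextend -r -l +100%FREE /dev/nova-vg/instances   # -r also grows the filesystem
```

This keeps nova on the well-tested qcow/raw backends while preserving the instant-expansion property kukacz liked about the lvm images_type.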
<sean-k-mooney> my home openstack has my nova directory in an lvm vg on a raid 6 disk array with an optane ssd as a cache via bcache  19:34
<kukacz> with my current knowledge, it seems the best option. supposing I can get to roughly comparable performance  19:35
<sean-k-mooney> in your case you don't need bcache since it's all ssd, but for me it's nice to hide some of the latency of HDDs  19:36
<sean-k-mooney> anyway o/  19:36
<kukacz> of course. I'm also trying to keep the configuration as simple as possible, avoiding any extra layers. that's why I'm so enthusiastic about the idea of direct LVM usage  19:40
<kukacz> if I put a filesystem (/var/lib/nova/instances) on top of LVM, it will no longer partially tolerate disk failure. that could only be achieved with direct Nova-LVM integration  19:42
<kukacz> seems that I'll need to accept a compromise  19:43
*** rcernin has joined #openstack-nova19:48
*** macz_ has joined #openstack-nova19:51
*** macz_ has quit IRC19:56
*** luksky has quit IRC19:58
*** luksky has joined #openstack-nova19:58
*** rcernin has quit IRC20:05
*** openstackgerrit has quit IRC20:06
*** luksky has quit IRC20:17
*** rcernin has joined #openstack-nova20:21
*** whoami-rajat__ has quit IRC20:22
*** luksky has joined #openstack-nova20:30
*** prometheanfire has quit IRC20:46
*** prometheanfire has joined #openstack-nova20:52
*** rcernin has quit IRC20:57
*** luksky has quit IRC21:00
*** prometheanfire has quit IRC21:06
*** prometheanfire has joined #openstack-nova21:12
*** luksky has joined #openstack-nova21:12
*** jkulik has quit IRC21:17
*** jkulik has joined #openstack-nova21:18
*** hamalq has joined #openstack-nova21:22
*** rcernin has joined #openstack-nova21:23
*** rcernin has quit IRC21:38
*** rcernin has joined #openstack-nova21:38
*** xek has quit IRC21:48
*** bbowen has quit IRC22:04
*** lxkong has joined #openstack-nova22:06
*** slaweq has quit IRC22:18
*** noonedeadpunk has quit IRC22:24
*** noonedeadpunk has joined #openstack-nova22:30
*** mlavalle has quit IRC23:02
*** luksky has quit IRC23:17
*** macz_ has joined #openstack-nova23:25
*** macz_ has quit IRC23:30
*** efried has quit IRC23:41
*** bbowen has joined #openstack-nova23:42

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!