*** dviroel_ is now known as dviroel | 09:40 | |
madamski | hi, I have a problem with CephFS snapshots created by Manila not being exposed in the .snap directory properly and I'm wondering if it's a known issue | 09:47 |
madamski | for some reason only the first snapshot is shown there, but it's always empty | 09:48 |
madamski | tested with Yoga and Zed on Ceph Nautilus, Octopus and Quincy | 09:48 |
madamski | I thought it might be a problem with Ceph, but when I create subvolumes and snapshots from scratch with the ceph fs subvolume interface, it works ok | 09:49 |
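The manual workflow madamski describes can be reproduced with the `ceph fs subvolume` interface. A minimal sketch, assuming a CephFS filesystem named `cephfs`; the subvolume and snapshot names (`sub0`, `snap0`) are hypothetical placeholders:

```shell
# Assumes a CephFS filesystem named "cephfs"; subvolume and snapshot
# names (sub0, snap0) are hypothetical placeholders.
ceph fs subvolume create cephfs sub0
ceph fs subvolume snapshot create cephfs sub0 snap0
# List the snapshots that exist for the subvolume
ceph fs subvolume snapshot ls cephfs sub0
```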
vkmc | madamski, hey there | 09:49 |
vkmc | madamski, do you have logs from the m-shr we can look into? | 09:49 |
vkmc | madamski, also, which versions are you using? for both OpenStack and Ceph | 09:50 |
madamski | vkmc, I can try to produce some useful logs | 09:56 |
vkmc | madamski, I'll try to reproduce in my env too | 09:56 |
madamski | vkmc: as to the Manila version I'm using, it's Zed from Ubuntu Cloud Archive, 15.0.0-0ubuntu1~cloud0 | 09:57 |
vkmc | madamski, right, so I assume Ceph is the version shipped by the distro | 09:57 |
vkmc | which I think is Quincy | 09:57 |
madamski | vkmc: and Ceph is the newest Quincy, 17.2.5 from ceph.com | 09:57 |
vkmc | oki | 09:58 |
madamski | vkmc: got it, hope it's useful: https://dl.plik.ovh/file/G9UhlXz6A1Hg9Kw9/8a26U8fr0bHUg6i0/log | 11:25 |
vkmc | madamski, thanks, will take a look and get back to you | 11:26 |
rdupontovh | Hello everyone, I am currently testing the "NetApp ONTAP: Add support for multiple subnets per availability zone" feature released in the Yoga version, and I was wondering why the update of share export locations is conditioned on the share having replica_state "active", as shown in this line of code https://github.com/openstack/manila/blob/stable/yoga/manila/share/drivers/netapp/dataontap/cluster_mode/lib_multi_svm.py#L2234 ? | 13:47 |
rdupontovh | I am trying to understand what's behind this, thanks a lot in advance for anyone having some information on this topic :) | 13:47 |
dviroel | felipe_rodrigues: ^ | 13:59 |
vkmc | madamski, can you check one level up? | 14:08 |
vkmc | madamski, based on your logs, check /volumes/_nogroup/63fe53b8-1bc3-4b07-a39b-5f80cc68cb85/.snap | 14:08 |
vkmc | madamski, new versions of Ceph store snaps there | 14:08 |
vkmc | madamski, this was done this way to allow users to preserve subvolume snapshots even after a subvolume is deleted | 14:09 |
vkmc | madamski, users need to clone the subvolume snapshot data to use it | 14:09 |
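Putting vkmc's hints together: the snapshots live in the `.snap` directory one level above the subvolume's data directory, and using their data means cloning them into a new subvolume. A sketch against a live cluster, assuming a client mount at `/mnt/cephfs` and a filesystem named `cephfs` (both assumptions); the subvolume UUID is the one from madamski's logs, and `restored-sub` is a hypothetical clone name:

```shell
# Mount point /mnt/cephfs and filesystem name cephfs are assumptions;
# the subvolume UUID is the one from the logs discussed above.
ls /mnt/cephfs/volumes/_nogroup/63fe53b8-1bc3-4b07-a39b-5f80cc68cb85/.snap
# To use a snapshot's data, clone it into a new subvolume, then poll the
# clone status until it reports complete:
ceph fs subvolume snapshot clone cephfs <subvolume> <snapshot> restored-sub
ceph fs clone status cephfs restored-sub
```

Cloning rather than reading the `.snap` directory in place is what lets Ceph preserve subvolume snapshots after the parent subvolume is deleted, which is the behaviour change vkmc describes.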
madamski | vkmc: that works, thanks | 14:18 |
madamski | vkmc: there are indeed 2 snapshots with the expected content one level higher | 14:20 |
vkmc | madamski, great! | 14:20 |
madamski | vkmc: so, we cannot rely on the old behaviour, but have to clone it into the new volume, got it | 14:21 |
madamski | vkmc: well, thanks a lot for your time, then, that clears things up for me | 14:22 |
vkmc | madamski, I'll open a tracker so we add proper docs to the Ceph drivers | 14:24 |
vkmc | madamski, thanks for bringing this up | 14:24 |
felipe_rodrigues | Hi rdupontovh, it was developed by sfernand.. I couldn't identify why.. I'll ask him. Is it causing any bug ? | 15:48 |
*** dviroel is now known as dviroel|lunch | 15:50 | |
rdupontovh | felipe_rodrigues: hi, no, it is not causing any bugs; I am currently testing this feature to evaluate the feasibility of a new offer, so I am very interested to know why, but it is not a production issue at all :) | 15:58 |
*** dviroel|lunch is now known as dviroel | 16:46 | |
*** dviroel is now known as dviroel|biba | 19:51 | |
*** dviroel|biba is now known as dviroel|biab | 19:51 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!