Thursday, 2023-09-07

01:40 <opendevreview> Atsushi Kawai proposed openstack/cinder master: HPE XP: Support HA  https://review.opendev.org/c/openstack/cinder/+/892608
01:56 <opendevreview> cuiyeliu proposed openstack/cinder master: Reference - Documentation correction  https://review.opendev.org/c/openstack/cinder/+/893330
02:18 <opendevreview> Atsushi Kawai proposed openstack/cinder master: Follow up: HPE XP: data deduplication and compression  https://review.opendev.org/c/openstack/cinder/+/893331
04:09 <opendevreview> Atsushi Kawai proposed openstack/cinder master: Follow up: HPE XP: support data deduplication and compression  https://review.opendev.org/c/openstack/cinder/+/893331
06:52 *** geguileo is now known as Guest2109
08:22 *** Guest2109 is now known as geguileo
08:45 <sfv880__> Hello reviewers, could you please review 881188: Fix Infinidat driver to inherit compression | https://review.opendev.org/c/openstack/cinder/+/881188 ? Thank you!
10:50 <liuc49> Hello reviewers, could you please review 893330: Reference - Documentation correction | https://review.opendev.org/c/openstack/cinder/+/893330 ? Thank you!
11:38 <opendevreview> XuQi proposed openstack/cinder master: Fujitsu Driver: Add QoS support  https://review.opendev.org/c/openstack/cinder/+/847730
12:53 <roquej> Hello team, from a storage perspective, how do we accomplish a disaster recovery plan with replicated volumes? If I want to use my secondary volume because my primary site has failed?
13:04 <opendevreview> Luigi Toscano proposed openstack/cinder-tempest-plugin master: Volume multiattach: negative test for the old way  https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/894189
13:47 <simondodsley> roquej: Cinder replication is really designed only for full array failover, not individual volume recovery. A full array DR would mean using the `cinder failover` command, but that will fail every volume over, and any non-replicated volumes would become unavailable
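[Editor's aside] The failover workflow described above maps to the `cinder failover-host` CLI in python-cinderclient. A minimal sketch; the host name `cinder@powermax-1` and backend id `secondary` are illustrative placeholders, not values from this log:

```shell
# Fail the whole backend over to its configured replication target.
# All replicated volumes move; non-replicated volumes become unavailable.
cinder failover-host cinder@powermax-1 --backend_id secondary

# After the primary site recovers, fail back using the reserved id "default".
cinder failover-host cinder@powermax-1 --backend_id default
```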
13:51 <roquej> ok understood
14:38 <opendevreview> Pavlo Shchelokovskyy proposed openstack/cinder master: Add noop backup API class  https://review.opendev.org/c/openstack/cinder/+/884897
16:12 <jbernard> dwhite4: would you mind looking at (and possibly assigning to yourself) this bug: https://bugs.launchpad.net/cinder/+bug/2033612
16:19 <dwhite4> jbernard: Chris phrased it better in his reply on the bug, but I think it's a 'won't fix' or not a bug for now, as we don't want to suddenly drop support for older arrays that still rely on MD5. Looking for feedback on whether there's any policy on deprecating/announcing a breaking change in a release note like that.
16:19 <dwhite4> https://bugs.launchpad.net/cinder/+bug/2033612/comments/6
16:21 <JayF> dwhite4: jbernard: I'll note that in Ironic, we had some success adding a flag to remove support for MD5 for operators who did not want it used under any circumstances. It might be an option if MD5 support is not reasonable to deprecate or remove.
16:33 <roquej> simondodsley: in case an end user has to initiate a disaster recovery plan from the array itself (primary site is dead), how will cinder be aware of the change in terms of metadata? Is there any proper way to do it?
16:40 <jbernard> roquej: i believe a failover is initiated from the cinder client and passed through to the driver to implement; we have documentation on replication and the failover process in the repo I believe, will try to find it when i get back
16:43 <roquej> ok I think this is specific to our storage. PowerMax has primary R1 and secondary R2 devices. Replication goes from R1 to R2. In specific circumstances, like an outage, it may be necessary to swap personalities, which means that R1 becomes R2 and R2 becomes R1. How can we update the volume metadata with that kind of information?
18:06 <opendevreview> Ashley Rodriguez proposed openstack/devstack-plugin-ceph master: Remote Ceph with cephadm  https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/876747
18:08 <opendevreview> Ashley Rodriguez proposed openstack/devstack-plugin-ceph master: Remote Ceph with cephadm  https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/876747
18:39 <simondodsley> roquej: that is down to the functionality of the driver, so you would need to ask DellEMC how the PowerMax driver works in a cinder failover situation. As jbernard says, the cinder client will pass through commands targeted for the primary array to the secondary array, based on the `replication_device` parameter in the primary's backend configuration in cinder
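[Editor's aside] The `replication_device` parameter mentioned above is set per backend section in cinder.conf. A minimal sketch; the backend name, IP, and credentials are hypothetical placeholders, and every key except the required `backend_id` is driver-specific:

```ini
# /etc/cinder/cinder.conf -- illustrative fragment, not taken from this log
[DEFAULT]
enabled_backends = powermax-1

[powermax-1]
volume_backend_name = powermax-1
# backend_id names the replication target used by `cinder failover-host`;
# the remaining key:value pairs depend on the specific volume driver.
replication_device = backend_id:secondary-array,san_ip:10.0.0.2,san_login:admin,san_password:secret
```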
18:44 <happystacker> simondodsley: I'm part of DellEMC actually ;-) I understand how the cinder failover-host works and that's fine. My question is more: if an end user operates manually on the PowerMax array without the help of cinder, in case of an outage for example, how can we align the volume metadata with the current situation? (Primary becomes secondary and vice versa)
18:45 <simondodsley> happystacker: ah - users doing manual things on a backend outside of the knowledge of cinder is bad. The DB will get all confused and will require manual intervention. Not pretty. I always tell our users NEVER to touch a volume that was created by OpenStack
18:58 <happystacker> well I agree in some way. The problem is that cinder doesn't have the ability to do every array operation, and as a result the user has to operate manually. The typical use case I see is when they need to swap site personality as described. There is a command from the PowerMax which can do that, but nothing in the cinder space, unless I missed something
20:39 <opendevreview> Eric Harney proposed openstack/cinder master: RBD: tpool.Proxy client object  https://review.opendev.org/c/openstack/cinder/+/894226
23:42 <opendevreview> Takashi Kajinami proposed openstack/cinder master: Deprecate Windows OS support  https://review.opendev.org/c/openstack/cinder/+/894237

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!