Cinder multi-attach limitation with Veritas

achal
Level 2

Hi, we are deploying our multi-node cluster solution using Veritas cluster in an OpenStack cloud. In our solution we use external storage, and the LUNs/volumes are attached to all the nodes.

However, there is a limitation in Cinder: multi-attach functionality is not available.
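To make the limitation concrete, here is a minimal sketch using the openstacksdk Python library (the cloud, server, and volume names are placeholders, and the exact SDK calls may vary slightly between versions) showing the second attach of the same volume being refused:

```python
# Minimal sketch with openstacksdk; "mycloud", "vm1", "vm2" and the volume
# name are placeholders, and the exact error text depends on the release.
import openstack
from openstack import exceptions

conn = openstack.connect(cloud="mycloud")            # entry from clouds.yaml

vol = conn.block_storage.find_volume("shared-data")
vm1 = conn.compute.find_server("vm1")
vm2 = conn.compute.find_server("vm2")

# The first attach succeeds and the volume goes to "in-use".
conn.compute.create_volume_attachment(vm1, volume_id=vol.id)
conn.block_storage.wait_for_status(vol, status="in-use")

# Without multi-attach, attaching the same volume to a second instance
# is rejected by the API.
try:
    conn.compute.create_volume_attachment(vm2, volume_id=vol.id)
except exceptions.HttpException as err:
    print("second attach rejected:", err)
```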

What I found is this: if we don't use CFS, and use SFHA instead of SFCFSHA, can we resolve the Cinder multi-attach problem?

6 REPLIES

RiaanBadenhorst
Moderator
Partner    VIP    Accredited Certified

Hello,

Can you elaborate on what you mean with "there is a limitation from cinder that multi-attach functionality is not available"?

Do you mean that you're unable to assign the disk to multiple nodes simultaneously?

If so, you'll need to look at an FSS configuration. This works somewhat like VMware's vSAN, where a disk assigned to one node is shared with the other nodes across high-speed interconnects.

I'm not that familiar with OpenStack yet, but are you aware of Veritas Hyperscale, which is a Software Defined Storage service designed specifically for OpenStack by Veritas? It might be useful to ask your local Veritas SE about that as well.

Mouse
Moderator
Partner    VIP    Accredited Certified

Have a look at Hyperscale. For many reasons it's a better fit for SDS on OpenStack.

Hi,

The Veritas cluster solution provides the facility to share external storage among all the nodes in the cluster. This requires the volume to be attached to all nodes, but the problem with Cinder is that it can't attach the volume to all the nodes.

Since CFS is what is responsible for attaching the volume to all the nodes, I would like to remove it, which can be done by using SFHA.

But I am not sure whether this would actually resolve the Cinder limitation.

Using Hyperscale will not be possible for us, as it requires big changes to the application.

Mouse
Moderator
Partner    VIP    Accredited Certified

OK, if you need FSS-like functionality you may want to consider Veritas Access, which is an SDS product that can hide FSS volumes behind a Cinder-compatible driver. Access uses CFS internally, but you don't interact with it directly.

Hi, let me first show you what the limitation is in OpenStack.

Note

The OpenStack Block Storage service is not a shared storage solution like a Network Attached Storage (NAS) of NFS volumes where you can attach a volume to multiple servers. With the OpenStack Block Storage service, you can attach a volume to only one instance at a time.

The OpenStack Block Storage service also provides drivers that enable you to use several vendors’ back-end storage devices in addition to the base LVM implementation. These storage devices can also be used instead of the base LVM installation.

 

Go here for more info.

https://docs.openstack.org/admin-guide/blockstorage-manage-volumes.html

The solution proposed by Cinder is below.

https://specs.openstack.org/openstack/cinder-specs/specs/kilo/multi-attach-volume.html
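For completeness, once you are on a Cinder release and backend driver that actually implement that spec, the multi-attach route would look roughly like this (a sketch with openstacksdk; the "multiattach" volume type with the multiattach="<is> True" extra spec is assumed to have been created by an admin beforehand, and the other names are placeholders):

```python
# Sketch only: assumes openstacksdk, a backend driver that supports
# multi-attach, and an existing "multiattach" volume type. "mycloud",
# "vm1" and "vm2" are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

# Create a volume from the multi-attach capable volume type.
vol = conn.block_storage.create_volume(
    name="shared-data", size=100, volume_type="multiattach")
conn.block_storage.wait_for_status(vol, status="available")

# Attach the same volume to both cluster nodes.
for name in ("vm1", "vm2"):
    server = conn.compute.find_server(name)
    conn.compute.create_volume_attachment(server, volume_id=vol.id)

# Simultaneous access from both guests still needs a cluster-aware file
# system (e.g. CFS) on top of the shared block device.
```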

Now let me tell you what I am thinking: instead of using CFS, I would like to install the Veritas cluster solution without CFS. That means I would not have shared storage among the different VMs. At any one time each VM would have a dedicated storage partition, but when a failover/switchover happens on VM1, the mount point of VM1 will get unmounted and will be mounted on VM2.

Do you think that installing Veritas without CFS would resolve the Cinder issue?
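Just to spell out what I think this means at the Cinder level (a rough sketch with openstacksdk, outside of anything VCS does on its own; all names are placeholders): without multi-attach the volume can only be attached to the active node, so a failover would also have to move the attachment from VM1 to VM2, something like this:

```python
# Rough sketch with openstacksdk; "mycloud", "vm1", "vm2" and "shared-data"
# are placeholders. VCS would still handle the unmount on VM1 and the
# mount on VM2 inside the guests.
import openstack

conn = openstack.connect(cloud="mycloud")

vol = conn.block_storage.find_volume("shared-data")
old_node = conn.compute.find_server("vm1")   # node being failed away from
new_node = conn.compute.find_server("vm2")   # takeover node

# Detach from VM1; for this Nova API the attachment id is the volume id.
conn.compute.delete_volume_attachment(server=old_node, volume_attachment=vol.id)
conn.block_storage.wait_for_status(vol, status="available")

# Re-attach to VM2, which can then see the block device and mount it.
conn.compute.create_volume_attachment(new_node, volume_id=vol.id)
```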

 

Regards,

RiaanBadenhorst
Moderator
Partner    VIP    Accredited Certified

Hello,

A Veritas file system in a CFS cluster is attached to both systems, and they both access it at the same time.

A Veritas file system in a non-CFS cluster is also attached to both systems, but they don't access it at the same time.

Those were the options available to you before Veritas came up with FSS. So, as explained in the previous post, you can have two or more systems each with only an internal disk (or a disk assigned only to it), and you can then export that disk so that the other nodes can see it as well. This export relies on the cluster interconnects.

At a high level it basically works like vSAN in VMware: multiple nodes have a disk and it gets virtualized so that all nodes can access it, without needing to assign the LUN to multiple nodes.

Is this clear now?