Concurrent Access to two different vendor arrays from same host

PhilBry
Not applicable
Hi,

This may sound like a dumb question. I have a requirement for a customer using VxVM to have access to an IBM and an EMC storage array concurrently. Assuming each array is supported and has its own ASL, can I have parallel access to LUNs presented by both arrays over a SAN through the same FC HBAs in the host? The OS is Solaris 9/10.
From all the information I've read, it seems this would be supported, but I can't find a clear statement to that effect. Can anyone help me on that note?

Thanks,

Phil
1 ACCEPTED SOLUTION

g_lee
Level 6
Phil,

Assuming the arrays are supported, and the hardware/SAN zoning side is supported (i.e. the switches/HBAs can handle concurrent traffic from both arrays through the same paths), then it should also work from a Storage Foundation perspective.

As Gaurav mentioned above, it's not an uncommon occurrence, as customers often do this when migrating from one array to another (i.e. the system has disks presented from the existing array, so LUNs from the new array are zoned in and mirrored to the existing disks to migrate); the only difference in your case is that you will be keeping both arrays rather than removing one of them.

In terms of finding a document stating this specifically, that may be a challenge, as it's a rather specific configuration, so it's more a case of reading between the lines - provided both arrays are supported (i.e. you could connect each of them individually), there is nothing to say you can't use both of them together.
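
If you want to confirm this on the host rather than in a document, you can check what DMP actually sees once both arrays are zoned in - these are standard VxVM commands, and the exact output will depend on your arrays/ASLs:

# vxddladm listsupport all
(lists the ASLs installed and which arrays they claim)
# vxdmpadm listenclosure all
(the IBM and EMC arrays should each appear as a separate enclosure)
# vxdisk -o alldgs list
(LUNs from both enclosures should be visible here)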

regards,
Grace


3 REPLIES

Gaurav_S
Moderator
VIP Certified

Hi Phil,

I don't see any reason why this would not be supported.

It is a generic configuration: whenever someone wants to migrate data from one array to another, they create VxVM volumes mirrored across the arrays. That way you have parallel access, as one plex belongs to one array and the second plex to the other.
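
As a rough sketch of that migration pattern - the disk group, volume and disk names below are only placeholders, substitute your own:

# vxassist -g mydg mirror myvol newarray_disk01
(adds a plex built from a disk on the second array; the sync can be watched with "vxtask list")
# vxplex -g mydg -o rm dis myvol-01
(migration case only - dissociates and removes the plex on the old array once the sync completes)

In your case you would simply keep both plexes instead of running the second command.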

However, I would expect them to be connected on different HBAs.

As for documentation, I also doubt we will find a document explicitly stating the above supportability.

Gaurav

harishkrist
Level 2
Employee

Hi Phil,

Let me make sure I understand your question correctly.

1) You have already configured SAN/storage zoning for the IBM and EMC arrays on your nodes, and you are able to see the disks from the OS side. If zoning is not configured yet, please request it from your SAN/storage team; they will be able to help you.

2) If you can see both sets of disks from the OS side, then VxVM (via the ASL/APM) should recognize the disks at the DMP (Dynamic Multi-Pathing) level. To confirm array support, I would suggest you double-check the Hardware Compatibility List on the Symantec website for your VxVM version. The basic checks are sketched below.
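
For example, on Solaris the checks for 1) and 2) would look roughly like this (nothing array-specific is assumed here):

# cfgadm -al
(confirm the fabric LUNs are configured at the OS level)
# devfsadm
(rebuild device nodes for any newly presented LUNs)
# format
(disks from both arrays should be listed)
# vxdctl enable
(make VxVM/DMP rescan for the new devices)
# vxdisk list
(the disks should now show up under their enclosure-based names)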


Coming to your question: "can I have parallel access to LUNs presented by both arrays over a SAN through the same FC HBAs in the host.  OS is Solaris 9/10."

YES. For parallel access to the LUNs, I would suggest you create a volume with the mirror=enclosure method as shown below, so that you also get good high availability against a failure on either array side.
This command works with Storage Foundation (VxVM) 5.0 MP3 and above:

# vxassist -g DGNAME make VOLNAME SIZE layout=mirror-concat mirror=enclosure logtype=dco dcoversion=20 drl=yes fmr=yes
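
After creating the volume you can verify the layout - DGNAME/VOLNAME below are just whatever names you used in the command above:

# vxprint -g DGNAME -ht VOLNAME
(you should see two plexes, each built from disks of a different enclosure, plus the DCO log)
# vxtask list
(shows the progress of the initial mirror synchronization, if any is still running)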


PS: If you're happy with the answer, please mark the post as solution.