
VEA configuration steps using SAN

Sridhar_sri
Level 5
Hi all,
I have two Solaris servers connected to a shared storage array. When I launch VEA from server 1 and create a disk group followed by a volume, it works. Similarly, when I do the same from the second server, it also works, but it ends up allocating a separate volume.

What I want is to see the volumes that were mounted on server 1 (after unmounting them from server 1) on server 2.

Is this possible? That is, I want the volume to be common, so that either server can access it at any instant; if one fails, the secondary server can take over.

Can anyone help me with this?


Regards,
Sri

6 REPLIES

ScottK
Level 5
Employee
Hi Sri,

I'm not entirely following, but here goes.

It sounds like you want a disk group that can be accessed by either host. A traditional disk group can do this. To access the disk group from server 1, you import it on that host; the command is vxdg import -- see here for details:
http://sfdoccentral.symantec.com/sf/5.0MP3/solaris/manpages/vxvm/man1m/vxdg.html
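For concreteness, a minimal sketch of the import on server 1, assuming a disk group named appdg containing a VxFS volume appvol (both names are hypothetical, not from this thread):

    # on server 1: import the disk group and start its volumes
    vxdg import appdg
    vxvol -g appdg startall

    # mount the VxFS file system using the standard /dev/vx device path
    mount -F vxfs /dev/vx/dsk/appdg/appvol /app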

Now, if you want to access the disk group from server 2, you deport it from server 1 -- vxdg deport, same man page as above -- and import it on server 2.
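Continuing the hypothetical appdg/appvol names, the failover handoff would look roughly like this:

    # on server 1: stop using the disk group and release it
    umount /app
    vxdg deport appdg

    # on server 2: take over the disk group
    vxdg import appdg
    vxvol -g appdg startall
    mount -F vxfs /dev/vx/dsk/appdg/appvol /app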

If you want simultaneous access, you need Storage Foundation Cluster File System (SFCFS). This allows shared disk groups. A shared disk group is imported simultaneously on anywhere from 2 to 32 servers, so if you want server 1 to stop accessing the dg and server 2 to start accessing it, there is no unmount/deport/import/mount wait time; a number of customers do this.

However, shared disk groups leave the locking up to you or the application. A shared disk group will happily allow both server 1 and server 2 to write simultaneously to the same block on the same volume, and if your application or failover scheme isn't aware of this, your data can be corrupted. So we generally recommend one of the following (see the sketch after this list):
a. A parallel application, such as Oracle RAC, Sybase ASE CE, Informatica, TIBCO EMS, or MQSeries, where the application itself coordinates access across the different servers.
b. A workload whose semantics allow 'last writer wins' behavior, such as NFS or FTP.
c. A workload or application where only one node writes and all other nodes read; a number of billing applications fall into this category.
d. If none of the above, use a cluster server to handle application startup, shutdown, and failover. We recommend Veritas Cluster Server (VCS).
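For comparison, a rough sketch of the shared (SFCFS/CVM) case once the cluster is up; again the appdg/appvol names are hypothetical, and -o cluster is the shared-mount form described in the SFCFS documentation:

    # on the CVM master node: import the disk group as shared
    vxdg -s import appdg
    vxvol -g appdg startall

    # on each node: mount the file system in cluster (shared) mode
    mount -F vxfs -o cluster /dev/vx/dsk/appdg/appvol /app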

Sridhar_sri
Level 5
Thanks Scott. Your explanation clears up most of my questions, but I have one query here.

How do you want me to create the disk group as a shared disk group in the cluster? Do you want me to create the disk group resource as parallel during cluster configuration?

Thanks,
Sri

g_lee
Level 6
Sri,

If you want a shared disk group (concurrent access, i.e. both systems have the dg imported and the filesystems mounted at the same time), you need Storage Foundation Cluster File System to import the dg as shared and manage the concurrent access. In other words, do not create a parallel disk group resource for a traditional disk group, as this can result in data corruption.

As Scott mentioned, if you just need failover access (i.e. the dg only needs to be on one node at a time, and if one node fails the group fails over to the secondary node), then provided both nodes can access the same storage (i.e. both see the disks), you just deport the dg from the first node and import it on the second. You only create the disk group once (on one node); to use it on the second node, just deport it from the first and import it on the second.

Note that in the failover case, volume access on the secondary node won't be instantaneous, as it will depend on how long the service group takes to fail over and come up on the second node.
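To make the failover case concrete, here is a hedged sketch of configuring a VCS failover service group from the command line (Scott's option d). The group, resource, and device names are all hypothetical; adjust for your environment:

    # make the VCS configuration writable
    haconf -makerw

    # failover service group that can run on either server
    hagrp -add app_sg
    hagrp -modify app_sg SystemList server1 0 server2 1
    hagrp -modify app_sg AutoStartList server1

    # DiskGroup resource: imports/deports appdg on failover
    hares -add app_dg_res DiskGroup app_sg
    hares -modify app_dg_res DiskGroup appdg
    hares -modify app_dg_res Enabled 1

    # Mount resource: mounts the VxFS volume once the dg is imported
    hares -add app_mnt_res Mount app_sg
    hares -modify app_mnt_res MountPoint "/app"
    hares -modify app_mnt_res BlockDevice "/dev/vx/dsk/appdg/appvol"
    hares -modify app_mnt_res FSType vxfs
    hares -modify app_mnt_res FsckOpt %-y
    hares -modify app_mnt_res Enabled 1

    # the mount depends on the disk group being imported first
    hares -link app_mnt_res app_dg_res

    # save and close the configuration
    haconf -dump -makero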

Sridhar_sri
Level 5
What is this Storage Foundation Cluster File System that imports the dg?

How do I configure it? I haven't tried this setup so far. Can you point me to somewhere I can find the configuration procedure?


Thanks,
Sri

g_lee
Level 6
SFCFS 5.0MP3 Administrator's Guide - Solaris (HTML version):
http://sfdoccentral.symantec.com/sf/5.0MP3/solaris/html/sfcfs_admin/index.html

PDF version:
http://sfdoccentral.symantec.com/sf/5.0MP3/solaris/pdf/sfcfs_admin.pdf

If you are using another version, see http://sfdoccentral.symantec.com/ and select the relevant link(s) for that version.

For installation: use the installer and select SFCFS from the menu, or run the installsfcfs script directly (located under the storage_foundation_cluster_file_system directory). Note that you may need additional licensing to use the SFCFS components.
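As a rough sketch, assuming the media layout described above:

    # run the SFCFS installer from the mounted media
    cd storage_foundation_cluster_file_system
    ./installsfcfs

    # afterwards, confirm the node has joined the CVM cluster
    # (reports master or slave when clustering is active)
    vxdctl -c mode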

Note that if you are planning on using SFCFS and mounting the filesystems simultaneously, your application needs to be cluster-aware, i.e. able to cope with this. Even if SFCFS allows parallel writes, an application that has not been written to accommodate parallel writes may see them as "corruption" and cause problems.

aamir
Level 3
Hi,

The best way to achieve what you are looking for is to deploy Veritas Storage Foundation HA, Oracle RAC, and Veritas Cluster Server; only then can you achieve this.


Best Regards,
Aamir