Hi Sri,
I'm not entirely following, but here goes.
It sounds like you want a disk group that can be accessed by either host. A traditional diskgroup can do this. To access the diskgroup from server 1, you import it there; the command is vxdg import -- see here for details:
http://sfdoccentral.symantec.com/sf/5.0MP3/solaris/manpages/vxvm/man1m/vxdg.html
Now, if you want to access the diskgroup from server 2, you deport the diskgroup -- vxdg deport, same man page as above -- from server 1, and import the diskgroup on server 2.
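As a sketch, the handoff looks like this. The diskgroup name (mydg), volume name (myvol), and mount point are placeholders -- substitute your own, and note exact options can vary by VxVM release:

```shell
# On server 1: unmount any filesystems on the diskgroup's volumes first
umount /mnt/myvol            # hypothetical mount point

# Then deport the diskgroup so another host can take it
vxdg deport mydg

# On server 2: import the diskgroup, start its volumes, and mount
vxdg import mydg
vxvol -g mydg startall
mount -F vxfs /dev/vx/dsk/mydg/myvol /mnt/myvol
```

These commands only work on a host running VxVM with access to the shared storage, so treat this as an outline of the sequence rather than something to paste verbatim.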
If you want simultaneous access, you need Storage Foundation Cluster File System, which allows shared diskgroups. A shared diskgroup is imported on anywhere from 2 to 32 servers at once, so if you want server 1 to stop accessing the dg and server 2 to start accessing it, there is no wait time for unmount/deport/import/mount -- a number of customers do this. However, shared diskgroups leave the locking up to you or the application: they will happily allow both server 1 and server 2 to write simultaneously to the same block on the same volume. If your application or failover scheme isn't aware of this, your data can be corrupted. So we generally recommend that you use one of the following:
a. A parallel application, such as Oracle RAC, Sybase ASE CE, Informatica, TIBCO EMS, MQSeries, etc., where the application itself coordinates access across the different servers.
b. A workload whose semantics allow for 'last writer wins' behavior, such as NFS or FTP.
c. A workload or application where only one node writes and all other nodes read; there are a number of billing applications in this category.
d. If none of the above, then use a cluster server to handle application startup, shutdown, and failover. We recommend Veritas Cluster Server (VCS).
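For reference, a shared diskgroup is imported with the -s flag of vxdg. A sketch, assuming the cluster functionality (CVM) is already up and the diskgroup is named mydg (a placeholder):

```shell
# On the CVM master node: import the diskgroup in shared mode
vxdg -s import mydg

# Confirm the import -- the diskgroup's flags should include 'shared'
vxdg list mydg
```

Again, this requires a live SFCFS cluster, so it is a sequence outline rather than a runnable script.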