
Unable to create the shared disk group for two-node access

Home_224
Level 6

Hi 

I have an urgent task to create a shared disk group with read/write access from two nodes. My current system runs Solaris 9 with VCS 4.1 MP2. I created the disk group devdg on node 1 and want to share it with node 2.

vxdg list
NAME STATE ID
rootdg enabled 1323751603.17.devuardbs01
devdg enabled,cds 1524544628.114.devuardbs01

vxdg -s import devdg
VxVM vxdg ERROR V-5-1-587 Disk group devdg: import failed: Operation must be executed on master

vxdctl -c mode
mode: enabled: cluster inactive
root@devuardbs01 # vxlicrep -e | grep CVM
CVM_LITE_OPS = Disabled
CVM_FULL = Disabled
CVM_LITE = Disabled
CVM_LITE_OPS = Disabled
CVM_FULL = Disabled
CVM_LITE = Disabled
CVM_FULL#VERITAS Volume Manager = Enabled
CVM_LITE_OPS = Disabled
CVM_FULL = Enabled
CVM_LITE = Disabled

I have no idea what is causing this. Could someone with experience please help me fix this problem?
8 REPLIES

Home_224
Level 6

root@devuardbs01 # vxlicrep -e |grep CFS
VXCFS = Disabled
VXCFS = Disabled
VXCFS#VERITAS File System = Enabled
VXCFS = Enabled



frankgfan
Moderator
   VIP   

In order to create or import a CVM disk group, VxVM needs to be in cluster mode.

Run the commands below on each node to see whether all nodes can access the same shared storage (a rough guide to interpreting the output follows the list).

 

1. vxdisk -o alldgs list

2. vxclustadm -m vcs -t gab startnode

3. gabconfig -a

4. uname -a

5. haclus -display | grep -i vers
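As a rough guide to interpreting the output (based on standard VxVM/VCS behaviour, not on anything specific to your setup):

vxdctl -c mode                       # expect roughly "cluster active - MASTER" or "cluster active - SLAVE"
                                     # once CVM is running; "cluster inactive", as in your output, means CVM is not started
gabconfig -a                         # with CVM running, ports v and w normally show membership in addition to a and h
vxdisk -o alldgs list | grep devdg   # each node should report the same devdg disks (device names may differ per node)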

 

This kind of problem is most often related to shared storage, e.g. one node has lost connectivity to some or all of the shared LUNs. A few OS-level checks for this are sketched below.
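These are generic Solaris/VxVM commands for confirming that each node can actually see the shared LUNs; treat them as a suggestion only, to be adapted to your environment:

echo | format            # list every disk the OS currently sees
devfsadm -C              # rebuild /dev links if LUNs were re-zoned or re-mapped
vxdctl enable            # ask vxconfigd to rescan for newly visible disks
vxdisk -o alldgs list    # check which disks VxVM now sees and which dg they belong to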

If you post the output of the above commands, I should be able to identify the issue and assist you further.

 

Regards,

 

Frank

 

Hi Frank,

Here is the output:

root@devuardbs01 # vxdisk list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 auto:sliced rootdg01 rootdg online
c1t1d0s2 auto:sliced rootdg02 rootdg online
c3t40d0s2 auto:cdsdisk - - online
c3t40d1s2 auto:cdsdisk - - online
c3t40d2s2 auto:cdsdisk - - online
root@devuardbs01 # vxdisksetup -i c3t40d0
VxVM vxdisksetup ERROR V-5-2-1813 c3t40d0: Disk is part of devdg disk group, use -f option to force setup.
root@devuardbs01 # vxdisk list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 auto:sliced rootdg01 rootdg online
c1t1d0s2 auto:sliced rootdg02 rootdg online
c3t40d0s2 auto:cdsdisk - - online
c3t40d1s2 auto:cdsdisk - - online
c3t40d2s2 auto:cdsdisk - - online
root@devuardbs01 # vxclustadm -m vcs -t gab startnode
VxVM vxclustadm ERROR V-5-1-9652 CVM group definition missing in /etc/VRTSvcs/conf/config/main.cf
root@devuardbs01 # gabconfig -a
GAB Port Memberships
===============================================================
Port a gen 1a061d membership 01
Port h gen 1a0623 membership 01
root@devuardbs01 # uname -a
SunOS devuardbs01 5.9 Generic_122300-60 sun4u sparc SUNW,Sun-Fire-V240
root@devuardbs01 # haclus -display | grep -i vers
MajorVersion 4
MinorVersion 1007

root@devuardbs02 # vxdisk list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 auto:sliced rootdg01 rootdg online
c1t1d0s2 auto:sliced rootdg02 rootdg online
c3t44d0s2 auto:cdsdisk devdg01 devdg online
c3t44d1s2 auto:cdsdisk devdg02 devdg online
c3t44d2s2 auto:cdsdisk devdg03 devdg online
c3t44d3s2 auto:cdsdisk - - online
root@devuardbs02 # vxclustadm -m vcs -t gab startnode
VxVM vxclustadm ERROR V-5-1-9652 CVM group definition missing in /etc/VRTSvcs/conf/config/main.cf
root@devuardbs02 # gabconfig -a
GAB Port Memberships
===============================================================
Port a gen 1a061d membership 01
Port h gen 1a0623 membership 01
root@devuardbs02 # uname -a
SunOS devuardbs02 5.9 Generic_122300-60 sun4u sparc SUNW,Sun-Fire-V240
root@devuardbs02 # haclus -display | grep -i vers
MajorVersion 4
MinorVersion 1007

 

frankgfan
Moderator
   VIP   

Hello,

 

The first output I asked for was:

1. vxdisk -o alldgs list

But you sent the plain vxdisk list output instead.

 

Can you run the command "vxdisk -o alldgs list" on both nodes and post the output?

If vxdisk -o alldgs list on both nodes shows the disk group devdg, try the steps below (a consolidated sketch follows the steps).

 

1. On node devuardbs02, run

vxdg deport devdg

(you need to unmount all the volumes in devdg first in order to deport the dg)

 

2. run

vxdg -s import devdg

3. grep devdg /etc/VRTSvcs/conf/config/main.cf

4. grep devdg /etc/VRTSvcs/conf/config/main* | head -10
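Putting steps 1 and 2 together, a rough sequence would look like the sketch below (the mount point is a placeholder; adjust it to whatever file systems live on devdg):

# on devuardbs02, where devdg is currently imported:
umount /oradata_dev          # placeholder: unmount every file system built on devdg volumes
vxvol -g devdg stopall       # stop all volumes in the dg
vxdg deport devdg            # release the dg from this node

# on the node that is (or will become) the CVM master:
vxdg -s import devdg         # import the dg as a shared (CVM) dg
vxvol -g devdg startall      # restart the volumes

# note: the -s import will still fail with "Operation must be executed on master"
# until CVM itself is up on the cluster (see the earlier notes about cluster mode).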

PS - VCS version 4 is a very old version and is no longer supported. Please upgrade to a supported version (free of charge).

Regards,

 

Frank

Hi Frank,

Sorry for sending you the wrong output.

Below is the information for your review.

root@devuardbs02 # vxdg import devdg
root@devuardbs02 # grep devdg /etc/*vcs/conf/config/main.cf
DiskGroup Oracle_devdg_Diskgroup (
DiskGroup = devdg
BlockDevice = "/dev/vx/dsk/devdg/dev_vol01"
Oracle_dev_Netlsnr requires Oracle_devdg_Diskgroup
Oracle_dev_db_Mount requires Oracle_devdg_Diskgroup
// DiskGroup Oracle_devdg_Diskgroup
// DiskGroup Oracle_devdg_Diskgroup
root@devuardbs02 # grep devdg /etc/VRTSvcs/conf/config/main* | head -10
/etc/VRTSvcs/conf/config/main.cf: DiskGroup Oracle_devdg_Diskgroup (
/etc/VRTSvcs/conf/config/main.cf: DiskGroup = devdg
/etc/VRTSvcs/conf/config/main.cf: BlockDevice = "/dev/vx/dsk/devdg/dev_vol01"
/etc/VRTSvcs/conf/config/main.cf: Oracle_dev_Netlsnr requires Oracle_devdg_Diskgroup
/etc/VRTSvcs/conf/config/main.cf: Oracle_dev_db_Mount requires Oracle_devdg_Diskgroup
/etc/VRTSvcs/conf/config/main.cf: // DiskGroup Oracle_devdg_Diskgroup
/etc/VRTSvcs/conf/config/main.cf: // DiskGroup Oracle_devdg_Diskgroup
/etc/VRTSvcs/conf/config/main.cf.05Dec20131206: DiskGroup Oracle_devdg_Diskgroup (
/etc/VRTSvcs/conf/config/main.cf.05Dec20131206: DiskGroup = devdg
/etc/VRTSvcs/conf/config/main.cf.05Dec20131206: BlockDevice = "/dev/vx/dsk/devdg/dev_vol01"

root@devuardbs02 # vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 auto:sliced rootdg01 rootdg online
c1t1d0s2 auto:sliced rootdg02 rootdg online
c3t44d0s2 auto:cdsdisk devdg01 devdg online
c3t44d1s2 auto:cdsdisk devdg02 devdg online
c3t44d2s2 auto:cdsdisk devdg03 devdg online
c3t44d3s2 auto:cdsdisk docdg01 docdg online

frankgfan
Moderator
   VIP   

The diskgroup devdg is imported on node *02, per the output below.

root@devuardbs02 # vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 auto:sliced rootdg01 rootdg online
c1t1d0s2 auto:sliced rootdg02 rootdg online
c3t44d0s2 auto:cdsdisk devdg01 devdg online
c3t44d1s2 auto:cdsdisk devdg02 devdg online
c3t44d2s2 auto:cdsdisk devdg03 devdg online
c3t44d3s2 auto:cdsdisk docdg01 docdg online

What do you want to achieve? Import the dg on both nodes simultaneously?

As all the LUNs are used in the dg devdg, you cannot use any of the existing LUNs to create a new dg (unless you destroy the dg or free up some LUNs from it).

To import the dg as a shared dg (we call it a CVM dg), you need to deport the dg from node *02, then import it by running the command below:

 

#vxdg -s import devdg
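If the shared import succeeds, the dg should carry the "shared" flag. A generic example of what to check (not your actual output):

vxdg list          # a shared dg is typically listed with flags like "enabled,shared,cds"
vxdctl -c mode     # also shows whether this node is currently the CVM MASTER or a SLAVE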

Was the dg ever imported as shared? Can you run the commands below and post the output?

 

1. egrep -i "cvm|cfs" /etc/VRTSvcs/conf/config/main* | head -10

2. hastatus -sum

Regards,

Frank

 

Hi Frank,

I checked and the disk group is not shared. The only thing I can do is import and deport the disk group, so only one host can access the SAN disks at a time. I am not sure if a patch is missing on VCS.

root@devuardbs01 # egrep -i "cvm|cfs" /etc/VRTSvcs/conf/config/main* | head -10
root@devuardbs01 # hastatus -sum

-- SYSTEM STATE
-- System State Frozen

A devuardbs01 RUNNING 0
A devuardbs02 FAULTED 0

-- GROUP STATE
-- Group System Probed AutoDisabled State

B Oracle_MultiNIC devuardbs01 Y N ONLINE
B Oracle_dev_SG devuardbs01 Y N OFFLINE|FAULTED
B Oracle_sit_SG devuardbs01 Y N OFFLINE|FAULTED

-- RESOURCES FAILED
-- Group Type Resource System

C Oracle_dev_SG DiskGroup Oracle_devdg_Diskgroup devuardbs01
C Oracle_dev_SG Oracle Oracle_dev devuardbs01
C Oracle_sit_SG DiskGroup Oracle_sitdg_Diskgroup devuardbs01

frankgfan
Moderator
   VIP   

Hello, 

I tried to reply to your message last night, but the website was down. Anyway, please see my answers inline, on the lines starting with "<<<".

Regards,

Frank

=========================================================

Hi Frank,

I checked and the disk group is not shared. The only thing I can do is import and deport the disk group, so only one host can access the SAN disks at a time. I am not sure if a patch is missing on VCS.

 

<<< Since one node is not able to access the SAN disks, the disk group can only be imported on one host in this 2-node configuration. What you need to do is make sure that the OS of each node can access the shared disks needed for the dg. The output of the command below tells you whether each node can access the disks of the dg:

#vxdisk -o alldgs list | grep devdg

<<< The issue is not likely to be VCS-patch related.

root@devuardbs01 # egrep -i "cvm|cfs" /etc/VRTSvcs/conf/config/main* | head -10

<<< So the dg was never configured as a CVM shared dg.
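For reference, the "CVM group definition missing" error that vxclustadm reported on both nodes means main.cf has no cvm service group. Roughly, such a group (normally generated by the SFCFS/CVM installer rather than written by hand; the cluster name, node IDs and attribute values below are placeholders to adapt) looks like this:

include "CVMTypes.cf"

group cvm (
    SystemList = { devuardbs01 = 0, devuardbs02 = 1 }
    AutoFailOver = 0
    Parallel = 1
    AutoStartList = { devuardbs01, devuardbs02 }
    )

    CVMCluster cvm_clus (
        CVMClustName = your_cluster_name
        CVMNodeId = { devuardbs01 = 0, devuardbs02 = 1 }
        CVMTransport = gab
        CVMTimeout = 200
        )

    CVMVxconfigd cvm_vxconfigd (
        Critical = 0
        CVMVxconfigdArgs = { syslog }
        )

    cvm_clus requires cvm_vxconfigd

Your vxlicrep output does show CVM_FULL enabled, so licensing should not be the blocker here.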


root@devuardbs01 # hastatus -sum

-- SYSTEM STATE
-- System State Frozen

A devuardbs01 RUNNING 0
A devuardbs02 FAULTED 0

-- GROUP STATE
-- Group System Probed AutoDisabled State

B Oracle_MultiNIC devuardbs01 Y N ONLINE
B Oracle_dev_SG devuardbs01 Y N OFFLINE|FAULTED
B Oracle_sit_SG devuardbs01 Y N OFFLINE|FAULTED

-- RESOURCES FAILED
-- Group Type Resource System

C Oracle_dev_SG DiskGroup Oracle_devdg_Diskgroup devuardbs01
C Oracle_dev_SG Oracle Oracle_dev devuardbs01
C Oracle_sit_SG DiskGroup Oracle_sitdg_Diskgroup devuardbs01

 

<<< The output above shows that:

1. Node *02 was faulted.

2. Service group Oracle_dev_SG failed on node *01 (check /var/VRTSvcs/log/engine_A.log and DiskGroup_A.log to find the cause of the failure).

Recommendations:

1. Check the SAN/storage and ensure both nodes can access the shared storage.

2. Review engine_A.log and DiskGroup_A.log to determine why the service group failed to come online and why the DiskGroup resource faulted (a quick way to pull the relevant entries is sketched below).

3. Reboot the cluster (meaning reboot both nodes at the same time).
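A minimal way to pull the relevant log entries, using the log paths mentioned above (the search pattern is just a suggestion):

tail -200 /var/VRTSvcs/log/engine_A.log
tail -200 /var/VRTSvcs/log/DiskGroup_A.log
egrep -i "devdg|Oracle_dev_SG" /var/VRTSvcs/log/engine_A.log | tail -20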

Regards,

Frank