
Volume Manager: Cannot import diskgroup on second node

Cluster_Server
Level 3

Hello @all,

I hope this is the right forum.

We are installing SFHA 6.1 on two Solaris 10 nodes.

After reinstalling our two nodes via the Jumpstart server, we are facing problems importing the disk groups on the second node.

When we reinstall the nodes, we first delete all disk groups:

vxdg import dg

vxdg destroy dg

and then vxdiskunsetup all disks. This procedure is well tested and has always worked like a charm.
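
For reference, the disk-cleanup step can be scripted along these lines (a sketch rather than our exact procedure; the awk filter keeps only the SAN disks and skips the local ZFS disks, and the vxdiskunsetup path may be /usr/lib/vxvm/bin on some installs):

# unsetup every initialized SAN disk after the disk groups are destroyed
for d in $(vxdisk list | awk '$2 == "auto:cdsdisk" {print $1}'); do
    /etc/vx/bin/vxdiskunsetup -C "$d"
done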

Yesterday our SAN group applied an update in the SAN, and I don't know whether that update is the reason for our problem.

Today, when reinstalling my two nodes, I cannot vxdiskunsetup my disks:

pri720[root]~ {CONSOLE}: vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
disk_0       auto:ZFS        -            -            ZFS
disk_1       auto:ZFS        -            -            ZFS
emc0_0351    auto:cdsdisk    -            -            online
emc1_1fec    auto:cdsdisk    -            -            online thinrclm
emc1_1fe4    auto:cdsdisk    -            -            online thinrclm
emc1_1fe8    auto:cdsdisk    -            -            online thinrclm
emc1_1ff0    auto:cdsdisk    -            -            online thinrclm
emc1_1ffc    auto:cdsdisk    -            -            online thinrclm
emc1_1ff4    auto:cdsdisk    -            -            online thinrclm
emc1_1ff8    auto:cdsdisk    -            -            online thinrclm
emc1_0204    auto:cdsdisk    -            -            online thinrclm
emc1_2000    auto:cdsdisk    -            -            online thinrclm
emc1_2004    auto:cdsdisk    -            -            online thinrclm
emc1_2008    auto:cdsdisk    -            -            online thinrclm
emc2_0bf7    auto:cdsdisk    -            -            online thinrclm
emc2_22ab    auto:cdsdisk    -            -            online thinrclm
emc2_22af    auto:cdsdisk    -            -            online thinrclm
emc2_22a3    auto:cdsdisk    -            -            online thinrclm
emc2_22a7    auto:cdsdisk    -            -            online thinrclm
emc2_228b    auto:cdsdisk    -            -            online thinrclm
emc2_228f    auto:cdsdisk    -            -            online thinrclm
emc2_229b    auto:cdsdisk    -            -            online thinrclm
emc2_229f    auto:cdsdisk    -            -            online thinrclm
emc2_2293    auto:cdsdisk    -            -            online thinrclm
emc2_2297    auto:cdsdisk    -            -            online thinrclm

pri720[root]~ {CONSOLE}: vxdiskunsetup -C emc1_1fec
VxVM vxdisk ERROR V-5-1-531 Device emc1_1fec: destroy failed:
        SCSI-3 PR Reservation Conflict error
VxVM vxdiskunsetup ERROR V-5-2-5052 emc1_1fec: Disk destroy failed.

The second oddity is that all disks now have the status "online thinrclm", whereas before the reinstallation they were only "online"!

I know thinrclm means "Thin Reclamation", but I don't know whether it is part of our problem!

This works:

vxdisksetup -i <disk>

and we can create a diskgroup

vxdg init arena ARENA00=emc1_1fec
vxdg -g arena adddisk ARENA01=emc1_1fe4
vxdg -g arena adddisk arena00=emc2_22ab
vxdg -g arena adddisk arena01=emc2_22af

which is imported on the first node.

But when I deport this group on the first node and import it on the second node, we get an error:

pri620[root]~ {NSH}: vxdg import arena
VxVM vxdg ERROR V-5-1-10978 Disk group arena: import failed:
SCSI-3 PR Reservation Conflict error

Honestly, I'm not sure whether this is a problem in our SAN or a misconfiguration of SFHA 6.1!

Any help would be appreciated!

Regards,

Heinz

 

3 REPLIES

Gaurav_S
Moderator
VIP Certified

Hello,

Let's see whether there are any existing I/O fencing keys on the disks that might be causing the issue.

Please paste the following outputs:

# cat /etc/vxfentab

# /sbin/vxfenadm -s emc1_1fec

or, if the command fails, give the full device path, i.e.

# /sbin/vxfenadm -s /dev/vx/rdmp/emc1_1fec

In the same way, check whether the other disks have any keys on them.

If keys are found, we need to delete them in order to release the reservation locks.
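
For example, a quick scan of every DMP device for keys could look like this (a sketch; the wildcard assumes the device naming shown in the listing above):

for d in /dev/vx/rdmp/emc*; do
    echo "== $d =="
    /sbin/vxfenadm -s "$d"
done

If stale registrations show up, the vxfenclearpre utility that ships with VCS (typically /opt/VRTSvcs/vxfen/bin/vxfenclearpre) can clear SCSI-3 registrations and reservations; run it only with the cluster stopped on all nodes.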

The thinrclm status should not be the cause; it's just an indication that the disks are thin-reclamation capable.
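
To double-check which LUNs VxVM considers thin or thin-reclaim capable, you can run (on either node):

# vxdisk -o thin list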

 

G

TonyGriffiths
Level 6
Employee Accredited Certified

As per the reply above, the I/O fencing keys are worth investigating. Also check the syslog entries for any clues.
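
On Solaris the syslog normally lands in /var/adm/messages, so a quick scan for fencing messages might look like this (a sketch):

# grep -i vxfen /var/adm/messages | tail -50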

 

As for the thinrclm status, VxVM detects this from the array. Could it be that something was changed or enabled on the array? Another possibility is that an old version of SFHA was in use before that did not have the ability to report the thinrclm status.

cheers

tony

Heinz_Mueller
Level 3

Hi @all,

this is my second account :)

The solution is:

Because we have I/O fencing enabled and the disk group is under VCS control, we have to use the option -o clearreserve when importing the disk group manually:

/usr/sbin/vxdg -o clearreserve -t import arena
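
For completeness, the manual move between the nodes then becomes (a sketch; run the deport on the node that currently holds the group):

pri720: vxdg deport arena
pri620: /usr/sbin/vxdg -o clearreserve -t import arena

The -t option makes the import temporary (the autoimport flag is not set on the disks), so the disk group stays under VCS control across reboots.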

Many thanks for your help.

Regards,

Heinz