Forum Discussion

Fugitive
Level 4
14 years ago

I/O fencing not working on VCS5.1 on Ldoms2.0

Hello,

 

I have a:

1. Sun T5240 server configured with 4 LDoms, each with 8 GB of memory and 12 vCPUs.

2. Solaris 10 u9 running on the primary domain and on all the guest domains.

3. 2 guest domains running VCS 5.1 and a couple of Oracle service groups.

 

Everything is running fine, but when I configured I/O fencing on the nodes it does not seem to work. The coordinator disks are from a CLARiiON array, 5 GB each. So my question is: does anyone have I/O fencing working on LDoms?

 

And if yes, what am I doing wrong?

 

Following is the output; I can provide the rest as asked.

 

 

vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c0d0s2       auto:none       -            -            online invalid
emc_clariion0_17 auto:cdsdisk    emc_clariion0_17  vxfendg      online
emc_clariion0_18 auto:cdsdisk    emc_clariion0_18  vxfendg      online
emc_clariion0_19 auto:cdsdisk    emc_clariion0_19  vxfendg      online
emc_clariion0_20 auto:cdsdisk    -            -            online
emc_clariion0_21 auto:cdsdisk    -            -            online
 
 
cat /etc/vxfentab
#
# /etc/vxfentab:
# DO NOT MODIFY this file as it is generated by the
# VXFEN rc script from the file /etc/vxfendg.
#
/dev/vx/rdmp/emc_clariion0_17s2
/dev/vx/rdmp/emc_clariion0_18s2
/dev/vx/rdmp/emc_clariion0_19s2
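For reference, this is roughly how I would expect the fencing state to be checked on each node (a sketch only; options may vary slightly by VCS release, and the key-reading test overwrites nothing, but vxfentsthdw does destroy data on the disk it tests):

```shell
# Show the vxfen driver's fencing mode and current cluster membership
vxfenadm -d

# Read the SCSI-3 PGR registration keys on every coordinator disk
# listed in /etc/vxfentab -- with both nodes up, each coordinator
# disk should show one key per cluster node (2 keys here)
vxfenadm -s all -f /etc/vxfentab

# Verify a disk actually supports SCSI-3 persistent reservations.
# WARNING: run only against a scratch disk, the test is destructive.
vxfentsthdw -m
```

If the coordinator disks show no keys (or fewer keys than nodes), the PGR commands are likely not reaching the array from inside the guest domains.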
 
 
 
I have frozen oraSG:
 
hastatus -sum
 
-- SYSTEM STATE
-- System               State                Frozen
 
A  Node1       RUNNING              0
A  Node2       RUNNING              0
 
-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State
 
B  oraSG           Node1       Y          N               OFFLINE
B  oraSG           Node2       Y          N               ONLINE
 
-- GROUPS FROZEN
-- Group
 
C  oraSG
 
-- RESOURCES DISABLED
-- Group           Type            Resource
 
H  oraSG           DiskGroup       oraDG
H  oraSG           IP              oraIP
H  oraSG           Mount           oraMNT
H  oraSG           NIC             oraNIC
H  oraSG           Netlsnr         oraLSN
H  oraSG           Oracle          OraSER
H  oraSG           Volume          oraVOL
 
 
  • This is documented in the 5.1SP1 guide too... so this doesn't seem to be fixed yet...

     

    https://sort.symantec.com/public/documents/sfha/5.1sp1/solaris/productguides/pdf/sfha_virtualization_51sp1_sol.pdf

     

    Guest LDom node shows only 1 PGR key instead of 2 after
    rejecting the other node in the cluster
    For configuration information concerning the guest LDom node shows only 1 PGR
    key instead of 2 after rejecting the other node in the cluster:
    See Figure 5-4 on page 76.
    This was observed while performing a series of reboots of the primary and alternate
    I/O domains on both the physical hosts housing the two guests. At some point
    one key is reported missing on the coordinator disk.
    This issue is under investigation. The vxfen driver can still function as long as
    there is 1 PGR key. This is a low severity issue as it will not cause any immediate
    interruption. Symantec will update this issue when the root cause is found for
    the missing key.
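    One thing worth verifying in this setup: for SCSI-3 PGR to work inside the guests, the coordinator LUNs generally have to be exported to the guest domains as whole physical disks (full-LUN backend, not a file- or slice-backed volume), otherwise the reservation commands cannot pass through the virtual disk layer. A sketch of what that export might look like from the control domain — the device path, service name, and guest name below are made-up placeholders:

    ```shell
    # In the control domain: export a coordinator LUN as a whole-disk
    # backend (slice 2 of the full LUN, not a file or a slice volume)
    ldm add-vdsdev /dev/dsk/c4t5d17s2 fendisk17@primary-vds0

    # Attach the resulting virtual disk to the guest running VCS
    ldm add-vdisk fendisk17 fendisk17@primary-vds0 guest1
    ```

    Repeating this for each of the three coordinator LUNs, on both physical hosts, is what I would check first before assuming the missing-key issue above is the cause.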