
slave node tries to access shared disks

halit_sakca
Level 3

Dear all,

 

I have a two-node cluster (2x T2000) running Solaris 10 with Veritas Storage Foundation and Veritas Cluster Server 5.1 SP1 RP2.

Although node 1 is active and owns the disks, when I reboot node 2 I see errors in its boot log saying the node cannot access the disks; furthermore, this blocks most of the services from starting.

Do you have any idea?

 

Boot device: disk  File and args:

SunOS Release 5.10 Version Generic_147440-01 64-bit

Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.

Hostname: impdneilab2

VxVM sysboot INFO V-5-2-3409 starting in boot mode...

NOTICE: nxge0: xcvr addr:0x0d - link is up 1000 Mbps full duplex

NOTICE: VxVM vxdmp V-5-0-34 added disk array DISKS, datype = Disk


NOTICE: VxVM vxdmp V-5-0-0 removed disk array FAKE_ENCLR_SNO, datype = FAKE_ARRAY


VxVM sysboot INFO V-5-2-3390 Starting restore daemon...

Jun 26 11:23:17 vxvm:vxconfigd: WARNING V-365-1-1 This host is not entitled to run Veritas Storage Foundation/Veritas Cluster Server.

As set forth in the End User License Agreement (EULA) you must complete one of the two options set forth below. To comply with this condition of the EULA and stop logging of this message, you have 53 days to either:

- make this host managed by a Management Server (see http://go.symantec.com/sfhakeyless for details and free download), or

- add a valid license key matching the functionality in use on this host using the command 'vxlicinst' and validate using the command 'vxkeyless set NONE'.


/dev/rdsk/c0t0d0s5 is clean

UX:vxfs mount: ERROR: V-3-20002: Cannot access /dev/vx/dsk/oracle_dg/oradata: No such file or directory

UX:vxfs mount: ERROR: V-3-24996: Unable to get disk layout version

svc:/system/filesystem/local:default: WARNING: /sbin/mountall -l failed: exit status 1

Jun 26 11:23:32 svc.startd[8]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.

Jun 26 11:23:32 svc.startd[8]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)


impdneilab2 console login:
thanks,
Halit
ACCEPTED SOLUTION

halit_sakca
Level 3

Hello,

I found the solution: somehow a stale entry had been left in /etc/vfstab:

 

/dev/vx/dsk/oracle_dg/oradata /dev/vx/rdsk/oracle_dg/oradata /oradata vxfs 0 yes suid
 
The node I had the problem on should not mount that file system at boot, because ownership of the disk group was on the other node at that moment, so I deleted the entry and rebooted.
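For reference, a minimal sketch of the fix (not the exact commands from the thread): comment out the stale VxFS line so that "mountall -l" skips it at boot and the mount is left entirely to VCS on whichever node owns the disk group. Commenting the line out has the same effect as deleting it but is easier to revert; the file /tmp/vfstab.demo below stands in for /etc/vfstab.

```shell
# Stand-in for /etc/vfstab in this sketch; edit the real file on the system.
VFSTAB=/tmp/vfstab.demo

# Sample contents: one local entry plus the stale VxFS entry from the post.
cat > "$VFSTAB" <<'EOF'
/dev/dsk/c0t0d0s1 - - swap - no -
/dev/vx/dsk/oracle_dg/oradata /dev/vx/rdsk/oracle_dg/oradata /oradata vxfs 0 yes suid
EOF

# Prefix the oracle_dg line with '#' instead of deleting it, so the entry
# can be restored later if needed. The '\,' form uses ',' as the address
# delimiter to avoid escaping the slashes in the device path.
sed '\,^/dev/vx/dsk/oracle_dg/oradata,s,^,#,' "$VFSTAB" > "$VFSTAB.fixed"

# Show the result: the VxFS line is now commented out.
grep oradata "$VFSTAB.fixed"
```

After moving the fixed file into place, the file system is mounted only by the VCS Mount resource on the node that currently owns the disk group, so the passive node no longer touches it at boot.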
 
regards,
Halit
