CFSMount Error
- 14 years ago
OK, so a couple of things to check:
-- First, is disk_0 a shared disk, visible across the cluster nodes? You can verify this with the serial number of the disk:
# /etc/vx/diag.d/vxdmpinq /dev/vx/rdmp/disk_0
OR
If you run vxdisk -o alldgs list on node 2, you should see disk ora1 imported on the second node as well.
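If it helps, here is a rough way to compare the vxdmpinq output from the two nodes. The file names and the serial value below are placeholders, not real output; capture the real output yourself with the vxdmpinq command above on each node.

```shell
# Placeholder capture - in practice, on each node run:
#   /etc/vx/diag.d/vxdmpinq /dev/vx/rdmp/disk_0 > nodeN.out
# and copy the files to one host. The serial below is fabricated.
cat > node1.out <<'EOF'
Serial Number : FAKE-SERIAL-0001
EOF
cat > node2.out <<'EOF'
Serial Number : FAKE-SERIAL-0001
EOF

# If the serial numbers match, both nodes see the same physical disk.
if diff -q node1.out node2.out >/dev/null; then
    echo "same disk on both nodes"
else
    echo "disk_0 is NOT shared"
fi
```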
-- Secondly, the error in engine_A.log says "no such device or address", which means the volume or filesystem is not ready.
Since you imported the diskgroup manually, did you also start the volumes manually? Check whether the volumes inside the diskgroup are ENABLED ACTIVE:
# vxprint -qthg oracledg    (all volumes should be ENABLED ACTIVE; any other state won't work)
To start the volumes:
# vxvol -g oracledg startall
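A quick way to spot any volume that is not ENABLED ACTIVE is to filter the "v" records in the vxprint -qthg output. The sample output below is fabricated for illustration; in practice you would pipe the real command into the awk filter.

```shell
# Fabricated sample of vxprint -qthg output; replace with:
#   vxprint -qthg oracledg | awk '...'
vxprint_out='v  oravol   -        ENABLED  ACTIVE   2097152  SELECT  -  fsgen
v  oravol2  -        DISABLED CLEAN    2097152  SELECT  -  fsgen'

# Volume records start with "v"; field 4 is the kernel state, field 5 the
# volume state. Print any volume that is not ENABLED/ACTIVE.
bad=$(echo "$vxprint_out" | \
    awk '$1 == "v" && ($4 != "ENABLED" || $5 != "ACTIVE") {print $2 " is " $4 "/" $5}')
echo "$bad"
```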
Once the volumes are ENABLED ACTIVE, check whether there is a filesystem on the volume:
# fstyp -v /dev/vx/rdsk/oracledg/oravol
If fstyp reports a filesystem, the mount should succeed.
Since you are using the CFS commands, I believe you have already added the mounts using cfsmntadm. If not, have a look at the cfsmntadm man page.
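For reference, the usual sequence looks like the sketch below (syntax as documented in the cfsmntadm/cfsmount man pages). The diskgroup, volume, and mount-point names are examples only; the commands are echoed rather than executed so you can review them before running as root on a cluster node.

```shell
# Assumed names - substitute your own diskgroup, volume, and mount point.
dg=oracledg
vol=oravol
mnt=/oracle_mnt

# Register the cluster mount with VCS ("all=" means mount on all nodes),
# then bring the mount online. Echoed here as a dry run.
add_cmd="cfsmntadm add $dg $vol $mnt all="
mount_cmd="cfsmount $mnt"
echo "$add_cmd"
echo "$mount_cmd"
```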
Gaurav