06-26-2015 06:02 AM
Hi,
I want to manage a CIFS mount within a Service Group and cannot find the appropriate resource.
I just want to mount a CIFS share in my VCS cluster, acting as a client.
Can anyone let me know which resource type I should use?
Thanks & Regards,
JL
Solved!
06-30-2015 01:54 AM
You can use the Application agent - so for example create a file such as /opt/VRTSvcs/bin/cifs.sh containing:
#!/bin/sh
MOUNT_PATH=$2
SRC_PATH=$3
MOUNT=/bin/mount
UMOUNT=/bin/umount
GREP=/bin/grep

if [ $# -eq 4 ]
then
MOUNT_OPTS="-o $4"
fi

case "$1" in
start)
$MOUNT -t cifs $MOUNT_OPTS $SRC_PATH $MOUNT_PATH
;;
stop)
$UMOUNT $MOUNT_PATH
;;
monitor)
# VCS Application agent convention: exit 110 = online, 100 = offline
$MOUNT | $GREP "^$SRC_PATH on $MOUNT_PATH type cifs" > /dev/null
if [ $? -eq 0 ]
then
exit 110
else
exit 100
fi
;;
*)
echo "Usage: $0 {start|stop|monitor} mount_path src_path"
exit 1
;;
esac
Then create Application resource like:
Application cifs_test (
StartProgram = "/opt/VRTSvcs/bin/cifs.sh start /test //win-svr/test username=mike,password=123"
StopProgram = "/opt/VRTSvcs/bin/cifs.sh stop /test"
MonitorProgram = "/opt/VRTSvcs/bin/cifs.sh monitor /test //win-svr/test"
)
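As a quick sanity check of the monitor branch outside VCS, you can run the same grep pattern against a sample line of mount output (the sample line here is hypothetical; real output can vary slightly between distributions):

```shell
# Hypothetical line as printed by `mount` for a mounted CIFS share
SAMPLE="//win-svr/test on /test type cifs (rw,relatime,username=mike)"

# Same anchored pattern the monitor branch uses; in VCS terms,
# a match means exit 110 (online), no match means exit 100 (offline).
if echo "$SAMPLE" | grep "^//win-svr/test on /test type cifs" > /dev/null
then
echo "online (would exit 110)"
else
echo "offline (would exit 100)"
fi
```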
Mike
06-26-2015 07:23 AM
The Mount agent does not support mounting CIFS, so I think you would need to use the Application agent.
Mike
06-29-2015 06:01 AM
You have selected Linux as OS, right?
Any possibility you can mount the share as NFS?
If we look at the VCS Bundled Agent Guide for Linux, we see that the Mount Agent supports these filesystem types:
vxfs, bind, ext2, ext3, ext4, xfs, nfs, or reiserfs.
06-29-2015 06:58 AM
The other possibility is to mount at boot, as there should be no real need to fail the mount over: you can have the share mounted on all nodes in the cluster at the same time, so the benefit of putting it under VCS control is mainly just monitoring the mount.
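For the mount-at-boot route, an /etc/fstab entry would look something like the following (the server, share, and credentials-file path are made-up examples; this assumes the cifs-utils package is installed):

```
//win-svr/test  /test  cifs  credentials=/etc/cifs.cred,_netdev  0  0
```

where /etc/cifs.cred would hold `username=` and `password=` lines and should be readable by root only; `_netdev` tells the init system to wait for the network before mounting.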
Mike
06-30-2015 01:04 AM
Hi Marianne, thank you for this option. I need to use CIFS as AD user/password is required to mount the share.
06-30-2015 01:09 AM
Yes, this possibility will be used as a last resort if no resource type can manage the mount.
Imagine the case where more mounts are required within the cluster: I can't imagine having 20 or 30 mounts present on both nodes when they are only used by one node at a time. I'm also concerned about security - why keep a share mounted where it doesn't need to be?
06-30-2015 01:12 AM
Curious to know how you would do that from the command line with the mount command, or what the fstab entry would look like if you wanted to mount at boot as per Mike's suggestion above.
The VCS Mount resource only supports the FS types listed above.
06-30-2015 02:35 AM
That looks nice, Mike - even nicer than my solution, which I did with 3 scripts:
root@vcsnodea:/opt/VRTSvcs/bin# cat mount-cifs-start.sh
#!/bin/sh
MOUNT=/sbin/mount.cifs
print_usage ()
{
echo "Syntax: mount-cifs-start.sh cifs-share mount-point credentials uid"
}
# main
EXIT=110
if [ $# != "4" ]; then
echo "Incorrect number of parameters. Please check and run again."
print_usage
exit $EXIT
fi
# Mount the resource
$MOUNT "$1" "$2" -o credentials="${3}",uid="${4}"
[ $? -ne 0 ] && EXIT=100
exit $EXIT
root@vcsnodea:/opt/VRTSvcs/bin# cat mount-cifs-stop.sh
#!/bin/sh
UMOUNT=/bin/umount
print_usage ()
{
echo "Syntax: mount-cifs-stop.sh mount-point"
}
# main
EXIT=100
if [ $# != "1" ]; then
echo "Incorrect number of parameters. Please check and run again."
print_usage
exit $EXIT
fi
# Unmount the resource
$UMOUNT "$1"
[ $? -ne 0 ] && EXIT=110
exit $EXIT
root@vcsnodea:/opt/VRTSvcs/bin# cat mount-cifs-monitor.sh
#!/bin/sh
APPLICATION_IS_ONLINE=110
APPLICATION_IS_OFFLINE=100
if [ $# != "1" ]; then
echo "Incorrect number of parameters. Please check and run again."
exit $APPLICATION_IS_OFFLINE
fi
/bin/mount -t cifs | grep "$1" > /dev/null
if [ $? -eq 0 ]; then
exit $APPLICATION_IS_ONLINE
else
exit $APPLICATION_IS_OFFLINE
fi
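For reference, a hypothetical Application resource wiring these three scripts together (the share, mount point, credentials file, and uid below are made-up values) might look like:

```
Application cifs_share (
StartProgram = "/opt/VRTSvcs/bin/mount-cifs-start.sh //win-svr/test /test /etc/cifs.cred 1000"
StopProgram = "/opt/VRTSvcs/bin/mount-cifs-stop.sh /test"
MonitorProgram = "/opt/VRTSvcs/bin/mount-cifs-monitor.sh /test"
)
```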
Thanks for your suggestions and help!