The multi-pathing policies are connectivity-based and therefore apply to all the LUNs enclosed in the same physical entity, i.e., the storage array instance. It therefore makes sense to set these policies on a per-storage-array-instance basis, avoiding further configuration steps whenever new LUNs are added to an existing storage array and exposed to the same host.
VxDMP recognizes each specific array through array-specific policy modules and employs the algorithm best suited to providing access to the LUNs on that array. The policies are pre-configured (out of the box) based on the array characteristics and hence do not require any user configuration: the administrator can simply plug in the array and start using it.
However, if the administrator chooses, they can override the defaults for a storage array, and these settings are persistent. VxDMP offers a choice of multiple I/O policies, as well as multiple proactive error detection and recovery policies. In a VMware environment, these can be changed directly from vCenter using the VxDMP plugin, or through the VxDMP remote administration CLI.
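As a sketch of what such an override might look like with the `vxdmpadm`-style CLI (the enclosure name `emc_clariion0` is a placeholder; list the enclosures on your own host to find the actual names, and check your release's documentation for the supported policy values):

```shell
# List the enclosures (storage array instances) VxDMP has discovered.
vxdmpadm listenclosure all

# Show the current I/O policy for one enclosure.
vxdmpadm getattr enclosure emc_clariion0 iopolicy

# Override the default I/O policy for that enclosure; the setting persists.
vxdmpadm setattr enclosure emc_clariion0 iopolicy=minimumq
```

Because the policy is set on the enclosure rather than on individual LUNs, any new LUNs exposed from the same array inherit it automatically.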
VxDMP also understands LUN path characteristics, especially in the case of Asymmetric Logical Unit Access (ALUA) arrays, and uses only the Active/Optimized LUN paths for I/O traffic, thus retaining the I/O load balance across the array's storage controllers as configured by the storage administrator. Similarly, when connectivity is restored after a SAN outage, DMP automatically fails back to the initial I/O load distribution set by the storage administrator.
VxDMP employs several advanced error detection and recovery algorithms to recover more quickly from connectivity failures with less CPU impact, which is vital when operating in the hypervisor.
The administrator can enable or disable these advanced features, but keeping them enabled is recommended for best performance. Administrators can also set host-centric policies, such as how frequently to check for connectivity restoration and how many worker threads to employ for maintenance tasks. The default values are optimal for most cases and should not require any changes.
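A minimal sketch of tuning these host-centric policies, assuming the classic DMP tunable names `dmp_restore_interval` and `dmp_daemon_count` (verify the names and valid ranges on your release before changing anything; as noted above, the defaults rarely need adjustment):

```shell
# Inspect all DMP tunables with their current and default values.
vxdmpadm gettune all

# Check for connectivity restoration every 60 seconds.
vxdmpadm settune dmp_restore_interval=60

# Set the number of kernel threads employed for DMP maintenance tasks.
vxdmpadm settune dmp_daemon_count=10
```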
I would like to hear about your experiences with VxDMP, and your feedback on its ability to operate out of the box while providing enterprise-grade multi-pathing.