I am sorry if this question has an obvious answer, but I have searched Veritas, SORT and VMware for one and have not been able to find anything.
A few years ago I set up a VCS environment containing eight Red Hat 6.x servers in a VMware ESX cluster of four servers. One of the requirements from Symantec for this setup was two Distributed vSwitches, one private and one public. This VMware feature requires vSphere Enterprise Plus licenses, and naturally the client was a bit reluctant to buy them as they are quite expensive.
I now have another client asking for a similar setup with InfoScale. I assume the requirements have not really changed and we still need the DVS. Only I cannot find this requirement anywhere, and I need an official confirmation from a Veritas document before the client will consider buying the licenses.
Could anybody point me in the direction of an article or document where I can find this information? There is an old VMware KB article on this, but it is from 2013:
There is no requirement to create two vSwitches, only to have the communications segregated. You can achieve this with distributed port groups/VLANs. Since you have to assign uplinks (physical adapters) to a vSwitch, it might not be feasible to have multiple vSwitches, and it is not really required anyway. One vSwitch with four uplinks or two vSwitches with two uplinks each are really the same thing, except that you're probably limiting yourself with the two-switch option.
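As a rough sketch, the single-vSwitch layout with VLAN-separated port groups could be built like this with esxcli on a standard vSwitch (the vSwitch name, vmnic names and VLAN IDs below are placeholders, not values from this thread):

```shell
# Create one standard vSwitch and attach all four uplinks to it
esxcli network vswitch standard add --vswitch-name vSwitch1
esxcli network vswitch standard uplink add --uplink-name vmnic2 --vswitch-name vSwitch1
esxcli network vswitch standard uplink add --uplink-name vmnic3 --vswitch-name vSwitch1
esxcli network vswitch standard uplink add --uplink-name vmnic4 --vswitch-name vSwitch1
esxcli network vswitch standard uplink add --uplink-name vmnic5 --vswitch-name vSwitch1

# Public port group for the VM network
esxcli network vswitch standard portgroup add --portgroup-name Public --vswitch-name vSwitch1

# Two private port groups for the LLT heartbeats, segregated by VLAN
esxcli network vswitch standard portgroup add --portgroup-name LLT-1 --vswitch-name vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name LLT-1 --vlan-id 101
esxcli network vswitch standard portgroup add --portgroup-name LLT-2 --vswitch-name vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name LLT-2 --vlan-id 102
```

The equivalent separation on a distributed vSwitch would be distributed port groups with the same VLAN IDs; the point is that the segregation comes from the VLANs, not from having two switches.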
Thank you for your reply. As far as licenses are concerned, it does not matter whether we use one or two distributed vSwitches; the feature itself is only available in VMware Enterprise Plus. The question is: are distributed vSwitches required, or are standard vSwitches enough?
In the previous setup we chose to use two vSwitches because it was recommended. As the hardware had enough NICs for separate uplinks for the two vSwitches, it even made sense from an architectural point of view.
Sorry for the delay. It doesn't matter if it's an SVS or a DVS; we just need separation of the traffic, which in itself is a luxury. If you take into consideration that LLT doesn't use much bandwidth (unless you're running FSS), then it's a bit of a waste to physically dedicate a 10 GbE link to the cluster.
I really think the recommendation might not have been written correctly. If you take this back to the traditional physical world, we needed a port and VLAN for separation, not a dedicated switch for LLT.
Thank you for your time.
As a matter of fact I have had customers who have set up their VCS in this way, i.e. dedicated switches for the heartbeats.
So, to sum up, standard vSwitches are fine, provided LLT goes over a separate VLAN, meaning we can use the Standard edition of vSphere and not Enterprise Plus.
You can find the best practices for cluster communications in the VCS Administrator's Guide.
In short, have two private links and a public link. Keep the private links physically separated so a single hardware failure cannot take down both links.
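For reference, that layout maps to an /etc/llttab along these lines on Linux: two high-priority private links plus the public network as a low-priority backup link. The node name, cluster ID and interface names here are placeholders, not values from this thread:

```
set-node sys1
set-cluster 101
# two private heartbeat links, each on its own VLAN/port group
link eth1 eth1 - ether - -
link eth2 eth2 - ether - -
# public network as a low-priority heartbeat (optional)
link-lowpri eth0 eth0 - ether - -
```

The link-lowpri entry carries heartbeats only when the private links are down, which is the usual way to get an extra layer of protection without dedicating more hardware.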