Maybe you have a suggestion for me.
We are running two SF 5.1 SP1 [Enterprise with Option MS Cluster Enterprise] instances on two MS Server 2003 Enterprise R2 x64 cluster nodes right now.
There are some storage arrays [DAS] attached directly to the server machines - clustering is possible because the disk arrays have two controllers. And there is some real SAN storage over FC.
My vision/plan for the future is to build up more storage in the SAN over FC or iSCSI, to get away from DAS storage, and to manage all of it with SF (again).
We are also planning to virtualize our servers (up to 6-7 servers: Exchange, DFS, DC/DNS, license manager server, etc.) to run all of them on one or two physical servers, maybe with a third standby ESX host.
So the question is: should I virtualize both of the enterprise cluster nodes and let them run as virtual machines, or let them stay physical machines?
• "Double failsafe": the enterprise servers are cluster nodes right now [failsafe 1], and if the physical ESX host goes down, its virtual machine(s) will switch automatically to the second ESX host [failsafe 2].
• Faster server hardware changes in the future: build up a new ESX host on faster hardware, move the VM servers over, and shut down the old ESX host that is no longer fast enough - no downtime.
...OK, OK, I have a cluster, so changing hardware would be possible without downtime anyway.
• Maybe better options for trunking the network ports. I'm not using vSphere right now; I only remember some things from VMware webcasts.
• Better utilization of the physical hardware! That's the main reason to virtualize servers in the first place.
• Snapshots can be taken before upgrading software on the servers, for faster "recovery" of the old state. "Never touch a running system" is a thing of the past.
• Better allocation of the physical RAM to the different servers.
• Will SF run as well as it does in the physical environment right now? Or are there restrictions - maybe because of drivers, the hypervisor, or anything else?
• Will SF perform better than it does right now, or will it slow everything down, especially if I use VxCache with 8 GB of RAM or more?
• I need lots of RAM.
It would be nice to hear some comments or other ideas from you.
Thanks in advance.
Running the servers in a VM environment with shared hardware resources will cause some slowdown of server performance. You might want to try this first with application servers that do not put much load on the hardware and see how they do. For example, your DFS, DC, and DNS servers would be a good fit for virtualization, while your Exchange server might be OK or might have problems, depending on how heavily you use it. You will have to try it on your different servers and see whether the performance when virtualized is acceptable for your organization. What works well for one company may not work well for another.
Let us know how your virtualization plan plays out.
Yeah, you're right. It depends on the workload and on how fast the physical hardware is.
But the other virtualized servers (Exchange, DNS, DC, etc.) won't see heavy usage from their services or from the users.
And my main question is more like this: if I virtualize my SF machine and let it run alone on one ESX host - I know, not very efficient - would the performance be the same as on the physical machine right now? Better, worse? Or maybe the advice will be not to virtualize the SF machine under any circumstances. I don't know.
I'm looking for your answers and broad experience.
If by SF you mean Storage Foundation for Windows (SFW), then I would not expect SFW to behave any differently on physical hardware versus on a virtual server.
You have mentioned Storage Server. If you mean Windows Storage Server, I have run it on both physical hardware and on a virtual server. I was just doing testing with very few servers connecting to it, but I do not remember any major differences in performance. Again, if you are expecting a light load, then a virtual server should be fine.
No, I don't mean Windows Storage Server. By "storage server" I just mean building up a file server [with SFW on it] with an HBA connection to SAS storage arrays [SAN], as is often done. Sorry for the misunderstanding; I used the wrong term.
In my case there are two file servers right now. They run MS Server 2003 Enterprise R2 x64 and they are clustered. The storage management is done with SFW.
And because of the future plan for server virtualization, I'm thinking about either virtualizing these two file servers as well or just replacing their physical hardware.
We are talking about an environment of 40-50 users and up to 70 render-node computers. The business is 3D animation with different kinds of files: picture files from 3-15 MB up to 55 MB, work files of 50-1,000 MB, and texture files of 50-600 MB each.
The users load a work file and some texture files (10-20 files) and work with those, so there is only the traffic for loading and saving.
The render nodes load the final work files and texture files and start rendering. The rendering takes 10 sec to 30 min before a final picture file of a sequence can be saved. The rendered output files (the picture files) are mostly 100-1,000 files per loaded work file.
So most of the time there is read and write traffic through the file servers to the storage arrays. Mostly 4-10 render nodes are loading or saving files at the same time - sometimes more, sometimes less. But there are peaks with more render nodes and users active at the same time, and maybe extremely heavy data. You can do the math with the values given above.
Just to give you an inside look at what I'm talking about. Maybe that makes it easier for you.
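Since the post invites doing the math, here is a rough back-of-the-envelope sketch of the worst case using only the numbers quoted above. All inputs are the upper ends of the ranges from the post, not measurements, so treat the results as illustrative ceilings only:

```python
# Rough peak-load estimate for the file servers, using the upper-end
# numbers from the post above. All figures are illustrative assumptions.

nodes_peak = 10        # render nodes loading/saving at once (upper end of 4-10)
workfile_mb = 1000     # largest work file (range given: 50-1,000 MB)
texture_files = 20     # texture files loaded per job (range given: 10-20)
texture_mb = 600       # largest texture file (range given: 50-600 MB)
output_files = 1000    # picture files per work file (range given: 100-1,000)
output_mb = 55         # largest picture file (range given: 3-55 MB)

# Data one node reads before rendering starts: work file plus textures.
read_per_node_gb = (workfile_mb + texture_files * texture_mb) / 1000

# Data one node writes over a whole rendered sequence.
write_per_node_gb = output_files * output_mb / 1000

print(f"Read per node at job start:  {read_per_node_gb:.1f} GB")
print(f"Write per node per sequence: {write_per_node_gb:.1f} GB")
print(f"Worst-case simultaneous read burst "
      f"({nodes_peak} nodes): {nodes_peak * read_per_node_gb:.0f} GB")
```

So in the absolute worst case the file servers could be asked to push on the order of 130 GB of reads in a short burst, which is the kind of peak you would want to benchmark in a virtualized SFW setup before committing to it.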