07-11-2011 12:27 PM
Hello, here's my environment:
Master Server - RHEL 5 64 bit
Media Server 1 - RHEL 5 64 bit with Fibre Transport Enabled
Media Server 2 - RHEL 5 64 bit with Fibre Transport Enabled
I only have 3 or 4 SAN clients set up right now. Two of them are RHEL, and I get pretty good throughput when a backup kicks off. The other systems are running Windows 2008, and when a backup kicks off, I only see one stream on one of my media servers, with job throughput around 20 MB/s. I've read the performance tuning guide and the best-practices white paper on SAN Client/Fibre Transport, and one thing I am doing that goes against recommended practice involves ISLs: my two Windows boxes are on an edge switch that plugs into the core switch (where my tape library and all the Fibre Channel ports on my master and two media servers reside).
However, my two UNIX clients are also on edge switches, and I get blazing speeds out of them.
Furthermore, my SAN guys tell me that when a backup from one of these Windows clients kicks off, the traffic on the SAN at the time isn't even approaching full utilization of the pipe from my Windows 2008 box to either of my two media servers.
This leads me to believe that there is something not configured correctly on the Windows box.
OS: Windows 2008 R2
HBAs: QLogic 2462
Driver: 9.1.7.16
Firmware: 4.03.01
The data I'm backing up is on local SAS disks, but when we try to benchmark the throughput of the disks inside Windows by moving a large file from one volume to another, we get really good speeds, several gigabytes a minute. So I'm pretty sure my issue isn't that I'm trying to back up a slow disk across a SAN.
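As a rough sanity check on that comparison (assuming "several gigabytes a minute" means about 3 GB/min, which is my own assumption, not a measured figure), the local-disk benchmark is well above the observed FT job rate:

```python
# Rough unit conversion: local-disk benchmark (GB/min) vs. FT job rate (MB/s).
# 3 GB/min is an assumed stand-in for "several gigabytes a minute".
def gb_per_min_to_mb_per_s(gb_per_min):
    return gb_per_min * 1024 / 60  # 1 GB = 1024 MB, 60 s per minute

local_disk_mb_s = gb_per_min_to_mb_per_s(3)  # ~51 MB/s
ft_job_mb_s = 20                             # observed FT job throughput
print(f"local disk: {local_disk_mb_s:.1f} MB/s vs FT job: {ft_job_mb_s} MB/s")
```

Even at the low end of "several", the disks can feed data more than twice as fast as the FT job is moving it, so the bottleneck is downstream of the disks.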
I'm stumped. My gut feeling is that there's a setting somewhere on the HBA of the Windows host that maybe has some type of bandwidth throttling, or something similar enabled.
Anyone have any pearls of wisdom on this one?
07-15-2011 12:01 AM
Hi,
I would have thought the setting in the text below is set automatically these days. I used it back in the days of LTO-2 to get the OS to write bigger blocks, matching the SIZE_DATA_BUFFERS value in NetBackup. Take a look and see if it's set to 65 (the note says 33, but that value is for 128 KB).
1. Click Start | Run and type regedit and click OK
2. In regedit go to the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\<SCSI_CARD>\Parameters.
The <SCSI_CARD> entry will depend on the type of SCSI card that is in the server.
3. Create the Device key if it does not already exist under Parameters. If the Device key exists go to step 6.
4. Highlight Parameters then right-click and select New | Key
5. Name the new key as Device
6. Highlight the Device key then right-click and select New | DWORD Value
7. Give the DWORD Value the name MaximumSGList
8. Double-click on MaximumSGList and enter a decimal value of 33
9. Close regedit
A reboot of the server will be required to enable the changes.
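The steps above don't explain how MaximumSGList maps to a transfer size. The usual rule of thumb (assuming 4 KB memory pages on x86/x64 Windows, and that the miniport honors the value) is that the maximum transfer is (MaximumSGList - 1) pages, which is why 33 gives 128 KB and 65 gives 256 KB:

```python
# MaximumSGList sets how many scatter/gather entries the SCSI port driver
# will use per request. With 4 KB pages (x86/x64 Windows):
#   max transfer (KB) = (MaximumSGList - 1) * 4
PAGE_KB = 4

def sg_list_for_transfer_kb(transfer_kb):
    """DWORD value needed to allow a given maximum transfer size in KB."""
    return transfer_kb // PAGE_KB + 1

def max_transfer_kb(sg_list):
    """Maximum transfer size in KB implied by a MaximumSGList value."""
    return (sg_list - 1) * PAGE_KB

print(sg_list_for_transfer_kb(128))  # 33, the value in the note above
print(sg_list_for_transfer_kb(256))  # 65, to match 256 KB SIZE_DATA_BUFFERS
print(max_transfer_kb(17))           # 64, a common default cap
```

So if SIZE_DATA_BUFFERS on the media server is 262144 (256 KB), the Windows client's MaximumSGList needs to be at least 65 for the OS to issue writes that large.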
07-12-2011 01:16 AM
What speed do you get when backing up SAN attached data on the Windows hosts?
07-13-2011 07:56 AM
When I kick off an FT job and watch the detailed status, I'm getting around 25 MB/s, and that's if I'm lucky. I think it's actually faster to back it up over Ethernet.
07-14-2011 02:49 AM
But is the data you're backing up on the SAN, or internally on SAS? My thinking is if you can make a comparison between the two speeds (SAN and SAS attached) you can see where the issue lies.
07-14-2011 01:58 PM
I've actually tried both. If I back up the internal SAS disks, I get one pipe to my FT media server that gets about 20 MB/s throughput. If I attach a SAN volume, throw some files on it, and try to back it up, I get about 20 MB/s throughput.
Is there anything special I need to do in the OS? The SAN Client guide says that you really don't have to do anything in Windows other than enable the SAN Client and make sure the ARCHIVE PYTHON devices show up in Device Manager (which they do).
I did find a statement in one of the SAN Client tech notes saying the native Windows HBA driver only supports transfers up to 64 KB, whereas the manufacturer's driver (QLogic in this case) supports transfers up to 256 KB. We went in and made sure the QLogic driver was being used instead of the Windows driver.
We've installed the SANsurfer software and some other vendor utilities, but I don't really see any options on the card. It's plugged in, it's zoned correctly... I'm just getting crappy throughput.