Backup Exec 12.5 server is running Windows Server 2003 Standard Edition SP2, 4 GB RAM, Quad 2.5 GHz Xeon, Broadcom NetXtreme Gigabit Ethernet NIC
The SAN we have is a NetApp FAS2040A
The server is connected to a Linksys/Cisco Layer 2 gigabit switch via Cat 5e, while our SAN is connected to a Cisco Layer 3 switch via Cat 6, so at most the traffic makes two hops. A traceroute from the server to the SAN shows one hop at <1 ms.
The server NIC is currently set to 1 Gb Full/Auto for speed and duplex.
The SAN full backup consists of a CIFS share of around 138 GB, which took 2 hours, 51 minutes, and 2 seconds to complete. Job rate = 1274 MB/min ≈ 170 Mbps.
That figure is with the compression setting at None; prior to that change, the same job ran at 157 Mbps.
So we moved the server's patch cable; both devices are now on the same Cisco switch.
The network admin is now monitoring I/O on the port the Backup Exec server is connected to. He's seeing spikes up to 150 Mbps, but that's it. The Networking tab of Task Manager shows utilization averaging around 35% of the 1 Gbps link.
As far as the networking equipment is concerned, we feel things are fine as-is, but something with the Backup Exec media server itself is the culprit. I've done quite a bit of research and haven't really found anything pointing to the software.
Rather, I think it might be the server's internal hardware, i.e., the SCSI drives and controller. I'm about to do some updating of drivers and firmware.
Would someone with this kind of experience please toss in their 2 cents?
Also, would upgrading to version 2010 be suggested? We paid for support, and I already have that version downloaded but not yet installed.
Thank you kindly.
I forgot to mention that the CIFS share on the SAN is being backed up to a separate set of disks on the same SAN: CIFS lives on an array of SAS drives, while the backup data lands on an array of SATA drives.
So basically the media server pulls data from the CIFS store on the SAN, buffers it(?), then pushes it back out to the SAN via an iSCSI connection to the SATA drives.
I guess I'm questioning what the media server is doing with the data as it receives it from the SAN...
Since compression is turned off, should it just send as it receives? Or is it buffering the data before pushing it out?
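For what it's worth, even with compression off, a media server in a pull/push setup like this still reads each chunk into memory before writing it back out; it doesn't stream "wire to wire". A minimal conceptual sketch (file paths and buffer size are purely illustrative, not Backup Exec internals):

```python
def relay(src, dst, bufsize=64 * 1024):
    """Pull data from src into a memory buffer, then push it to dst.

    Conceptually what the media server does: read from the CIFS source,
    hold the chunk in RAM, write it out to the iSCSI B2D target.
    """
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(bufsize)   # pull from the SAN (CIFS share)
            if not chunk:
                break
            fout.write(chunk)           # push back to the SAN (iSCSI B2D)
```

So even without compression, the server's bus, RAM, and NIC all sit in the data path twice (once inbound, once outbound), which is one reason a single gigabit link rarely shows more than ~50% sustained utilization in this topology.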
You're writing to an array of SATA drives. I haven't done the maths, but have you tested the speed you can write to those disks? In my experience I've always found disk to be not-as-fast-as-you'd-think for backup.
It has been a while since I tested disk performance, but last time I did it I used IOmeter for the raw disk speed testing. For testing the speed of data copying between two places, there's nothing like a stopwatch and calc.exe!
I had our network admin test/observe the transfer rates while running a test backup the other day. The speeds topped out around 150 Mbps on the ports, with data flowing from the SAN to the backup server, and then from the server back to the SAN.
All cabling between these devices is CAT6, and ports manually set to 1000 Mbps.
Please clarify your speed numbers; I see Mbps, Gbps, and MB/min. That said, you are quoting the rated speeds of the network and HDD interfaces as if you'll actually reach them. Never gonna happen.
The FAS2040, when set up right, is a great performer. Better than the 2050 and 2020, IMO!
That said, you talk about SAN backup of the CIFS shares. So are you using NDMP, or are you backing up a host that has a LUN carved out for it, and that host is providing CIFS access?
Just because you have a name-brand storage array doesn't mean it's going to perform equal to what it costs! A NetApp storage array is pretty complex and can be misconfigured quite easily.
Best to use testing tools like HD_Speed or Intel's IOmeter to verify throughput. Typically BE is bound by hardware rather than the software itself being the bottleneck. Backup software almost ALWAYS shows you the weak spots in your infrastructure; it takes time and patience to root them out. You don't call your child ugly, do you? Some admins get a little mad when you point the finger at their network (their child).
The MB/min figure came straight from the Backup Exec job log results: the overall speed at which the job completed. Why they use MB/min doesn't make sense to me, since most network devices are measured in MB/s or Mb/s. Anyhow, I converted the MB/min to Mbps in order to compare against the other networking devices we have, including the switches. I understand we'll never hit the advertised 1 Gbps, but 50% of that would still be twice as fast as what I'm seeing.
132,977,219,091 Bytes = 126,816.96 MB = 123.84 GB
1179 MB/min (From BE) = 9432 Mb/min = 157.2 Mbps
This job backed up 123.84 GB in 2hrs 42mins 16secs at 157.2 Mbps
To me that seems slow, but you may correct me if I'm wrong.
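The conversions above can be double-checked with a few lines of arithmetic (assuming BE reports MB as 2^20 bytes, and treating 1 MB as 8 Mb for the rate conversion, the same way the thread does):

```python
# Verify the byte-count conversion from the BE job log
total_bytes = 132_977_219_091
mb = total_bytes / 2**20      # -> 126,816.96 MB
gb = mb / 1024                # -> 123.84 GB

# Verify the job-rate conversion (1 MB -> 8 Mb, 1 min -> 60 s)
rate_mb_min = 1179            # from the BE job log
rate_mbps = rate_mb_min * 8 / 60   # -> 157.2 Mbps

print(f"{mb:,.2f} MB = {gb:.2f} GB")
print(f"{rate_mb_min} MB/min = {rate_mbps:.1f} Mbps")
```

So the numbers in the post are internally consistent; the open question is why the sustained rate sits at ~157 Mbps rather than anywhere near wire speed.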
In regards to the FAS2040: before reading your post I had no idea whether it was top of the line or not. Quite honestly, this is the first SAN I've worked with. Second, I didn't perform the install or configure the SAN; that was done by the company that sold us the product. My job is to get data onto the thing and have it backed up regularly. I've been setting up user folder redirection for now, and in the future this will be our main storage device for company documents. Before I move forward and put all company documentation on here, I need to tweak the backups to run faster; since the install, my disk-to-disk backups take much longer than disk-to-tape did.
As far as backing up: a number of SAS drives were put into an array, and that array was then made into a CIFS structure. Within CIFS I created several shared folders. These folders are intended to be our company directories, and I will use GPOs to map network drives to the shares. So this CIFS storage array needs to be backed up on a nightly basis; that's where BE came into play. I do not have the SAN remote agent; not sure if that'd make much of a difference.
These backups are being directed to B2D folders that sit on a separate SATA array within the FAS2040. We went with SATA due to cost, and since the drives are rated at 3.0 Gbps we figured there shouldn't be any bottleneck with read/write speeds between the SAS and SATA arrays.
I fully understand the possibility of hardware bottlenecks and am trying to root them out. I've updated the server's BIOS and firmware per Dell's recommendations, and am using Cat 6 for all connections. There's only one device in the middle, and that's a Cisco 3750, I believe.