Forum Discussion

DrDebate's avatar
DrDebate
Level 3
14 years ago
Solved

Poor performance backing up to iSCSI target

I am seeing very poor performance backing up my Exchange server to iSCSI-based Backup-to-Disk.  Here are my details:

 

Media Server Type:  Virtual (ESX 4.1)

Operating System:  Windows Server 2008 R2 Standard - fully updated

iSCSI Appliance:  Dell MD-3000i

NIC Configuration:  (2) VMXNet 3 - one on the LAN and one on the iSCSI (isolated) subnet

Backup Exec:  2010 R3 fully updated (Version 13.0 Rev. 5204 64bit)

Note:  The Dell MPIO driver is installed and iSCSI is configured per Dell's recommendations for Server 2008 R2

 

Exchange to iSCSI B2D Speed:  550 MB/min

Exchange to C: B2D Speed:  1350 MB/min

Media Server to iSCSI B2D Speed:  1150 MB/min

Windows Explorer C: to iSCSI Speed:  3000 MB/min

 

In other words, if I back up the Exchange server to the Backup-to-Disk folder on the iSCSI appliance, I get less than half the throughput I get backing up to the C: drive on the Media Server, or backing up the Media Server itself to iSCSI.  If I do a straight file copy using Windows Explorer I get about 3000 MB/min.  I'm not particularly impressed with ~1 GB/min, but I can work with that.  550 MB/min is painful for the size of our Exchange environment.  I've searched the forum and the internet and I have found other people with similar problems but no solutions.  Any ideas out there?
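In case anyone wants to reproduce the raw-disk numbers above, here is a rough sketch of how sequential write throughput to any mounted volume can be measured independently of Backup Exec (assuming Python is available on the media server; the target path, file size, and block size below are placeholders, not anything specific to my setup):

```python
import os
import time
import tempfile

def write_throughput_mb_per_min(path, total_mb=256, block_kb=64):
    """Write total_mb of zeros to a temp file under `path` in block_kb
    chunks, fsync to force the data to disk, and return MB/min."""
    block = b"\0" * (block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    fname = os.path.join(path, "throughput_test.bin")
    start = time.time()
    try:
        with open(fname, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # don't let the OS cache hide the real speed
    finally:
        if os.path.exists(fname):
            os.remove(fname)
    elapsed = time.time() - start
    return (total_mb / elapsed) * 60.0

if __name__ == "__main__":
    # Point this at the iSCSI-backed B2D volume instead of the temp dir
    # (the temp dir here is just a placeholder so the script runs anywhere).
    print(f"{write_throughput_mb_per_min(tempfile.gettempdir()):.0f} MB/min")
```

The fsync matters: without it, a Windows Explorer copy and a script like this mostly measure the OS write cache, not the disk or the iSCSI path.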

15 Replies

  • To be honest, I'm afraid I don't think you'll fix this with Backup Exec as your chosen backup solution. Performance is not one of its strong points. Like you, we planned to roll out a replacement BE 2010 solution a while back using our shiny new iSCSI SAN (StarWind) with a RAID-10 SATA-3 drive system (the SAN had multiple enclosures; this was just the one we planned to use for BE).

    We soon realised that throughput with a dedupe system on iSCSI just wasn't going to hack it.

    So we switched to a bare-metal PowerEdge server (lots of RAM, but not that powerful otherwise) with an LSI Logic SAS/SATA card connected to a 16-disk external SAS/SATA enclosure. We ended up using 2TB SATA-2 disks because we had some lying around.

    With this set-up performance is okay.

    Reliability isn't so okay, but that's a story for another day.

    Cheers, Rob.

  • ...I don't see the issue with backing up to an iSCSI-presented HDD, much like I don't see the issue with backing up to a NAS when BE is a media server. So this all seems invalid.

    I've seen the TNs around BE being installed on a VM with a library/tape drive presented to it...this isn't a Symantec problem (trust me, I've been down this road already with virtualised media servers I didn't recommend!). It comes from the VMware side, which Symantec seems to have picked up on. I don't see CA recommending that ARCserve R15/R16 be installed on a VM, and EMC certainly don't recommend it with NetWorker, for instance.

    Disk is disk...a virtualised media server wouldn't care in this case, as it appears to be a local disk anyway. This could very well be a misconfiguration somewhere in the VMware environment.

  • OK, good start...are your server NIC(s) and the Dell storage appliance set to the fastest speed available (i.e. 1 Gb/s full duplex)?

    Then make sure that any AV isn't scanning the BE services; if it is, put in an exclusion.

     

  • As an aside, to this day I am still amazed that the various manufacturers managed to get iSCSI to work.  If you have taken an in-depth look at the Ethernet protocol, you will realise how fragile it is.  Remember, Ethernet was designed in the '70s for a Cold War situation.  Any Ethernet packet is delivered on a best-effort basis.  A disk protocol, on the other hand, is designed to be as bullet-proof as possible: when the protocol says the write is completed, almost 100% of the time the data is on the disk.  The timing requirements for such protocols are quite exact, especially SCSI's.  It still boggles my mind how they managed to marry the two seemingly incompatible protocols.

  • It still boggles my mind how they managed to marry the two seemingly incompatible protocols.

    Yes, I too am amazed it works at all, but it really does work. We switched to a StarWind iSCSI SAN for almost everything except backup (and the SAN itself, obviously) for a 100-person company, and most of the time it works a treat. We do occasionally see slowdowns, but that's kind of to be expected. It would be amazing to think you could take four separate servers, each with their own local high-speed RAID-10 disk systems, and push the same I/O through one iSCSI SAN server over "just" a 1Gbit/s network. Actually, we have multiple networks just for the iSCSI disks.

    My gut feeling is that BE is sometimes pretty heavy-handed with local I/O like the catalog (remember how long searching takes!), and that when you throw Ethernet and TCP/IP overhead into the equation, it struggles.

    Cheers, Rob.