Okay, here are the steps:
- Perform some basic I/O performance testing using MS sqlio...
...prove whether this MSDP node meets the minimum required IO performance profile of...
...read/write of 250 MB/s or better using Microsoft's 'sqlio' tool for Windows...
...then and only then should MSDP/PureDisk be installed/configured...
...FYI, this step also handily acts as a basic I/O exerciser of the underlying storage...
SQLIO can be found here:
http://www.microsoft.com/en-us/download/details.aspx?id=20163
The NetBackup v7.5 DeDupe Admin Guide:
http://www.symantec.com/business/support/index?page=content&id=DOC5187
...on page 67 states:
Requirements: Storage Media: Disk, with the following minimum requirements per individual data stream (read or write):

  Up to 32 TBs of storage:   130 MB/sec (but 200 MB/sec for enterprise-level performance).
  32 to 48 TBs of storage:   200 MB/sec. Symantec recommends that you store the data and the
                             deduplication database on separate disk, each with 200 MB/sec
                             read or write speed.
  48 to 64 TBs of storage:   250 MB/sec. Symantec recommends that you store the data and the
                             deduplication database on separate disk, each with 250 MB/sec
                             read or write speed.

These are minimum requirements for single stream read or write performance. Greater individual data stream capability
or aggregate capability may be required to satisfy your objectives for writing to and reading from disk.
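If it helps to sanity-check a measured rate against that table, the tiers can be sketched in a few lines of Python (the thresholds are the ones quoted above; the function names are my own):

```python
# Minimum single-stream throughput (MB/s) per MSDP pool size, taken from
# the NetBackup 7.5 dedupe guide table quoted above.

def required_mbps(storage_tb):
    """Return the minimum single-stream MB/s for a given MSDP pool size (TB)."""
    if storage_tb <= 32:
        return 130   # 200 MB/s recommended for enterprise-level performance
    if storage_tb <= 48:
        return 200
    if storage_tb <= 64:
        return 250
    raise ValueError("pool sizes above 64 TB are not covered by the table")

def meets_minimum(storage_tb, measured_mbps):
    """True if a measured rate satisfies the minimum for this pool size."""
    return measured_mbps >= required_mbps(storage_tb)

print(meets_minimum(60, 250))  # True
print(meets_minimum(60, 180))  # False
```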
- Determine the I/O Payload Sizes Used By MSDP:
...determine the read and write buffer sizes of MSDP...
more "D:\Program Files\Veritas\pdde\contentrouter.cfg"
findstr "ContentRouter CRDataStore BufferSize" "D:\Program Files\Veritas\pdde\contentrouter.cfg"
...there are two sections that we are interested in, namely [ContentRouter] and [CRDataStore]...
...and the relationship and typical I/O sizes are:
Section Path Write KB Read KB
[ContentRouter] /msdp_db 32 64
[CRDataStore] /msdp_data 64 1024
...so we'll use those buffer sizes in our performance checks...
...if the values change from version to version of NetBackup, or appliance to appliance...
...then adjust the I/O payload sizes used in the next two main steps...
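A quick way to re-check those values on any given server is to scan the config file the same way the findstr command above does; here is a minimal Python stand-in (the key names in the sample are hypothetical, since the real keys vary by NetBackup version, which is exactly why you check):

```python
# A minimal Python equivalent of the findstr command above: scan
# contentrouter.cfg and list any BufferSize line under the two sections
# we care about. The key names in the sample below are made up for
# illustration; the real file's keys vary between NetBackup versions.

def buffer_lines(text):
    """Return (section, line) pairs for BufferSize lines in the two sections."""
    hits, section = [], None
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("[") and line.endswith("]"):
            section = line
        elif "BufferSize" in line and section in ("[ContentRouter]", "[CRDataStore]"):
            hits.append((section, line))
    return hits

# Hypothetical sample contents; on a real server read the file instead:
#   text = open(r"D:\Program Files\Veritas\pdde\contentrouter.cfg").read()
sample = """\
[ContentRouter]
ExampleWriteBufferSize=32768
[CRDataStore]
ExampleReadBufferSize=1048576
"""
for section, line in buffer_lines(sample):
    print(section, line)
```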
...using some testing files that are twice the size of RAM, so if you have 64GB RAM in your server then adjust '-s 128g' below as required. The test file needs to be double the size of RAM to defeat the O/S's own file system caching, i.e. if you were to choose a small test file size, the O/S file system cache would skew your results, making them look better than what is actually achievable...
msinfo32    (to see the size of RAM)
- I/O Performance Test Using SQLIO:
- Install SQLIO to the D: drive...
...i.e. the install path after installing should be: D:\Program Files (x86)\SQLIO
- Go to the folder:
start / run / cmd
D:
cd "\Program Files (x86)\SQLIO"
- Make the parameter files for I/O testing (note: the figure 135168 specifies the size of the test file, in MB):
(echo E:\testfile.dat 16 0x0 135168)>param-E.txt
(echo F:\testfile.dat 16 0x0 135168)>param-F.txt
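As a quick sanity check on that figure (assuming sqlio's param-file size field is in MB, which matches its documented format):

```python
# 135168 MB works out to 132 GB, which keeps the test file above
# twice the RAM of a 64 GB server, per the sizing rule above.

size_mb = 135168
print(size_mb / 1024)            # 132.0 (GB)
print(size_mb / 1024 >= 2 * 64)  # True for 64 GB of RAM
```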
- Test random writes of "DB" drive using 32 KB buffers:
sqlio -kW -s300 -frandom -t1 -o1 -b32 -LS -Fparam-E.txt
- Test random reads of "DB" drive using 64 KB buffers:
sqlio -kR -s300 -frandom -t1 -o1 -b64 -LS -Fparam-E.txt
- Test random writes of "data" drive using 64 KB buffers:
sqlio -kW -s300 -frandom -t1 -o1 -b64 -LS -Fparam-F.txt
- Test random reads of "data" drive using 1024 KB buffers:
sqlio -kR -s300 -frandom -t1 -o1 -b1024 -LS -Fparam-F.txt
- Delete the large test files:
del "E:\testfile.dat"
del "F:\testfile.dat"
- I/O Performance Test Using NetBackup:
...in these commands the "-s 128g" specifies the size of the test file...
...previous tests have shown that this tool sometimes gets stuck in a loop writing/reading 0 MB/s...
...if this happens, just abort with Ctrl-C and try again...
nbperfchk -i random: -o E:\tmp.tmp -ri 60 -bs 32k -s 128g -syncend
nbperfchk -i E:\tmp.tmp -o null: -ri 60 -bs 64k
nbperfchk -i random: -o F:\tmp.tmp -ri 60 -bs 64k -s 128g -syncend
nbperfchk -i F:\tmp.tmp -o null: -ri 60 -bs 1m
...a table of numbers is shown...
...left hand side: the average so far...
...right hand side: the rate over each reporting interval (the default interval is 3s; here it is set to 60s with the "-ri 60" switch)...
...and the final line is the overall average.
del E:\tmp.tmp
del F:\tmp.tmp
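The running-average vs per-interval relationship described above can be sketched with made-up MB/s samples (one per -ri interval; the numbers are purely illustrative):

```python
# "Left hand side" = average of everything so far; "right hand side" =
# each interval's own rate; final line = the overall average.

samples = [240, 260, 255, 245]   # hypothetical per-interval MB/s readings
running = [sum(samples[:i + 1]) / (i + 1) for i in range(len(samples))]
overall = sum(samples) / len(samples)

print(running)   # the averages-so-far column
print(overall)   # the final overall average
```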
REMINDER:
--------
...the NetBackup v7.5 DeDupe Admin Guide:
http://www.symantec.com/business/support/index?page=content&id=DOC5187
...on page 67 states:
Requirements: Storage Media: Disk, with the following minimum requirements per individual data stream (read or write):

  Up to 32 TBs of storage:   130 MB/sec (but 200 MB/sec for enterprise-level performance).
  32 to 48 TBs of storage:   200 MB/sec. Symantec recommends that you store the data and the
                             deduplication database on separate disk, each with 200 MB/sec
                             read or write speed.
  48 to 64 TBs of storage:   250 MB/sec. Symantec recommends that you store the data and the
                             deduplication database on separate disk, each with 250 MB/sec
                             read or write speed.

These are minimum requirements for single stream read or write performance. Greater individual data stream capability
or aggregate capability may be required to satisfy your objectives for writing to and reading from disk.