The Cheap Storage Myth

RyanJancaitis
Level 4
Employee

 

Look back over the past 20 years and think about the size of the drives you’ve purchased.  The first PC I owned ran Windows 95 and had a gigantic 512 MB hard drive.  That machine was used primarily for writing papers, emacs-based email, and ‘surfing the net’.  I couldn’t conceive of ever filling the drive and figured I’d have the machine for years.  Then music and photos went digital by default, and the drive quickly ran out of space.  When I bought my next PC, it had eight times the storage at a similar price.

That’s the consumer space; it gets more interesting from the enterprise storage perspective.  When SANs were gaining popularity for their speed and capacity in the early 2000s, a 10 TB SAN was enormous and reserved for only the largest, most IT-aggressive customers.  In 2001, an EMC Symmetrix 8000 could be configured from 72 GB to “nearly 70 TB”.  The estimated cost of a new Symmetrix 8000 at the beginning of 2001?  $3 million.

The cost of this storage has decreased consistently and exponentially every year, and it continues to do so.  Staying with EMC to expand on the point: the base VMAX SE array comes with a minimum of 48 x 146 GB drives (roughly 7 TB) at a list price of $250k, while the high-end VMAX can be configured with more than 2 PB of raw SSD storage at an estimated cost of up to $6 million.  A similar price tag to 2001, but for an extraordinarily larger amount of storage.
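As a rough back-of-the-envelope illustration (list prices and raw capacities only, and the 10 TB capacity assumed for the 2001 Symmetrix figure is my assumption, not a quoted configuration), the price per gigabyte across these examples works out something like this:

```python
# Back-of-the-envelope $/GB comparison using the figures quoted above.
# List prices and raw capacities only; the 10 TB Symmetrix capacity is an
# assumption for illustration, not a quoted configuration.
configs = {
    "Symmetrix 8000, 2001 (~10 TB assumed)": (3_000_000, 10_000),
    "VMAX SE base, 48 x 146 GB (~7 TB)":     (250_000, 48 * 146),
    "High-end VMAX SSD (~2 PB raw)":         (6_000_000, 2_000_000),
}

for name, (price_usd, capacity_gb) in configs.items():
    print(f"{name}: ${price_usd / capacity_gb:,.2f}/GB")
```

Even with these crude numbers, $/GB falls from hundreds of dollars to single digits, which is exactly the trend that fuels the “storage is cheap” assumption.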

Industry analysts estimate that the capacity of new storage purchases in the large enterprise space is increasing 50% year over year, while storage prices are decreasing 25-30% per year.  Any time the quantity purchased grows faster than the price per unit falls, the purchaser is shelling out more money overall.
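Here is a quick sketch of that arithmetic; the starting values are arbitrary, and only the 50% growth and 25-30% decline rates come from the estimates above:

```python
# Capacity purchased grows 50% per year while price per TB falls 25-30%.
# Spend still compounds upward because 1.5 * 0.70 and 1.5 * 0.75 both exceed 1.
baseline_tb = 100.0               # arbitrary year-0 purchase
baseline_price_per_tb = 1_000.0   # arbitrary year-0 price per TB

for price_decline in (0.25, 0.30):
    multiplier = 1.5 * (1 - price_decline)
    print(f"\nPrice falling {price_decline:.0%}/yr -> annual spend multiplier {multiplier:.3f}x")
    for year in range(6):
        capacity = baseline_tb * 1.5 ** year
        price = baseline_price_per_tb * (1 - price_decline) ** year
        print(f"  year {year}: {capacity:8,.0f} TB x ${price:6,.0f}/TB = ${capacity * price:10,.0f}")
```

With a 25% price decline the bill still grows about 12.5% a year; even at a 30% decline it grows 5% a year.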

Herein lies the myth of “cheap storage”: even though the price per GB is dropping, the Total Cost of Ownership (TCO) of that storage continues to rise, making your storage spend higher today than it was three years ago.  Let’s look at a few causes of the rising TCO:

  • Secondary Costs: The secondary costs of spinning that disk include power, cooling, floor space, rack space, software, backup/restore, IT staff (and their health care), and so on.  When evaluating the cost of a new array, how much consideration is given to these secondary costs, rather than just the purchase price and maintenance?
  • Application Complexity:  Applications have become more complex, layered, and data intensive as computing power (CPU/RAM) has increased and storage has gotten “cheaper”.  How many applications are deployed today on contained, stand-alone servers?  The majority are multi-tiered, network-attached, with 24x7 high-availability requirements.  These applications use the added power and memory to perform more complex computations on larger sets of data stored for longer periods of time.  All of this leads to more hardware requirements, restarting the cycle.  The SAN’s promise of flexible storage has been under-delivered: storage is often provisioned at initial application creation for the lifetime of the application, not as it is needed.  How do you respond when application developers keep increasing their requirements?
  • VM Growth: Virtual machines do a great job of improving server provisioning and increasing compute utilization.  However, that same virtualization puts a heavy tax on storage teams as they try to keep up with demand.  The local drive that sat in the physical box, serving the OS and running the application?  Gone.  That space has to come from somewhere, and that somewhere is now more expensive networked storage.  Have you traded the problem of physical server sprawl for virtual server sprawl?  How many copies of the same OS and boot image are stored for your virtual infrastructure?  And when you move to virtual desktops, do you need to store hundreds of copies of the same system .exe and .dll files for every image and every user?
  • Unstructured Data Explosion: Alongside the rise of virtual machines and the mobilization of IT, the storage infrastructure is increasingly relied on to store and serve the unstructured data generated by end users.  Everything from Excel spreadsheets with M&A data to AVIs of the new guy’s wedding video is being stored on the network, often long after that employee has left the company.  Unstructured data growth is beginning to outpace block-level growth, reaching 80% annually for some customers.  Combined with regulatory and compliance standards, this information will be retained for long periods of time.  How can you regain control of the free-for-all created by file shares?

Can you trust a hardware vendor to help you buy less disk?  Despite the myth that disk is “cheap”, storage is becoming a much larger percentage of overall IT spend.  Symantec is uniquely positioned with enterprise-class tools to help customers tame these areas of storage growth and bring a halt to unnecessary storage purchases.  Our customers are buying just in time, not just in case.  Tools like Veritas Operations Manager (VOM) give insight from application to spindle to drive utilization and promote accountability across the IT infrastructure.  Veritas Storage Foundation can deliver on the promise of “thin” storage by moving customers from thick to thin and, most importantly, staying thin.  Symantec VirtualStore maximizes your VMware environment through optimization and shared storage built on the best-in-class Clustered File System.  Symantec Data Insight for Storage lets customers get a handle on the unstructured data explosion and drive optimization through accountability.