GFK
There are some fundamental trends in the IT marketplace that are going to force organisations of all sizes to reconsider how they provide IT services to their respective businesses. Operational costs are outstripping capital outlay and power consumption is a huge proportion of these expenses.
Analysts predict that within the next few years up to half the world’s data centres will have to relocate or disperse due to power consumption and space restrictions, with energy costs escalating to as much as one-third of IT budgets. IDC says that IT organisations are already spending a quarter of every hardware dollar on power.
So, even though the requirement for data storage is increasing exponentially and storage hardware costs are falling, there is a need to look at storage consolidation – not just from a manageability point of view but because of the amount of power we are all using to store data.
Initially, one might assume that storage consolidation is all about reducing the amount of storage and, of course, the first thing to do is take a serious look at disk subsystems and the specific disk types those systems will contain.
But storage consolidation has to be more about simplifying the storage infrastructure, allowing storage administrators to maintain better organisation and control over their resources. When you are looking at hundreds, or in some instances thousands, of file servers across the IT infrastructure, there is no question that it is tricky for storage administrators to utilise and maintain the storage estate. This leads to wasted space, poor utilisation and escalating storage management and power costs.
The need to even consider storage consolidation when storage itself is getting smaller and cheaper by the hour seems bizarre. However, it is driven by the manageability of storage and its relentless growth. From a personal perspective, imagine carrying around 1GB of storage – today it fits on a single USB stick; ten years ago it would have been thousands of floppy disks! Within the data centre you can now get 1TB disks – ten years ago it was not uncommon to have 1,000+ disks on a server to provide half this amount of storage. In fact, the growth of required storage has been so rapid in recent years that most organisations have failed to keep up, and simply adding more disks, more arrays or more servers to meet the demands of applications and users compounds the problem.
Inevitably, organisations suffer from storage sprawl, where data is stored in disparate, cross-vendor systems – in the data centre, in various locations within the organisation, in remote offices, and even in end-user devices. This propagation of data and storage assets presents storage professionals with a never-ending spiral of:
  • Storage management overhead
  • Constant drain on budget through the purchase of new storage
  • A strain on power and cooling systems
As a result, data centre storage consolidation has become a vital concern for many CIOs. A storage consolidation project could simply be a process of replacing direct attached storage with a single network attached storage (NAS) environment, which is easier to maintain and manage as a centralised storage resource.
Alternatively, for an enterprise with large numbers of storage servers spread across many locations, it could be a complete re-architecture of the storage environment into a single storage area network (SAN) or network attached storage (NAS) system, using continuous data protection technologies to replicate data from remote sites and users to a single location in the data centre.
The advantage of a consolidation or improved management and utilisation project is that, by replacing or reorganising multiple disparate storage platforms with a single storage resource, storage professionals can improve the manageability, visibility and efficiency of their storage assets and data – effectively storing more data, more efficiently, with less effort and hardware.
The major attraction of storage consolidation is the ability to improve efficiency, both in terms of power consumption and manageability. By managing storage resources with standard tools that span multiple storage platforms, storage administrators can maintain a larger number of servers and can organise and provision consolidated storage from a single GUI.
With easier management come fewer errors and less time spent fighting fires. With complete transparency across the storage environment, it is much easier for administrators to drive utilisation of the available storage to a much higher percentage, rather than leaving some long-forgotten volume with large amounts of free disk sitting under-utilised.
Despite the huge range of Storage Resource Management and Storage Volume Management software available, the storage on distributed servers rarely sees more than 50% utilisation. With the ability to discover and map all the storage resources available, storage administrators can easily exceed 80%, and recovering otherwise forgotten storage can result in huge cost savings in hardware, software and administration.
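As an illustration of what such discovery and mapping boils down to, here is a minimal Python sketch of a utilisation survey. The mount point passed in is purely illustrative; a real SRM tool would discover volumes across hundreds of servers automatically.

```python
import shutil

def utilisation_report(mount_points):
    """Report per-mount and overall utilisation as percentages."""
    report = {}
    total_used = total_capacity = 0
    for mount in mount_points:
        usage = shutil.disk_usage(mount)      # total, used, free in bytes
        report[mount] = round(100 * usage.used / usage.total, 1)
        total_used += usage.used
        total_capacity += usage.total
    report["overall"] = round(100 * total_used / total_capacity, 1)
    return report

# "/" is used purely as an example mount point
print(utilisation_report(["/"]))
```

Mounts reporting well under 50% utilisation are exactly the "long-forgotten" capacity the consolidation case is built on.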
Visibility of the storage also gives IT the ability to make predictions about future storage requirements, allowing organisations to save money by budgeting storage acquisitions more accurately and taking advantage of continually falling storage costs. If IT is able to consolidate its storage resources, it can also reduce the amount of power and cooling required by the data centre.
The objective of any storage consolidation project is normally to store more information using fewer disks or systems. Inevitably, all the storage is then concentrated in fewer centrally located devices. The result is that all the storage traffic passes across the same network connections, and this can easily result in a serious performance bottleneck.
Bottlenecks result in bad network performance which has an adverse impact on application performance and user service levels. It is an easy danger to fall into. To overcome network performance problems it is important to evaluate the network architecture early in the storage consolidation process and accommodate any changes or upgrades needed to prevent bottlenecks. By virtualising the storage infrastructure, using volume management, multipathing and high availability tools, as well as SAN infrastructures, organisations can alleviate reliability issues as well as ensuring that storage remains available to critical applications and processes.
Ideally, the storage consolidation process should simplify management by reducing the number of systems that need to be managed, but the new consolidated system will have its own management demands and GUI. If multiple heterogeneous storage systems are being introduced, it is imperative that the management software can support all of those systems through a single interface.
With more users relying on fewer storage systems, a careless storage consolidation project can create the very problem you are trying to solve by further complicating the management of the stored data. It is inevitable that systems slated for consolidation will support many more disks, and each disk needs to be selected for a balance of storage capacity, performance and reliability, as well as cost.
Fibre Channel drives usually offer the best performance, but they cost the most and have the lowest capacities. SATA drives represent the other extreme, offering the largest capacities and lowest costs but the lowest performance, whilst SAS drives fall somewhere in between. With the advances in technology it is no longer safe to assume that ‘cheap’ means bad: traditionally low-end disks continue to improve in performance and are often ‘good enough’ for many business applications.
It's important to select disks and storage systems that can maintain or improve on current storage service level agreements (SLAs) for both users and applications. SLAs should be driven by the value of the application or data to the business, but the time dimension is frequently ignored. As a rule, the value of data diminishes with time, yet SLAs frequently treat all data as equal, which may not be cost-effective for the IT department or the business.
During the project, management can be even trickier, with a whole host of data migrations from a myriad of disparate systems to the new storage platform. Storage consolidation almost always implies data migration, which takes time and is usually disruptive to the production environment, yet must typically occur with no downtime for applications. Even though most storage systems provide migration tools to ease the process, it's important to plan migration between heterogeneous storage systems carefully in order to minimise service disruptions. Systems that cannot be migrated with automated tools will require direct attention from IT staff.
Data backup and disaster recovery processes will also be affected by consolidation. After deployment, IT needs to plan any changes required to ensure that the new data locations and systems are protected properly. The time required to back up this concentrated amount of storage may well increase dramatically while, at the same time, the window available for backups shrinks as applications and data are expected to remain available.
To ensure availability of systems, IT will need to look at Continuous Data Protection (CDP), incremental backups and disk-based backup techniques such as virtual tape libraries (VTL) or disk-to-disk, or even replication of a single large NAS to a completely different physical location, since data replication is also a means of protecting data. New disk systems intended for storage consolidation should be able to duplicate their data to another local storage system or across a WAN to a storage system in another location, and most disk arrays include some amount of native replication capability.
Any time more data is stored on fewer disks, IT has to consider the role of data integrity and protection in its storage consolidation strategy. RAID is typically the only form of data protection within a storage array, so match the RAID level to your protection requirements. RAID-0 offers simple striping for performance improvement, while RAID-1 provides mirroring.
For today's storage arrays, RAID-5 handles single parity disk protection and RAID-6 (sometimes dubbed RAID-DP by proprietary vendors) supports dual-parity RAID that can protect against two simultaneous disk faults. RAID-6 is particularly useful for SATA storage systems where the probability of multiple disk faults is substantially higher than Fibre Channel systems. When a disk fault occurs, it will take time to rebuild the failed disk from parity information and the other disks in the group.
Remember that almost all RAID implementations will reduce the overall storage capacity. RAID-1 uses double the disk space because every disk is mirrored to another, RAID-5 requires one additional disk for the RAID group's parity data and RAID-6 requires two. This will affect the effective capacity of your consolidated array.
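The capacity arithmetic is easy to sketch. The following Python snippet is a simplified model (it ignores hot spares and vendor-specific formatting overhead) showing the usable capacity of a 12 x 1TB disk group at each RAID level:

```python
def usable_capacity_tb(raid_level, disks, disk_size_tb):
    """Usable capacity for common RAID levels (simplified model)."""
    if raid_level == 0:                 # striping only, no redundancy
        return disks * disk_size_tb
    if raid_level == 1:                 # mirroring: every disk duplicated
        return disks * disk_size_tb / 2
    if raid_level == 5:                 # one disk's worth of parity
        return (disks - 1) * disk_size_tb
    if raid_level == 6:                 # two disks' worth of parity
        return (disks - 2) * disk_size_tb
    raise ValueError("unsupported RAID level")

for level in (0, 1, 5, 6):
    print(f"RAID-{level}: {usable_capacity_tb(level, 12, 1)} TB usable")
```

For the 12-disk group this gives 12, 6, 11 and 10 TB usable respectively – a reminder that the protection level you choose directly shapes how much consolidation the array actually delivers.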
Another tool to help manage the increasing amount of data we have to store is looking at tiering. Tiered storage has become an important element of the storage enterprise, allowing data to be classified and stored according to its relative importance to the enterprise. This can be based on content or on the age of the data.
Traditionally, this involved multiple storage systems. Typically, one could expect Tier-1 storage to be data that resides on a high-performance Fibre Channel storage array, while Tier-2 nearline storage and Tier-3 archival storage may rely on an array of SATA disks. However, storage systems can often support more than one disk type, allowing multiple storage tiers within the same system, further reducing the number of disk systems needed in an enterprise.
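A tiering policy of this kind is, at heart, a simple classification rule. The sketch below uses entirely hypothetical age thresholds and tier names; a real policy would be driven by the business value of the data as well as its age:

```python
from datetime import datetime, timedelta

# Hypothetical thresholds; tune these to your own data-value policy.
TIERS = [
    (timedelta(days=30),  "tier1-fibre-channel"),  # active data
    (timedelta(days=365), "tier2-nearline-sata"),  # ageing data
]

def classify(last_modified, now):
    """Pick a storage tier based on how old the data is."""
    age = now - last_modified
    for threshold, tier in TIERS:
        if age <= threshold:
            return tier
    return "tier3-archive"                         # everything older

now = datetime(2009, 7, 27)
print(classify(datetime(2009, 7, 20), now))   # recent file -> tier1
print(classify(datetime(2009, 1, 1), now))    # months old  -> tier2
print(classify(datetime(2007, 6, 1), now))    # years old   -> tier3
```

When one storage system supports several disk types, the same rule simply maps to disk groups within a single array rather than to separate systems.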
Data retention and compliance continues to be a top 5 issue for organisations and their technology professionals. This level of visibility and importance raises a dilemma in relation to interpretation and misuse. Compliance has been a significant business challenge for many years and is now firmly aligned with the need to react to new legislative and best practice requirements on an on-going basis.
Now businesses, and specifically CIOs and IT Managers, are facing the first in a series of changes in the way that organisations need to deal with, and manage, their electronic data. What makes compliance so complicated is that no matter which set of regulations or vertical industry you look at, there are numerous stakeholders – System Administrators, CIOs, CFOs, operations departments and HR departments – all needing to work together. While a lot of time and effort has been spent putting in email and instant message archiving for both storage management and compliance purposes, files also need to be considered.
File archiving is a valuable tool for any storage infrastructure that needs to free up capacity by moving unused data off active storage systems into long-term storage. Design documents, software builds and images are just a few of the data types that are often relegated to archives – archives from which data may need to be retrieved at a later date for compliance purposes.
If items have not been accessed for several months, or if you know that a piece of data will not be needed for some time, it may well be appropriate to archive it. You may not know whether you'll need a piece of data in the next five years, but it is also difficult to be absolutely sure that it is safe to delete it.
By placing the data into a long-term storage system where it's accessible if needed, companies are able to save data – just in case – but also in order to comply with government, industry or self-imposed regulations. Archiving not only frees up storage, it also improves backup: as there are fewer ‘live’ files, the backup will be quicker and take up less media – and should a restore be required, that will be faster too. Archiving, whether of files or email, can all be driven by policy – automation being the number one enabler for driving out cost.
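An age-based archiving policy can be sketched in a few lines of Python. This is only an illustration – the path and the 180-day threshold are hypothetical, and on filesystems mounted with access-time updates disabled (noatime) the st_atime field will not reflect real usage:

```python
import os, time

def archive_candidates(root, max_idle_days=180):
    """Yield files under root that have not been accessed recently."""
    cutoff = time.time() - max_idle_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    yield path
            except OSError:
                continue  # file vanished or unreadable; skip it

for path in archive_candidates("/data/projects"):  # illustrative path
    print(path)
```

A policy engine would feed a list like this into the archive mover automatically, which is exactly the automation that drives out cost.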
Recent changes to compliance rulings mean that anyone potentially subject to a future court proceeding has to be able to produce the required information. In other words, all organisations that might be involved in litigation must take appropriate steps to protect and preserve their data – and how do you know whether a piece of data may or may not be needed in the future? Well, you don’t, so keep it all – or at least have a well-known policy that defines what constitutes a company record and how long it should be kept. If in doubt, keep it.
Storage virtualisation presents multiple physical storage devices from various vendors as higher-level, logical devices that can be managed more easily and efficiently. This technology lays the groundwork for creating and managing large pools of storage instead of individually managing a multitude of multi-vendor devices. It opens the way for significant cost savings through improved storage utilisation, simplified and centralised storage management, and lower IT training costs. It also promises improved data access and more flexible storage allocation. Because of the virtualisation, new storage can be switched in as old is switched out, all while the application continues, blissfully unaware.
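The core mechanism is straightforward to sketch: a logical volume translates logical block addresses onto extents that may live on different physical devices, so hardware can be swapped without the address ever changing. The class below is a toy model, not any vendor's implementation, and the device names are invented:

```python
class LogicalVolume:
    """Toy model of storage virtualisation: logical blocks map to extents."""

    def __init__(self, extents):
        # extents: list of (device, start_block, block_count)
        self.extents = extents

    def locate(self, logical_block):
        """Translate a logical block address to (device, physical block)."""
        offset = logical_block
        for device, start, count in self.extents:
            if offset < count:
                return device, start + offset
            offset -= count
        raise IndexError("block beyond end of volume")

    def replace_device(self, old, new):
        """Swap hardware out underneath; logical addresses never change."""
        self.extents = [(new if dev == old else dev, start, count)
                        for dev, start, count in self.extents]

vol = LogicalVolume([("vendorA-array", 0, 100), ("vendorB-array", 500, 100)])
print(vol.locate(150))                     # lands on vendorB-array
vol.replace_device("vendorB-array", "replacement-array")
print(vol.locate(150))                     # same logical address, new device
```

The application only ever sees logical block 150; which vendor's array actually serves it is the virtualisation layer's business.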
Storage policies are the key to efficient file archiving, tiered storage and storage virtualisation. Policy management takes IT and business rules about the information being managed, then implements those rules to move, store and delete information in the infrastructure. Most policy management tools are proprietary and are not discrete hardware or software products; policy management is usually a feature built into numerous solutions.
Heterogeneous policy managers do exist and run in software on host servers, storage subsystems or on appliances within the network. These tend to tie functionality from the different devices together to create a single storage management point which in turn ensures uniform execution of business rules across all storage systems.
As unstructured business data proliferates, IT is looking for solutions that provide a vehicle to set policies to transform data into a valuable corporate asset; help control, manage and protect the data generated; enable high-speed search, viewing and retrieval of archived data; and provide journaling facilities to support organisations in fulfilling their auditing and regulatory responsibilities – helping to reduce costs, improve user productivity and increase security.
When going through a storage consolidation/utilisation project there is no need to throw out all your old storage. Yes, storage consolidation replaces one or more existing storage systems, but if you are considering using tiered storage then you can opt to redeploy the older hardware either as a backup/disaster recovery resource, reinstall the older hardware as a lower tier or relegate it to a lab/testing service. The key here is that older hardware may still have value to the organisation, and any consolidation initiative should include plans to redeploy storage hardware that’s left over.
Unfortunately, most hardware vendors view "migration and consolidation" in a very different way than their customers do. When vendors hear those words, they immediately think "throw out all your existing equipment and buy all new hardware — from us." And it's no surprise that many promote their own "house–of–cards"-style "utility" strategies. Predictably, these are designed to work only if you purchase the vendor's entire product line, which is likely to be a huge investment. But while that may be in their best interests, it's generally not in yours.
Hardware independent products, i.e. software storage and server management applications, can operate just as efficiently with legacy equipment as with newer systems. Of course, this also means that you won't be restricted to a particular vendor's equipment when the time comes for future hardware acquisitions. When you choose the right strategies, the advantages of migration and consolidation far outweigh the risks. Heterogeneous products give you the opportunity to replace obsolete processes with much faster and more reliable systems at substantial savings.
In today’s cost-conscious world, IT must consider the impact of power and cooling on a consolidation plan. Corporate data centres are a key focus for power-saving plans because of the massive amount of power they consume. Such initiatives are still in their infancy, but are likely to become the norm as the focus on energy costs becomes increasingly pervasive. When the new storage system is installed, it will demand additional power and cooling, especially during the migration stage. The responsibility remains with IT to maximise the efficiency of its resources, which includes electricity and cooling! Companies are becoming increasingly aware of power costs, and by reducing energy consumption you will improve operations and cut costs – storage consolidation/utilisation is one area you can affect now.
Last update: 07-27-2009 07:59 AM