“Standardisation is the ability for organisations to leverage a single layer of infrastructure software across their entire data centre that reduces IT complexity, protects information and applications, improves manageability and control of cross-platform storage and server assets, and drives down operational costs.”
The data centre is the beating heart of the enterprise, and its IT infrastructure is on a journey to somewhere; but it is on a collision course with disaster if its slide into chaos isn’t halted before it’s pulled into the abyss. We no longer feel in control of the destiny of our creation, but at the beck and call of the latest technology that is going to solve all our problems. Doesn’t it always seem that every step we take is at the behest of a bit of tin? And that’s never the end of it; there’s always something else you need, another server here and another network there. This isn’t chaos created by our inability to make sense of all the assets that hold our data centre together; no, this is the doing of vendors coming up with new stuff that’s different from the old stuff.
In today's economically challenged and fiercely competitive global marketplace, organisations are constantly struggling to reduce costs, boost profitability, and keep customers from migrating to a competitor. To stay ahead of the competition, organisations need to be as efficient and customer-oriented as possible. Not surprisingly, much of this responsibility falls on the shoulders of the enterprise IT manager. CIOs and IT managers are expected to maintain sophisticated networked environments while streamlining day-to-day operations, reducing overhead costs, and minimising time spent on server and network configurations.
At the same time, the IT department needs to take into consideration the constantly changing business requirements by continuously juggling computing assets, which can be easily wasted if they're not being directed to the areas where they are needed most. Faced with the difficult task of accomplishing more with fewer resources, IT managers are often forced to respond by simply purchasing more hardware than they really need or by assigning more staff to address these problems; problems that are on the increase and all have an adverse knock on effect right across the infrastructure.
Complexity
IT organisations have always had to ask themselves basic questions to justify spend as well as operational overheads, but these questions, or at least the answers to those questions, have become really quite complex:
  • How do I analyse my data centre to ensure accurate budgeting and decision-making and maximum business value?
  • How do I reduce infrastructure costs while maintaining service level agreements (SLAs) with my business units?
  • How do I seamlessly integrate and deploy new technologies as business groups demand them?
Some common business considerations should include:
  • What is the overall financial benefit to my business?
  • Will a new environment deliver the economic and performance benefits we want?
  • How will migration improve system performance and reduce manageability costs?
  • How can my business migrate from its current platform to next-generation hardware and software?
Most IT professionals would agree that a better approach to managing the data centre infrastructure is long overdue. It is a conundrum we have been facing for some time, but one made more pressing by the dramatically growing demands on the data centre itself: budgets and staff levels are essentially flat; the business is demanding more and more services; every additional and existing spend has to be accounted for and justified; and data centres themselves are becoming so complex that they are almost impossible to manage. Surely there must be a better way to manage this vital corporate asset.
If only we could stop and think about it for a while, we would inevitably come up with the answer: standardisation. And yet standardisation is a word that seems to raise the hairs on the back of most self-respecting IT managers’ necks. Standardisation means proprietary; standardisation means single-vendor lock-in; standardisation means dumping your infrastructure for a whole new set of hardware. Not true. The Valhalla of the IT world is IT Service Management, Cloud Computing, or whatever you wish to call it; and that Valhalla can be reached through data centre and infrastructure standardisation and automation.
The standardisation of tools and the automation of processes within the data centre can improve storage, server, and application performance, and give administrators better control over the data centre, saving time, effort, and money through improved staff productivity, higher utilisation rates for servers and storage, and a better understanding of every asset in the data centre environment.
How many tools are there?
There are simply hundreds of different tools for managing these environments, ranging from home-grown solutions through hardware vendors’ middleware to vendor-neutral software. What organisations need is to source a solution that lets them standardise on a single layer of infrastructure software in the data centre, one that runs on every platform and with every database, irrespective of application, storage platform, or operating system, giving them a single, common, standard set of tools to manage the data centre.
The concept of IT Service Management is one that allows the infrastructure to drive the business of an organisation: not only meeting the needs of the business, but also driving down the costs associated not just with the hardware and software at the heart of the data centre, but with the management of an increasingly complex environment. Organisations need to be able to discover and take control of all their assets, from mobile devices, servers, applications, and databases right through to storage and archived, off-site media. This means having file systems that are standard across the infrastructure; volume management that spans all storage devices and operating systems; SRM and storage network management tools that recognise every hardware configuration, no matter from which vendor it cometh; open-systems clustering; workload management; configuration and change management; provisioning and patch management; and application performance management that spans the entire server layer. And all that from a single view: easily accessible, proactive, integrated, and complementary to all our IT assets.
You can’t really do that consolidation thing until you understand what it is you’ve got in the first place, and, let’s face it, most of the time we really don’t know what we’re up to, and simply struggle from one crisis to the next. It is time we retook control and took the data centre into the future. Standardisation may well be the way into the light, but it isn’t simply that: it can also help organisations become responsive; no, not responsive, automated, or even predictive. The data centre becomes service based and delivered as a service; it’s agile, it’s measurable, it is policy driven, automated, and ultimately aligned completely with the business.
IT must add value
It is a given in today’s complex and competitive economy that unless IT can prove added value to the business, it will continue to be viewed as a cost centre and not as a proactive driver of real business advantage and differentiation for the company as a whole. It is therefore essential that IT organisations have comprehensive visibility across the backup, storage, database, server, and application layers, as well as out into remote offices, desktops, laptops, and mobile devices. Once we have that view, managing the infrastructure becomes much simpler, whether the discipline is capacity management, configuration management, availability management, or IT service continuity.
Once IT has given itself a comprehensive view of all the “stuff” it has, it can build business and IT efficiencies, whether the data centre is running a large corporation or a small business, in order to maintain and administer business applications at optimum efficiency. From the storage sub-system and disk usage, up through the infrastructure via server optimisation and consolidation, to the performance of mission-critical applications, organisations need technologies that drive business efficiencies, giving IT immediate control over business information, avoiding bottlenecks, and ensuring high availability of critical systems and applications.
Infrastructure management
Organisations should consider the idea of a common integration platform. What you can expect to see is the delivery of products that leverage more common integration elements: things like single sign-on, common install, common workflow engines, common reporting, and a single user interface. Not only does IT need to discover all the assets in and associated with the data centre, but it needs to increase the speed at which it reacts to events and become predictive in the way it manages assets and provides services.
Data protection and recovery
The need for instant, on-demand data recovery is becoming increasingly vital to all business operations. While traditional tape backups have proven effective over the years, today’s dynamic business climate demands faster, more efficient backups and on-demand recovery. Disk-based data protection, specifically continuous data protection, addresses these issues in a way that eliminates the need for backup windows, allows end users to recover their own data without contacting IT, and delivers an integrated disk-to-disk-to-tape solution. For organisations looking to manage data growth, improve reliability, and speed data recovery, data deduplication shrinks the volume of backup data that must be stored and moved, improving overall data protection without weighing IT down in costly, high-administration solutions. By using disk as the primary medium for data protection, archive, and recovery, organisations can use traditional tape backups as secondary data protection for longer-term off-site data storage and retention.
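To make the deduplication point concrete, here is a minimal sketch in Python of content-addressed chunk storage; the chunk size, names, and in-memory "store" are illustrative assumptions, not any vendor's implementation. Each unique chunk is stored once under its SHA-256 digest, so a second, near-identical backup costs only the chunks that actually changed.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking; real products often use variable-size chunks

def dedup_store(data: bytes, store: dict) -> list:
    """Split data into chunks, store each unique chunk once, return the recipe."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # only previously unseen chunks consume space
        recipe.append(digest)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original data from its chunk recipe."""
    return b"".join(store[d] for d in recipe)

store = {}
monday = b"customer db " * 1000
tuesday = monday + b"one new order"          # Tuesday's backup barely differs
r1 = dedup_store(monday, store)
r2 = dedup_store(tuesday, store)             # shares almost every chunk with Monday
assert restore(r2, store) == tuesday
print(f"chunks referenced: {len(r1) + len(r2)}, unique chunks stored: {len(store)}")
```

Running this shows six chunk references backed by only four stored chunks: the repeated data costs nothing the second time around.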
There are technologies available now that take on the complex and tedious work of backup duplication and off-site media management, automating the process of storing and managing off-site archived data tapes. System or backup administrators can set up profiles to control what, when, and how backups are duplicated, and when tapes are shipped to and from the off-site vault, helping to ensure minimal data loss in the event of a disaster. This optimises tape utilisation at minimal cost and maximum efficiency, and provides tremendous flexibility and automation to simplify duplication tasks. It gives organisations the flexibility to define different retention periods for each backup copy, making it easy to match backup procedures to business requirements.
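As a hedged illustration of the kind of profile described above, the sketch below models a duplication-and-vaulting policy in Python; every field name (backup_selector, ship_weekday, the vault names) is hypothetical rather than any product's actual schema. It shows two copies of the same backup carrying different retention periods, exactly the flexibility the text describes.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class VaultProfile:
    """Hypothetical duplication/vaulting policy: what to copy, where, and for how long."""
    backup_selector: str      # which backups to duplicate, e.g. "nightly-full"
    destination: str          # e.g. "offsite_vault"
    retention_days: int       # retention can differ per copy
    ship_weekday: int         # 0 = Monday: when tapes leave for the vault

    def expiry(self, created: date) -> date:
        return created + timedelta(days=self.retention_days)

    def next_shipment(self, today: date) -> date:
        days_ahead = (self.ship_weekday - today.weekday()) % 7
        return today + timedelta(days=days_ahead)

# Two copies of the same backup with different retention, as the text describes.
onsite = VaultProfile("nightly-full", "local_library", retention_days=30, ship_weekday=0)
offsite = VaultProfile("nightly-full", "offsite_vault", retention_days=365, ship_weekday=4)

created = date(2009, 7, 27)
for p in (onsite, offsite):
    print(p.destination, "expires", p.expiry(created), "next ship", p.next_shipment(created))
```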
The need to reduce or remove the backup window completely has been growing over the past few years. Organisations require a backup solution that protects every open-systems operating environment and eliminates the need to manage complex data environments from multiple sites or with multiple tools. In addition, IT needs to simplify operations such as database backup and recovery. Most organisations today are looking to protect hundreds or even thousands of gigabytes of data; a highly scalable solution that provides fast, reliable backup and recovery of all types of data on any platform, with powerful, consolidated, and easy-to-use snapshot technologies, will reduce backup times, reduce the impact of backups, and provide the fastest recovery possible. Disk-to-disk technologies enable enterprises to dramatically improve the reliability, security, and manageability of remote-office data backup and recovery. Disk-to-disk continuous data protection, combined with advanced disk-to-disk backup, data reduction, encryption, replication, and centralised administration, directly addresses the most pressing challenges of remote-office backup, including tape administration, the lack of skilled IT personnel at remote offices, and the risks involved in transporting unencrypted backup tapes off site.
By standardising on a single UNIX, Windows, Linux, and NetWare enterprise backup and recovery platform for their data centre and remote offices, administrators can view and control all backup and recovery operations from a single console, regardless of geographic location. Backup solutions now offer data protection for environments that span multiple platforms, databases, applications, devices, and architectures, and the use of both disk and tape for backup provides options for advanced disaster recovery, snapshot backup, encryption, global single-instance storage, and NAS protection. Disk staging, data streaming, synthetic backup, checkpoint/restart, multiplexed backup, inline copy, and online catalogue backup all improve the performance of operations, reduce backup footprints, and ease restores. The flexibility available in advanced data protection technologies across disk, tape, and storage networks lets administrators tune for performance or economy, making the management and control of backups simpler and more effective for organisations of all sizes.
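Synthetic backup, one of the techniques listed above, is easy to illustrate: the backup server merges the last full backup with the subsequent incrementals into a new full, so no fresh full needs to be pulled from the client. The Python sketch below is a toy model of that merge (backups as path-to-content maps, None marking a deletion), not a real catalogue format.

```python
def synthetic_full(full: dict, incrementals: list) -> dict:
    """Each backup maps file path -> content; later increments win."""
    merged = dict(full)
    for inc in incrementals:
        for path, content in inc.items():
            if content is None:
                merged.pop(path, None)   # file deleted since the last backup
            else:
                merged[path] = content
    return merged

full = {"/etc/hosts": "v1", "/var/app.cfg": "v1"}
incs = [{"/var/app.cfg": "v2"}, {"/etc/hosts": None, "/var/report": "v1"}]
print(synthetic_full(full, incs))
# {'/var/app.cfg': 'v2', '/var/report': 'v1'}
```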
Storage management
If IT organisations are to meet the growing demands placed on them, and at the same time keep costs under control, they must find ways to make optimal use of data centre assets such as servers, storage hardware, and their IT staff. This requires the ability to take storage management beyond a single application, a single server, or a single storage device. Advanced disk and storage management solutions for enterprise computing environments alleviate downtime during system maintenance by enabling easy, online disk administration and configuration, providing disk usage analysis, RAID techniques, and the dynamic reconfiguration of disk storage while a system remains online. These tools optimise storage assets while ensuring the continuous availability and protection of data.
Centralised application, server, and storage management capabilities across a broad array of operating systems and storage hardware, with dynamic storage tiering, enable applications and data to be moved dynamically to different storage tiers, allowing rapid response to changing business needs. Storage resource management, performance and policy management, storage provisioning, and SAN management ensure that the storage infrastructure runs efficiently. They enable IT organisations to dynamically map applications to the resources they consume and to implement storage tiers that match data to the appropriate storage devices based on business requirements.
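A minimal sketch of the tiering idea, assuming a simple age-based policy: files untouched for longer are matched to cheaper tiers. The tier names and day thresholds are invented for illustration; real dynamic storage tiering weighs many more signals and moves data online.

```python
import time
from pathlib import Path

# Hypothetical tier thresholds: data untouched for longer moves to cheaper storage.
TIERS = [
    (30,    "tier1-fast-disk"),
    (180,   "tier2-capacity-disk"),
    (10**9, "tier3-archive"),
]

def pick_tier(path: Path, now: float) -> str:
    """Classify a file by days since last access (a common, simple tiering rule)."""
    age_days = (now - path.stat().st_atime) / 86400
    for max_age, tier in TIERS:
        if age_days <= max_age:
            return tier
    return TIERS[-1][1]

def tiering_plan(root: str):
    """Yield (path, target tier) for every file under root."""
    now = time.time()
    for path in Path(root).rglob("*"):
        if path.is_file():
            yield path, pick_tier(path, now)

if __name__ == "__main__":
    for path, tier in tiering_plan("."):
        print(f"{tier:22} {path}")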
Storage consolidation gives organisations the opportunity to reclaim badly needed disk space as well as to update organisational policies governing the appropriate use of data. To meet storage optimisation requirements, organisations need to resolve three major challenges for their key applications:
  • Service levels must stay high: storage solutions need to be extremely reliable, and data must be secure at all times.
  • Storage solutions have to support the dynamic growth of applications and dramatic, unpredictable increases in storage requirements.
  • The storage optimisation solution needs to be cost-effective in order to preserve competitive advantage.
Policy-based management facilities enable automatic responses to alerts and performance data; end-to-end path management, together with multi-path features, provides the troubleshooting capability to query storage paths from device to device. Such a platform also provides centralised, scalable storage provisioning and automated monitoring and management of the storage environment.
Organisations also need to consider heterogeneous Storage Resource Management and storage utilisation-reporting tools. By collecting file-level data from UNIX and Windows platforms, IT can consolidate that data into a single set of reports. Storage Resource Management tools enable administrators to understand how much storage is available, to whom it is allocated, and how it is being used. This information enables IT to identify and recover wasted storage, providing an immediate return on investment.
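A rough sketch of that file-level reporting in Python: walk a directory tree, flag files not accessed within a threshold, and total the reclaimable space. The one-year threshold is an assumed policy, and a real SRM tool would aggregate such scans from many UNIX and Windows hosts into consolidated reports.

```python
import time
from pathlib import Path

STALE_DAYS = 365  # assumed threshold for "wasted" storage

def stale_files(root: str, stale_days: int = STALE_DAYS):
    """Yield (size, path) for regular files not accessed within the threshold."""
    cutoff = time.time() - stale_days * 86400
    for path in Path(root).rglob("*"):
        try:
            if path.is_file() and path.stat().st_atime < cutoff:
                yield path.stat().st_size, path
        except OSError:
            continue  # skip files that vanish or deny access mid-scan

def report(root: str, top: int = 20) -> None:
    """Print the largest stale files and the total space they occupy."""
    largest = sorted(stale_files(root), key=lambda t: t[0], reverse=True)[:top]
    total = sum(size for size, _ in largest)
    print(f"Top {len(largest)} stale files under {root}: {total / 2**20:.1f} MiB reclaimable")
    for size, path in largest:
        print(f"{size / 2**20:8.1f} MiB  {path}")

if __name__ == "__main__":
    report(".")
```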
Unfortunately, most hardware vendors view "migration and consolidation" very differently from their customers. When vendors hear those words, they immediately think "throw out all your existing equipment and buy all new hardware from us." And it's no surprise that many promote their own "house-of-cards"-style "utility" strategies. Predictably, these are designed to work only if you purchase the vendor's entire product line, which is likely to be a huge investment. But while that may be in their best interests, it's generally not in yours.
Server management
It’s pretty obvious that discovery of all the assets in your data centre includes the server environment; IT needs to understand in detail what is running on all the servers in the data centre, actively manage and administer those servers, and ensure that mission-critical applications running on those servers are always available. Many organisations have created large-scale architectures running complex multi-tier applications across a broad, distributed collection of physical and virtual servers, accessing terabytes of shared storage. If IT organisations are to keep up with the relentless growth in demand for data centre services while keeping costs under control, they need a comprehensive and automated way to control, not just their backup and storage environment but also their applications, virtual machines, and servers.
Discovery solutions for the server environment will find all the servers and applications in the data centre, along with their configurations and the dependencies between them. They then track any changes to these configurations and dependencies in real time, and can compare current configurations against established standards to ensure internal and external compliance. Every server suffers from configuration drift, and the biggest cause of failed cluster failovers is configuration change; so configuration management not only provides a comprehensive understanding of the server environment and the dependencies within it, but also helps reduce downtime by preventing the drift that would otherwise stop a failover from succeeding. Configuration management also reduces planned downtime by creating dependency maps that help organisations understand the impact of changes before they are made.
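The drift check itself can be sketched very simply, assuming configurations are reduced to key-value settings. The baseline keys below (ntp_server, failover_port, and so on) are invented examples; the point is the comparison of each server's live state against an established standard.

```python
# Minimal drift check: compare each server's live settings to a golden baseline.
BASELINE = {
    "ntp_server": "ntp.internal",
    "kernel": "2.6.18-128.el5",
    "failover_port": 14141,
}

def drift(server: str, live: dict, baseline: dict = BASELINE) -> list:
    """Return human-readable differences between live config and the standard."""
    issues = []
    for key, expected in baseline.items():
        actual = live.get(key, "<missing>")
        if actual != expected:
            issues.append(f"{server}: {key} = {actual!r}, expected {expected!r}")
    return issues

fleet = {
    "node-a": {"ntp_server": "ntp.internal", "kernel": "2.6.18-128.el5", "failover_port": 14141},
    "node-b": {"ntp_server": "pool.ntp.org", "kernel": "2.6.18-128.el5"},  # drifted
}

for server, live in fleet.items():
    for issue in drift(server, live):
        print(issue)
```

On node-b this flags the wrong NTP server and the missing failover port, exactly the kind of quiet divergence that breaks a failover when it matters.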
Because of the hours and often days it can take to set up, configure, reconfigure, upgrade, and manage network equipment, maintaining a server over its lifetime can cost as much as 8 to 15 times more than its actual purchase price. Approximately 15 to 20 percent of a server's unscheduled downtime is caused by operator errors in set-up or configuration.
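A quick back-of-envelope illustration of that multiplier, using an assumed purchase price:

```python
# Worked example of the 8-to-15x lifetime-cost multiplier quoted above.
purchase_price = 5_000          # assumed server purchase price in GBP
low, high = 8 * purchase_price, 15 * purchase_price
print(f"Lifetime management cost: £{low:,} to £{high:,} on a £{purchase_price:,} server")
```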
Organisations need to improve the utilisation of existing IT environments by intelligently provisioning resources such as servers, switches, and load balancers across computer networks; they need the ability to integrate and optimise IT business processes with automated control, provisioning, and updating functions across heterogeneous Linux, Solaris, Windows, and IBM AIX environments.
Integrated server management solutions that work with any hardware, support multiple platforms simultaneously, and allow an entire networked environment to be administered remotely over the Web from a single location transform Data Centre Automation, giving IT the ability to control when and where multi-tiered applications run across heterogeneous physical and virtual environments to maximise server utilisation and application availability.
To master the growing complexity of today's data centres, enterprises require a comprehensive data centre automation solution that addresses active management across the application layer of the infrastructure, enabling administrators to monitor, start, and stop complex applications across hundreds or even thousands of physical or virtual servers, across every major operating system, in a secure, error-proof way, automating the management of complex application environments. Solutions now available enable organisations to define an application's run-time requirements, such as its CPU and memory needs, network and storage connectivity, dependencies across internal application components and tiers, and its business priority. Administrators can then create and enforce policies based on those requirements to control when and where applications run across heterogeneous physical and virtual environments, enabling them to maximise server utilisation, increase application availability, and respond flexibly to changes in application workloads. Application management solutions can also provide a more granular level of visibility and control for virtualised environments by monitoring the applications within the virtual server, the virtual server itself, and the underlying hardware, as well as enabling the user to start, stop, and migrate both the applications and the virtual servers across hosts.
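The run-time-requirements idea can be sketched as a toy placement policy in Python: each application declares CPU, memory, and a business priority, and higher-priority applications get first claim on hosts with capacity. All names and numbers are illustrative; production data centre automation solves a far richer constraint problem.

```python
from dataclasses import dataclass

@dataclass
class AppRequirements:
    """Hypothetical run-time requirements, as described above."""
    name: str
    cpus: int
    memory_gb: int
    priority: int            # higher wins when capacity is scarce

@dataclass
class Host:
    name: str
    free_cpus: int
    free_memory_gb: int

def place(apps: list, hosts: list) -> dict:
    """Greedy placement: highest-priority apps get first pick of fitting hosts."""
    placement = {}
    for app in sorted(apps, key=lambda a: a.priority, reverse=True):
        for host in hosts:
            if host.free_cpus >= app.cpus and host.free_memory_gb >= app.memory_gb:
                host.free_cpus -= app.cpus
                host.free_memory_gb -= app.memory_gb
                placement[app.name] = host.name
                break
        else:
            placement[app.name] = None   # no capacity: app is queued or stays down
    return placement

apps = [AppRequirements("billing", 4, 16, priority=9),
        AppRequirements("reporting", 8, 32, priority=3)]
hosts = [Host("esx-01", 8, 32), Host("esx-02", 4, 16)]
print(place(apps, hosts))   # billing lands first; reporting waits for capacity
```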
Not only do organisations need to be able to manage their applications in real time, they also need a centralised approach to patching servers and applications across operating systems: automatically performing scanning and assessment, examining the current patch footprint in each application or OS, comparing it to the available patches, and using policy-based rules to manage updates automatically, with pre-release testing, patch repository management, and automatic patch distribution with deployment, rollback, and failed-patch recovery. Patch management simplifies patch administration across the enterprise, enabling substantial savings in time and money compared with manual or semi-automated approaches.
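A minimal sketch of the scan-and-compare step, assuming the patch footprint of each server can be reduced to a set of patch identifiers (the identifiers below are placeholders): the gap between the approved set and what is installed is what a policy-driven run would deploy.

```python
# Hypothetical patch-compliance scan: compare each server's patch footprint
# to the approved set and report what a policy-driven run would deploy.
APPROVED = {"KB958644", "KB960803", "openssl-0.9.8k"}

fleet = {
    "web-01": {"KB958644", "openssl-0.9.8k"},
    "db-01":  {"KB958644", "KB960803", "openssl-0.9.8k"},
}

def missing_patches(installed: set, approved: set = APPROVED) -> set:
    """The compliance gap is a simple set difference."""
    return approved - installed

for server, installed in sorted(fleet.items()):
    gap = missing_patches(installed)
    status = "compliant" if not gap else f"deploy {sorted(gap)}"
    print(f"{server}: {status}")
```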
OK, so we can understand what’s in our environment, we can prevent outages, we can dynamically manage our applications and servers and how those resources relate to storage and data, but what about the performance of the applications themselves?
Application management
In complex business application environments, delays can be caused by the changes or updates required to keep pace with end-user demand. In addition, the performance and availability of these applications directly affect the customer experience.
We are already able to improve application performance through raw device performance, online administration of storage, and the flexibility of storage hardware independence, in addition to key storage virtualisation capabilities: the ability to manage logical pools of storage rather than physical storage devices. This provides a management platform for databases that helps IT organisations manage larger and more complex environments with existing resources, optimising the performance and manageability of an organisation’s database applications and allowing IT departments to use existing resources more efficiently. Now we are able to look at the performance of applications through all levels of the infrastructure, from the client to the storage.
With increasing workloads and changing performance dynamics, IT staff are forced to spend more of their time isolating and fixing performance problems. It's easy to see how this can quickly turn into an expensive drain on your precious resources. Optimising applications, managing real-time business processes, anticipating opportunities, increasing revenue, and reducing costs are imperative in today's "24x7xforever" environments. Centralised Application Management and Performance tools provide businesses with solutions that optimise the performance of business-critical applications. By continuously collecting high quality metrics from each supporting tier of the application infrastructure (web server, Java, application server, database and storage) and correlating these metrics to build a clear picture of application performance, organisations can ensure that the slightest indication of response time degradation can be quickly isolated anywhere in the architecture and the appropriate action is taken to minimise the impact to productivity.
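One simple way to flag "the slightest indication of response time degradation" is a rolling statistical baseline per tier; the sketch below uses a z-score over the last twenty samples, with invented latency figures. Real tools correlate many metrics across web, application, database, and storage tiers, but the principle is the same.

```python
from statistics import mean, stdev

def degraded(samples: list, window: int = 20, threshold: float = 3.0) -> bool:
    """Flag a tier when its latest response time sits well above its recent baseline.

    A simple z-score test over a rolling window; real tools correlate many
    metrics across tiers, but the principle is the same.
    """
    if len(samples) <= window:
        return False
    baseline = samples[-window - 1:-1]
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and (samples[-1] - mu) / sigma > threshold

tiers = {
    "web":      [101, 99, 103, 98, 102] * 5 + [104],
    "database": [20, 22, 19, 21, 20] * 5 + [95],   # sudden slowdown
}
for tier, response_ms in tiers.items():
    if degraded(response_ms):
        print(f"{tier}: response time degradation, investigate this tier first")
```

Here only the database tier is flagged, which is exactly the isolation step described above: the web tier's 104 ms reading is within its normal variation, so attention goes straight to where the degradation began.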
And when additional resources are required, Provisioning Management solutions can automate the provisioning of operating systems, applications, and network personalisation settings to improve administrator productivity, reduce errors, and improve server utilisation, lowering management and personnel costs by automating many common, routine tasks such as provisioning servers and applications, deploying patches, and rolling out new software packages, all through a centralised console. Provisioning Management speeds operating system and application deployment, providing IT with a cost-effective way to ensure consistent deployments while streamlining management. It enables IT to automatically discover new bare-metal or active servers, install operating systems, deploy and configure applications, and modify network settings. Administrators can then deploy servers, applications, and patches in compliance with IT standards, as well as define and schedule automated jobs and manage servers from a central location.
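A toy rendering of that ordered workflow, with every step a stubbed print rather than a real product API; it only shows the sequencing (discover, install the OS, deploy applications, apply network settings) that a provisioning engine automates, checkpoints, and rolls back.

```python
# Sketch of an ordered provisioning job, per the sequence in the text.
# Step names, image names, and commands are hypothetical placeholders.

def discover(server): print(f"[{server}] discovered bare-metal host")
def install_os(server, image): print(f"[{server}] installing {image}")
def deploy_app(server, app): print(f"[{server}] deploying {app}")
def configure_network(server, vlan): print(f"[{server}] joining VLAN {vlan}")

def provision(server: str, image: str, apps: list, vlan: int) -> None:
    """Run every step in order; a real engine would checkpoint and roll back."""
    discover(server)
    install_os(server, image)
    for app in apps:
        deploy_app(server, app)
    configure_network(server, vlan)

provision("rack4-blade07", "rhel5.3-x86_64", ["jboss", "backup-agent"], vlan=210)
```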
Data centre management and standard automation solutions provide centralised, proactive management of the storage infrastructure. Seamlessly integrating policy and performance management, storage provisioning, and zoning capabilities, these solutions simplify the complex tasks involved in managing and monitoring a multi-vendor networked storage environment, addressing key IT challenges in managing complex data centre environments while allowing IT organisations to recognise an immediate return on their infrastructure investment. By optimising IT system administration resources, staff can be redeployed to key strategic activities, including the evaluation of new technologies and the implementation of business systems, instead of constantly fire-fighting. By proactively managing data across the environment, reporting on capacity, using predictive analysis tools, and taking preventative action, system administrators can see total utilisation across a multi-platform environment, shifting the management of data away from fire-fighting and towards capacity planning, management reporting, predictive analysis, and the prevention of hardware failure.
Conclusion
Data centres today are at the breaking point. Complexity has run out of control, driving costs up and jeopardising service levels. Data centres use equipment from a variety of different storage and server hardware vendors, and these vendors each provide unique and discrete tools to manage their own platforms. Unfortunately, the result has been a proliferation of inconsistent tools and approaches. A standard set of data protection, storage, server, and application management solutions allows IT to manage data, storage, servers, and applications with a unified set of products.
Standardisation is an approach that creates a software infrastructure that enables organisations to: simplify their data centres; actively manage and optimise their diverse storage and server assets; deliver IT service levels that support the business; and operate just as efficiently with legacy equipment as with newer systems, preventing organisations from being restricted to a particular vendor's equipment when the time comes for future hardware acquisitions.
Virtual fire drills allow organisations to easily perform non-disruptive testing of disaster recovery (DR) failovers to ensure reliability. Until now, DR testing required risky and time-intensive tests on production systems. As a result, most IT organisations have not tested their DR environment adequately, which leads to significant exposure that in the event of a disaster, those systems will not perform as intended.
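What a virtual fire drill verifies can be sketched as a checklist run against the DR site without touching production; the checks and figures below are invented examples of the kinds of condition (replication current, volumes mountable, binaries present) a real drill exercises.

```python
# A hedged sketch of what a "virtual fire drill" verifies non-disruptively:
# the DR site has the data, the config, and the means to start the service.
def fire_drill(dr_site: dict, checks: list) -> bool:
    """Run every check against the DR site's state and report pass/fail."""
    results = [(name, check(dr_site)) for name, check in checks]
    for name, ok in results:
        print(f"{'PASS' if ok else 'FAIL'}: {name}")
    return all(ok for _, ok in results)

dr_site = {"replica_lag_s": 4, "mounts": {"/data"}, "service_installed": True}

checks = [
    ("replication current (< 60 s lag)", lambda s: s["replica_lag_s"] < 60),
    ("data volumes mountable",           lambda s: "/data" in s["mounts"]),
    ("application binaries present",     lambda s: s["service_installed"]),
]
print("DR ready" if fire_drill(dr_site, checks) else "fix before a real disaster")
```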
The advantages of migration and consolidation far outweigh the risks. Standardisation gives organisations the opportunity to replace obsolete processes with much faster and more reliable systems at substantial savings, while providing fail-safe consolidation and migration paths.
With a single command, complete server restores can be accomplished in a fraction of the time without extensive training or tedious administration. One solution addresses the demands of a variety of platforms, eliminating the need for customised restore procedures on each platform. Server restores will be faster, easier, and more successful, getting your business back online as soon as possible.
Standardising on a single infrastructure software platform across the entire data centre increases the protection and availability of critical information and applications, improves the utilisation of storage and server hardware assets, and enhances visibility and control of the data centre environment. Enterprises can replace dozens of different tools with comprehensive, easily managed data protection and data management, storage tiering, server and storage consolidation, data migration, storage capacity management, server automation, application availability, and disaster recovery, all designed to help IT take proactive control of the data centre, drive down costs, and increase service levels.