If risk management is the process of measuring and assessing risk, where risk is the potential impact (positive or negative) of some future event on an asset, and therefore on a process, that has a characteristic value, then how can IT automation limit the negative impact of such events on our business?
A negative event could be anything: a power cut, flood, human error, fire, or planned or unplanned downtime, all resulting in the business's inability to do business. The most obvious, visible and easily identifiable cost is lost revenue, but probably the higher, and harder to measure, cost is the loss of reputation and brand, and ultimately the loss of customers' trust and loyalty.
Numerous surveys have indicated the extent to which business application slowdowns affect business productivity, customer loyalty, and employee morale. Around 24% of IT staff time is devoted to addressing business application performance delays. In complex business application environments, delays can be caused by changes or updates required to keep pace with end-user demand. In addition, the IT professionals polled recognise that the performance and availability of these applications directly affect customer experience. IT managers acknowledge that persistent delays would affect customer loyalty to their organisations. A reduction in customer loyalty is a risk very few companies are prepared to take.
Risk management can apply to any area of the business and is often a constituent part of a company's business continuity plan, covering people, process and infrastructure. But without the organisation's essence – its data, information and applications – no organisation could function for any length of time. CIOs and IT managers are faced with the challenge of keeping their organisation running and growing, no matter what. This requires planning and testing to mitigate the risk and impact of a planned or unplanned disruption to the organisation.
IT exists to manage, streamline and ensure business efficiency, providing service to the business, whether that is restoring information where required, providing critical applications or new services, or simply ensuring that the information held is safe and secure. The challenge, of course, is providing the right service to the right people at the right time. With services and SLAs added to its remit, the management of IT has become a daunting task.
The growing complexity of information technology and IT infrastructures leads many organisations to react by installing additional server and storage capacity to counter the problems of storage and data bottlenecks. The effect is that the extra capacity is usually dedicated to specific servers and is, therefore, unavailable to help in other areas when needed. More people are required to help run the increased capacity, and the infrastructure becomes disjointed and more difficult to manage.
IT needs the infrastructure not just to meet the needs of the business, but to drive the business and drive down the associated costs. IT organisations need to take control of their increasingly complex environment and streamline its management, and be able to discover and take control of all their assets, from mobile devices, servers, applications and databases right through to storage and archived, off-site media. This means having file systems that are standard across the infrastructure; volume management spanning all storage devices and operating systems; SRM and storage network management tools that recognise every hardware configuration; open-system clustering; workload management; configuration and change management; provisioning and patch management; and application performance management that spans the entire server layer. All that from a single view: easily accessible, proactive, integrated, and complementary with all IT assets.
The automation of data centre processes can improve storage, server, and application performance, as well as giving administrators better control over the data centre, saving time, effort and money through better staff productivity, better utilisation of servers and storage, and greater efficiency and understanding of all the assets in the data centre environment.
If you're not simply trying to manage your environment, you have more time and resources to provide innovative new services, remain ahead of the competition and deliver a superior customer experience. It is a given in today's complex and troubled economy that unless IT can prove added value to the business, it will continue to be viewed as a cost centre rather than a proactive driver of real business advantage and differentiation. It is, therefore, essential that IT organisations have complete visibility across the backup, storage, database, server and application layers, as well as into remote offices, desktops, laptops and mobile devices, in order to build business and IT efficiencies and keep business applications running at optimum efficiency.
Meanwhile, business scenarios keep changing. Corporate mergers, restructuring, tactical purchase decisions made at short notice, and proprietary IT solutions all make IT infrastructures more complex and difficult to manage. Adaptability declines and companies cannot react as quickly or cost-efficiently to changing market conditions. The effect is that many companies expend too much energy reacting instead of focusing on future-oriented, proactive development in order to deal with local and global change.
As critical components of every business application, both security and storage need the same careful design as networks and systems, creating the need for a robust architecture. The IT architecture is more than simply the topology of how you connect clients to applications, to databases, to servers and to storage, and how to protect it all against malicious threats; it must include the people, processes, hardware and software that support data in the organisation.
And since critical tools aren’t always interoperable and IT operations and security functions often have conflicting priorities, such solutions can create more complexity—and more problems—than they solve.
To face the challenges of a changing business environment, organisations need to create an architecture that can adapt to change quickly and efficiently. And that requires building an infrastructure that is flexible enough to respond to a changing IT environment, but rigid enough to withstand disruptions.
A policy-based management system allows administrators to define management rules centrally and have them applied automatically. Such systems are best suited to large IT organisations, where large numbers of devices are easier to manage from a central location. There are many frameworks for establishing policy-based management disciplines in an organisation; the most widely accepted is the Information Technology Infrastructure Library (ITIL) framework. These frameworks provide “best practice” guidelines and architectures to ensure that IT processes are aligned with business processes and that IT delivers business solutions that are both consistent and useful. IT Service Management (ITSM) frameworks concentrate on creating business value through IT processes that manage incidents, configuration, change, releases, capacity, availability, and service levels. Standardising and automating these processes accounts for much of the value organisations realise by implementing ITSM.
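As a rough illustration of the rule-and-action idea behind policy-based management, the sketch below models a policy as a condition over a device's metrics paired with a remedial action. All names, thresholds and structures are hypothetical, not drawn from ITIL or any particular product:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    name: str
    condition: Callable[[dict], bool]   # evaluated against a device's metrics
    action: str                         # remedial action when the condition holds

def evaluate(policies: list[Policy], device: dict) -> list[str]:
    """Return the actions triggered by a device's current metrics."""
    return [p.action for p in policies if p.condition(device["metrics"])]

# Illustrative policies: thresholds and action names are invented
policies = [
    Policy("disk-nearly-full", lambda m: m["disk_used_pct"] > 90, "provision-storage"),
    Policy("cpu-saturated",    lambda m: m["cpu_pct"] > 95,       "migrate-workload"),
]

server = {"name": "db01", "metrics": {"disk_used_pct": 93, "cpu_pct": 40}}
print(evaluate(policies, server))   # only the disk policy fires
```

In a real framework the actions would feed an orchestration layer rather than a list, but the separation of centrally defined rule from automated response is the essence of the approach.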
ITSM takes a customer's perspective on IT service delivery, with business value measured at the customer interface and traced back along the value chain, rather than forward from infrastructure investments to the delivery point. ITSM focuses on customer value and helps IT “talk business,” delivering:
• IT service improvements, such as consistent performance to agreed service levels with minimum disruption and risks that are appropriately minimised and managed
• IT process improvements, including operational best practices, with all the information needed to support and document compliance to appropriate standards
• Standardisation of IT infrastructure and processes, to reduce costs, complexity, and time-to-value of new investments in hardware, software, utilities, and personnel

Backup management

IT has been using backup technologies for years and no one in their right mind would dream of not having a backup system of some sort. Interestingly, most organisations have become so relaxed about backup that this funnel for everything we create has become an unmanageable monster. Just try to retrieve a piece of information from a couple of years ago! Information comes in many forms and through a multitude of different channels. Despite the massive growth of the data storage industry, many organisations have preferred to invest in point solutions that address specific areas of pain. This results in a number of different tools, each with its own internal view of the storage infrastructure and its own administrative capabilities. This inevitably makes managing the backup and storage environment next to impossible, pushing up overheads and leaving the IT organisation with little idea of whether they are likely to be faced with a disaster in the near future.
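To make the retention side of backup management concrete, here is a minimal sketch of a retention policy applied to a backup catalogue. The 30-day and one-year windows, and the data structures, are invented purely for illustration:

```python
from datetime import date

def expired(backup_date: date, today: date) -> bool:
    """Illustrative retention rule: keep all backups for 30 days,
    and keep first-of-month backups for a year."""
    age = (today - backup_date).days
    if age <= 30:
        return False                        # recent dailies: keep
    if backup_date.day == 1 and age <= 365:
        return False                        # monthly fulls: keep for a year
    return True                             # everything else can be expired

today = date(2024, 6, 15)
catalogue = [date(2024, 6, 1), date(2024, 4, 1), date(2024, 4, 17), date(2022, 1, 1)]
to_expire = [d for d in catalogue if expired(d, today)]
print(to_expire)   # the old daily and the out-of-window monthly
```

An automated policy like this, run on a schedule, is exactly the kind of decision that is error-prone when left to manual tape rotation.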
As the information-driven organisation evolves, the demands placed upon the data storage infrastructure become stronger. Accepted wisdom indicates that the basic storage requirement for organisations doubles every one or two years. This is increased further as the need to take data copies, mirrors and replicas is taken into account. As the volume of data stored increases and as the importance of information to the organisation grows, an unacceptable burden of management is placed onto the organisation.
Automated policies that span the entire data centre, managing disk-to-disk backups, disk-to-tape backups, tiered storage migration and tape archiving, with the ability to automatically manage tape rotation and e-discover content using backup, snapshot, mirroring and replication technologies as well as storage virtualisation techniques, remove the inevitable human error.

Storage management
It follows that, in the same way that an organisation needs to construct a meaningful backup strategy, there is a need for a storage management strategy. What is required is a strategy that allows the business to see where its information resides and how it is being managed.
If IT organisations are to meet the growing demands placed on them, and at the same time keep costs under control, they must find ways to make optimal use of data centre assets such as servers, storage hardware, and IT staff. This requires the ability to transform storage management beyond a single application, server, or storage device. Advanced disk and storage management solutions for enterprise computing environments will alleviate downtime during system maintenance by enabling easy, online disk administration and configuration, providing disk usage analysis, RAID techniques and the dynamic reconfiguration of disk storage while a system is online. These tools provide the optimisation of storage assets as well as ensuring the continuous availability and protection of data.
Centralised application, server, and storage management capabilities across a broad array of operating systems and storage hardware, with dynamic storage tiering, enable applications and data to be moved dynamically to different storage tiers, allowing a rapid response to changing business needs. Storage resource management, performance and policy management, storage provisioning, and SAN management ensure that the storage infrastructure runs efficiently. They enable IT organisations to dynamically map applications to the resources they consume and to implement storage tiers that match data to the appropriate storage devices based on business requirements.
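The tiering decision itself can be sketched very simply. In this toy example the tier names and last-access thresholds are assumptions, not any vendor's defaults:

```python
# Map data to a storage tier by how recently it was accessed.
TIERS = [
    (7,    "tier1-ssd"),      # accessed within a week: premium storage
    (90,   "tier2-sas"),      # within three months: mid-range disk
    (None, "tier3-archive"),  # anything older: archive / near-line tier
]

def place(days_since_access: int) -> str:
    """Return the tier a file belongs on, given its last-access age in days."""
    for limit, tier in TIERS:
        if limit is None or days_since_access <= limit:
            return tier

print(place(3), place(30), place(400))
```

A tiering engine would run such a rule continuously and migrate data whose placement no longer matches the policy.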
Storage consolidation gives organisations the opportunity to reclaim badly needed disk space and update organisational policies governing appropriate use of data. To meet storage optimisation requirements, organisations need to resolve three major challenges; key applications require:
• High service levels: storage solutions need to be extremely reliable, and data must be secure at all times.
• Dynamic growth: storage solutions have to support the dynamic growth of applications, and dramatic and unpredictable increases in storage requirements.
• Cost-effectiveness: the storage optimisation solution needs to be cost-effective in order to preserve competitive advantage.
Policy-based management facilities enable automatic responses to alerts and to changes in data performance, and provide end-to-end path management with multi-path features, giving administrators the troubleshooting capability to query storage paths from device to device. They also provide centralised, scalable storage provisioning and automated monitoring and management of the storage environment.
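As an illustration of the kind of condition such a policy engine might evaluate, the sketch below checks that every LUN retains at least two live paths, so that path redundancy loss can trigger an automatic response. The data structures are hypothetical:

```python
# Hypothetical multipath state: each LUN has a list of paths via HBAs.
paths = {
    "lun-01": [{"hba": "hba0", "state": "live"}, {"hba": "hba1", "state": "live"}],
    "lun-02": [{"hba": "hba0", "state": "live"}, {"hba": "hba1", "state": "dead"}],
}

def degraded_luns(path_map: dict) -> list[str]:
    """LUNs with fewer than two live paths, i.e. no failover capacity left."""
    return [lun for lun, plist in path_map.items()
            if sum(p["state"] == "live" for p in plist) < 2]

# A policy engine would bind an action (alert, path rescan) to this condition.
print(degraded_luns(paths))
```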
Organisations also need to consider heterogeneous Storage Resource Management and storage utilisation-reporting tools. By collecting file-level data from UNIX and Windows platforms, IT can consolidate that data into a single set of reports. Storage Resource Management tools can enable administrators to understand how much storage is available, to whom it is allocated and how it is being used. This information enables IT to identify and recover wasted storage – providing an immediate return on investment.

Server management
It’s pretty obvious that discovery of all the assets in your data centre includes the server environment. IT needs to understand in detail what’s running on all the servers in the data centre, actively manage and administer them, and ensure that mission-critical applications running on those servers are always available. Many organisations have created large-scale architectures running complex multi-tier applications across a broad, distributed collection of physical and virtual servers, accessing terabytes of shared storage. If IT organisations are to keep up with the relentless growth in demand for data centre services while keeping costs under control, they need a comprehensive and automated way to control not just their backup and storage environment, but also their applications, virtual machines, and servers.
Discovery solutions for the server environment will discover all the servers and applications in the data centre, as well as their configurations and their dependencies, to allow proactive impact analysis of changes. They then track any changes to these configurations and dependencies, in some cases in real time, and can compare current configurations to established standards to ensure internal and external compliance. Every server will suffer from configuration drift. The biggest cause of clustering failure is configuration change, so configuration management can also contribute to reducing downtime by preventing the configuration drift that would otherwise stop a failover from succeeding. Because of the hours, and often days, it can take to set up, configure, reconfigure, upgrade, and manage network equipment, maintaining a server over its lifetime can cost as much as 8 to 15 times its actual purchase price. Approximately 15 to 20 percent of a server's unscheduled downtime is caused by operator errors in set-up or configuration.
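At its simplest, configuration-drift detection is a diff between an approved baseline and the current state, run regularly so a failover target can be verified before it is needed. The sketch below uses invented configuration keys:

```python
# Hypothetical baseline vs. current configuration for one server.
baseline = {"kernel": "5.15.0", "ntp": "pool.example.org", "max_open_files": 65536}
current  = {"kernel": "5.15.0", "ntp": "pool.example.org", "max_open_files": 1024}

def drift(baseline: dict, current: dict) -> dict:
    """Return {key: (expected, actual)} for every setting that has drifted."""
    keys = baseline.keys() | current.keys()
    return {k: (baseline.get(k), current.get(k))
            for k in keys if baseline.get(k) != current.get(k)}

print(drift(baseline, current))  # {'max_open_files': (65536, 1024)}
```

A configuration management tool would raise this drift as a compliance violation, or remediate it automatically, long before it causes a failed failover.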
Organisations need to improve the utilisation of existing IT environments by intelligently provisioning resources such as servers, switches, and load balancers across computer networks. Organisations need the ability to integrate and optimise IT business processes with automated control, provisioning, and updating functions across heterogeneous Linux, Solaris, Windows, and IBM-AIX environments.
Integrated server management solutions that work with any hardware, support multiple platforms simultaneously, and allow an entire networked environment to be administered remotely over the Web from a single location transform data centre automation, giving IT the ability to control when and where multi-tiered applications run across heterogeneous physical and virtual environments to maximise server utilisation and application availability.
Not only do organisations need to be able to manage their applications in real time, but they also need a centralised, policy-based and automated approach to patching servers and applications across operating systems: from scanning a managed patch repository and assessing current and available patches, including the patch footprint, to pre-release testing and automatic patch distribution with deployment, rollback, failed-patch recovery and automated reporting. Patch management simplifies patch administration across the enterprise, enabling substantial savings in time and money compared to manual or semi-automated approaches.
OK, so we can understand what's in our environment, we can prevent outages, we can dynamically manage our applications and servers and how those resources relate to storage and data, but what about the performance of the applications themselves?

Application management and availability
We are already able to improve application performance through raw device performance, online administration of storage and the flexibility of storage hardware independence, in addition to key storage virtualisation capabilities that provide a management platform for databases, helping IT organisations manage larger and more complex environments with existing resources. By optimising the performance and manageability of an organisation's database applications, IT departments can make more efficient use of existing resources. Now we are able to look at the performance of applications through all levels of the infrastructure, from client to storage.
With increasing workloads and changing performance dynamics, IT staff are forced to spend more of their time isolating and fixing performance problems, an expensive drain on your precious resources. Optimising applications, managing real-time business processes, anticipating opportunities, increasing revenue and reducing costs are imperative in today's "24x7xforever" environments. Centralised Application Management and Performance tools provide businesses with solutions that optimise the performance of business-critical applications. By continuously collecting high quality metrics from each supporting tier of the application infrastructure (web server, Java, application server, database and storage) and correlating these metrics to build a clear picture of application performance, organisations can ensure that the slightest indication of response time degradation can be quickly isolated anywhere in the architecture and the appropriate action is taken to minimise the impact to productivity.
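The correlation step can be illustrated with a toy example: compare each tier's current latency against its baseline and flag the outlier, so the degradation is isolated to one layer of the stack. Tier names and figures are invented:

```python
# Hypothetical per-tier response-time baselines and current readings (ms).
baseline_ms = {"web": 20, "app": 50, "db": 80, "storage": 10}
current_ms  = {"web": 22, "app": 55, "db": 310, "storage": 11}

def slow_tiers(baseline: dict, current: dict, factor: float = 2.0) -> list[str]:
    """Tiers whose current latency exceeds `factor` times their baseline."""
    return [t for t in baseline if current[t] > factor * baseline[t]]

print(slow_tiers(baseline_ms, current_ms))  # the database tier is the culprit
```

Real tools use richer statistics than a fixed multiplier, but the principle is the same: per-tier metrics plus a baseline let the slowdown be localised rather than merely observed.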
Testing a new application or service can be a time-consuming task that can result in delays, predominantly due to constant server re-builds. Ensuring that the development and testing of these applications is done against IT policies, and that they are tested against the ever-changing production environment, takes a huge amount of resource and manual interaction. Using an image-based provisioning management solution can automate the provisioning of server operating systems, applications and network personalisation settings to dramatically reduce server build time, provide the ability to roll back to a previous image and substantially reduce the impact on IT. Tracking real-time production server configuration changes also brings huge challenges, and failing to do so can decrease the chances of the application being successfully deployed into the production environment. Using a real-time configuration change management solution automates this process and increases the chances of successful deployment.
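Image-based roll-back amounts to keeping a history of deployed images per server; a minimal sketch, under assumed names and structures:

```python
class Server:
    """Toy model of a server provisioned from images, newest image last."""
    def __init__(self, name: str, image: str):
        self.name = name
        self.history = [image]

    def provision(self, image: str) -> None:
        self.history.append(image)      # deploy a new image

    def rollback(self) -> str:
        if len(self.history) > 1:
            self.history.pop()          # discard the failed image
        return self.history[-1]         # now-current known-good image

srv = Server("test01", "golden-v1")
srv.provision("golden-v2")              # new build under test
current = srv.rollback()                # build failed: revert
print(current)
```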
But ensuring applications are available to the business, and ultimately the user, is fundamentally the most critical role of any IT organisation. A comprehensive data centre automation solution addresses active management across the application layer of the infrastructure, enabling administrators to monitor effectively and to fail applications over automatically across physical or virtual servers, on every major operating system, in a secure, error-proof way; it automates the management of complex application environments and ensures critical applications are available according to business expectations and needs.
Additionally, some solutions now available can enable organisations to start and stop applications and to define an application's run-time requirements, such as its CPU and memory needs, network and storage connectivity, internal application dependencies, and business priority. Administrators can then create and enforce policies based on those requirements to control when and where applications run, enabling them to maximise server utilisation, increase application availability and respond flexibly to changes in application workloads. They can also provide a more granular level of visibility and control for virtualised environments by monitoring the applications within the virtual server, the virtual server itself, and the underlying hardware, as well as enabling the user to start, stop and migrate the applications and the virtual servers across hosts.
By optimising IT system administration resources, staff can be redeployed to key strategic activities, including the evaluation of new technologies and the implementation of business systems, instead of constantly fire-fighting. By proactively managing data across the environment, reporting on capacity, using predictive analysis tools and taking preventative action, system administrators can see total utilisation across a multi-platform environment, shifting the management of data away from fire-fighting towards capacity planning, management reporting, predictive analysis and the prevention of hardware failure.

Conclusion
IT organisations have always had to ask themselves basic questions to justify spend as well as operational overheads, but these questions, or at least the answers to those questions, have become really quite complex:
How do I analyse my data centre to ensure accurate budgeting and decision-making and maximum business value?
How do I reduce infrastructure costs while maintaining service level agreements (SLAs) with my business units?
How do I seamlessly integrate and deploy new technologies as business groups demand them?
Some common business considerations should include:
What is the overall financial benefit to my business?
Will a new environment deliver the economic and performance benefits we want?
How will migration improve system performance and reduce manageability costs?
How can my business migrate from its current platform to next-generation hardware and software?
Data centres today are at breaking point. Complexity has run out of control, driving costs up and jeopardising service levels. Data centres use equipment from a variety of different storage and server hardware vendors, and these vendors each provide unique and discrete tools to manage their own platforms. Unfortunately, the result has been a proliferation of inconsistent tools and approaches. A standard set of data protection, storage, server, and application management solutions allows IT to manage data, storage, servers, and applications with a unified set of products.
By creating a software infrastructure that enables organisations to simplify their data centres; provide the right service at the right time using the appropriate amount of resource; actively manage and optimise their diverse storage and server assets; deliver IT service levels that support the business; and operate just as efficiently with legacy equipment as with newer systems, so that they are not restricted to a particular vendor's equipment when the time comes for future hardware acquisitions, IT can mitigate both IT and business risk.