Modern workloads need modern management and protection

Following the recent announcement that Veritas is extending its partnership with Nutanix, I reached out to my new colleague Andrew Brewerton, Director of Channels (Western Europe and Africa), to discuss one of the biggest challenges facing IT organisations today: how to manage and protect ‘modern workloads’. We’re seeing a huge growth in these – from big data apps like Hadoop, to hyperconverged infrastructures, to NoSQL databases – and they have a whole different set of requirements, so Andrew and I dug into some of the issues.


JS: Andrew, it’s great to meet you. Firstly, what’s your take on the Veritas-Nutanix partnership? Where do you think we best align?
AB: Thanks, Jasmit. I think as organisations continue to develop applications, and their IT gets more complex and distributed, they’re looking for help to manage and protect it all in a simpler way.

That’s where our partnership makes a lot of sense. At Nutanix, we talk a lot about making things simple for our customers – by rearchitecting their infrastructures to run effectively across clouds, across different locations. And we’ve always seen Veritas as doing the same for data protection, whether that’s at a compliance level or for disaster recovery.

By marrying up our respective technologies, we’ll be helping organisations deliver their applications when and where they’re needed, and keep them up and running no matter what.

JS: A key word you mentioned there is applications. Because that’s what IT is all about now, right?
AB: Yes, definitely. Businesses increasingly depend on their applications. The key goal for most is updating their legacy architectures so they can digitise services for customers. It’s all about integrating social, mobile and IoT to create that modern user experience. For us, that means giving clients the ability to launch great new applications efficiently and get a quick return, so they can be more competitive.

JS: Experimentation plays a big role in that, doesn’t it?
AB: For companies to survive, they need to develop and evolve quickly. There are now so many different routes to market, you need to discover quickly what works for you as a business, and what works for your customer base. People talk a lot about failing fast, and it’s so true. Companies need that agile approach – trying things out, recovering quickly if it doesn’t work, and finding a new method that does before the market leaves them behind.  

JS: That’s where modern workloads come in – NoSQL databases like Cassandra and MongoDB, and platforms like Kubernetes. Why are they so critical?
AB: Well, as businesses press ahead with more agile application development, they definitely need scale-out capabilities.

New services need to be able to support a large and often rapid growth in user numbers – for example, on the back of a high-profile press review or viral success. It’s no good building something that works great for 100,000 users, but falls down when the numbers go up. Yet equally, it’s not cost-effective to go the other way – to resource an app heavily for millions of users from day one.

Demand will always be unpredictable to begin with, and these modern workload technologies are highly dynamic. They ensure the architecture sitting behind a new service has the ability to scale up and down. And that’s what businesses need today.
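The scale-up-and-down behaviour Andrew describes can be illustrated with a minimal sketch. All the names, thresholds and capacity figures below are hypothetical, chosen purely for illustration – this is not any Nutanix or Veritas API:

```python
# Toy autoscaler: pick a replica count from current demand.
# Capacity figures and bounds are illustrative assumptions only.

def desired_replicas(current_users: int,
                     users_per_replica: int = 10_000,
                     min_replicas: int = 2,
                     max_replicas: int = 100) -> int:
    """Scale out as user numbers grow, scale back in as they fall,
    within fixed lower and upper bounds."""
    needed = -(-current_users // users_per_replica)  # ceiling division
    return max(min_replicas, min(max_replicas, needed))
```

So a service sized for 100,000 users runs 10 replicas, a quiet one falls back to the 2-replica floor, and a viral spike is capped at the 100-replica ceiling rather than resourced for millions of users from day one.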

JS: It sounds like modern workloads need to run in the cloud then?
AB: Yes and no. It’s actually about running the workload in the right place at the different phases of its lifecycle. Public cloud services can and do provide great flexibility during the initial phase of development, where unpredictability is often a feature. As an application goes through the DevOps cycle, consuming varying levels of resource, it makes total sense to support this on a platform that flexes quickly, both technically and commercially.

But when the application matures – as the development cycles lengthen and demand is more stable – the economics of public cloud get less attractive. When you can predict what you need in the mid to long term, with regard to processing, storage and so on, it can make more technical and financial sense to deliver these locally using on-prem platforms or by creating your own ‘private cloud’ infrastructure.

It’s a bit like relocating to a new city. You might stay in hotels to begin with, as you get to know the area. When you’re more settled, you’ll save money by renting somewhere. Then, when you’re established, you buy your own property. It’s about choosing the best and most cost-effective home for your workloads, according to their maturity.
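The economics behind that hotel-to-ownership analogy reduce to a simple break-even calculation. The prices below are entirely hypothetical placeholders, not real cloud or hardware costs:

```python
# Back-of-the-envelope break-even: months until steady-state
# on-prem spend undercuts public cloud. Hypothetical figures only.

def months_to_break_even(onprem_capex: float,
                         onprem_monthly_opex: float,
                         cloud_monthly_cost: float) -> float:
    """Months needed for the monthly saving of running on-prem
    to pay back the upfront hardware investment."""
    monthly_saving = cloud_monthly_cost - onprem_monthly_opex
    if monthly_saving <= 0:
        return float("inf")  # cloud stays cheaper indefinitely
    return onprem_capex / monthly_saving
```

With, say, £120,000 of upfront kit, £2,000 a month to run it, and £7,000 a month of equivalent cloud spend, on-prem pays for itself in 24 months – which is why the calculation only favours ownership once demand is stable enough to predict that far ahead.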

JS: But if workloads are running in multiple places, they are more complex to manage, correct?
AB: That’s the perception businesses come to us with. They’re struggling with how to manage legacy IT deployments alongside cloud services, like Office 365 and Salesforce. And they often have even more services running in their remote offices and branches, plus lots of IoT-connected bits of kit out in the field. They’ve got this complex and growing ‘multi-cloud’ environment.

Our focus is on simplifying that for them. Giving them one interface and one set of tools to manage everything, no matter where it’s located. And I know your focus is similar from a data protection perspective, right?

JS: Yes, we talk about simplicity too. As these modern services grow and get more distributed, organisations can often struggle to ensure they are adequately protected. They’re drawn to scale-out architectures because they need high performance to analyse and process huge amounts of unstructured data.

Yet the traditional single-client data protection platform can’t scale accordingly. It just creates a performance bottleneck, with data sitting there in a long queue of traffic, unprotected. If a service went offline or was hit by a cybersecurity attack, that data would be unrecoverable. And the business impact could be massive. Say it was a new airline booking app – flights could be cancelled, reputations would be damaged, share prices might plummet.

In recognition of this risk, many organisations slow down their adoption of emerging workload technologies while they wait for their data protection provider to catch up.

Our focus, therefore, is to simplify the whole data protection process, so businesses can launch new services quickly without having to change their platform. For example, our NetBackup Parallel Streaming architecture, which we’ve made available in NetBackup 8.1, uses a flexible ‘plug-in’ system that allows new workloads to be added as required, at scale. It allows for new advancements in big data, open source and hyperconverged architectures without the data protection bottlenecks. And it gives IT teams the freedom to move and manage data as they wish, with absolute confidence that it is protected.
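The contrast between a serial backup queue and parallel streams can be sketched in a few lines. This is a toy illustration of the general idea only – the function names are invented, and it says nothing about how NetBackup itself is implemented:

```python
# Toy sketch: several streams draining a backlog of shards
# concurrently, instead of one serial queue. Illustrative only.

from concurrent.futures import ThreadPoolExecutor

def protect(shard: str) -> str:
    # Stand-in for backing up one shard of a scale-out workload.
    return f"protected:{shard}"

def backup_parallel(shards: list[str], streams: int = 4) -> list[str]:
    """Protect all shards using a pool of concurrent streams."""
    with ThreadPoolExecutor(max_workers=streams) as pool:
        return list(pool.map(protect, shards))
```

The point of the sketch is the shape of the work: data no longer sits in one long queue behind a single client, so protection keeps pace as the workload scales out.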

Andrew, thank you for your time today. It’s been great to get further insight into Nutanix and how you view the complex landscape of today’s modern business workloads. I think we’re certainly aligned on the need for simplicity – and I look forward to working with you more over the coming year.
