The Veritas Flex Appliance and the Game of Leapfrog
It’s my firm belief that we don’t see much that is actually “new” in IT very often. Mostly what we see is either the clever application of existing technologies or the re-application of pre-existing technologies with the understanding that the tradeoffs are different. I include server virtualization and containerization in the “not new” category: both are quite old in terms of invention, though in more recent history containers have had the more interesting applications.

The reason I’m going down this path is that I frequently get questions about the Flex Appliance: why did we choose to implement a containerized architecture instead of using a hypervisor for an HCI data protection appliance like [insert company name here]? And, are you sure there’s no hypervisor? Yes, I’m sure there’s no hypervisor in Flex; it uses containers. Fundamentally there are good applications for both, and for entirely different types of workloads, so let’s look at why we chose containers for the Flex Appliance instead of a hypervisor.

Containers have their roots in FreeBSD “jails”. Jails enabled FreeBSD clusters to be hardened and allowed “microservices” (to use a more modern turn of phrase) to be deployed onto those systems. The big advantage was a very high level of isolation for the microservices, each running in its own jail. Containers then versus now are fairly different: FreeBSD jails were relatively primitive compared to something like Docker, but for their time they were state of the art, and they worked quite well.

Which brings us to hypervisors. VMware started largely in test/dev. About the time it was entering production environments, we were also seeing significant uptake of 64-bit x86 processors. Most applications were running on a single box and were 32-bit, single-threaded, single-core, and didn’t need more than 4GB of RAM. Rapidly, the default server had 4 cores and 8GB of RAM, and those numbers were increasing quickly.
The hypervisor improved the extremely poor utilization rates for many sets of applications. Today, most new applications are multi-core, multi-threaded, and RAM hungry by design. 16-32 cores per box is normal, as is 128+ GB of RAM, and modern applications can soak all of that up effortlessly, making hypervisors less useful.

Google has been running containers at scale since 2004. In fact, they were the ones who contributed back “cgroups”, a key foundational part of Linux containers and hence Docker. This is interesting because:

- Google values performance over convenience
- Google was writing multi-core, multi-threaded, “massive” apps sooner than others
- Google’s apps required relatively large memory footprints before others
- Google’s infrastructure requires application fault isolation

So, although virtualization existed, Google chose a lighter-weight route more in line with their philosophical approach and bleeding-edge needs, essentially “leapfrogging” virtualization.

Here we are today, with the Veritas Flex Appliance and containers. Containers allow us to deliver an HCI platform with “multi-application containerization” on top of “lightweight virtualization”, essentially leapfrogging HCI appliances built on a hypervisor for virtualization. A comprehensive comparison of virtualization vs.
containers is beyond the scope of this blog, but I thought I would briefly touch on some differences that I think are key and that help to highlight why containers are probably the best fit for modern, hyper-converged appliances:

| Virtualization | Containers |
| --- | --- |
| Operating system isolation (can run different kernels) | Application isolation (shares the host OS kernel) |
| Requires emulated or “virtual” hardware and associated “PV drivers” inside the guest OS | Uses the host’s hardware resources and drivers (in the shared kernel) |
| Standardized “packaging” of the virtual machine (mostly; some variance between hypervisors) | Standardized packaging requiring Docker or one of the other container technologies |
| Optimized for groups of heterogeneous operating systems | Optimized for homogeneous operating system clusters |

Here’s another way to look at it:

| | Enterprise | Cloud |
| --- | --- | --- |
| Hardware | Custom/Proprietary | Commodity |
| HA Type | Hardware | Software |
| SLAs | Five 9s | Always on |
| Scaling | Vertical | Horizontal |
| Software | Decentralized | Distributed |
| Consumption Model | Shared Service | Self Service |

What you see here is a fundamentally different approach to solving what might be considered a similar problem. In a world with lots of different 32-bit operating systems running on 64-bit hardware, virtualization is a fantastic solution. In a hyper-converged appliance environment that is homogeneous and running a relatively standardized 64-bit operating system (Linux) with 64-bit applications, only containers will do.

The application services in the hyper-converged Flex Appliance are deployed, added, or changed in a self-service consumption model. They’re isolated from a potential bad actor. The containers and their software services are redundant and highly available. Application services scale horizontally, on demand. One of the best party tricks of the Flex Appliance that I didn’t touch on above is that containers fundamentally change how data protection services are delivered and updated.
With the Flex Appliance, gone are the days of lengthy and risky software updates and patches. Instead, quickly and safely deploy the latest version in its own container on the same appliance. Put the service into production immediately, or simultaneously run the old and new versions until you’re satisfied with the new version’s functionality. We couldn’t do any of this with a hypervisor. And this is why the Flex Appliance has leapfrogged other hyper-converged data protection appliances. I also refer you to this brief blog by Roger Stein for another view on Flex.

Storage Management for Production Ready Docker
Dear Docker User,

Developers and system administrators around the planet eagerly look forward to connecting and sharing knowledge on what is needed to efficiently build, ship, and run applications. The recent edition of DockerCon EU 2015 in Barcelona was the place to do just that, and it also saw Docker announce Docker Universal Control Plane and Docker Swarm scaling to 50,000 containers in half a second. Two-thirds of the companies that evaluate Docker adopt it, according to the article and based on my interactions with the Docker community. Most hands-on Docker users believe storage is a key challenge to be solved for stateful applications in production-ready Docker.

(Technology Exhibition)

Kubernetes is a widely used container management platform and is deployed by most of the customers I spoke to. However, Docker users may look at Swarm as an alternative, considering they need not learn the kubectl CLI. The demo that showed Docker Swarm scaling to 50,000 containers on 1,000 machines was impressive. It is a testimony to the scaling potential of Docker and offered a glimpse of the future of container clustering.

Virtuozzo, a leading player, showcased their solution for the container migration use case in the Black Belt track. The talk highlighted the complexities of container migration and how the challenges are being solved today, which was very intriguing from a technical standpoint. It also opens new opportunities for storage management to evolve.

Developers spend a majority of their time waiting for tests to run and tracking down and reproducing bugs. One of the interesting use cases discussed was how to speed up integration tests by caching database state as a commit and rolling back to it, rather than re-creating the database from scratch every time the tests run. If a developer finds a bug that only manifests when the database is in a certain state, it is difficult to recreate that state.
So the use case would be: how do you save the database state for later debugging? It’s like creating bookmarks for a development or production database.

One of the important topics at the conference was how to orchestrate persistent storage services for Docker. Veritas recently launched its InfoScale plugin, along with a whitepaper, to guarantee persistence for Docker applications and also to help in snapshotting or migrating data volumes to anywhere in the datacenter. Please peruse the article to understand the value proposition. This plugin, on Docker 1.8+, will make life easier for developers and operations admins running distributed applications with stateful databases on persistent storage.

The latest version of Docker handles storage via volume drivers, and one of the goals for the feature is support for third-party storage hardware with custom behaviors in hyper-converged environments. Better storage integration with Ceph, GlusterFS, ZFS, Veritas InfoScale, and Flocker were some of the requests coming from Docker users.

(Meetup with Solomon Hykes, CTO, Docker Inc.)

This brings us to an interesting transition point: what is the next set of use cases to solve for Docker running in production, from the storage perspective, using agile development methodologies? The vision is to have a REST API to invoke advanced storage management features like scalability, quality of service, snapshots, migration, and disaster recovery directly from any container management platform. It will also involve enabling persistent storage for a wide variety of heterogeneous storage, with relevant customization for production environments, from Veritas, the Software Defined Storage company. In a datacenter with multiple containers running, it is key to easily identify the storage set associated with each container running on a specific host.
From the manageability standpoint, I envisage that comprehensive integration between Docker Universal Control Plane and Veritas InfoScale Operations Manager would be valuable in making any relevant technology on Docker widely usable. In summary, the architects who are considering deploying Docker in production want super-easy, highly resilient storage management and 100% continuous availability, which means zero downtime, and they want applications to have the intelligence to self-heal.

with regards,
Rajagopal Vaideeswaran
Product Owner, Information Availability - Storage