
Predictable performance with containerized applications

krajay

Containers have proliferated immensely in deployments where applications are stateless and compute intensive. Now that containers (specifically Docker) have proven their worth with stateless applications, the next frontier is to prove their worth with stateful applications. Given the ephemeral nature of containerized applications, persistent storage (and the data, specifically) has to be as dynamic and mobile as the containerized applications themselves. The data should be accessible wherever a container moves, and in on-premises data centers the storage is expected to be as predictable as traditional SAN/NAS-based storage while being as agile as the containers themselves.

Veritas HyperScale Architecture

Veritas HyperScale for containers is based on a unique architecture that provides resilient storage with predictable performance. The predictability comes from internally separating the storage used for secondary operations from that required for primary operations. HyperScale segregates its services into two horizontal planes: the top plane (Compute Plane) is responsible for active/primary IO from application containers, and the lower plane (Data Plane) is responsible for version (snapshot) management of volumes and the use of those snapshots for secondary operations such as backup, analytics, etc. This blog focuses on the Compute Plane and how it provides storage predictability and mobility to containers. Please refer to the blog Snapshots and recovery of persistent volumes for a discussion of the Data Plane and its capabilities.

Architecture - HyperScale for Containers

Compute Plane

Compute Plane services are responsible for providing resilient, distributed storage with predictable performance on commodity hardware. The plane employs a set of (micro)services, such as Device Discovery & Management (vxdevmgr), the Operations Controller (vxctlr), and the IO Manager (vxiomgr), to provide a cohesive storage solution accessible through the UI/Management Service (vxmgmt) and the Veritas volume plugin (vxvol-plugin). The IO Manager service is specifically tasked with handling the IOs on the volumes configured for application containers. With the help of a proprietary device driver (vxblk), the IO Manager provides storage services including IO caching/storage tiering, resiliency, and QoS for the volume.

Veritas Volume Plugin

HyperScale is a storage solution with its primary capabilities built in user space, which gives it the flexibility to configure and deploy storage for application containers and to fit seamlessly into the container ecosystem. The storage is exposed to Docker containers through volumes mounted in the container namespace. Once you have deployed Veritas HyperScale (see the blog HyperScale containers and enterprise storage service), you can configure volumes for use by an application through the GUI. If you are a CLI enthusiast, you can use the docker volume create and docker run commands with the Veritas plugin to configure and use a volume:

  1. Create a volume using the Veritas plugin:
docker volume create -d veritas --name <volume-name> -o size=<in-GB>
  2. Run your application using the volume:
docker run -v <volume-name>:<mountpoint-in-container> ...
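For example, a 10 GB volume for a MySQL container could be configured as follows (the volume name, mountpoint, and image are illustrative choices, not anything prescribed by HyperScale):

docker volume create -d veritas --name db-vol -o size=10
docker run -v db-vol:/var/lib/mysql mysql:5.7

You can confirm the volume and the driver backing it with docker volume inspect db-vol.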

On successful execution, a virtual device is associated with the volume. The device is managed by the proprietary device driver (vxblk), which keeps it resilient in the event of a service failure for any reason. The vxblk driver thus acts as a conduit between the filesystem in the application container and the IO Manager service, providing storage services including Reflection, Storage Tiering, and Quality of Service (QoS).

Reflection

Any storage solution is incomplete if it is not resilient enough to handle node failures, and almost every storage solution provides fault tolerance of some sort. Where HyperScale differentiates itself is in the amount of primary storage used to provide that resiliency. While most solutions maintain multiple full copies of the data on primary storage, HyperScale needs to maintain such copies only for recent data gathered over a short period (an epoch), typically 15 minutes. Data older than that is maintained in the Data Plane with adequate resiliency. Active writes are reflected on a specified number of nodes, and if a node fails, storage is made available to the container wherever it moves by reconstructing the data from the cached data and the snapshots in the Data Plane.
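To make the idea concrete, here is a minimal Python sketch of epoch-based reflection. The reflection factor, the Peer class, and the Data Plane hand-off are all stand-ins invented for illustration; the actual protocol and services are internal to HyperScale.

REFLECTION_FACTOR = 2          # hypothetical number of peer mirrors
local_cache = {}               # block number -> data written this epoch

class Peer:
    """Stand-in for a peer node that mirrors active writes."""
    def __init__(self):
        self.mirror = {}
    def store(self, block, data):
        self.mirror[block] = data
    def release_epoch(self):
        self.mirror.clear()

def flush_to_data_plane(cache):
    pass                       # stand-in for persisting the epoch in the Data Plane

def write(block, data, peers):
    local_cache[block] = data
    # Reflect synchronously so the write survives a Compute Plane node failure.
    for peer in peers[:REFLECTION_FACTOR]:
        peer.store(block, data)

def end_of_epoch(peers):
    flush_to_data_plane(local_cache)   # epoch is now resilient in the Data Plane
    local_cache.clear()                # primary copies of older data are dropped
    for peer in peers:
        peer.release_epoch()           # mirrored copies are no longer needed

The point of the sketch is the lifetime of the mirrored copies: they exist only for the current epoch, so the primary-storage overhead stays proportional to 15 minutes of writes rather than to the full data set.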

Storage Tiering

The IO Manager pools storage of different media types, discovered through vxdevmgr, into different tiers and uses those tiers to provide predictable performance. The SSD tier (optional) comprises SSD devices and is used for caching writes (write-back), which helps latency-sensitive applications. Writes on the volumes are captured over a period (typically 15 minutes) in the SSD tier and subsequently flushed to the HDD-tier backing storage in the Compute Plane and the Data Plane. Once the data is flushed to the HDD tier, the SSD storage is available for further writes. In addition to absorbing the latencies of the HDD tier, write caching on the SSD tier coalesces writes over the working set, reducing the absolute number of IOs on the HDD tier and achieving better throughput from the same devices.
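The payoff of coalescing is easy to see in a minimal sketch: if the same block is rewritten many times within an epoch, only the latest version reaches the HDD tier. The names below are illustrative, not HyperScale internals.

ssd_cache = {}                     # block number -> latest data in the epoch

def hdd_write(block, data):
    pass                           # stand-in for a write to the HDD tier

def write(block, data):
    ssd_cache[block] = data        # absorbed at SSD latency; overwrites in place

def flush_epoch():
    # One HDD write per dirty block, no matter how often it was rewritten.
    for block, data in sorted(ssd_cache.items()):
        hdd_write(block, data)
    ssd_cache.clear()              # SSD space is free for the next epoch

A workload that rewrites a hot block a hundred times in an epoch costs a hundred SSD writes but only one HDD write.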

QoS

One concern with storage-as-a-service solutions is how to isolate storage access and performance between different users (containers) of the storage. The IO Manager limits the impact of noisy neighbors (containers) based on specified QoS attributes. Users can specify the minimum and maximum IOPS for a volume, and the IO Manager ensures that no application breaches its usage limits.
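One common way to enforce such a per-volume IOPS ceiling is a token bucket; the Python sketch below is a conceptual illustration under that assumption, not HyperScale's actual implementation.

import time

class VolumeQoS:
    """Hypothetical per-volume limiter: one token admits one IO."""
    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.tokens = float(max_iops)
        self.last_refill = time.monotonic()

    def admit_io(self):
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the ceiling.
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last_refill) * self.max_iops)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True            # dispatch the IO now
        return False               # throttle: retry once tokens refill

Honoring minimum IOPS would additionally require prioritizing a starved volume's IOs over neighbors that are already above their floor.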

 

Do let us know what kind of storage challenges you are facing in the container ecosystem. Let us know what you find missing here for adopting Veritas HyperScale in your container environment, and what you find appealing.