
Say Goodbye to the Noisy Neighbor when Using Docker

ccarrero
Level 4
Employee

One of the major challenges with any kind of virtualization is providing sufficient, consistent performance to the different applications sharing a host. When virtual machines emerged, we started seeing a problem at the storage layer known as the Noisy Neighbor problem: with no control over how much I/O a virtual machine consumes, it can easily degrade the performance of the other virtual machines.

Containers have the same problem if the storage layer is not smart enough to differentiate the types of I/O it has to serve. Once again, all our consolidation efforts are wasted if containers on the same machine start affecting each other’s performance. The storage layer needs to make it possible to run more containers without fear of performance degradation.

In the blog entry Docker Persistent Storage and Quality of Service with Next Generation InfoScale Storage Solutions I described an upcoming feature in InfoScale 7.1 that would help with this problem. Now I am happy to announce that not only is Quality of Service available with InfoScale 7.1, but the InfoScale Docker plug-in has also been enhanced, so this feature can be implemented easily.

InfoScale provides that intelligent storage layer: persistent storage for Docker containers, so the containers can run anywhere within the cluster and always get access to the storage they need, with the resiliency and quality of service they need.

[Figure 1: figure1_0.png]

Another key aspect to consider is that DevOps teams are the new consumers of this technology. Their focus is on developing applications faster, not on how many spindles are needed or how a container should consume storage. This is where the InfoScale plug-in for Docker comes in. The plug-in has been enhanced so that when a volume is created through the Docker CLI, the maximum number of IOPS the volume may consume can be specified. Policies can be built around this, and the only thing DevOps needs to worry about is choosing one of them.
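For example, such a policy could be a small wrapper that maps a service tier to a maxiops value. The tier names and IOPS values below are purely hypothetical; only the veritas driver and the size and maxiops options come from this post:

```shell
#!/bin/sh
# Map a hypothetical service tier to an IOPS cap (values are illustrative).
tier_to_maxiops() {
    case "$1" in
        gold)   echo 0    ;;  # 0 = uncapped in this sketch
        silver) echo 5000 ;;
        *)      echo 1000 ;;  # bronze / default
    esac
}

# Build the docker volume create command for a given volume name and tier.
create_volume_cmd() {
    name="$1"; tier="$2"
    iops=$(tier_to_maxiops "$tier")
    if [ "$iops" -eq 0 ]; then
        echo "docker volume create -d veritas --name $name -o size=10G"
    else
        echo "docker volume create -d veritas --name $name -o size=10G -o maxiops=$iops"
    fi
}

create_volume_cmd appvol bronze
```

The wrapper only prints the command, so a DevOps user can review it (or pipe it to sh) while thinking in tiers rather than in IOPS numbers.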

The theory sounds good, but I would like to show you how this really works. First we are going to start a container that performs random reads. Assume the application running in this container really cares about performance, so we want to keep other containers from interfering with it.

First we create the volume:

[~]# docker volume create -d veritas --name vol1 -o size=10G

vol1

And start the container:

[~]# docker run -d --name rand1 -v vol1:/volume --volume-driver veritas fio_randread
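The contents of the fio_randread image are not shown in this post; a job file along these lines (purely an assumption, not the actual image contents) would generate a comparable random-read load. Note that bs=8k matches the roughly 8 KB per operation implied by the stats that follow:

```ini
; randread.fio - hypothetical job file for the fio_randread image
[randread]
rw=randread          ; random reads only
directory=/volume    ; the InfoScale volume mounted into the container
size=1G
bs=8k                ; ~8 KB per operation, as the BYTES/OPERATIONS ratio suggests
ioengine=libaio
iodepth=32
time_based=1
runtime=600
```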

We can see that when this container is running alone it gets around 100K IOPS:

                                         OPERATIONS          BYTES           AVG TIME(ms)

TYP NAME                                 READ     WRITE      READ     WRITE   READ  WRITE

vol vol1                                99146         0  774.578m        0m   0.13   0.00

Now we are going to start ten more containers that will compete with the I/O generated by the first one. A small loop creates their volumes:

[~]# for i in 2 3 4 5 6 7 8 9 10 11

do

docker volume create -d veritas --name vol$i -o size=10G

done

 

And we are going to start the same image running the same type of IO:

[~]# for i in 2 3 4 5 6 7 8 9 10 11;

do

docker run -d --name rand$i -v vol$i:/volume --volume-driver veritas fio_randread;

done

This is now the performance for each of the volumes:

                                         OPERATIONS          BYTES           AVG TIME(ms)

TYP NAME                                 READ     WRITE      READ     WRITE   READ  WRITE

vol vol1                                31518         0 246.2373m        0m   0.48   0.00

vol vol2                                28940         0 226.0937m        0m   0.49   0.00

vol vol3                                27694         0 216.3608m        0m   0.49   0.00

vol vol4                                29530         0  230.703m        0m   0.49   0.00

vol vol5                                29889         0 233.5107m        0m   0.48   0.00

vol vol6                                31565         0 246.6074m        0m   0.48   0.00

vol vol7                                31469         0 245.8544m        0m   0.49   0.00

vol vol8                                31895         0 249.1796m        0m   0.48   0.00

vol vol9                                27395         0 214.0249m        0m   0.49   0.00

vol vol10                               31542         0 246.4248m        0m   0.48   0.00

vol vol11                               31748         0 248.0327m        0m   0.48   0.00

 

So as we can see, because of the new containers, the performance of our original container, the one using the volume named vol1, has dropped from 100K to 31K IOPS.

To limit the “noise” from these less important containers, we can use the new maxiops option, which caps the maximum number of I/O operations per second a volume may serve. We are going to run those containers again, limiting each one to 1K IOPS.

Before that, I can check that once I stop the noisy containers, my original performance goes back to normal. Clearly the noisy containers have an effect on performance while they are running.
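The cleanup commands are not shown in the post; a sketch along these lines (using the container and volume names from the loops above) would stop the noisy containers and remove their volumes so they can be recreated with a cap. It prints the commands first so they can be reviewed before piping to sh:

```shell
#!/bin/sh
# Generate the cleanup commands for the noisy containers rand2..rand11
# and their volumes vol2..vol11.
cleanup_cmds() {
    for i in 2 3 4 5 6 7 8 9 10 11; do
        echo "docker rm -f rand$i"     # stop and remove the container
        echo "docker volume rm vol$i"  # remove its volume
    done
}

cleanup_cmds          # review the commands...
# cleanup_cmds | sh   # ...then execute them
```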

                                         OPERATIONS          BYTES           AVG TIME(ms)

TYP NAME                                 READ     WRITE      READ     WRITE   READ  WRITE

vol vol1                                97402         0 760.9575m        0m   0.13   0.00

 

Now we create the volumes with the max IO limitation:

[root@target-3 ~]# for i in 2 3 4 5 6 7 8 9 10 11

do

docker volume create -d veritas --name vol$i -o size=10G -o maxiops=1000

done

And we create the containers using those volumes:

[root@target-3 ~]# for i in 2 3 4 5 6 7 8 9 10 11

do

docker run -d --name rand$i -v vol$i:/volume --volume-driver veritas fio_randread

done

And now we can see that the volume used by the application I want to protect keeps its performance, while all the others are held to the 1000 IOPS we specified at volume creation:

 

                                         OPERATIONS          BYTES           AVG TIME(ms)

TYP NAME                                 READ     WRITE      READ     WRITE   READ  WRITE

vol vol1                                97453         0 761.3559m        0m   0.13   0.00

vol vol2                                 1000         0   7.8125m        0m   0.47   0.00

vol vol3                                 1000         0   7.8125m        0m   0.48   0.00

vol vol4                                 1000         0   7.8125m        0m   0.49   0.00

vol vol5                                 1000         0   7.8125m        0m   0.49   0.00

vol vol6                                 1000         0   7.8125m        0m   0.50   0.00

vol vol7                                 1000         0   7.8125m        0m   0.49   0.00

vol vol8                                 1000         0   7.8125m        0m   0.49   0.00

vol vol9                                 1000         0   7.8125m        0m   0.49   0.00

vol vol10                                1000         0   7.8125m        0m   0.49   0.00

vol vol11                                1000         0   7.8125m        0m   0.49   0.00
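As a quick sanity check, a stats line like the ones above can be scraped to confirm a volume stays at its cap. This is a sketch that assumes the column layout shown in this post (TYP, NAME, READ operations in the third column):

```shell
#!/bin/sh
# Extract the READ operations column for a given volume from stats lines
# shaped like:  vol vol2   1000   0   7.8125m   0m   0.47   0.00
read_iops() {
    volname="$1"
    awk -v v="$volname" '$1 == "vol" && $2 == v { print $3 }'
}

# Example: feed one stats line through the filter.
printf 'vol vol2  1000  0  7.8125m  0m  0.47  0.00\n' | read_iops vol2
```

In practice the real stats output would be piped into read_iops instead of the hard-coded sample line.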

So, in summary, InfoScale 7.1 now provides Quality of Service integrated with Docker, solving the problem of uncontrolled containers consuming all the available storage bandwidth and hurting other containers’ performance. You can now run your containers in production with much higher confidence.

The next question is what happens when you want to boost performance so that some containers get the most out of your storage system. Stay tuned for our next release, when we will be talking about IO Acceleration.

Download the VRTSdocker-plugin-1.2-Linux.x86_64 plug-in (remember, you have to log in or register with the community)

Learn how to easily run some testing in your laptop

Join our Veritas InfoScale Containers Group