Every company is becoming a software company, and much of the industry's attention is on making software development happen faster. In today's cloud market, new DevOps tools and methodologies emerge every day, giving teams plenty of options to choose from. Competition has spiked, which in turn puts pressure on software firms to deliver products and services that match or exceed those of their competitors.
As cloud adoption keeps gaining momentum and more firms embrace cloud practices, containerization and DevOps tools like Docker are in high demand. In this article, we will look at some facts about Docker that are useful for developers and architects.
Virtual Machines and the Evolution of Docker:
Long before containers and Docker, big firms would buy server after server to make sure their services and business didn't go down. They usually ended up buying more servers than they needed, which was extremely expensive, but they had to: as more and more users hit their systems, they wanted to be sure they could scale without downtime or outages. Then VMware and IBM (there is still a debate about who introduced it first) came along with virtualization, a game-changer that allowed multiple operating systems to run on the same host. But this, too, turned out to be expensive, with multiple kernels and guest OSes to run. Fast forward to modern-day containerization, and we have Docker, a company and tool that solves a lot of these problems.
Why Do Developers Like Docker?
Docker makes it easy for developers to develop and deploy apps inside neatly packaged, containerized environments. This means apps run the same no matter where they are or what machine they are running on, which finally retires the age-old excuse of 'it works on my machine.' Docker containers can be deployed to just about any machine without compatibility issues, so your software stays system agnostic: simpler to use, less work to develop, and easier to maintain and deploy.
A developer will usually start by accessing Docker Hub, an online cloud repository of Docker images, and pull one containing a pre-configured environment for their programming language, such as Ruby or Node.js, with all of the files and frameworks needed to get started. Docker is one tool that genuinely lives up to its promise of Build, Ship, and Run.
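As a rough sketch of that workflow (the image tag, port, and mounted directory here are illustrative, assuming the official Node.js image on Docker Hub):

```
# Pull a pre-configured Node.js environment from Docker Hub
docker pull node:20-alpine

# Start a throwaway interactive container from that image,
# mounting the current project directory and exposing a dev port
docker run -it --rm -p 3000:3000 -v "$PWD":/app -w /app node:20-alpine sh
```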
Worldwide and across the industry, many companies and institutions are using Docker to speed up their development activities.
PayPal now has more than 700 applications and has converted them all into container-based applications. They run 150,000 containers, which has helped boost their developer productivity by 50%.
MetLife, on the other hand, made huge savings on infrastructure because they were able to manage more applications with fewer operating systems. This gave them a lot of their hardware back and cut costs significantly: after adopting Docker, MetLife saw a 70% reduction in VM costs, 67% fewer CPUs, 10x average CPU utilization, and a 66% overall cost reduction. That's the power of Docker for you.
Why Did Docker Become So Popular?
- No hypervisor
Docker is a form of virtualization, but unlike virtual machines, resources are shared directly with the host. This allows you to run many Docker containers where you may only be able to run a few virtual machines.
A virtual machine has to wall off a set amount of resources (disk space, memory, processing power), emulate hardware, and boot an entire operating system; the VM then talks to the host computer through a translator application running on the host operating system, called a hypervisor. Docker, by contrast, communicates natively with the system kernel, bypassing that middleman on Linux machines and even on Windows 10 and Windows Server 2016 and above. This means you can run any Linux distribution in a container and it will run natively. On top of that, Docker uses less disk space.
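A quick illustration of that point, assuming Docker is installed and using the public ubuntu and alpine images: each container brings its own distribution's userland, yet all of them report the host's kernel.

```
# Different distributions' userlands, running side by side
docker run --rm ubuntu:22.04 cat /etc/os-release
docker run --rm alpine:3.19 cat /etc/os-release

# ...but every container shares the host machine's kernel
docker run --rm ubuntu:22.04 uname -r
docker run --rm alpine:3.19 uname -r
```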
Let's compare Virtualization vs. Containerization:
In virtualization, the infrastructure layer is the bare metal host server; it could also be your laptop or desktop. On top of that sits the host operating system, which might be something like Windows Server, or on a personal machine, macOS or a Linux distribution.
On top of the host OS, virtualization adds the hypervisor. The virtual machines we run are essentially isolated operating system environments packaged inside a file, the virtual machine image, and the hypervisor is what knows how to read and run that file. Common hypervisors are things like VMware and VirtualBox, and they know how to interpret these guest operating systems.
On top of the hypervisor run the actual guest OSes, each with its own kernel, and this is where things start getting a little expensive from a resource allocation perspective. On top of each guest OS we install our binaries and libraries, and finally we copy over all the files that make up the application we want to deploy on the server.
Now, let's contrast this with containerization. We still have the infrastructure and the host OS, but there is no hypervisor. Instead, a process called the Docker daemon runs directly on the operating system and manages things like running containers, images, and the command-line utilities that come with Docker. The applications we package run directly on the host machine: we create images, which are like portable copies of the application we want to distribute, and a running instance of an image is what's known as a container. Containerization effectively kills the 'it works on my machine but not on theirs' drama.
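A minimal sketch of that image-to-container relationship, using the public nginx image purely as an example:

```
# Pull an image: the read-only template the daemon stores locally
docker pull nginx:1.25

# Start a container: a running instance of that image, managed by the Docker daemon
docker run -d --name web nginx:1.25

docker images   # lists the images on the host
docker ps       # lists running containers (instances of images)
```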
Image: An executable package that contains everything needed to run an application, including configuration files, environment variables, the runtime, and libraries.
Dockerfile: A simple text file containing all the instructions for building a Docker image; you can think of it as the automation of Docker image creation.
Build: Creating an image snapshot from the Dockerfile.
Tag: Version of an image. Every image will have a tag name.
Container: A lightweight software package/unit created from a specific image version.
DockerHub: Image repository where we can find different types of images.
Docker Daemon: The Docker daemon runs on the host system. Users do not communicate with the daemon directly, but only through Docker clients.
Docker Engine: The system that allows you to create and run Docker containers.
Docker Client: The primary user interface to Docker, provided by the docker binary. It takes docker commands from users and handles communication back and forth with the Docker daemon.
Docker Registry: A service that stores, hosts, and distributes your Docker images. The default registry is Docker Hub.
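To tie these terms together, here is a minimal, hypothetical example. The application name, tag, and registry account (myaccount/myapp) are made up for illustration, and the flow assumes a small Python app with a requirements.txt.

```
# Dockerfile — the instructions for building the image
# Base image pulled from a registry (Docker Hub by default)
FROM python:3.12-slim
WORKDIR /app
# Install dependencies in their own cached layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application source
COPY . .
CMD ["python", "app.py"]
```

```
# Build an image from the Dockerfile and give it a tag (name:version)
docker build -t myaccount/myapp:1.0 .

# A container is a running instance of that image
docker run -d --name myapp myaccount/myapp:1.0

# Push the tagged image to a registry (Docker Hub is the default)
docker push myaccount/myapp:1.0
```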
Embracing DevOps with Docker:
Docker, as a tool, fits perfectly into the DevOps ecosystem. It is built for modern software firms in the midst of digital transformation. You cannot ignore Docker in your DevOps toolchain; it has become a de facto standard and is almost irreplaceable.
What makes Docker so good for DevOps enablement are the advantages it brings to the software development process: containerizing applications makes development easier and supports fast release cycles.
Docker solves many Dev and Ops problems, the main one, of course, being 'it works on my machine,' and it enables both teams to collaborate effectively and work efficiently.
According to the RightScale 2019 State of the Cloud Report, Docker is already winning the container game with impressive year-over-year adoption growth.
With Docker, you can create immutable dev, staging, and production environments. You have a high level of control over changes, because changes are made through immutable Docker images and containers, and you can roll back to a previous version at any moment. Development, staging, and production environments become much more alike: if a feature works in the development environment, it is very likely to work in staging and production, too.
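As a sketch of that rollback story (the image name and tags are hypothetical), redeploying an earlier release is just a matter of starting a container from the previous immutable tag:

```
# Running version 1.3.0 in production
docker run -d --name api myaccount/api:1.3.0

# A problem appears — roll back to the previous immutable image
docker rm -f api
docker run -d --name api myaccount/api:1.2.0
```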
Datadog sampled its customer base, representing more than 10,000 companies and 700 million containers. The resulting report shows that at the beginning of April 2018, 23.4 percent of Datadog customers had adopted Docker, up from 20.3 percent one year earlier. Since 2015, the share of customers running Docker has grown at a rate of about 3 to 5 percentage points per year.
Docker best practices:
Before approaching Docker, you should know some best practices so you can reap the benefits of the tool to the fullest extent. Here are some Docker best practices to keep in mind (a short Dockerfile sketch that pulls several of them together follows the list):
- Build images to do just one thing (Also, See Security Best Practices for Docker Images)
- Use tags to reference specific versions of your image
- Prefer minimalist base images
- Use multi-stage builds
- Don't run as the root user, whenever possible
- Use official, purpose-built images
- Enable Docker content trust
- Use Docker Bench for security
- Use Artifactory to manage Docker images
- Leverage Docker enterprise features for additional protection
- Write your Dockerfile carefully; build Docker images that are slim and smart, not fat ones
- Persist data outside of a container
- Use Docker Compose as Infrastructure as Code, and keep track of versions using tags
- Role-based access control
- Do not add user credentials, keys, or other critical data to the image; pass them in as deployment variables
- Make use of Docker layer caching; try pushing "COPY . ." to the last line of the Dockerfile if possible
- Use .dockerignore file
- Don't install debugging tools in images, to keep image size down
- Always set resource limits on Docker containers
- Use swarm mode for small applications
- Don't blindly trust downloads from Docker Hub! Verify them! See more at 'DockerHub Breach Can Have a Long Reach'
- Build Docker images with tuned kernel parameters
- Use an Alpine base image where possible
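Here is a hedged sketch that pulls several of these practices together: a multi-stage build on a minimal Alpine base, a non-root user, dependencies installed before the source is copied so layer caching works, and resource limits at run time. The project layout (package.json, a dist/ build output) and the limits chosen are illustrative, not prescriptive.

```
# Stage 1: build stage — build tools never reach the final image
FROM node:20-alpine AS build
WORKDIR /app
# Copy dependency manifests first so this layer stays cached
COPY package*.json ./
RUN npm ci
# Application source copied last, since it changes most often
COPY . .
RUN npm run build

# Stage 2: slim runtime image with only the built artifacts
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
# The official node image ships a non-root 'node' user
USER node
CMD ["node", "dist/index.js"]
```

```
# Keep secrets and junk out of the build context with a .dockerignore
# (node_modules, .git, .env, *.log, ...)

# Always run containers with resource limits
docker run -d --memory=256m --cpus=0.5 myaccount/myapp:1.0
```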
Docker is all about speed:
Containers are the next once-in-a-decade shift in infrastructure, and we all need to take part in it. The hardest part whenever a new tool arrives in the IT industry is the migration: we have to learn the new tools and workflows, understand the terminology, and much more. But the nice thing about Docker is that it was created with developers, sysadmins, test engineers, Ops people, and IT architects in mind. According to Gartner research, more than 50% of global organizations will be running containers in production.
Without containers, organizations run into what is called the 'matrix from hell': different types of applications, dependencies, and environments that all need to work together for the software to run efficiently, and that really is hell. Docker fixes this problem. Docker is all about speed, and it helps you develop fast, build fast, test fast, deploy fast, update fast, and recover fast.
Docker is a fantastic piece of technology, and its high level of adoption has made it a default tool for embracing DevOps practices. Docker has kick-started digital transformation at many firms. Millions of users rely on Docker, downloading 100M container images a day or maybe even more (as per Docker's blog), and over 450 companies have turned to Docker Enterprise Edition, including some of the largest enterprises in the world. With such vast adoption, the range of stories to tell and the diverse set of use cases continue to grow.