Containers
1. Containers
So, what is a container? To answer that question, we'll first need to explore how applications are deployed. Not long ago, the common way to deploy an application was on a local computer. To set one up, you needed physical space, power, cooling, and network connectivity. Then you needed to install an operating system, any software dependencies, and finally, the application itself. When you needed more processing power, redundancy, security, or scalability, you'd add more computers. It was also very common for each computer to have a single purpose: a database, a web server, or content delivery, for example. This practice wasted resources and made deploying, maintaining, and scaling slow and expensive.

Then came virtualization, which is the process of creating a virtual version of a physical resource, such as a server, storage device, or network. Virtualization made it possible to run multiple virtual servers and operating systems on one physical computer. The software layer that breaks the dependency of an operating system on the underlying hardware, allowing several virtual machines to share that hardware, is called a hypervisor. Kernel-based Virtual Machine, or KVM, is one well-known hypervisor. With virtualization, new solutions can be deployed fairly quickly, fewer resources are wasted, and portability is improved because virtual machines can be imaged and easily moved. However, an application, all of its dependencies, and an operating system are still bundled together. It's not easy to move a VM from one hypervisor product to another, and every time you start a VM, its operating system takes time to boot up. Running multiple applications within a single VM creates yet another problem: applications that share dependencies are not isolated from each other. The resource requirements of one application can starve other applications of the resources they need.
Also, a dependency upgrade for one application might cause another to stop working. You can try to solve this problem with rigorous software engineering policies. For example, you can lock down the dependencies so that no application is allowed to change them; but this can lead to new problems, because dependencies occasionally need to be upgraded. You can also add integration tests to ensure that applications work as intended. However, dependency problems can cause novel failure modes that are hard to troubleshoot, and relying on integration tests to confirm the basic integrity of your application environment really slows down development.

The VM-centric way to solve this problem is to run a dedicated virtual machine for each application. Each application maintains its own dependencies, and the kernel is isolated, so one application won't affect the performance of another. But even with just two applications, the result is two complete copies of the kernel running. Scale this to hundreds or thousands of applications and you quickly see the limitations.

A more efficient way to resolve the dependency problem is to implement abstraction at the level of the application and its dependencies. You don't have to virtualize the entire machine, or even the entire operating system: just the user space. The user space is all the code that resides above the kernel, and it includes applications and their dependencies. This is what it means to create containers. Containers are isolated user spaces for running application code. Containers are lightweight because they don't carry a full operating system; they can be scheduled onto, and tightly integrated with, the underlying system, which is efficient; and they can be created and shut down quickly, because starting or stopping a container just starts or stops operating system processes, rather than booting an entire VM and initializing an operating system for each application. With containers, you can still develop application code in the usual ways: on desktops, laptops, and servers.
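The idea of packaging an application together with its user-space dependencies can be sketched with a container image definition. The following Dockerfile is a minimal, hypothetical example; the `app.py` filename and the `requirements.txt` dependency list are illustrative assumptions, not from the lesson:

```dockerfile
# Start from a slim base image that provides only the user space.
# There is no kernel here: the container shares the host's kernel at runtime.
FROM python:3.12-slim

# Copy the application and declare its dependencies explicitly,
# so they travel inside the image instead of living on the host.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# Starting the container simply starts this command as an OS process;
# nothing boots, so startup is fast.
CMD ["python", "app.py"]
```

Because the image bundles the app with its dependencies, two containers built from different images can run side by side on the same host without sharing (or conflicting over) any dependencies.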
The final code, however, executes inside a container, which can run on VMs. The application code is packaged with all the dependencies it needs, and the engine that executes the container is responsible for making them available at runtime.

But what makes containers so appealing to developers? First, they're a code-centric way to deliver high-performing, scalable applications. Second, containers provide access to reliable underlying hardware and software. With a Linux kernel base, developers can be confident that code will run successfully regardless of whether it's on a local machine or in production. And if incremental changes are made to a container based on a production image, they can be deployed quickly with a single file copy, which speeds up development. Finally, containers make it easier to build applications that use the microservices design pattern, that is, applications built from loosely coupled, fine-grained components. This modular design pattern lets you scale and upgrade individual components of an application without affecting the application as a whole.

2. Let's practice!
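The microservices idea described above, loosely coupled components each running in its own container, can be sketched with a Compose file. This is a hypothetical example: the service names and image names are assumptions for illustration, not from the lesson:

```yaml
# docker-compose.yml: each service runs in its own container with its own
# dependencies, so one component can be upgraded or scaled independently.
services:
  web:
    image: example/web-frontend:1.0   # hypothetical image name
    ports:
      - "8080:80"
    depends_on:
      - api
  api:
    image: example/order-api:1.0      # hypothetical image name
    environment:
      - LOG_LEVEL=info
```

Scaling one component (for example, `docker compose up --scale api=3`) leaves the other untouched, which is exactly the loose coupling the microservices pattern is after.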