
Container images

1. Container images

An application and its dependencies are called an image, and a container is simply a running instance of an image. By building software into container images, developers can package and ship an application without worrying about the system it will run on. But to build and run container images, you need software. One option is Docker. Although this open-source technology can be used to create and run applications in containers, it doesn't offer a way to orchestrate those applications at scale the way Kubernetes does. Later in this course, you'll use Google's Cloud Build to create Docker-formatted container images.

A container's power to isolate workloads comes from a combination of several Linux technologies. The first is the Linux process itself. Each Linux process has its own virtual memory address space, separate from all others, and Linux processes can be rapidly created and destroyed. The next technology is Linux namespaces. Containers use Linux namespaces to control what an application can see, such as process ID numbers, directory trees, and IP addresses. It's important to note that Linux namespaces are not the same thing as Kubernetes namespaces, which you'll learn more about later in this course. The third technology is Linux cgroups. Linux cgroups control what an application can use, such as its maximum consumption of CPU time, memory, I/O bandwidth, and other resources. And finally, containers use union file systems to bundle everything needed into a neat package, combining applications and their dependencies into a set of clean, minimal layers.

Let's explore how this works. A container image is structured in layers, and the tool used to build the image reads instructions from a file called the container manifest. For Docker-formatted container images, that file is a Dockerfile. Each instruction in the Dockerfile specifies a layer inside the container image.
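The namespace and cgroup machinery described above is visible from any Linux shell, without Docker installed. This is a minimal sketch for illustration; the exact entries you see vary by kernel version.

```shell
# Namespaces control what a process can SEE.
# Every process's namespace memberships are exposed under /proc;
# a containerized process simply links to different namespace
# objects (pid, net, mnt, uts, ...) than processes on the host.
ls /proc/self/ns/

# cgroups control what a process can USE (CPU time, memory, I/O).
# This shows which cgroup hierarchy the current shell belongs to.
cat /proc/self/cgroup
```

A container runtime combines exactly these primitives: it starts an ordinary process, places it in fresh namespaces, and attaches it to cgroups that cap its resource consumption.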
Each layer is read-only, but when a container runs from this image, it also has a writable, ephemeral topmost layer.

Let's explore a simple Dockerfile. This Dockerfile contains four commands, each of which creates a layer. For the purposes of this training, the example has been somewhat oversimplified relative to modern practice; more on that in a moment. The FROM statement starts by creating a base layer, which is pulled from a public repository; this one happens to be a specific version of the Ubuntu Linux runtime environment. The COPY command adds a new layer, which contains some files copied in from your build tool's current directory. The RUN command builds the application by using the make command and puts the result of the build into a third layer. And finally, the last layer specifies the command to run within the container when it launches. When you write a Dockerfile, order the instructions so that the layers least likely to change come first and the layers most likely to change come last.

So why is this Dockerfile oversimplified? It's no longer a best practice to build your application in the same container where you ship and run it. After all, your build tools are at best just clutter in a deployed container, and at worst an additional attack surface. Today, application packaging relies on a multi-stage build process, in which one container builds the final executable image and a separate container receives only what is needed to run the application.

When launching a new container from an image, the container runtime adds a new writable layer on top of the underlying layers. This layer is called the container layer. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer. These changes are ephemeral: when the container is deleted, the contents of the writable layer are lost forever.
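Put together, a Dockerfile along the lines described above might look like the following sketch. It is illustrative, not a file from the course: the image tags, paths, and the hello binary are hypothetical, and it uses the multi-stage pattern just discussed, so only the second stage's layers ship in the final image.

```dockerfile
# Stage 1: build the application. FROM creates the base layer,
# COPY adds a layer with the source files, and RUN adds a layer
# with the build output. (All names here are hypothetical.)
FROM ubuntu:22.04 AS build
RUN apt-get update && apt-get install -y build-essential
COPY . /src
RUN make -C /src

# Stage 2: the runtime image receives only the built artifact,
# so compilers and other build clutter never reach production.
FROM ubuntu:22.04
COPY --from=build /src/hello /usr/local/bin/hello

# CMD records the command to run when the container launches;
# it adds image metadata rather than filesystem content.
CMD ["/usr/local/bin/hello"]
```

Note how the instructions least likely to change (the base image, the toolchain install) come before the frequently changing COPY of your source, so Docker's layer cache can reuse the earlier layers across rebuilds.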
The underlying container image remains unchanged. For application design, this means that permanent data must be stored somewhere other than a running container. And because each container has its own writable container layer, with all changes stored in that layer, multiple containers can share access to the same underlying image while still maintaining their own data state.

Layering also keeps images small. For example, a base application image might be 200 MB, but the difference from it to the next point release might be only 200 KB. When an image is built, instead of copying the entire image, the build tool creates a layer containing just the difference. When a container runs, the container runtime pulls down only the layers it needs, and when a container is updated, only the difference needs to be copied. This is much faster than booting a new virtual machine.

So, how can you get or create containers? It's common to use publicly available open-source container images as the base for your own images, or for unmodified use. Google maintains Artifact Registry at pkg.dev, which contains public, open-source images. It also provides Google Cloud customers with a place to store their own container images, and it's integrated with Identity and Access Management (IAM), which allows you to store container images that are private to your project. Container images are also available in other public repositories, such as Docker Hub and GitLab.

Google also provides a managed service for building containers called Cloud Build. Cloud Build is integrated with Cloud IAM and was designed to retrieve source code for builds from different code repositories, including Cloud Source Repositories and git-compatible repositories like GitHub and Bitbucket. To generate a build with Cloud Build, you define a series of build steps. For example, you can configure build steps to fetch dependencies, compile source code, run integration tests, or use tools such as Docker, Gradle, and Maven. Each build step in Cloud Build runs in a Docker container. From there, Cloud Build can deliver the newly built images to various execution environments, including Google Kubernetes Engine, App Engine, and Cloud Run functions.
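A Cloud Build pipeline is defined in a build configuration file, conventionally cloudbuild.yaml. The sketch below is a minimal, hypothetical example, assuming an Artifact Registry repository named my-repo and an image named my-app; each step runs in its own builder container.

```yaml
steps:
# Build the image using the Docker builder image.
# $PROJECT_ID is substituted by Cloud Build at build time;
# the repository and image names here are hypothetical.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:latest', '.']

# Images listed here are pushed to the registry when the build succeeds.
images:
- 'us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:latest'
```

Running `gcloud builds submit` from the source directory would send the code to Cloud Build, execute each step in sequence, and push the resulting image, ready for deployment to one of the execution environments above.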

2. Let's practice!
