Container Engines and Runtimes
ACM.281 Sorting out all the runtimes and why they exist
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
⚙️ Check out my series on Automating Cybersecurity Metrics | Code.
🔒 Related Stories: AWS Security | Application Security | Container Security.
💻 Free Content on Jobs in Cybersecurity | ✉️ Sign up for the Email List
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the last post we looked at the container ENTRYPOINT as we considered executing a command at the point we build a container image versus executing a command when we run the container. We cloned a public git repository both into an image and at container startup.
In this post, I want to dive into how containers work a bit more as we figure out how to make a container that is compatible with a Lambda function.
Cgroups and Namespaces on Linux
Now for a tad bit of history. Containers started as a construct for isolating processes on Linux.
Cgroups (control groups) in Linux distribute operating system resources to containers.
Cgroups allow processes to be grouped together and ensure that each group gets a share of memory, CPU, and disk I/O, preventing any one container from monopolizing those resources.
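If you want to see cgroups at work, Docker exposes them through resource flags on docker run. This is a minimal sketch assuming a cgroup v2 host; the limits you pass are enforced by the kernel via cgroups:

# Limit the container to half a CPU and 256 MB of memory; Docker writes these limits into cgroups
docker run --rm --cpus 0.5 --memory 256m alpine cat /sys/fs/cgroup/memory.max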
Namespaces provide a mechanism for granting permissions to a user inside a container but not outside of it.
The implementation of user namespaces allows a process to have its own set of users and, in particular, allows a process to have root privileges inside a container but not outside of it.
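You can try user namespaces without any container tooling at all. This is just a sketch using the unshare command from util-linux; the process looks like root inside the new namespace but has no extra privileges on the host:

# Create a new user namespace and map the current user to root inside it
unshare --user --map-root-user sh -c 'id'
# Prints uid=0(root) inside the namespace even though you are unprivileged outside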
How were those containers actually implemented and run? Well, you'd have to execute certain commands to start a process as a container, same as you do today, but the commands and file structures were a bit different.
If you want to look at how containers evolved check out this post.
You can see how FreeBSD Jails (second on the list) work here:
The Docker container engine and CLI
Docker came along and made containers easier to use. Docker has a GUI (which I have barely ever used) and a command line tool called the Docker CLI which allows you to build container images and then run containers from those images. I showed you how to do that with Docker here:
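As a quick refresher, the basic workflow looks something like this (the image name is just an example):

# Build an image from the Dockerfile in the current directory, then run a container from it
docker build -t my-test-image .
docker run --rm my-test-image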
Over time, other companies started building containers, container engines, tools to run containers, and container orchestration tools like Amazon ECS:
and Kubernetes:
Open Container Initiative (OCI)
As containers became more popular, different types of containers came into existence, and different orchestration tools and platforms needed to pull and run them. The need for a standard way to integrate with containers became apparent. This led to the Open Container Initiative (OCI).
Established in June 2015 by Docker and other leaders in the container industry, the OCI currently contains three specifications: the Runtime Specification (runtime-spec), the Image Specification (image-spec) and the Distribution Specification (distribution-spec).
This organization developed standards that, when followed, allow different tools to work together to build, manage, and run containers without modification.
Cloud Native Computing Foundation
At some point the Cloud Native Computing Foundation came along. I'm honestly not really sure why, but it exists and people donate projects to it. Here's the definition from their website:
As part of the Linux Foundation, we provide support, oversight and direction for fast-growing, cloud native projects, including Kubernetes, Envoy, and Prometheus.
This is how they define “Cloud Native” if you are concerned with such things:
Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.
The Cloud Native Computing Foundation seeks to drive adoption of this paradigm by fostering and sustaining an ecosystem of open source, vendor-neutral projects. We democratize state-of-the-art patterns to make these innovations accessible for everyone.
Note that although this pattern is interesting and useful, it is most important to design systems that are fit for purpose. Prime Video decided to revert to a monolith. Managing a monolith is easier in some ways, harder in others. Generally, I like breaking up systems into pieces where it makes sense, but the more pieces, the more complexity, and that complexity could lead to different but equally challenging management problems. You trade some of the problems of a monolith for different problems in a distributed environment. Either way, you need a good architect or architects.
Anyway the CNCF exists to manage these new types of software projects aimed at distributed architectures.
runc
The runc CLI tool was created as a universal container runtime built according to the Open Container Initiative Runtime Specification. It is a lightweight tool that handles the very basics of container management.
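To get a feel for how low level runc is, here is a rough sketch of running a container with it directly, along the lines of the example in the runc documentation. You supply the root filesystem and the OCI config yourself; runc only runs the container:

# Create a bundle directory and extract a root filesystem into it (using Docker just to export busybox)
mkdir -p mycontainer/rootfs && cd mycontainer
docker export $(docker create busybox) | tar -C rootfs -xf -
# Generate a default OCI runtime spec (config.json), then run the container
runc spec
sudo runc run mycontainer-id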
containerd
Then, Docker developed the containerd runtime as a standalone component separate from the Docker daemon.
containerd is available as a daemon for Linux and Windows. It manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision to low-level storage to network attachments and beyond.
The containerd runtime encompasses runc as you can see from the picture in this blog post:
As a standalone component, containerd is designed to be embedded into other systems like Kubernetes and Docker, rather than used directly by developers. You’ll see why this matters when I cover the changes in Kubernetes as the specifications evolved.
While we had many reasons for starting the project, our goal was to move the container supervision out of the core Docker Engine and into a separate daemon.
Then Docker donated containerd to the Cloud Native Computing Foundation.
Kubernetes and container runtimes
I explained in my Containers 101 post that Kubernetes is for container orchestration. Kubernetes was originally a container orchestrator for Docker.
As new options emerged and people started to use them, Kubernetes developed the Container Runtime Interface (CRI) to support other types of containers and container runtimes.
The Kubernetes CRI worked with all types of containers that followed the Open Container Initiative (OCI) Specification. In other words, as long as a container runtime follows that specification in the way it can be accessed and used, it should work with Kubernetes.
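In practice, the kubelet on each node is pointed at whichever CRI-compatible runtime it should use via a socket. This is just a sketch; the exact socket path varies by runtime and distribution:

# kubelet flag pointing at containerd's CRI endpoint (CRI-O exposes unix:///var/run/crio/crio.sock instead)
--container-runtime-endpoint=unix:///run/containerd/containerd.sock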
Dockershim
Docker did not follow the OCI specification because it was created before the specification existed. In order to keep supporting Docker the way it was originally written, Kubernetes used something called Dockershim.
Dockershim deprecation
As Docker modified components to be OCI compliant and developed containerd, the need for Dockershim went away. Because containerd conforms to the OCI specification, Kubernetes could support Docker containers run by containerd without Dockershim, so they dropped it.
Some people freaked out. Kubernetes responded.
Although Kubernetes will pull containers in a different way and use a different runtime, this doesn't affect how we created and built our image using Docker in prior posts. As mentioned in the above section, containerd is designed to be embedded, whereas Docker and the Docker CLI are designed for developers to build, run, and test containers.
ctr
For completeness, ctr, a low level command line tool, is shipped with containerd.
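For example, you can pull an image and run a container with ctr directly, though the commands are more bare bones than the Docker CLI (the image reference and container name are just examples):

# Pull an image and run a container using containerd's bundled CLI
sudo ctr images pull docker.io/library/alpine:latest
sudo ctr run --rm docker.io/library/alpine:latest demo echo "hello from ctr"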
nerdctl
nerdctl is another CLI tool from the containerd project that provides Docker-compatible commands. It supports additional features, including some we may be interested in as security professionals, like encrypted images and image signing and verification.
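Because nerdctl aims for Docker compatibility, basic commands look almost identical to Docker. A minimal sketch (the image is arbitrary):

# nerdctl mirrors the Docker CLI but talks to containerd directly
sudo nerdctl run --rm alpine echo "hello from nerdctl"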
crictl
Meanwhile, the Kubernetes team developed crictl, a command line tool designed to work with all container runtimes, not just containerd. It can be used for debugging any type of containers run on Kubernetes.
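crictl talks to the node's CRI endpoint, so the same commands work whether the runtime underneath is containerd, CRI-O, or something else. A few debugging examples:

# List pods, containers, and images through the CRI on a Kubernetes node
sudo crictl pods
sudo crictl ps -a
sudo crictl images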
OpenShift and CRI-O engine
The above is not an exhaustive list. For example, OpenShift Container Platform from Red Hat uses the CRI-O engine for container management.
According to this documentation, the default container runtime is runc, but CRI-O may also use crun, a container runtime developed by Red Hat. Some support for containerd also exists.
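If you have access to a CRI-O node, you can check which runtime it is configured to use. This is a sketch; the output depends on the CRI-O version and configuration:

# Print CRI-O's effective configuration and look for the default runtime (runc or crun)
sudo crio config | grep default_runtime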
Managed Kubernetes Platforms
OpenShift provides a managed Kubernetes environment:
So does AWS — Elastic Kubernetes Service (EKS):
GCP and Azure also have managed Kubernetes services named Google Kubernetes Engine (GKE) and Azure Kubernetes Service (AKS), respectively.
AWS, GCP, and Azure managed Kubernetes services all use containerd as their embedded runtime.
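You can verify this yourself on any of these managed services; kubectl reports the runtime each node is using:

# The CONTAINER-RUNTIME column shows something like containerd://1.7.x on managed nodes
kubectl get nodes -o wide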
Using alternate runtimes with Docker
You can specify an alternate runtime, should you need one, when running a container like this:
docker run --runtime [runtime] [container name]
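For example, runc is the default runtime registered with Docker, and tools like gVisor register additional runtimes (such as runsc) when you install them. A sketch assuming a standard Docker install:

# List the runtimes registered with the Docker daemon, then run a container with a specific one
docker info --format '{{.Runtimes}}'
docker run --rm --runtime runc alpine echo "hello"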
Why would you want to use a different container runtime?
If all the container runtimes are interoperable in the platform you’re using, like Kubernetes, why would you need to use a different container runtime? Well, if you have some sort of bug when using a particular container runtime you might choose a different one if it suits your use case better. A container runtime might support a feature you need that some other container runtime doesn’t. You may find that one container runtime has better performance. You may find that one container runtime is more secure than another. Generally it comes down to testing out the runtimes, inspecting their features, and using the one that works best for you.
But as far as I can tell, containerd is the most widely used runtime with Kubernetes.
Lambda container runtimes and images
What about Lambda functions? What images does Lambda work with? What runtimes does it support?
Any OCI-compliant container image works with Lambda, so we can build containers with Docker.
What about the runtime? How does Lambda manage containers?
Lambda pulls images from the AWS Elastic Container Registry (ECR) service. We'll look more at this service shortly. One of the functions of containerd is to pull images. AWS may be using containerd under the hood, or it may be custom code. Who knows? In any case, it works.
To use a container with Lambda you have various options:
- You can build your image on an AWS-provided Lambda base image. That image includes your programming language of choice, within the options supported by Lambda (see the Dockerfile sketch below).
- If you use a different base image, you have to add the Lambda runtime interface client to your image. The runtime interface client, like the base images, allows you to write code in one of the languages supported by Lambda, but you can add it to an image built on a different operating system than the one used by the AWS base images. AWS provides open-source runtime interface clients for the following languages: Node.js, Python, Java, .NET, Go, and Ruby.
- If you want to use a different language that Lambda doesn’t support, you’ll have to write your own client that extends the Lambda Runtime API.
I find the way this is explained in the documentation very confusing. Hopefully I got that right. We’ll explore these concepts further in upcoming posts.
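As a concrete sketch of the first option above, here is roughly what a minimal Dockerfile built on the AWS-provided Python base image might look like. The file name and handler (app.py, handler) are just examples:

# Start from the AWS-provided Lambda base image for Python
FROM public.ecr.aws/lambda/python:3.12
# Copy the function code into the directory Lambda expects
COPY app.py ${LAMBDA_TASK_ROOT}
# Tell the runtime to invoke the handler function in app.py
CMD ["app.handler"]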
About Runtimes
All this talk about runtimes is also very confusing. I think that is because runtime is kind of a generic term for the environment in which your code can execute, and the runtimes described above are not an apples-to-apples comparison. But in any case, they all allow your code or containers to run.
If you really want to dig into the definition of runtimes you can read up on Wikipedia and the links at the bottom.
For now, I’m going to move on and try to get a container working in a Lambda function.
Follow for updates.
Teri Radichel | © 2nd Sight Lab 2023
About Teri Radichel:
~~~~~~~~~~~~~~~~~~~~
⭐️ Author: Cybersecurity Books
⭐️ Presentations: Presentations by Teri Radichel
⭐️ Recognition: SANS Award, AWS Security Hero, IANS Faculty
⭐️ Certifications: SANS ~ GSE 240
⭐️ Education: BA Business, Master of Software Engineering, Master of Infosec
⭐️ Company: Penetration Tests, Assessments, Phone Consulting ~ 2nd Sight Lab
Need Help With Cybersecurity, Cloud, or Application Security?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
🔒 Request a penetration test or security assessment
🔒 Schedule a consulting call
🔒 Cybersecurity Speaker for Presentation
Follow for more stories like this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
❤️ Sign Up my Medium Email List
❤️ Twitter: @teriradichel
❤️ LinkedIn: https://www.linkedin.com/in/teriradichel
❤️ Mastodon: @teriradichel@infosec.exchange
❤️ Facebook: 2nd Sight Lab
❤️ YouTube: @2ndsightlab