
Container Network Interface (CNI)

So far, we've explored how network namespaces work: how to connect several namespaces to a bridge network, how to create veth pairs (virtual cables) of virtual interfaces, how to attach one end to the namespace and the other to the bridge, how to assign IP addresses and bring the interfaces up, and how to enable NAT (IP masquerading) for external communication.
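
Those manual steps can be sketched with iproute2 commands. This is only an illustration: every name here (ns1, v-net-0, veth-ns1) and the 192.168.15.0/24 subnet are arbitrary choices, and the commands need root plus CAP_NET_ADMIN, so the script reports and stops on restricted hosts.

```shell
#!/bin/sh
# Sketch of the manual wiring described above. All names and addresses
# are illustrative; requires root and CAP_NET_ADMIN to actually run.
if ip netns add ns1 2>/dev/null; then
  ip link add v-net-0 type bridge                        # bridge on the host
  ip link set dev v-net-0 up
  ip link add veth-ns1 type veth peer name veth-ns1-br   # veth pair: the virtual cable
  ip link set veth-ns1 netns ns1                         # one end into the namespace
  ip link set veth-ns1-br master v-net-0                 # the other end onto the bridge
  ip -n ns1 addr add 192.168.15.2/24 dev veth-ns1        # assign an IP inside the namespace
  ip -n ns1 link set veth-ns1 up                         # bring both ends up
  ip link set veth-ns1-br up
  iptables -t nat -A POSTROUTING -s 192.168.15.0/24 -j MASQUERADE  # NAT for external traffic
  status=created
else
  status=skipped   # no permission to create network namespaces here
fi
echo "$status"
```

Doing this once by hand is instructive; doing it for every container is exactly the repetitive work the rest of this section is about automating.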

We also saw that Docker follows a similar process, just with its own naming conventions. Other container solutions, including Kubernetes, solve their networking challenges in much the same way.

All these solutions tackle the same problems and, despite their differences, end up performing the same steps. So why not create a single, standard approach?

It was in this scenario that the bridge program emerged: it encapsulates all the steps needed to connect a container to a bridge network.

Using a simple command like:

bridge add 2353245 /var/run/netns/2353245

the container runtime is freed from the responsibility of implementing this task itself. If the runtime invokes the bridge program with the parameters bridge add <container_id> <namespace>, that alone is sufficient.

If you wanted to write such a program yourself, perhaps for a new type of network, what arguments and commands should it accept? How would you ensure it works correctly with existing runtimes? How could you be sure that orchestrators and runtimes like Kubernetes or rkt would invoke your program correctly? This is where the definition of standards becomes crucial.

CNI is a set of standards that defines how programs should be developed to solve networking challenges in a container runtime environment.

These programs are called plugins; the bridge program we've been referring to is a CNI plugin. CNI defines how plugins should be developed and how container runtimes should invoke them.

CNI assigns a set of responsibilities to both container runtimes and plugins. For container runtimes, CNI specifies that they are responsible for creating a network namespace for each container. The runtimes must then identify the networks the containers should connect to and invoke the plugins during container creation (add) and deletion (del). Additionally, CNI stipulates that plugins are configured in the container runtime environment through a JSON network configuration file.
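
That JSON configuration might look like the following for the bridge plugin. This is a sketch: the file name, the network name mynet and the 10.22.0.0/16 subnet are illustrative choices, and while runtimes conventionally load such files from /etc/cni/net.d, here the file is written to the current directory.

```shell
#!/bin/sh
# A minimal CNI network configuration for the bridge plugin, in the JSON
# format the standard mandates. Runtimes conventionally read these files
# from /etc/cni/net.d; we write to the current directory for illustration.
cat > 10-mynet.conf <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
EOF
echo "wrote 10-mynet.conf"
```

Note how the file captures the earlier manual steps declaratively: "type" selects the plugin binary, "ipMasq" corresponds to the NAT rule, and the nested "ipam" section delegates IP assignment to a second plugin (here host-local).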

In turn, the plugins must support the add, del and check operations (which the runtime conveys through environment variables such as CNI_COMMAND), accepting parameters such as the network namespace. They are also responsible for assigning IP addresses to containers, configuring the routes needed for communication between containers on the network, and returning their results in a specified JSON format. As long as both container runtimes and plugins comply with these guidelines, they can coexist harmoniously.
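
In practice, the runtime's side of that contract looks like this: the operation and container details go into environment variables (CNI_COMMAND, CNI_CONTAINERID, CNI_NETNS, CNI_IFNAME, CNI_PATH), the JSON network configuration is piped to the plugin's stdin, and the plugin prints a JSON result on stdout. A sketch, assuming the reference bridge plugin is installed under /opt/cni/bin; the container ID and netns path are made-up examples.

```shell
#!/bin/sh
# Invoking a CNI plugin the way a runtime would: operation + container
# details in the environment, network config on stdin, JSON result on
# stdout. Container ID and netns path below are illustrative.
CONF='{ "cniVersion": "0.3.1", "name": "mynet", "type": "bridge",
        "bridge": "cni0", "ipam": { "type": "host-local",
        "subnet": "10.22.0.0/16" } }'

if [ -x /opt/cni/bin/bridge ]; then
  out=$(echo "$CONF" | CNI_COMMAND=ADD \
                       CNI_CONTAINERID=2353245 \
                       CNI_NETNS=/var/run/netns/2353245 \
                       CNI_IFNAME=eth0 \
                       CNI_PATH=/opt/cni/bin \
                       /opt/cni/bin/bridge)
else
  out="bridge plugin not installed under /opt/cni/bin"
fi
echo "$out"
```

Deleting the network is the same call with CNI_COMMAND=DEL, which is how the runtime's add/del responsibilities map onto the plugin.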

CNI ships with a set of supported plugins, such as bridge, vlan, ipvlan, macvlan, host-local and dhcp, plus Windows-specific plugins. Additionally, there are third-party plugins available, such as Weave, Flannel, Cilium, VMware NSX and Calico, among others. All of these implement the CNI standards, allowing any container runtime to work with any of these plugins.

However, Docker does not implement CNI, but rather CNM (Container Network Model), another standard that aims to solve container networking challenges similar to CNI, but with some differences. Because of these differences, CNI plugins don't integrate natively with Docker: you can't run a Docker container and tell Docker to use one of these CNI plugins for its network. That doesn't mean you can't use Docker with CNI at all, though; you just have to work around it, for example by creating a Docker container without any network configuration and then manually invoking the bridge plugin yourself. This is practically what Kubernetes did when it created containers through Docker: it created them without networking and then invoked the configured CNI plugins, which took care of the rest of the configuration.
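
That workaround can be sketched as follows. Everything here is illustrative: the container name web, the nginx image, and the symlink that makes the container's network namespace visible under /var/run/netns, where both ip netns and CNI plugins can find it. The script skips cleanly when Docker is not available.

```shell
#!/bin/sh
# Start a Docker container with no networking, then expose its network
# namespace so a CNI plugin can be invoked against it by hand.
# Container name and image are illustrative; needs Docker and root.
if command -v docker >/dev/null 2>&1; then
  docker run -d --name web --network=none nginx >/dev/null
  pid=$(docker inspect -f '{{.State.Pid}}' web)      # PID of the container's process
  mkdir -p /var/run/netns
  ln -sf "/proc/$pid/ns/net" /var/run/netns/web      # make the netns visible to `ip netns`
  # From here, invoke the plugin exactly as a runtime would:
  #   CNI_COMMAND=ADD CNI_CONTAINERID=web CNI_NETNS=/var/run/netns/web \
  #   CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin /opt/cni/bin/bridge < 10-mynet.conf
  status=wired
else
  status=skipped   # docker not available in this environment
fi
echo "$status"
```

The commented invocation at the end is the same plugin contract shown earlier; the only Docker-specific part is discovering the namespace path from the container's PID.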