If you spend enough time around container tooling, you'll eventually hear phrases like:

- "CNI plugin chains"
- "overlay networking"
- "service mesh sidecars"

Which all sound very impressive. But underneath all of that, container networking is built on a handful of Linux primitives.

We're on a journey to get Nightshift delegating sandbox runtime to containerd. This will give the project a ton of flexibility and make it easy for users to swap in the type of sandbox they want to run (Kata Containers, Firecracker, runc, etc.). containerd's implementation will also be better than our bespoke runtime. We want to stand on the shoulders of giants and focus on what makes Nightshift a unique project.

So before we get started wiring containerd into Nightshift, we should understand the networking stack we have at our disposal. This is when I came across CNI (Container Network Interface), which streamlines the networking of containers and sits in between the Linux networking stack and the container runtime.

"What does that even mean?" I asked myself on a sunny South Florida afternoon. I don't like magic, so I did what any curious engineer would do: I built container networking from scratch using nothing but `ip`. This is my journey motivating for myself the reason CNI exists, and what it offers the ecosystem.

## Start with Nothing

First we create two network namespaces. A network namespace is basically a separate networking stack with its own interfaces, its own routing table, and its own ARP cache. In other words, it behaves like a small machine. If you're a noob to networking like me, I highly suggest firing up a Linux machine and following along!

Let's create two namespaces:

```
sudo ip netns add ns1
sudo ip netns add ns2
```

Let's look inside one of them:

```
sudo ip netns exec ns1 ip link
```

You'll see something like:

```
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
```

Let's spell out every field:

- `1:` : This is the interface index. Every network interface on a system gets a unique numeric ID inside that namespace.
- `lo:` : This is the loopback interface. Loopback is a virtual interface that sends packets back to the same machine. This is your localhost!
- `<LOOPBACK>` : These are interface flags. There are several of these, including `UP`, `BROADCAST`, `MULTICAST`, `LOOPBACK`, and `LOWER_UP`. Notice, this doesn't say `UP` yet.
- `mtu 65536` : MTU is the maximum transmission unit. This is the maximum packet size the interface will send.
- `qdisc noop` : This is the packet scheduler attached to the interface. Linux allows traffic shaping and queuing using qdiscs. `noop` means do nothing.
- `state DOWN` : This tells us whether the interface is active. It's currently `DOWN`.
- `mode DEFAULT` : This relates to special interface modes used by certain drivers. Currently `DEFAULT`.
- `group default` : Interfaces can be grouped for administrative purposes. Currently `default`.
- `qlen 1000` : This is the maximum number of packets queued for transmission. If packets are generated faster than they can be transmitted, they wait in this queue.
- `link/loopback` : This tells us the Layer 2 (Ethernet) link type. Examples we'll see are `loopback`, `ether`, and `veth`.
- `00:00:00:00:00:00` : This is the MAC address. Loopback interfaces don't use real MAC addresses, so Linux assigns all zeros.
- `brd 00:00:00:00:00:00` : The broadcast address is the address used to send packets to all devices on the network. Loopback doesn't use broadcast, so it's all zeros.

I wasn't lying when I said we're spelling things out. That's the entire network stack for this namespace. Not exactly production ready.

Bring the loopback interface up:

```
sudo ip netns exec ns1 ip link set lo up
sudo ip netns exec ns2 ip link set lo up
```

So if we look again with `sudo ip netns exec ns1 ip link` we'll see:

```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
```

Now we have two isolated "containers". They just can't talk to anything yet. Let's fix that.
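But first, a quick sanity check (a small aside, not from the original walkthrough) that each namespace really does behave like a tiny machine: it can reach itself over localhost, yet sees nothing else.

```shell
# Both namespaces exist on the host:
ip netns list

# Inside ns1, lo is now up (state UNKNOWN is normal for loopback):
sudo ip netns exec ns1 ip link show lo

# ns1 can talk to itself over localhost:
sudo ip netns exec ns1 ping -c 1 127.0.0.1
```

If that ping works, the namespace has a functioning, fully isolated networking stack of its own.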
## Give the Container a Network Cable

To connect a namespace to the host, we use a veth pair. Think of a veth pair as a virtual Ethernet cable:

```
vethA <====> vethB
```

Packets entering one side immediately appear on the other. Let's create one:

```
sudo ip link add veth1 type veth peer name veth1-host
```

Right now both ends exist on the host. We need to move one end into the namespace:

```
sudo ip link set veth1 netns ns1
```

Check the host side:

```
ip link show veth1-host
```

Check inside the namespace:

```
sudo ip netns exec ns1 ip link
```

You should now see an Ethernet interface inside ns1. I'll leave repeating the same process for ns2 as an exercise for the reader.

At this point each container has a NIC, but they're still not connected to anything useful.

## Add a Virtual Switch

To connect containers together we introduce a Linux bridge. A bridge behaves like an Ethernet switch: it forwards packets based on MAC addresses.

Create a bridge:

```
sudo ip link add br0 type bridge
sudo ip link set br0 up
```

Now attach the host ends of the veth pairs to the bridge:

```
sudo ip li
```
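To see the "switch" behavior for yourself, here is a sketch of how to inspect a bridge once a host-side veth end is attached to it (assuming the `veth1-host` and `br0` names from above; the exact attach commands are my assumption, not the article's):

```shell
# Attach the host end of the veth pair to the bridge and bring it up:
sudo ip link set veth1-host master br0
sudo ip link set veth1-host up

# List the interfaces enslaved to br0, i.e. the switch's "ports":
ip link show master br0

# Show the MAC addresses the bridge has learned, per port:
bridge fdb show br br0
```

The forwarding database (`fdb`) is the same MAC-to-port table a physical switch maintains; entries appear as traffic flows through the bridge.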