Introduction
Networking in Kubernetes can be challenging to understand, but it's such an important part of the whole solution that I decided to prepare an ebook about this subject.
In this ebook, we will start with the foundational concepts and technical details of Kubernetes networking. This will include an overview of the Kubernetes networking model, key components, and how they interact with each other. Understanding these basics is crucial for anyone looking to effectively manage and troubleshoot Kubernetes networks.
The subsequent chapters will provide practical and useful examples of how to work with and use networking in Kubernetes. This includes configuring network policies, service discovery, and load balancing. We will also explore advanced topics such as network security, monitoring, and troubleshooting.
Finally, in the last chapter, we will present end-to-end examples that demonstrate using CNI (Container Network Interface) plugins and deploying applications in a Kubernetes cluster. These examples will help you apply the concepts and techniques discussed throughout the ebook in real-world scenarios.
By the end of this ebook, you should have a solid understanding of Kubernetes networking and be well-equipped to handle networking challenges in your Kubernetes environment.
Basics
In order to gain a better understanding of networking in Kubernetes, I recommend setting up a local Kubernetes cluster. This cluster will be used for all practical exercises and to test every Container Network Interface (CNI) solution described in this book.
Additionally, this chapter provides an overview of the Kubernetes networking model. Understanding this model is crucial as it lays the foundation for working with CNI plugins effectively. The concepts covered will help you grasp the essentials needed to manage and troubleshoot Kubernetes networking.
By the end of this chapter, you should have a solid understanding of the basic networking principles in Kubernetes and be prepared to dive deeper into more advanced topics in subsequent chapters.
Local environment
All examples in the book can be executed locally on a Kubernetes cluster created by kind.
To provision a local Kubernetes cluster using Docker container nodes, first prepare a configuration file that defines a multi-node cluster:
cat > multi-node-k8s-no-cni.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: 192.168.0.0/16
EOF
In the YAML file, 2 worker nodes and 1 control plane node are defined. The default CNI (Container Network Interface) plugin, kindnetd (a simple networking implementation), is disabled, as we will deploy a dedicated one later.
The Kubernetes cluster can be created using the YAML configuration file:
kind create cluster --config multi-node-k8s-no-cni.yaml --name home-lab
After the deployment is finished, the list of nodes can be checked with the following command:
kubectl get nodes -o wide
Moreover, the list of kind clusters can be verified with the following command:
kind get clusters
If necessary, the cluster can be removed with the following command:
kind delete cluster --name home-lab
Kubernetes networking model
The Kubernetes network model is based on several key concepts, with pod characteristics being the most crucial to understand initially:
- Every pod in the cluster has its own unique, cluster-wide IP.
- The pod IP is shared among all its containers, allowing containers within a pod to reach each other’s ports on localhost.
- Pods communicate with other pods (on the same node or different nodes) using pod IPs without NAT.
- The service API provides a stable IP address or hostname for a service implemented by one or more pods (see the manifest sketch after this list).
- The gateway API allows services to be accessible to clients outside the cluster.
- Network policies define isolation, controlling traffic between pods or between pods and external clients.
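To make the service concept concrete, below is a minimal Service manifest sketch; the my-app name, label, and ports are hypothetical, not taken from the examples in this book:
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # pods carrying this label back the service
  ports:
  - port: 80           # stable port on the service's cluster IP
    targetPort: 8080   # container port on the selected pods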
Cluster networking
As described in the Kubernetes cluster networking documentation, four distinct networking problems presented in diagram 1.1 and listed below are addressed in the Kubernetes networking model:
- Highly-coupled container-to-container communications: Containers within the same pod use localhost (via loopback) to communicate with each other.
- Pod-to-pod communications: Pods use their IPs to communicate with each other without NAT.
- Pod-to-service communications: Pods find services by DNS name.
- External-to-service communications: Services (with types ClusterIP, NodePort, LoadBalancer, or ExternalName), Ingress, or Gateway can be used for communication with external clients.
architecture-beta
    group k8s(carbon:kubernetes)[Kubernetes cluster]
    group worker1(carbon:kubernetes-worker-node)[Worker node 1] in k8s
    group worker2(carbon:kubernetes-worker-node)[Worker node 2] in k8s
    group pod1(carbon:kubernetes-pod)[Pod 1] in worker1
    group pod2(carbon:kubernetes-pod)[Pod 2] in worker2
    service container11(carbon:container-runtime)[Container 11] in pod1
    service container12(carbon:container-runtime)[Container 12] in pod1
    service container21(carbon:container-runtime)[Container 21] in pod2
    service service1(carbon:ibm-cloud-kubernetes-service)[Service] in k8s
    service user1(carbon:user)[User]
    service1:R --> L:container11
    container11:T <--> B:container12
    container11:R <--> L:container21
    user1:R --> L:service1
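As a quick illustration of pod-to-service discovery, DNS resolution can be tested from a temporary pod; the kubernetes service always exists in the default namespace, and the pod name here is arbitrary:
kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup kubernetes.default.svc.cluster.local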
IP Management
In a Kubernetes cluster, IP addresses are assigned to different resources by the following components:
- The network plugin assigns IP addresses to pods.
- The kube-apiserver assigns IP addresses to services.
- The kubelet assigns IP addresses to nodes.
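These assignments can be inspected with standard kubectl queries, for example:
# pod IPs (assigned by the network plugin)
kubectl get pods -A -o custom-columns=NAME:.metadata.name,IP:.status.podIP
# service cluster IPs (assigned by the kube-apiserver)
kubectl get services -A -o custom-columns=NAME:.metadata.name,CLUSTER-IP:.spec.clusterIP
# node addresses (reported by the kubelet)
kubectl get nodes -o wide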
Implementation of the Networking Model
The Kubernetes networking model is implemented by the container runtime on each node. In most cases, Container Network Interface (CNI) plugins are used to manage network and security features. There are multiple networking addons supported by Kubernetes. In this book, we focus on four implementations, which are described in detail in the next chapter.
CNI plugins
CNI (Container Network Interface) is a CNCF (Cloud Native Computing Foundation) project, which consists of a specification and libraries for writing plugins to configure NICs (network interfaces) in Linux containers.
Below is a list of the most popular CNI plugins for managing network and security capabilities:
- Calico
- Cilium
- Flannel
- Kindnet
Calico
To install Calico easily, the Tigera operator can be used; details can be found in Quickstart for Calico on Kubernetes. The Tigera operator provides lifecycle management for Calico, exposed via the Kubernetes API and defined as a custom resource definition. It can be installed with the following command:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/tigera-operator.yaml
with example output:
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/tiers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/adminnetworkpolicies.policy.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
After installation, the operator pod can be verified with the following command:
kubectl get pods -n tigera-operator
with example output:
NAME READY STATUS RESTARTS AGE
tigera-operator-76c4976dd7-j5wvm 1/1 Running 0 1m
The next step is to download the custom resources for Calico version 3.29.1:
curl https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/custom-resources.yaml -o calico-custom-resources.yaml
Then create the custom resources to configure Calico:
kubectl create -f calico-custom-resources.yaml
with example output:
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
Finally, watch the pods in the calico-system namespace until all of them are running:
watch kubectl get pods -n calico-system
with example output:
Every 2.0s: kubectl g... MacBook-Pro-M3-Sebastian.local: Fri Dec 27 16:36:45 2024
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-578cbdf488-t49jf 1/1 Running 0 2m44s
calico-node-95ppl 1/1 Running 0 2m44s
calico-node-mshzr 1/1 Running 0 2m44s
calico-node-w2ck6 1/1 Running 0 2m44s
calico-typha-5c5bb69d97-4mn68 1/1 Running 0 2m44s
calico-typha-5c5bb69d97-vp45t 1/1 Running 0 2m40s
csi-node-driver-8bvjb 2/2 Running 0 2m44s
csi-node-driver-ngxfm 2/2 Running 0 2m44s
csi-node-driver-x2zjn 2/2 Running 0 2m44s
While working with Calico, the calicoctl CLI can be useful; it can be installed with the following commands (the darwin-amd64 binary targets macOS on Intel; on Apple Silicon the darwin-arm64 build may be preferable):
curl -L https://github.com/projectcalico/calico/releases/download/v3.29.1/calicoctl-darwin-amd64 -o calicoctl
sudo mv calicoctl /usr/local/bin/
calicoctl can be used to verify the Calico installation:
calicoctl version
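For example, calicoctl can list the IP pools configured for the cluster (assuming the default installation created one from the custom resources above):
# show configured IP pools, including CIDR and encapsulation mode
calicoctl get ippool -o wide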
Cilium
Cilium can be installed using Helm, so the first step is to add the repository:
helm repo add cilium https://helm.cilium.io/
"cilium" has been added to your repositories
Then preload the cilium image into the kind cluster nodes:
docker pull quay.io/cilium/cilium:v1.16.5
kind load docker-image quay.io/cilium/cilium:v1.16.5 --name home-lab
v1.16.5: Pulling from cilium/cilium
Digest: sha256:758ca0793f5995bb938a2fa219dcce63dc0b3fa7fc4ce5cc851125281fb7361d
Status: Image is up to date for quay.io/cilium/cilium:v1.16.5
quay.io/cilium/cilium:v1.16.5
What's next:
View a summary of image vulnerabilities and recommendations → docker scout quickview quay.io/cilium/cilium:v1.16.5
Image: "quay.io/cilium/cilium:v1.16.5" with ID "sha256:758ca0793f5995bb938a2fa219dcce63dc0b3fa7fc4ce5cc851125281fb7361d" not yet present on node "home-lab-worker", loading...
Image: "quay.io/cilium/cilium:v1.16.5" with ID "sha256:758ca0793f5995bb938a2fa219dcce63dc0b3fa7fc4ce5cc851125281fb7361d" not yet present on node "home-lab-control-plane", loading...
Image: "quay.io/cilium/cilium:v1.16.5" with ID "sha256:758ca0793f5995bb938a2fa219dcce63dc0b3fa7fc4ce5cc851125281fb7361d" not yet present on node "home-lab-worker2", loading...
The preloaded images can be verified:
docker exec -it home-lab-control-plane crictl images
IMAGE TAG IMAGE ID SIZE
quay.io/cilium/cilium <none> 01d6cc4aa7274 212MB
docker.io/kindest/kindnetd v20241023-a345ebe4 55b97e1cbb2a3 35.3MB
registry.k8s.io/coredns/coredns v1.11.3 2f6c962e7b831 16.9MB
registry.k8s.io/etcd 3.5.15-0 27e3830e14027 66.5MB
registry.k8s.io/kube-apiserver-arm64 v1.31.2 7db5e8fdce19a 92.6MB
registry.k8s.io/kube-apiserver v1.31.2 7db5e8fdce19a 92.6MB
registry.k8s.io/kube-controller-manager-arm64 v1.31.2 d034a1438c8ae 87MB
registry.k8s.io/kube-controller-manager v1.31.2 d034a1438c8ae 87MB
registry.k8s.io/kube-proxy-arm64 v1.31.2 7e641dea6ec8f 96MB
registry.k8s.io/kube-proxy v1.31.2 7e641dea6ec8f 96MB
registry.k8s.io/kube-scheduler-arm64 v1.31.2 4ff74b8997ace 67MB
registry.k8s.io/kube-scheduler v1.31.2 4ff74b8997ace 67MB
quay.io/cilium/cilium-envoy <none> a226bca93af4a 59.6MB
docker.io/kindest/local-path-helper v20230510-486859a6 d022557af8b63 2.92MB
docker.io/kindest/local-path-provisioner v20240813-c6f155d6 282f619d10d4d 17.4MB
registry.k8s.io/pause 3.10 afb61768ce381 268kB
Finally, the CNI plugin can be installed with Helm:
helm install cilium cilium/cilium --version 1.16.5 \
--namespace kube-system \
--set image.pullPolicy=IfNotPresent \
--set ipam.mode=kubernetes
NAME: cilium
LAST DEPLOYED: Fri Dec 27 16:59:19 2024
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble.
Your release version is 1.16.5.
For any further help, visit https://docs.cilium.io/en/v1.16/gettinghelp
The installation can be verified with the following command:
kubectl -n kube-system get pods --watch
NAME READY STATUS RESTARTS AGE
cilium-drvt5 1/1 Running 0 5m35s
cilium-envoy-65vqs 1/1 Running 0 5m35s
cilium-envoy-7ls58 1/1 Running 0 5m35s
cilium-envoy-z48jk 1/1 Running 0 5m35s
cilium-hltmk 1/1 Running 0 5m35s
cilium-llgf7 1/1 Running 0 5m35s
cilium-operator-6c4fb78954-fqxvf 1/1 Running 0 5m35s
cilium-operator-6c4fb78954-tkp77 1/1 Running 0 5m35s
coredns-7c65d6cfc9-4bhh4 1/1 Running 0 18m
coredns-7c65d6cfc9-5l6kz 1/1 Running 0 18m
etcd-home-lab-control-plane 1/1 Running 0 18m
kube-apiserver-home-lab-control-plane 1/1 Running 0 18m
kube-controller-manager-home-lab-control-plane 1/1 Running 0 18m
kube-proxy-62vsh 1/1 Running 0 18m
kube-proxy-85v6f 1/1 Running 0 18m
kube-proxy-db228 1/1 Running 0 18m
kube-scheduler-home-lab-control-plane 1/1 Running 0 18m
Additionally, tests can be executed to check network connectivity:
kubectl create ns cilium-test
namespace/cilium-test created
kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/1.16.5/examples/kubernetes/connectivity-check/connectivity-check.yaml
deployment.apps/echo-a created
deployment.apps/echo-b created
deployment.apps/echo-b-host created
deployment.apps/pod-to-a created
deployment.apps/pod-to-external-1111 created
deployment.apps/pod-to-a-denied-cnp created
deployment.apps/pod-to-a-allowed-cnp created
deployment.apps/pod-to-external-fqdn-allow-google-cnp created
deployment.apps/pod-to-b-multi-node-clusterip created
deployment.apps/pod-to-b-multi-node-headless created
deployment.apps/host-to-b-multi-node-clusterip created
deployment.apps/host-to-b-multi-node-headless created
deployment.apps/pod-to-b-multi-node-nodeport created
deployment.apps/pod-to-b-intra-node-nodeport created
service/echo-a created
service/echo-b created
service/echo-b-headless created
service/echo-b-host-headless created
ciliumnetworkpolicy.cilium.io/pod-to-a-denied-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created
kubectl get pods -n cilium-test
NAME READY STATUS RESTARTS AGE
echo-a-d9f4f8f57-gnwgf 1/1 Running 0 62s
echo-b-7cb49646f5-955nk 1/1 Running 0 62s
echo-b-host-f4cbc8d47-wmczv 1/1 Running 0 62s
host-to-b-multi-node-clusterip-5c555886df-n528c 1/1 Running 0 61s
host-to-b-multi-node-headless-859b49fd85-tc5kj 1/1 Running 0 61s
pod-to-a-5568669fc6-bllcp 1/1 Running 0 62s
pod-to-a-allowed-cnp-66676b4c86-n2wpv 1/1 Running 0 62s
pod-to-a-denied-cnp-6b65879df6-sbnwx 1/1 Running 0 62s
pod-to-b-intra-node-nodeport-67c6bb4845-wl69s 1/1 Running 0 60s
pod-to-b-multi-node-clusterip-756ff8996c-nnwxk 1/1 Running 0 61s
pod-to-b-multi-node-headless-5cb4bcf569-bg9dw 1/1 Running 0 61s
pod-to-b-multi-node-nodeport-65b9d6fd7c-znl9l 1/1 Running 0 61s
pod-to-external-1111-8c8ddfcb6-6lvx9 1/1 Running 0 62s
pod-to-external-fqdn-allow-google-cnp-7f9f7c4b4-9gmpq 1/1 Running 0 62s
kubectl delete ns cilium-test
namespace "cilium-test" deleted
Another approach to installing Cilium is to use the dedicated CLI tool, cilium:
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "arm64" ]; then CLI_ARCH=arm64; fi
echo "Download https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-darwin-${CLI_ARCH}.tar.gz ..."
curl -L "https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-darwin-${CLI_ARCH}.tar.gz" -O
curl -L "https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-darwin-${CLI_ARCH}.tar.gz.sha256sum" -O
shasum -a 256 -c "cilium-darwin-${CLI_ARCH}.tar.gz.sha256sum"
sudo tar xzvfC "cilium-darwin-${CLI_ARCH}.tar.gz" /usr/local/bin
rm "cilium-darwin-${CLI_ARCH}.tar.gz"
rm "cilium-darwin-${CLI_ARCH}.tar.gz.sha256sum"
cilium install
🔮 Auto-detected Kubernetes kind: kind
ℹ️ Using Cilium version 1.16.4
🔮 Auto-detected cluster name: kind-home-lab
🔮 Auto-detected kube-proxy has been installed
cilium status --wait
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Envoy DaemonSet: OK
\__/¯¯\__/ Hubble Relay: disabled
\__/ ClusterMesh: disabled
DaemonSet cilium Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet cilium-envoy Desired: 3, Ready: 3/3, Available: 3/3
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Containers: cilium Running: 3
cilium-envoy Running: 3
cilium-operator Running: 1
Cluster Pods: 3/3 managed by Cilium
Helm chart version: 1.16.4
Image versions cilium quay.io/cilium/cilium:v1.16.4@sha256:d55ec38938854133e06739b1af237932b9c4dd4e75e9b7b2ca3acc72540a44bf: 3
cilium-envoy quay.io/cilium/cilium-envoy:v1.30.7-1731393961-97edc2815e2c6a174d3d12e71731d54f5d32ea16@sha256:0287b36f70cfbdf54f894160082f4f94d1ee1fb10389f3a95baa6c8e448586ed: 3
cilium-operator quay.io/cilium/operator-generic:v1.16.4@sha256:c55a7cbe19fe0b6b28903a085334edb586a3201add9db56d2122c8485f7a51c5: 1
cilium connectivity test
ℹ️ Monitor aggregation detected, will skip some flow validation steps
✨ [kind-home-lab] Creating namespace cilium-test-1 for connectivity check...
....
[cilium-test-1] Running 103 tests ...
[=] [cilium-test-1] Test [no-unexpected-packet-drops] [1/103]
...
[=] [cilium-test-1] Test [no-policies] [2/103]
..............................................
[=] [cilium-test-1] Skipping test [no-policies-from-outside] [3/103] (skipped by condition)
[=] [cilium-test-1] Test [no-policies-extra] [4/103]
..................
[=] [cilium-test-1] Test [allow-all-except-world] [5/103]
...........................
[=] [cilium-test-1] Test [client-ingress] [6/103]
......
[=] [cilium-test-1] Test [client-ingress-knp] [7/103]
......
[=] [cilium-test-1] Test [check-log-errors] [103/103]
.........................
✅ [cilium-test-1] All 58 tests (555 actions) successful, 45 tests skipped, 1 scenarios skipped.
Flannel
To install the latest version of Flannel into the cluster, first download the manifest:
curl -L https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml -o flannel-custom-resources.yaml
Change the default CIDR 10.244.0.0/16 to 192.168.0.0/16 to match the podSubnet configured for the kind cluster:
cat flannel-custom-resources.yaml | sed 's/"Network": "10.244.0.0\/16"/"Network": "192.168.0.0\/16"/' > flannel-custom-resources-tmp.yaml
mv flannel-custom-resources-tmp.yaml flannel-custom-resources.yaml
kubectl apply -f flannel-custom-resources.yaml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
Verify pods and daemon set:
kubectl get pods,ds -n kube-flannel
NAME READY STATUS RESTARTS AGE
pod/kube-flannel-ds-6dfbm 1/1 Running 0 38s
pod/kube-flannel-ds-6ktkf 1/1 Running 0 38s
pod/kube-flannel-ds-tlp5r 1/1 Running 0 38s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kube-flannel-ds 3 3 3 3 3 <none> 38s
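To confirm that cross-node pod traffic works with Flannel, a quick smoke test can be run; this is a sketch, and the pod names ping-a and ping-b are arbitrary:
# start two throwaway pods (they may land on different nodes)
kubectl run ping-a --image=busybox --restart=Never -- sleep 3600
kubectl run ping-b --image=busybox --restart=Never -- sleep 3600
# once both are Running, ping pod B from pod A by its pod IP
POD_B_IP=$(kubectl get pod ping-b -o jsonpath='{.status.podIP}')
kubectl exec ping-a -- ping -c 3 "$POD_B_IP"
# clean up
kubectl delete pod ping-a ping-b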
Kindnet
Install the latest version of kindnet with the following command:
kubectl apply -f https://raw.githubusercontent.com/aojea/kindnet/main/install-kindnet.yaml
Check if it's running:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
home-lab-control-plane Ready control-plane 13m v1.27.3 172.18.0.3 <none> Debian GNU/Linux 11 (bullseye) 6.10.14-linuxkit containerd://1.7.1
home-lab-worker Ready <none> 13m v1.27.3 172.18.0.2 <none> Debian GNU/Linux 11 (bullseye) 6.10.14-linuxkit containerd://1.7.1
home-lab-worker2 Ready <none> 13m v1.27.3 172.18.0.4 <none> Debian GNU/Linux 11 (bullseye) 6.10.14-linuxkit containerd://1.7.1
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5d78c9869d-cfzwx 1/1 Running 0 13m
coredns-5d78c9869d-qmztf 1/1 Running 0 13m
etcd-home-lab-control-plane 1/1 Running 0 13m
kindnet-5ss45 1/1 Running 0 12m
kindnet-f7xw6 1/1 Running 0 12m
kindnet-vvp24 1/1 Running 0 12m
kube-apiserver-home-lab-control-plane 1/1 Running 0 13m
kube-controller-manager-home-lab-control-plane 1/1 Running 0 13m
kube-proxy-kdzkl 1/1 Running 0 13m
kube-proxy-wl22j 1/1 Running 0 13m
kube-proxy-wp2gb 1/1 Running 0 13m
kube-scheduler-home-lab-control-plane 1/1 Running 0 13m
Kindnet logs can be verified with the following command:
kubectl -n kube-system logs kindnet-v4djh -f
Check kindnet configuration on control plane node:
docker exec -it home-lab-control-plane bash
more /etc/cni/net.d/10-kindnet.conflist
{
"cniVersion": "0.4.0",
"name": "kindnet",
"plugins": [
{
"type": "cni-kindnet",
"ranges": [
"192.168.0.0/24"
],
"capabilities": {"portMappings": true}
}
]
}
Inspect kindnet from within one of its pods:
kubectl -n kube-system get pod | grep kindnet
kindnet-5ss45 1/1 Running 0 33m
kindnet-f7xw6 1/1 Running 0 33m
kindnet-vvp24 1/1 Running 0 33m
kubectl -n kube-system exec -it kindnet-5ss45 -- sh
wget -qO- http://localhost:19080/metrics
...
TYPE process_open_fds gauge
process_open_fds 13
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 5.9019264e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.73827243737e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.31465216e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 0
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
wget -qO- http://localhost:19080/debug/pprof/cmdline
/bin/kindnetd--hostname-override=home-lab-control-plane--network-policy=true--admin-network-policy=false--baseline-admin-network-policy=false--masquerading=true--dns-caching=true--disable-cni=false--fastpath-threshold=20--ipsec-overlay=false--nat64=true--v=2/
The kindnet lightweight daemon binary is located at:
/bin/kindnetd
The CNI plugin binary is located at:
/opt/cni/bin/cni-kindnet
An SQLite3 database is used on workers (not the control plane); its files are located in /var/lib/cni-kindnet:
ls -la /var/lib/cni-kindnet
-rw-r--r-- 1 root root 4096 Feb 2 20:30 cni.db
-rw-r--r-- 1 root root 32768 Feb 2 20:30 cni.db-shm
-rw-r--r-- 1 root root 127752 Feb 2 20:30 cni.db-wal
The database can be inspected locally by installing sqlite and copying the files:
brew install sqlite
kubectl cp kube-system/kindnet-f42qw:/var/lib/cni-kindnet/cni.db cni.db
kubectl cp kube-system/kindnet-f42qw:/var/lib/cni-kindnet/cni.db-wal cni.db-wal
kubectl cp kube-system/kindnet-f42qw:/var/lib/cni-kindnet/cni.db-shm cni.db-shm
Then the content can be verified with the following commands:
sqlite3 cni.db
SQLite version 3.43.2 2023-10-10 13:08:14
Enter ".help" for usage hints.
sqlite> .tables
ipam_ranges pods portmap_entries
sqlite> select * from pods limit 1;
188788c1f427c9e0e33582a8760d64a656f42d7cc7c45291b05d47c64cbc2df6|coredns-7c65d6cfc9-ph8qk|kube-system|2ebc0bda-ccc6-4b3d-8fc4-f08e354141f2|/var/run/netns/cni-9d0d509d-8768-ca43-b05d-e1d2bad4bea2|192.168.1.153||192.168.1.0||knet88402eef|65535|2025-02-02 20:30:24
sqlite> select * from portmap_entries;
sqlite> select * from ipam_ranges;
1|192.168.1.0/24|
Connect The Dots
After setting up the local environment and choosing and installing a CNI plugin, we can start exploring networking configurations.
The tools subchapter describes several sample applications that will be helpful for investigating network settings. In the following subchapters, networking policies for each CNI plugin will be analyzed in detail. We will cover how to implement these policies, their impact on network traffic, and best practices for maintaining a secure and efficient network.
By the end of this chapter, you should have a solid understanding of how to configure and manage networking in your Kubernetes cluster using different CNI plugins.
Tools
Three types of tools are proposed for use:
- podinfo - A tiny web application that showcases best practices for running microservices in Kubernetes. It is often used for testing and demonstrating Kubernetes features.
- podtato-head - A prototypical cloud-native application built to colorfully demonstrate delivery scenarios using various tools and services. It is designed to help users understand the complexities of cloud-native deployments.
- netshoot - A Swiss Army knife container for troubleshooting Kubernetes network issues. It includes various networking tools that can be used to debug and diagnose network problems in Kubernetes clusters.
Podinfo
The podinfo application can be installed by executing the following commands:
curl https://raw.githubusercontent.com/stefanprodan/podinfo/refs/heads/master/kustomize/deployment.yaml -o deployment.yaml
curl https://raw.githubusercontent.com/stefanprodan/podinfo/refs/heads/master/kustomize/service.yaml -o service.yaml
curl https://raw.githubusercontent.com/stefanprodan/podinfo/refs/heads/master/kustomize/kustomization.yaml -o kustomization.yaml
curl https://raw.githubusercontent.com/stefanprodan/podinfo/refs/heads/master/kustomize/hpa.yaml -o hpa.yaml
kubectl apply -k .
After installation, there are multiple endpoints that can be used to check how the application performs:
kubectl exec deployments/podinfo -- curl localhost:9898/metrics
kubectl exec deployments/podinfo -- curl localhost:9898/version
kubectl exec deployments/podinfo -- curl localhost:9898/healthz
kubectl exec deployments/podinfo -- curl localhost:9898/env
kubectl exec deployments/podinfo -- curl localhost:9898
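The application can also be reached from the host with a port-forward (a quick sketch):
# forward local port 9898 to the podinfo deployment in the background
kubectl port-forward deployments/podinfo 9898:9898 &
curl localhost:9898
# stop the port-forward when done
kill %1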
Podtato-head
The podtato-head application can be installed using the following commands:
curl https://raw.githubusercontent.com/cncf/podtato-head/main/delivery/kubectl/manifest.yaml -o manifest.yaml
kubectl apply -f manifest.yaml
After installation, use the / (root) endpoint to check the application:
kubectl -n podtato exec deployment.apps/podtato-head-entry -- curl localhost:9000
Netshoot
The netshoot tool can be used for network troubleshooting and debugging. It includes a variety of networking utilities (e.g. curl, iperf). It can easily be added as another container running in a pod, e.g.:
- name: netshoot
image: nicolaka/netshoot
command: ["/bin/bash"]
args: ["-c", "while true; do ping localhost; sleep 60;done"]
Podinfo
In order to use netshoot tools with the podinfo app, open a shell in the netshoot container:
kubectl exec -it deployments/podinfo -c netshoot -- /bin/zsh
Then the tools can be used directly, e.g. tcpdump for sniffing network traffic:
tcpdump -i any port 9898
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
21:53:15.685123 lo In IP6 localhost.58372 > localhost.9898: Flags [S], seq 346245541, win 65476, options [mss 65476,sackOK,TS val 1009258375 ecr 0,nop,wscale 7], length 0
21:53:15.685134 lo In IP6 localhost.9898 > localhost.58372: Flags [S.], seq 1480215945, ack 346245542, win 65464, options [mss 65476,sackOK,TS val 1009258375 ecr 1009258375,nop,wscale 7], length 0
21:53:15.685139 lo In IP6 localhost.58372 > localhost.9898: Flags [.], ack 1, win 512, options [nop,nop,TS val 1009258375 ecr 1009258375], length 0
21:53:15.685182 lo In IP6 localhost.58372 > localhost.9898: Flags [P.], seq 1:85, ack 1, win 512, options [nop,nop,TS val 1009258375 ecr 1009258375], length 84
21:53:15.685184 lo In IP6 localhost.9898 > localhost.58372: Flags [.], ack 85, win 511, options [nop,nop,TS val 1009258375 ecr 1009258375], length 0
21:53:15.685985 lo In IP6 localhost.9898 > localhost.58372: Flags [P.], seq 1:4097, ack 85, win 512, options [nop,nop,TS val 1009258376 ecr 1009258375], length 4096
21:53:15.685995 lo In IP6 localhost.58372 > localhost.9898: Flags [.], ack 4097, win 817, options [nop,nop,TS val 1009258376 ecr 1009258376], length 0
or netstat to check the ports on which services are listening:
netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::9797 :::* LISTEN -
tcp 0 0 :::9898 :::* LISTEN -
tcp 0 0 :::9999
Podtato
Similarly, the netshoot container can be accessed for the podtato app:
kubectl -n podtato exec -it deployments/podtato-head-entry -c netshoot -- /bin/zsh
Other use cases
In other cases, run a temporary interactive shell using the nicolaka/netshoot Docker image in the Kubernetes cluster. The interactive shell is removed automatically after you exit. This is particularly useful for debugging and troubleshooting network issues within the cluster.
kubectl run tmp-shell --rm -i --tty --image nicolaka/netshoot
Network policies
Network policies provide a way to control traffic between pods and services within a Kubernetes cluster. There are multiple reasons to use them:
- security - network policies restrict access to/from pods and limit exposure by defining which pods are allowed to communicate
- isolation - can be achieved at the namespace level or the application level by defining which services can communicate with each other
- compliance - network policies can help meet regulatory requirements
In the next subchapters, the described CNI plugins will be used to secure traffic for each of the tools listed in the chapter Connect The Dots. To demonstrate the usage of network policies, examples are prepared for each CNI plugin that answer the following questions (a baseline sketch follows this list):
- How to allow / deny traffic from another namespace?
- How to allow / deny traffic from a specific IP range?
- How to generate logs for specific traffic?
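As a baseline for these examples, a standard Kubernetes default-deny policy for a namespace looks like the sketch below; with an empty podSelector and no rules listed, all ingress and egress traffic in the namespace is denied:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: podtato
spec:
  podSelector: {}      # selects every pod in the namespace
  policyTypes:
  - Ingress
  - Egress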
Calico
Calico network policy extends Kubernetes network policy by adding features such as namespace selectors and traffic logging.
The examples below demonstrate the usage of Calico network policies.
How to allow / deny traffic from another namespace?
Let's verify the network connectivity between the podinfo and podtato-head pods by testing traffic flow in both directions. First, let's check if podinfo can reach podtato-head:
kubectl exec -it deployments/podinfo -c netshoot -- /bin/zsh
curl podtato-head-entry.podtato:9000
The output shows:
<html>
<head>
<title>Hello Podtato!</title>
<link rel="stylesheet" href="./assets/css/styles.css"/>
<link rel="stylesheet" href="./assets/css/custom.css"/>
</head>
<body style="background-color: #849abd;color: #faebd7;">
<main class="container">
<div class="text-center">
<h1>Hello from <i>pod</i>tato head!</h1>
<div style="width:700px; height:800px; margin:auto; position:relative;">
<img src="./assets/images/body/body.svg" style="position:absolute;margin-top:80px;margin-left:200px;">
<img src="./parts/hat/hat.svg" style="position:absolute;margin-left:200px;margin-top:0px;">
<img src="./parts/left-arm/left-arm.svg" style="position:absolute;top:100px;left:-50px;">
<img src="./parts/right-arm/right-arm.svg" style="position:absolute;top:100px;left:450px;">
<img src="./parts/left-leg/left-leg.svg" style="position:absolute;top:480px;left: -0px;" >
<img src="./parts/right-leg/right-leg.svg" style="position:absolute;top:480px;left: 400px;">
</div>
<h2> Version v0.1.0 </h2>
</div>
</main>
</body>
</html>
Then let's verify the traffic flow in the opposite direction, from podtato-head to podinfo:
kubectl -n podtato exec -it deployments/podtato-head-entry -c netshoot -- /bin/zsh
curl podinfo.default:9898
The output shows:
{
"hostname": "podinfo-7f9d98d56d-c8zhf",
"version": "6.7.1",
"revision": "6b7aab8a10d6ee8b895b0a5048f4ab0966ed29ff",
"color": "#34577c",
"logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif",
"message": "greetings from podinfo v6.7.1",
"goos": "linux",
"goarch": "arm64",
"runtime": "go1.23.2",
"num_goroutine": "8",
"num_cpu": "8"
}#
Now it's time to create a Calico NetworkPolicy that allows ingress traffic from the podinfo pod in the default namespace while blocking all egress traffic from podtato-head:
cat <<EOF | kubectl apply -f -
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
name: allow-from-default-ns
namespace: podtato
spec:
selector: component == 'podtato-head-entry'
types:
- Ingress
- Egress
ingress:
- action: Allow
protocol: TCP
source:
selector: app == 'podinfo'
namespaceSelector: kubernetes.io/metadata.name == 'default'
destination:
ports:
- 9000
egress: []
EOF
Now let's verify the created NetworkPolicy:
kubectl get networkpolicies.projectcalico.org -A
Possible output can be as below:
NAMESPACE NAME CREATED AT
podtato default.allow-from-default-ns 2025-03-15T21:50:07Z
Let's verify that traffic is allowed from podinfo to podtato-head by sending a request from the podinfo pod:
kubectl exec -it deployments/podinfo -c netshoot -- /bin/zsh
curl podtato-head-entry.podtato:9000
The output shows:
<html>
<head>
<title>Hello Podtato!</title>
<link rel="stylesheet" href="./assets/css/styles.css"/>
<link rel="stylesheet" href="./assets/css/custom.css"/>
</head>
<body style="background-color: #849abd;color: #faebd7;">
<main class="container">
<div class="text-center">
<h1>Hello from <i>pod</i>tato head!</h1>
<div style="width:700px; height:800px; margin:auto; position:relative;">
<img src="./assets/images/body/body.svg" style="position:absolute;margin-top:80px;margin-left:200px;">
<img src="./parts/hat/hat.svg" style="position:absolute;margin-left:200px;margin-top:0px;">
<img src="./parts/left-arm/left-arm.svg" style="position:absolute;top:100px;left:-50px;">
<img src="./parts/right-arm/right-arm.svg" style="position:absolute;top:100px;left:450px;">
<img src="./parts/left-leg/left-leg.svg" style="position:absolute;top:480px;left: -0px;" >
<img src="./parts/right-leg/right-leg.svg" style="position:absolute;top:480px;left: 400px;">
</div>
<h2> Version v0.1.0 </h2>
</div>
</main>
</body>
</html>#
Next let's verify that traffic is blocked from podtato-head to podinfo by attempting to send a request from the podtato-head pod:
kubectl -n podtato exec -it deployments/podtato-head-entry -c netshoot -- /bin/zsh
curl podinfo.default:9898 --connect-timeout 10
The output shows that the connection timed out, indicating that the network policy is successfully blocking traffic from the podtato-head pod to podinfo. Note that the failure already occurs at DNS resolution, because the policy's empty egress list also blocks DNS lookups from podtato-head:
curl: (28) Resolving timed out after 10003 milliseconds
How to allow / deny traffic from a specific IP range?
Let's get the IP addresses of our pods to use in the network policy. First, let's get the podtato-head pod IP:
kubectl get pods -n podtato -o json | jq -r '.items[] | select(.metadata.name | startswith("podtato-head-entry")) | .status.podIP'
192.168.41.196
kubectl get pods -n default -o json | jq -r '.items[] | select(.metadata.name | startswith("podinfo")) | .status.podIP'
192.168.41.194
192.168.209.137
As a next step, let's create a Calico NetworkPolicy that allows traffic from the specific podinfo pod IPs to the podtato-head pod:
kubectl -n podtato delete networkpolicies.projectcalico.org default.allow-from-default-ns
cat <<EOF | kubectl apply -f -
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
name: allow-from-default-pod-ip
namespace: podtato
spec:
selector: component == 'podtato-head-entry'
types:
- Ingress
- Egress
ingress:
- action: Allow
protocol: TCP
source:
nets:
- 192.168.41.194/32
- 192.168.209.137/32
egress: []
EOF
After that, verify the network policies that are currently active in our cluster:
kubectl get networkpolicies.projectcalico.org -A
Output:
NAMESPACE NAME CREATED AT
podtato default.allow-from-default-pod-ip 2025-03-15T22:04:17Z
Verify that traffic is allowed to flow from the podinfo pods to the podtato-head service by executing a curl command from one of the podinfo pods:
kubectl exec -it deployments/podinfo -c netshoot -- /bin/zsh
curl podtato-head-entry.podtato:9000
The output shows:
<html>
<head>
<title>Hello Podtato!</title>
<link rel="stylesheet" href="./assets/css/styles.css"/>
<link rel="stylesheet" href="./assets/css/custom.css"/>
</head>
<body style="background-color: #849abd;color: #faebd7;">
<main class="container">
<div class="text-center">
<h1>Hello from <i>pod</i>tato head!</h1>
<div style="width:700px; height:800px; margin:auto; position:relative;">
<img src="./assets/images/body/body.svg" style="position:absolute;margin-top:80px;margin-left:200px;">
<img src="./parts/hat/hat.svg" style="position:absolute;margin-left:200px;margin-top:0px;">
<img src="./parts/left-arm/left-arm.svg" style="position:absolute;top:100px;left:-50px;">
<img src="./parts/right-arm/right-arm.svg" style="position:absolute;top:100px;left:450px;">
<img src="./parts/left-leg/left-leg.svg" style="position:absolute;top:480px;left: -0px;" >
<img src="./parts/right-leg/right-leg.svg" style="position:absolute;top:480px;left: 400px;">
</div>
<h2> Version v0.1.0 </h2>
</div>
</main>
</body>
</html>#
Now let's verify traffic flow in the opposite direction, from podtato-head to podinfo:
kubectl -n podtato exec -it deployments/podtato-head-entry -c netshoot -- /bin/zsh
curl podinfo.default:9898 --connect-timeout 10
The output shows that the connection timed out:
curl: (28) Resolving timed out after 10002 milliseconds
How to generate logs for specific traffic?
Let's define a network policy that logs and denies traffic from podinfo to podtato-head:
kubectl -n podtato delete networkpolicies.projectcalico.org default.allow-from-default-pod-ip
cat <<EOF | kubectl apply -f -
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
name: log-and-deny-ingress
namespace: podtato
spec:
selector: component == 'podtato-head-entry'
types:
- Ingress
- Egress
ingress:
- action: Log
protocol: TCP
source:
selector: app == 'podinfo'
egress: []
EOF
Generate traffic from podinfo to podtato-head to trigger the logging policy:
kubectl exec -it deployments/podinfo -c netshoot -- /bin/zsh
curl podtato-head-entry.podtato:9000
Check the logs from the Calico node pod to see the logged traffic:
kubectl logs -n calico-system calico-node-kh2jg | grep log-and-deny-ingress
The output shows the following:
Defaulted container "calico-node" out of: calico-node, flexvol-driver (init), install-cni (init)
2025-03-15 22:10:52.248 [INFO][89] felix/label_inheritance_index.go 185: Updating selector id=Policy(tier=default, name=podtato/default.log-and-deny-ingress) selector=(component == "podtato-head-entry" && projectcalico.org/namespace == "podtato")
2025-03-15 22:10:52.250 [INFO][89] felix/int_dataplane.go 2041: Received *proto.ActivePolicyUpdate update from calculation graph msg=id:<tier:"default" name:"podtato/default.log-and-deny-ingress" > policy:<namespace:"podtato" inbound_rules:<action:"log" protocol:<name:"tcp" > src_ip_set_ids:"s:pj5ATU1IVi7BRY37dKv1j9dhGgIqdfFkIpMDIQ" original_src_selector:"app == 'podinfo'" rule_id:"Vpf0HAInm19ya0fI" > original_selector:"(component == \"podtato-head-entry\" && projectcalico.org/namespace == \"podtato\")" >
2025-03-15 22:10:52.251 [INFO][89] felix/int_dataplane.go 2041: Received *proto.WorkloadEndpointUpdate update from calculation graph msg=id:<orchestrator_id:"k8s" workload_id:"podtato/podtato-head-entry-68f945f584-hdkfj" endpoint_id:"eth0" > endpoint:<state:"active" name:"cali19d64cc69db" profile_ids:"kns.podtato" profile_ids:"ksa.podtato.default" ipv4_nets:"192.168.41.196/32" tiers:<name:"default" ingress_policies:"podtato/default.log-and-deny-ingress" egress_policies:"podtato/default.log-and-deny-ingress" default_action:"Deny" > >
Clean up by deleting the network policy:
kubectl -n podtato delete networkpolicies.projectcalico.org default.log-and-deny-ingress
Cilium
Cilium network policy extends the capabilities of Kubernetes Network Policy. You can learn more about Cilium through an interactive course. Highly recommended labs include:
- Getting Started with Cilium
- Discovery: SecOps Engineer
- Discovery: Cloud Network Engineer
- Cilium Ingress Controller
- Cilium Gateway API
- Golden Signals with Hubble and Grafana
- Mutual Authentication with Cilium
- Migrating from Calico
In addition to Cilium, there are two important projects:
- Hubble for Network Observability
- Tetragon for eBPF-based Security Observability and Runtime Enforcement
These tools collectively enhance the security and observability of your system.
For working with network policies in Kubernetes, I recommend using the Network Policy Editor, which helps visualize configurations.
Below are examples demonstrating the usage of Cilium network policies.
How to allow / deny traffic from another namespace?
Check traffic flow from podinfo to podtato:
kubectl exec -it deployments/podinfo -c netshoot -- /bin/zsh
curl podtato-head-entry.podtato:9000
Output:
<html>
<head>
<title>Hello Podtato!</title>
<link rel="stylesheet" href="./assets/css/styles.css"/>
<link rel="stylesheet" href="./assets/css/custom.css"/>
</head>
<body style="background-color: #849abd;color: #faebd7;">
<main class="container">
<div class="text-center">
<h1>Hello from <i>pod</i>tato head!</h1>
<div style="width:700px; height:800px; margin:auto; position:relative;">
<img src="./assets/images/body/body.svg" style="position:absolute;margin-top:80px;margin-left:200px;">
<img src="./parts/hat/hat.svg" style="position:absolute;margin-left:200px;margin-top:0px;">
<img src="./parts/left-arm/left-arm.svg" style="position:absolute;top:100px;left:-50px;">
<img src="./parts/right-arm/right-arm.svg" style="position:absolute;top:100px;left:450px;">
<img src="./parts/left-leg/left-leg.svg" style="position:absolute;top:480px;left: -0px;" >
<img src="./parts/right-leg/right-leg.svg" style="position:absolute;top:480px;left: 400px;">
</div>
<h2> Version v0.1.0 </h2>
</div>
</main>
</body>
</html>#
Check traffic flow from podtato to podinfo:
kubectl -n podtato exec -it deployments/podtato-head-entry -c netshoot -- /bin/zsh
curl podinfo.default:9898
Output:
{
"hostname": "podinfo-7f9d98d56d-src4s",
"version": "6.7.1",
"revision": "6b7aab8a10d6ee8b895b0a5048f4ab0966ed29ff",
"color": "#34577c",
"logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif",
"message": "greetings from podinfo v6.7.1",
"goos": "linux",
"goarch": "arm64",
"runtime": "go1.23.2",
"num_goroutine": "8",
"num_cpu": "8"
}#
Define a network policy that allows ingress from podinfo and blocks all egress except DNS:
cat <<EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
name: allow-access-podtato-by-podinfo
namespace: podtato
spec:
endpointSelector: {}
ingress:
- fromEndpoints:
- matchLabels:
app: podinfo
matchExpressions:
- key: io.kubernetes.pod.namespace
operator: Exists
toPorts:
- ports:
- port: "9000"
egress:
- toEndpoints:
- matchLabels:
io.kubernetes.pod.namespace: kube-system
k8s-app: kube-dns
toPorts:
- ports:
- port: "53"
protocol: UDP
rules:
dns:
- matchPattern: "*"
EOF
Note that the egress section explicitly allows DNS traffic to kube-dns; without it, pods in the podtato namespace could not resolve any service names. Check the policy:
kubectl -n podtato get ciliumnetworkpolicies.cilium.io
NAME AGE
allow-access-podtato-by-podinfo 9s
Check traffic flow from podinfo to podtato:
kubectl exec -it deployments/podinfo -c netshoot -- /bin/zsh
curl podtato-head-entry.podtato:9000
Output:
<html>
<head>
<title>Hello Podtato!</title>
<link rel="stylesheet" href="./assets/css/styles.css"/>
<link rel="stylesheet" href="./assets/css/custom.css"/>
</head>
<body style="background-color: #849abd;color: #faebd7;">
<main class="container">
<div class="text-center">
<h1>Hello from <i>pod</i>tato head!</h1>
<div style="width:700px; height:800px; margin:auto; position:relative;">
<img src="./assets/images/body/body.svg" style="position:absolute;margin-top:80px;margin-left:200px;">
<img src="./parts/hat/hat.svg" style="position:absolute;margin-left:200px;margin-top:0px;">
<img src="./parts/left-arm/left-arm.svg" style="position:absolute;top:100px;left:-50px;">
<img src="./parts/right-arm/right-arm.svg" style="position:absolute;top:100px;left:450px;">
<img src="./parts/left-leg/left-leg.svg" style="position:absolute;top:480px;left: -0px;" >
<img src="./parts/right-leg/right-leg.svg" style="position:absolute;top:480px;left: 400px;">
</div>
<h2> Version v0.1.0 </h2>
</div>
</main>
</body>
</html>#
Check traffic flow from podtato to podinfo:
kubectl -n podtato exec -it deployments/podtato-head-entry -c netshoot -- /bin/zsh
curl podinfo.default:9898 --connect-timeout 10
Output:
curl: (28) Failed to connect to podinfo.default port 9898 after 10002 ms: Timeout was reached
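To see why such traffic is dropped, Hubble observability can be used. The sketch below assumes the hubble CLI is installed locally; observability is enabled via the cilium CLI first:
# enable Hubble in the cluster and expose the relay locally
cilium hubble enable
cilium hubble port-forward &
# show flows in the podtato namespace that were dropped by policy
hubble observe --namespace podtato --verdict DROPPED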
Canal
Flannel is focused on networking and does not support network policies. Canal is a project that combines Flannel and Calico for CNI networking, where Calico is used for network policy enforcement. Unfortunately, the Canal CNI is not supported for Kubernetes 1.28 or later, so no examples are provided as it is now considered a legacy solution.
Kindnet
Kindnet implements Kubernetes network policies. Additionally, it supports admin network policies.
The examples below demonstrate the usage of Kindnet network policies.
How to allow / deny traffic from another namespace?
Check traffic flow from podinfo to podtato:
kubectl exec -it deployments/podinfo -c netshoot -- /bin/zsh
curl podtato-head-entry.podtato:9000
Output:
<html>
<head>
<title>Hello Podtato!</title>
<link rel="stylesheet" href="./assets/css/styles.css"/>
<link rel="stylesheet" href="./assets/css/custom.css"/>
</head>
<body style="background-color: #849abd;color: #faebd7;">
<main class="container">
<div class="text-center">
<h1>Hello from <i>pod</i>tato head!</h1>
<div style="width:700px; height:800px; margin:auto; position:relative;">
<img src="./assets/images/body/body.svg" style="position:absolute;margin-top:80px;margin-left:200px;">
<img src="./parts/hat/hat.svg" style="position:absolute;margin-left:200px;margin-top:0px;">
<img src="./parts/left-arm/left-arm.svg" style="position:absolute;top:100px;left:-50px;">
<img src="./parts/right-arm/right-arm.svg" style="position:absolute;top:100px;left:450px;">
<img src="./parts/left-leg/left-leg.svg" style="position:absolute;top:480px;left: -0px;" >
<img src="./parts/right-leg/right-leg.svg" style="position:absolute;top:480px;left: 400px;">
</div>
<h2> Version v0.1.0 </h2>
</div>
</main>
</body>
</html>#
Check traffic flow from podtato to podinfo:
kubectl -n podtato exec -it deployments/podtato-head-entry -c netshoot -- /bin/zsh
curl podinfo.default:9898
Output:
{
"hostname": "podinfo-7f9d98d56d-wtkmp",
"version": "6.7.1",
"revision": "6b7aab8a10d6ee8b895b0a5048f4ab0966ed29ff",
"color": "#34577c",
"logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif",
"message": "greetings from podinfo v6.7.1",
"goos": "linux",
"goarch": "arm64",
"runtime": "go1.23.2",
"num_goroutine": "8",
"num_cpu": "8"
}#
Define a network policy that allows ingress from podinfo and blocks all egress except DNS:
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-access-podtato-by-podinfo
namespace: podtato
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector: {}
podSelector:
matchLabels:
app: podinfo
ports:
- port: 9000
egress:
- to:
- namespaceSelector: {}
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- port: 53
protocol: UDP
EOF
Check network policy:
kubectl -n podtato get networkpolicies.networking.k8s.io
Output:
NAME POD-SELECTOR AGE
allow-access-podtato-by-podinfo <none> 9s
Check traffic flow from podinfo to podtato:
kubectl exec -it deployments/podinfo -c netshoot -- /bin/zsh
curl podtato-head-entry.podtato:9000
Output:
<html>
<head>
<title>Hello Podtato!</title>
<link rel="stylesheet" href="./assets/css/styles.css"/>
<link rel="stylesheet" href="./assets/css/custom.css"/>
</head>
<body style="background-color: #849abd;color: #faebd7;">
<main class="container">
<div class="text-center">
<h1>Hello from <i>pod</i>tato head!</h1>
<div style="width:700px; height:800px; margin:auto; position:relative;">
<img src="./assets/images/body/body.svg" style="position:absolute;margin-top:80px;margin-left:200px;">
<img src="./parts/hat/hat.svg" style="position:absolute;margin-left:200px;margin-top:0px;">
<img src="./parts/left-arm/left-arm.svg" style="position:absolute;top:100px;left:-50px;">
<img src="./parts/right-arm/right-arm.svg" style="position:absolute;top:100px;left:450px;">
<img src="./parts/left-leg/left-leg.svg" style="position:absolute;top:480px;left: -0px;" >
<img src="./parts/right-leg/right-leg.svg" style="position:absolute;top:480px;left: 400px;">
</div>
<h2> Version v0.1.0 </h2>
</div>
</main>
</body>
</html>#
Check traffic flow from podtato to podinfo:
kubectl -n podtato exec -it deployments/podtato-head-entry -c netshoot -- /bin/zsh
curl podinfo.default:9898 --connect-timeout 10
Output:
curl: (28) Failed to connect to podinfo.default port 9898 after 10003 ms: Timeout was reached
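Since kindnet also supports the AdminNetworkPolicy API mentioned earlier, the same idea can be expressed as a cluster-scoped policy. The sketch below targets the v1alpha1 API and assumes the AdminNetworkPolicy CRDs are installed in the cluster; the policy name is arbitrary:
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: allow-podtato-from-default
spec:
  priority: 10                    # lower number means higher precedence
  subject:
    namespaces:
      matchLabels:
        kubernetes.io/metadata.name: podtato
  ingress:
  - name: allow-from-default-ns
    action: Allow
    from:
    - namespaces:
        matchLabels:
          kubernetes.io/metadata.name: default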
Summary
Networking in Kubernetes is a complex concept. This book touches on only some of the areas and available solutions. For a deeper understanding, you may also explore additional topics such as:
- Service Meshes (e.g., Istio, Linkerd)
- Ingress Controllers
These topics will provide a more comprehensive view of Kubernetes networking and help you navigate its complexities more effectively.