Networking is a vast topic in any technology. Let's start by learning how K8s implements networking...
Networking is a central part of K8s, but it can be challenging to understand how it works. In K8s, we call it cluster networking, and as the cluster grows, so does the networking complexity. Kubernetes is all about sharing machines between applications through network connections. The four main problems networking must solve in K8s are:
Highly-coupled container-to-container communications
Pod-to-Pod communications
Pod-to-Service communications
External-to-Service communications
Allocating ports dynamically would bring a lot of complications to the system, so K8s defines its own networking model.
Kubernetes IP addresses exist at the Pod scope: containers within a Pod share their network namespace, including their IP address and MAC address. This means that containers within a Pod can all reach each other's ports on localhost. It also means that containers within a Pod must coordinate port usage, but this is no different from processes in a VM. This is called the "IP-per-pod" model.
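To make this concrete, here is a minimal sketch of a two-container Pod (the Pod and container names are illustrative): because both containers share the Pod's network namespace, the sidecar can reach nginx on localhost.

apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo      # illustrative name
spec:
  containers:
  - name: web
    image: nginx               # serves on port 80
  - name: sidecar
    image: busybox
    # Reaches the web container over localhost, since both containers
    # share the Pod's IP and network namespace.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null && echo reachable; sleep 10; done"]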
The network model is implemented by the container runtime on each node. The most common container runtimes use Container Network Interface (CNI) plugins to manage their network and security capabilities.
Service
A Kubernetes Service is a mechanism to expose applications both internally and externally.
In K8s, a Service is a single outward-facing endpoint for exposing a network application that is running as one or more Pods in your cluster.
A Service in Kubernetes is an object (the same way that a Pod or a ConfigMap is an object).
The service holds access policies and is responsible for enforcing these policies for incoming requests.
The Service is assigned a virtual IP address, known as the ClusterIP, which persists until the Service is explicitly deleted.
You can create, view, or modify Service definitions using the Kubernetes API. Usually, you use a tool such as kubectl to make those API calls for you.
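For example, these typical kubectl calls list Services and inspect one (the name my-service is illustrative):

kubectl get services
kubectl describe service my-service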
Types of Services
Kubernetes allows the creation of these types of services:
ClusterIP (the default Service type)
A Service is created by grouping related Pods (for example, a web app group or a db group) and assigning the group an IP address.
The Service receives a name, a cluster-internal IP address, and a port, making its Pods accessible only from within the cluster. This is called the ClusterIP.
In the YAML, we need to mention only the targetPort (the backend port) and the port (the Service port); in this example both are 80.
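A minimal sketch of such a Service (the name and the app: web-app label are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web-app-service
spec:
  type: ClusterIP        # the default; this line can be omitted
  selector:
    app: web-app         # groups all Pods carrying this label
  ports:
  - port: 80             # service port
    targetPort: 80       # target (backend) port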
NodePort
A NodePort Service builds on top of the ClusterIP Service, exposing the application on a port of each node that is accessible from outside the cluster.
The nodePort field in the Service manifest is optional and lets you specify a custom port in the range 30000-32767.
In the YAML, three ports matter when creating this kind of Service: the nodePort (restricted to the range above), the targetPort (defaults to the value of port if omitted), and the port (mandatory).
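A sketch showing all three ports together (the name, label, and chosen nodePort value are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web-app-nodeport
spec:
  type: NodePort
  selector:
    app: web-app
  ports:
  - port: 80             # service port (mandatory)
    targetPort: 80       # backend port (defaults to port if omitted)
    nodePort: 30008      # optional; must fall within 30000-32767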
The same Service YAML works for reaching an application in a single Pod on a single node. For multiple Pods on a single node in the same cluster, no configuration change is needed in the YAML: all the Pods carry the same label, and the Service picks among them using a random algorithm. For multiple Pods on multiple nodes in the same cluster, the Service spans all the nodes in the cluster and exposes the same node port on each node's IP.
LoadBalancer
A LoadBalancer Service is based on the NodePort Service and adds the ability to provision external load balancers in public and private clouds using the provider's native load balancers.
It exposes services running within the cluster by forwarding network traffic to cluster nodes.
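A hedged sketch of such a Service (it assumes the cloud provider supports external load balancers; names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web-app-lb
spec:
  type: LoadBalancer     # asks the cloud provider to provision an external load balancer
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 80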
ExternalName
An ExternalName service maps the service to a DNS name instead of a selector.
It returns a CNAME record matching the contents of the externalName field.
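A minimal sketch (the Service name and the external DNS name are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # lookups of this Service return a CNAME to this name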
Ingress
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
Components in Ingress
Ingress Controller
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer.
You must deploy an Ingress controller, such as ingress-nginx, yourself.
Unlike other types of controllers, which run as part of the kube-controller-manager binary, Ingress controllers are not started automatically with a cluster. ingress-nginx is an Ingress controller for Kubernetes that uses NGINX as a reverse proxy and load balancer.
An Ingress controller is needed to satisfy an Ingress. Only creating an Ingress resource has no effect.
Ingress Resources
For the Ingress resource to work, the cluster must have an ingress controller running.
Ingress resources in Kubernetes are used to proxy layer 7 traffic to containers in the cluster. They require an Ingress controller component running as a layer 7 proxy service inside the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80   # port of the backend Service (assumed 80 here)
Types of Ingress
Single Service Ingress
Simple fanout (see the example sketch after this list)
Name-based virtual hosting
TLS
Load Balancing
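As an illustration of the simple fanout type, here is a sketch that routes two paths of one host to two different backend Services (the host, Service names, and ports are all illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 4200
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 8080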
Install NGINX Ingress in Minikube
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.0/deploy/static/mandatory.yaml
Setting up an NGINX Ingress controller
minikube addons enable ingress
kubectl get pods -A | grep ingress
Deploying a sample application
kubectl run myapp --image=nginx
Accessing the application
kubectl expose pod myapp --port=80 --name myapp-service --type=NodePort
kubectl get svc | grep myapp-service
To confirm that the application is accessible, visit http://localhost:32017 in a browser or fetch it with curl (the port matches the NodePort that was assigned to myapp-service).
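On Minikube, the application is often reachable on the cluster's node IP rather than localhost; assuming 32017 is the assigned NodePort, a quick check would be:

curl http://$(minikube ip):32017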
Network Policies
Kubernetes network policy lets administrators and developers enforce which network traffic is allowed using rules.
The Kubernetes Network Policy API provides a standard way for users to define network policy for controlling network traffic. However, Kubernetes has no built-in capability to enforce the network policy. To enforce network policy, you must use a network plugin such as Calico.
Value
Kubernetes network policy lets developers secure access to and from their applications using the same simple language they use to deploy them.
Features
The Kubernetes Network Policy API supports the following features:
Policies are namespace scoped
Policies are applied to pods using label selectors
Policy rules can specify the traffic that is allowed to/from pods, namespaces, or CIDRs
Policy rules can specify protocols (TCP, UDP, SCTP), named ports or port numbers
Concept
From the point of view of a Kubernetes Pod, ingress is incoming traffic to the Pod, and egress is outgoing traffic from the Pod. In Kubernetes network policy, you create ingress and egress "allow" rules independently.
Default deny/allow behavior
Default allow means all traffic is allowed by default, unless otherwise specified. Default deny means all traffic is denied by default, unless explicitly allowed.
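For example, a common default-deny pattern for incoming traffic is a policy that selects every Pod in a namespace but lists no ingress rules (the namespace here is illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}        # selects every Pod in the namespace
  policyTypes:
  - Ingress              # no ingress rules follow, so all incoming traffic is denied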
To install Calico:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Create ingress policies
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
  namespace: default
spec:
  podSelector:
    matchLabels:
      color: blue
  ingress:
  - from:
    - podSelector:
        matchLabels:
          color: red
    ports:
    - port: 80
Network policies apply to pods within a specific namespace.
Policies can include one or more ingress rules. To specify which pods in the namespace the network policy applies to, use a pod selector.
Within the ingress rule, use another pod selector to define which source Pods may send incoming traffic, and the ports field to define on which ports traffic is allowed.
In the above example, incoming traffic to Pods with label color=blue is allowed only if it comes from a Pod with label color=red, and only on port 80.
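One way to exercise this policy (the Pod names are illustrative; replace <blue-pod-ip> with the address reported by kubectl get pod -o wide):

kubectl run blue-pod --image=nginx --labels="color=blue"
kubectl run red-pod --image=busybox --labels="color=red" -- sleep 3600
kubectl exec red-pod -- wget -qO- --timeout=2 http://<blue-pod-ip>:80

The request from red-pod should succeed, while the same request from a Pod without the color=red label should time out.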
DNS
Kubernetes creates DNS records for Services and Pods. You can contact Services with consistent DNS names instead of IP addresses.
Kubernetes publishes information about Pods and Services, which is used to program DNS. The kubelet configures Pods' DNS so that running containers can look up Services by name rather than by IP.
Namespaces of Services
A DNS query may return different results based on the namespace of the Pod making it.
DNS queries that don't specify a namespace are limited to the Pod's namespace. Access Services in other namespaces by specifying it in the DNS query.
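For example, from a Pod in the default namespace (the Pod, Service, and namespace names are illustrative):

kubectl exec -it mypod -- nslookup myservice
kubectl exec -it mypod -- nslookup myservice.prod

The first query resolves only within the Pod's own namespace; the second reaches a Service in the prod namespace.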
DNS queries may be expanded using the Pod's /etc/resolv.conf. The kubelet configures this file for each Pod.
DNS Records
What objects get DNS records?
Services
Pods
Example Kubernetes DNS Records
The full DNS A record of a Kubernetes Service will look like the following example:
service.namespace.svc.cluster.local
A Pod would have a record in this format, reflecting the actual IP address of the Pod (the dots in the IP are replaced with dashes):
10-32-0-125.namespace.pod.cluster.local
Additionally, SRV records are created for a Kubernetes Service's named ports:
_port-name._protocol.service.namespace.svc.cluster.local
Reference: DNS for Services and Pods | Kubernetes
CNI
Container Network Interface (CNI) is a framework for dynamically configuring networking resources. It uses a group of libraries and specifications written in Go. The plugin specification defines an interface for configuring the network, provisioning IP addresses, and maintaining connectivity with multiple hosts.
When used with Kubernetes, CNI can integrate smoothly with the kubelet to enable the use of an overlay or underlay network to automatically configure the network between Pods.
CNI offers specifications for multiple plugins because networking is complex, and user needs may differ.
CNI networks can be implemented using an encapsulated or unencapsulated network model.
One of the CNI plugins is Calico.
Configuring CNI
The CNI plugin is configured in the kubelet service on each node in the cluster. If you look at the kubelet service file, you will see an option called --network-plugin set to cni.
The same information is visible on the running kubelet process:
ps -aux | grep kubelet
The CNI bin directory (/opt/cni/bin by default) contains all the supported CNI plugins as executables, such as bridge, dhcp, and flannel. The config directory /etc/cni/net.d tells the kubelet which plugin to use:
$ ls /etc/cni/net.d
10-bridge.conf

$ cat /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.2.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
One of the CNI solutions is Weave.
For more information on Weave, refer to its documentation.
Final Thoughts:
These are some of the important concepts in K8s networking.
Thanks for reading my blog. I hope it helps you gain some insights into K8s networking.
Suggestions are always welcome.
See you in the next blog! :)