- Published on
kind@k8s
- Authors
- Name
- PatharaNor
Introduction
kind is a tool for running local Kubernetes clusters using Docker container “nodes”. kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
Ref. https://kind.sigs.k8s.io/
Preparing Environment
Requirement
- CPU: 2 cores (4 cores recommended)
- Memory: 2 GB (4 GB recommended)
- OS: Linux, macOS, or Windows (Linux recommended)
- Commands: kubectl, kind, go (see the quick check below)
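A quick way to confirm the required commands are available on your PATH (a minimal check; exact versions are up to you):
$ kubectl version --client
$ kind version
$ go version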
Installation
Create Cluster
Using Golang
$ GO111MODULE="on" go get sigs.k8s.io/kind@v0.8.1 && \
kind create cluster
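You can confirm the cluster exists; with no --name flag, kind uses the default cluster name kind:
$ kind get clusters
kind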
Delete the cluster
$ kind delete cluster
Specific cluster configuration
Example of creating a cluster without workers, in ./configs/cluster-config.yaml:
# cluster-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    # extraPortMappings:
    #   - containerPort: 80
    #     hostPort: 8000
    #     listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
    #     protocol: udp # Optional, defaults to tcp
Or create a cluster with two workers and an ingress-ready control plane:
# cluster-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP
  - role: worker
  - role: worker
Run kind
Create the new cluster based on the config file:
$ GO111MODULE="on" go get sigs.k8s.io/kind@v0.8.1 && \
kind create cluster \
--name test-cluster \
--config=./configs/cluster-config.yaml
You should see output like this:
Creating cluster "test-cluster" ...
✓ Ensuring node image (kindest/node:v1.18.2) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-test-cluster"
You can now use your cluster with:
kubectl cluster-info --context kind-test-cluster
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
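You can also list the nodes kind created; with the two-worker config above, the output should look roughly like this:
$ kubectl get nodes --context kind-test-cluster
NAME                         STATUS   ROLES    AGE    VERSION
test-cluster-control-plane   Ready    master   2m     v1.18.2
test-cluster-worker          Ready    <none>   100s   v1.18.2
test-cluster-worker2         Ready    <none>   100s   v1.18.2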
Check the cluster context:
$ kubectl cluster-info --context kind-test-cluster
# The output should look like this (the port number may differ):
Kubernetes master is running at https://127.0.0.1:54209
KubeDNS is running at https://127.0.0.1:54209/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Open a browser and enter the address of the API server. You should receive this response:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {},
"code": 403
}
This means that you are not authorized to access the API server because it doesn’t know who you are. This is good; otherwise anyone could manipulate your cluster.
You can use kubectl as a proxy to authenticate to your API server:
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
# Now open your browser to "http://localhost:8001/healthz".
# You should see text "ok".
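Alternatively, with the proxy still running, check the same endpoint from another terminal:
$ curl http://localhost:8001/healthz
ok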
Create & Deploy Simple WebApp
In this section we build a simple web application, wrap it in a container image, and deploy it to the kind cluster.
Preparing Dockerfile
The Dockerfile builds the image that the deployment.yaml file (created later) deploys to the cluster:
FROM alpine
RUN apk add --no-cache python3 && \
python3 -m ensurepip && \
rm -r /usr/lib/python*/ensurepip && \
pip3 install --upgrade pip setuptools && \
rm -r /root/.cache
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
ENTRYPOINT [ "python3" ]
CMD [ "app.py" ]
Simple Flask service
The service listens on port 8087:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def blog():
    return "Flask in kind Kubernetes cluster"

if __name__ == '__main__':
    app.run(threaded=True, host='0.0.0.0', port=8087)
For requirements.txt :
Flask==0.10.1
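Optionally, you can run the service locally before containerizing it (assumes a Python 3 environment compatible with this Flask version):
$ pip3 install -r requirements.txt
$ python3 app.py
# In another terminal:
$ curl http://localhost:8087
Flask in kind Kubernetes cluster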
After that, wrap the service in a Docker container:
$ docker build -t [DOCKER_HUB_NAME]/[YOUR_TAG_NAME]:[TAG_VERSION] .
ALTERNATIVE: instead of building the image yourself, you can pull it from Docker Hub:
$ docker pull patharanor/test-k8s-kind-flask:latest
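If you built the image locally and don’t want to push it to a registry, kind can load it directly into the cluster nodes (a sketch; substitute the image name you built and your cluster name):
$ kind load docker-image [DOCKER_HUB_NAME]/[YOUR_TAG_NAME]:[TAG_VERSION] --name test-cluster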
In the next section we will create the deployment manifest for k8s.
Create deployment manifest
The Deployment is based on the patharanor/test-k8s-kind-flask:latest image and binds to the same port number (8087).
# deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: test-flask-app-service
spec:
  ports:
    - targetPort: 8087
      nodePort: 30087
      port: 80
  selector:
    app: test-flask-app
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-flask-deployment
  labels:
    app: test-flask
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-flask-app
  template:
    metadata:
      labels:
        app: test-flask-app
    spec:
      containers:
        - name: test-flask-app-container
          image: patharanor/test-k8s-kind-flask:latest
          ports:
            - containerPort: 8087
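Note that the nodePort (30087) is only opened on the kind node containers, not on your host, so it isn’t directly reachable from your browser; this walkthrough uses kubectl port-forward further down instead. If you did want host access via the NodePort, one option (an illustrative sketch, added before creating the cluster) is an extraPortMappings entry in the kind cluster config:
# cluster-config.yaml (under the control-plane node, illustrative)
extraPortMappings:
  - containerPort: 30087
    hostPort: 30087
    protocol: TCP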
Let's deploy it:
$ kubectl apply -f deployment.yaml
# Output:
# service/test-flask-app-service configured
# deployment.apps/test-flask-deployment created
Check that the service is running:
$ kubectl get all
# Output:
#
# NAME READY STATUS RESTARTS AGE
# pod/test-flask-deployment-6f85b66857-tm4fd 1/1 Running 0 94s
#
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3m55s
# service/test-flask-app-service NodePort 10.109.152.26 <none> 80:30087/TCP 2m11s
#
# NAME READY UP-TO-DATE AVAILABLE AGE
# deployment.apps/test-flask-deployment 1/1 1 1 94s
#
# NAME DESIRED CURRENT READY AGE
# replicaset.apps/test-flask-deployment-6f85b66857 1 1 1 94s
Access a shell inside the container (the image is Alpine-based, so use /bin/sh rather than bash):
$ kubectl exec --stdin --tty POD_NAME --namespace POD_NAMESPACE -- /bin/sh
Or watch events across all namespaces:
$ kubectl get event -A --watch
Forward a local port to the pod to serve the web application:
$ kubectl port-forward test-flask-deployment-6f85b66857-tm4fd 8087:8087
# Output:
# Forwarding from 127.0.0.1:8087 -> 8087
# Forwarding from [::1]:8087 -> 8087
Now you can access the web application via http://localhost:8087
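For example, with the port-forward from the previous step still running:
$ curl http://localhost:8087
Flask in kind Kubernetes cluster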
Remove Cluster
List all items in every namespace:
$ kubectl get all -A
Example output:
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx pod/ingress-nginx-controller-c96557986-qxgtr 1/1 Running 0 84m
kube-system pod/coredns-66bff467f8-g74ns 1/1 Running 0 98m
kube-system pod/coredns-66bff467f8-w55l2 1/1 Running 0 98m
kube-system pod/etcd-test-cluster-control-plane 1/1 Running 0 98m
kube-system pod/kindnet-2txt2 1/1 Running 0 98m
kube-system pod/kindnet-f897p 1/1 Running 0 98m
kube-system pod/kindnet-zxwkg 1/1 Running 0 98m
kube-system pod/kube-apiserver-test-cluster-control-plane 1/1 Running 0 98m
kube-system pod/kube-controller-manager-test-cluster-control-plane 1/1 Running 0 98m
kube-system pod/kube-proxy-6z5ts 1/1 Running 0 98m
kube-system pod/kube-proxy-g9656 1/1 Running 0 98m
kube-system pod/kube-proxy-l5gj6 1/1 Running 0 98m
kube-system pod/kube-scheduler-test-cluster-control-plane 1/1 Running 0 98m
kube-system pod/kubernetes-dashboard-975499656-b4dm5 1/1 Running 0 65m
local-path-storage pod/local-path-provisioner-bd4bb6b75-wttg8 1/1 Running 0 98m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 40m
ingress-nginx service/ingress-nginx-controller LoadBalancer 10.97.71.103 <pending> 80:31500/TCP,443:32014/TCP 97m
ingress-nginx service/ingress-nginx-controller-admission ClusterIP 10.107.176.237 <none> 443/TCP 97m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 98m
kube-system service/kubernetes-dashboard ClusterIP 10.97.34.0 <none> 443/TCP 65m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kindnet 3 3 3 3 3 <none> 98m
kube-system daemonset.apps/kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 98m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
ingress-nginx deployment.apps/ingress-nginx-controller 1/1 1 1 97m
kube-system deployment.apps/coredns 2/2 2 2 98m
kube-system deployment.apps/kubernetes-dashboard 1/1 1 1 65m
local-path-storage deployment.apps/local-path-provisioner 1/1 1 1 98m
NAMESPACE NAME DESIRED CURRENT READY AGE
ingress-nginx replicaset.apps/ingress-nginx-controller-76c6cf8f54 0 0 0 97m
ingress-nginx replicaset.apps/ingress-nginx-controller-c96557986 1 1 1 84m
kube-system replicaset.apps/coredns-66bff467f8 2 2 2 98m
kube-system replicaset.apps/kubernetes-dashboard-975499656 1 1 1 65m
local-path-storage replicaset.apps/local-path-provisioner-bd4bb6b75 1 1 1 98m
NAMESPACE NAME COMPLETIONS DURATION AGE
ingress-nginx job.batch/ingress-nginx-admission-create 1/1 17s 97m
ingress-nginx job.batch/ingress-nginx-admission-patch 1/1 30s 97m
Remove all of them from the cluster:
$ sh -c "$(kubectl get pod -A | grep "/" | awk '{print "kubectl delete pod "$2" --namespace="$1}')" && \
sh -c "$(kubectl get service -A | grep "/" | awk '{print "kubectl delete service "$2" --namespace="$1}')" && \
sh -c "$(kubectl get daemonset -A | grep "/" | awk '{print "kubectl delete daemonset "$2" --namespace="$1}')" && \
sh -c "$(kubectl get deployment -A | grep "/" | awk '{print "kubectl delete deployment "$2" --namespace="$1}')" && \
sh -c "$(kubectl get rs -A | grep "/" | awk '{print "kubectl delete rs "$2" --namespace="$1}')" && \
sh -c "$(kubectl get job -A | grep "/" | awk '{print "kubectl delete job "$2" --namespace="$1}')"
Or create a simple shell script that deletes all items matching a keyword:
# example-delete-all.sh
# usage : sh example-delete-all.sh SPECIFIC_KEYWORD
NS=$1
sh -c "$(kubectl get daemonset -A | grep $NS | awk '{print "kubectl --namespace "$1" delete daemonset "$2}')" && \
sh -c "$(kubectl get job -A | grep $NS | awk '{print "kubectl --namespace "$1" delete job "$2}')" && \
sh -c "$(kubectl get statefulset.apps -A | grep $NS | awk '{print "kubectl --namespace "$1" delete statefulset.apps "$2}')" && \
sh -c "$(kubectl get rs -A | grep $NS | awk '{print "kubectl --namespace "$1" delete rs "$2}')" && \
sh -c "$(kubectl get deployment -A | grep $NS | awk '{print "kubectl --namespace "$1" delete deployment "$2}')" && \
sh -c "$(kubectl get service -A | grep $NS | awk '{print "kubectl --namespace "$1" delete service "$2}')" && \
sh -c "$(kubectl get pod -A | grep $NS | awk '{print "kubectl --namespace "$1" delete pod "$2}')"
Delete kind service
$ kubectl delete all --all
service "kubernetes" deleted
Delete cluster
$ kind delete cluster --name test-cluster
With the cluster gone, kubectl no longer has an API server to talk to, so you should see an error like this:
$ kubectl get all -A
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Ingress
Ingress for simple foo/bar services
Try this manifest:
kind: Pod
apiVersion: v1
metadata:
  name: foo-app
  labels:
    app: foo
spec:
  containers:
    - name: foo-app
      image: hashicorp/http-echo:0.2.3
      args:
        - '-text=foo'
---
kind: Service
apiVersion: v1
metadata:
  name: foo-service
spec:
  selector:
    app: foo
  ports:
    # Default port used by the image
    - port: 5678
---
kind: Pod
apiVersion: v1
metadata:
  name: bar-app
  labels:
    app: bar
spec:
  containers:
    - name: bar-app
      image: hashicorp/http-echo:0.2.3
      args:
        - '-text=bar'
---
kind: Service
apiVersion: v1
metadata:
  name: bar-service
spec:
  selector:
    app: bar
  ports:
    # Default port used by the image
    - port: 5678
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - http:
        paths:
          - path: /foo
            backend:
              serviceName: foo-service
              servicePort: 5678
          - path: /bar
            backend:
              serviceName: bar-service
              servicePort: 5678
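Save the manifest (e.g. as foo-bar.yaml; the filename is arbitrary) and apply it:
$ kubectl apply -f foo-bar.yaml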
Then run kubectl proxy and test foo by calling the endpoint below:
curl -vk http://127.0.0.1:8001/api/v1/namespaces/default/services/http:foo-service:5678/proxy/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8001 (#0)
> GET /api/v1/namespaces/default/services/http:foo-service:5678/proxy/ HTTP/1.1
> Host: 127.0.0.1:8001
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 4
< Content-Type: text/plain; charset=utf-8
< Date: Mon, 20 Jul 2020 07:21:10 GMT
< X-App-Name: http-echo
< X-App-Version: 0.2.3
<
foo
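The bar backend can be tested the same way and should answer with bar:
curl -vk http://127.0.0.1:8001/api/v1/namespaces/default/services/http:bar-service:5678/proxy/
bar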
Ingress Controller
Installing an Ingress Controller into kind:
Ref. https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md#docker-for-mac
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/cloud/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
Verify the installation
Watch the pods until the controller is running; press CTRL+C to exit:
$ kubectl get pods -n ingress-nginx \
-l app.kubernetes.io/name=ingress-nginx --watch
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-s8bj6 0/1 ContainerCreating 0 3s
ingress-nginx-admission-patch-47vbv 0/1 ContainerCreating 0 3s
ingress-nginx-controller-c96557986-l7jg7 0/1 ContainerCreating 0 14s
ingress-nginx-admission-create-s8bj6 1/1 Running 0 13s
ingress-nginx-admission-create-s8bj6 0/1 Completed 0 14s
ingress-nginx-admission-patch-47vbv 0/1 Completed 0 16s
Now we have an Ingress Controller. To access it, let's check its external IP:
$ kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.103.248.112 <pending> 80:30329/TCP,443:31550/TCP 54s
ingress-nginx-controller-admission ClusterIP 10.108.20.178 <none> 443/TCP 55s
It’s <pending>, which doesn’t sound good. kind does not ship a LoadBalancer implementation, so the Service will stay in this state. For now, let’s explore how kind networking works from a bird’s-eye view.
kind Networking
kind uses a simple CNI configuration based on the ptp plugin. Alongside it, kind runs its own networking helper daemon, kindnetd, which helps the ptp plugin discover the node’s InternalIP. The ptp CNI plugin creates a point-to-point link between a container and the host using a veth device.
kind maps each Kubernetes node to a Docker container:
$ docker ps
b716e2c605e8 kindest/node:v1.18.2 "/usr/local/bin/entr…" 6 minutes ago Up 6 minutes test-cluster-worker
43c0535629de kindest/node:v1.18.2 "/usr/local/bin/entr…" 6 minutes ago Up 6 minutes 127.0.0.1:51205->6443/tcp test-cluster-control-plane
0bb626857e53 kindest/node:v1.18.2 "/usr/local/bin/entr…" 6 minutes ago Up 6 minutes test-cluster-worker2
$ docker inspect test-cluster-control-plane -f '{{.NetworkSettings.IPAddress}}'
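# Prints the node container's IP on the kind Docker network (192.168.32.3 here, matching the "src" address in the route table below).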
$ docker exec -it test-cluster-control-plane ip route show
default via 192.168.32.1 dev eth0
10.244.0.2 dev veth1ddfecf2 scope host
10.244.0.3 dev veth0f89391a scope host
10.244.0.4 dev veth8e3791a6 scope host
10.244.1.0/24 via 192.168.32.2 dev eth0
10.244.2.0/24 via 192.168.32.4 dev eth0
192.168.32.0/20 dev eth0 proto kernel scope link src 192.168.32.3
The route table shows each of the other nodes' pod subnets (10.244.x.0/24) routed via that node's Docker IP. Checking the ingress-nginx Services again:
$ kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.103.248.112 <pending> 80:30329/TCP,443:31550/TCP 4m53s
ingress-nginx-controller-admission ClusterIP 10.108.20.178 <none> 443/TCP 4m54s
The EXTERNAL-IP is still <pending>, since kind provides no LoadBalancer implementation out of the box. A LoadBalancer Service still allocates NodePorts, though, so the same ports (here 30329 and 31550) are open on every node of your cluster, including the test-cluster-control-plane node.
References
- Principles => https://kind.sigs.k8s.io/docs/design/principles/
- Initial => https://kind.sigs.k8s.io/docs/design/initial/
- Local Registry => https://kind.sigs.k8s.io/docs/user/local-registry/
- Private Registries => https://kind.sigs.k8s.io/docs/user/private-registries/