Howto: Getting Started with Kubernetes and Basic Load Balancing
This post describes how to deploy a service on Kubernetes, expose it via a NodePort, scale it to three replicas, and observe basic load balancing. Enjoy!
Run a microservice on Kubernetes
First, create a deployment, which in turn creates a pod. The fmunz/micro image has been used at several conferences to demonstrate Docker features, and it is a suitable container for exploring load balancing on Kubernetes.
$ kubectl run micro --image=fmunz/micro --port=80
deployment "micro" created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
micro-7b99d94476-9tqx5 1/1 Running 0 5m
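The `kubectl run` command above is the imperative shortcut. A roughly equivalent declarative manifest is sketched below; the `app: micro` labels and the `apps/v1` API version are assumptions for illustration (`kubectl run` generates its own labels), not taken from the original post.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro
spec:
  replicas: 1
  selector:
    matchLabels:
      app: micro          # assumed label; kubectl run generates its own
  template:
    metadata:
      labels:
        app: micro
    spec:
      containers:
      - name: micro
        image: fmunz/micro
        ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f` creates the same deployment and pod, with the advantage that the manifest can be version controlled.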
Expose the micro service
Before you can access the service from the outside, it has to be exposed:
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
micro 1 1 1 1 7m
$ kubectl expose deployment micro --type=NodePort
service "micro" exposed
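The `kubectl expose` command generates a Service object for you. A declarative sketch of what it creates is shown below; the `app: micro` selector is an assumption (it must match whatever labels the deployment's pods actually carry):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: micro
spec:
  type: NodePort
  selector:
    app: micro            # assumed; must match the pod labels
  ports:
  - port: 80
    targetPort: 80
    # nodePort is auto-assigned from the 30000-32767 range unless set here
```

A Service of type NodePort opens the same port on every node of the cluster and forwards traffic to the matching pods.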
Find out its port number
$ kubectl describe service micro | grep NodePort
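Grepping `kubectl describe` works, but you still have to pick the number out of the line by eye. The sketch below shows one way to extract it in a script; the sample line and the port value 30080 are illustrative, not from the original post. (Alternatively, `kubectl get service micro -o jsonpath='{.spec.ports[0].nodePort}'` prints just the number.)

```shell
# A line shaped like the NodePort row of `kubectl describe service micro`.
# The port 30080 here is a made-up example value.
describe_output='NodePort:                 <unset>  30080/TCP'

# Third whitespace-separated field is "30080/TCP"; strip the "/TCP" suffix.
node_port=$(echo "$describe_output" | grep NodePort | awk '{print $3}' | cut -d/ -f1)
echo "$node_port"
```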
Scale service to 3
$ kubectl scale --replicas=3 deployment/micro
deployment "micro" scaled
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
micro 3 3 3 3 1d
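`kubectl scale` is the imperative route. The same effect can be had declaratively by editing the replica count in the deployment manifest and re-applying it; a minimal fragment (assuming the manifest-based workflow rather than `kubectl run`) looks like:

```yaml
spec:
  replicas: 3   # kubectl apply -f picks up the change and scales to 3 pods
```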
Explore the load balancing
Now you have three pods running the micro service. Access the service in your browser at the following URL:
http://localhost:NODE_PORT
Refresh the page a few times. You will see that different pods serve your requests, because a different pod IP is returned each time.
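Instead of refreshing the browser, you can hit the service in a shell loop. This is only a sketch: the NODE_PORT value is a placeholder to substitute with the port found above, and it assumes (as the post describes) that the micro image reports the serving pod's IP in its response.

```shell
# Placeholder port; replace with the NodePort found via kubectl describe.
NODE_PORT=${NODE_PORT:-30080}

for i in 1 2 3; do
  # Each request may land on a different pod, so the responses differ.
  # --max-time keeps the loop from hanging if nothing is listening.
  body=$(curl -s --max-time 2 "http://localhost:${NODE_PORT}/" || true)
  echo "$body"
done
```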
[…] These steps are not necessary to run the Fn project. I first deployed a little microservice to check that Kubernetes was running fine on my Mac. Feel free to skip this, or follow the steps above for load balancing a microservice with K8s […]