Basic Load Balancing with Kubernetes

Howto: Getting Started with Kubernetes and Basic Load Balancing

This posting describes how to deploy a service on Kubernetes, expose it via a NodePort, scale it to 3 replicas, and observe basic load balancing. Enjoy!

Run a microservice on Kubernetes

First create a deployment, which also creates a pod. The image used here was shown at several conferences to demonstrate Docker features, and it has proven to be a suitable container for exploring load balancing on Kubernetes.

$ kubectl run micro --image=fmunz/micro --port=80

deployment "micro" created


$ kubectl get pods

NAME                           READY     STATUS    RESTARTS   AGE
micro-7b99d94476-9tqx5         1/1       Running   0          5m

Expose the micro service

Before you can access the service from the outside, it has to be exposed:

$ kubectl get deployments

NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
micro         1         1         1            1           7m


$ kubectl expose deployment micro --type=NodePort

service "micro" exposed

Find out its port number

$ kubectl describe service micro | grep NodePort
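
The output shows the port that Kubernetes assigned from the NodePort range (30000-32767 by default); the exact number will differ on your cluster, e.g.:

NodePort:                 <unset>  31000/TCP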

Scale service to 3

$ kubectl scale --replicas=3 deployment/micro

deployment "micro" scaled

$ kubectl get deployments
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
micro         3         3         3            3           9m

Explore the load balancing

Now you have 3 pods running the micro service. Access the service in the browser with the following URL (replace NODE_PORT with the port number from the previous step):

http://localhost:NODE_PORT

Refresh the page a few times. You will see that different pods serve your requests, because a different pod IP is returned each time.
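
Instead of refreshing the browser, you can also watch the round-robin behaviour from the command line; a quick sketch, assuming NODE_PORT is again replaced with the port number from the step above:

# call the service a few times and watch the returned pod IPs change
$ for i in 1 2 3 4 5; do curl -s http://localhost:NODE_PORT; echo; done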

ReadyApp Framework for WebLogic: Good fit for Kubernetes

Oracle and CNCF

Oracle joined the CNCF in 2017, so it doesn't come as a big surprise that, at least since OpenWorld 2017, there has been an overall trend towards cloud-native applications in the Oracle application space. The move towards integrating Oracle Fusion Middleware with Kubernetes is certainly an important part of this movement.

WebLogic / Deployments and ReadyApp Framework

There is ongoing work to make WebLogic ready for Kubernetes. I will write about more details later this year; an interesting step in this direction, however, is the ReadyApp Framework for WebLogic.

The ReadyApp framework helps load balancers detect server readiness by providing a reliable health-check URL. EAR-based or WAR-based applications can register with the framework by adding a single line to the application's WebLogic deployment descriptor, as shown below.
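
For a WAR-based application, this single line is the ready-registration element in the weblogic.xml deployment descriptor (weblogic-application.xml for an EAR); a minimal sketch, assuming WebLogic 12.2.1 or later:

<ready-registration>true</ready-registration>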

WebLogic and Kubernetes Readiness Probe

When running WebLogic on K8s, the ReadyApp Framework becomes useful if you integrate it with a Kubernetes readiness probe. This way K8s knows that WebLogic and its applications are indeed functional (and not only that the Docker container was started).
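
A minimal sketch of such an integration, assuming the WebLogic container listens on the default port 7001 and the ReadyApp health check is served at /weblogic/ready (adjust both to your setup); this fragment goes into the container section of the pod spec:

readinessProbe:
  httpGet:
    path: /weblogic/ready
    port: 7001
  initialDelaySeconds: 60
  periodSeconds: 10

With this probe in place, Kubernetes routes traffic to the pod only after the URL returns HTTP 200, i.e. after WebLogic reports its registered applications as ready.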

ReadyApp Framework and JCS

Even if your use case isn't WebLogic on K8s, the ReadyApp Framework still makes sense: it can also be used with JCS or on premises.

Analytics and Data Summit 2018: Serverless and Machine Learning + Open Source Big Data in the Cloud

The year has just started and here is the first piece of “good news” already: my presentation about “Serverless Architectures and Machine Learning” was accepted for the Analytics and Data Summit 2018 (formerly the BIWA conference). The presentation will include a live demo with the Fn Project.

In addition, I will give another presentation together with Edelweiss Kammermann about Open Source Big Data in the Cloud (with Hadoop, Hive, Spark, and Kafka live demos). IMHO, two fabulous topics – I am looking forward to seeing you there!

A Serverless / FaaS Classification

At the time of writing, more than a dozen FaaS frameworks or platforms are available. They can be classified into three categories based on their objective and reach.

The three categories are as follows:

  1. Complexity:
    Reduce the complexity of a particular vendor's cloud-based FaaS implementation, e.g. the configuration of the API gateway and access management that is required for a REST-based serverless function. A typical example of this category is AWS Chalice.
  2. Portability:
    Provide an abstraction framework for portability and ease of use on top of the FaaS implementations of various public cloud providers. A popular example is the serverless.com framework.
  3. Standards:
    Provide a standards-based serverless platform or framework to abstract running functions from the operation of servers. These frameworks are typically developed without a particular cloud provider in mind. When running such a framework on top of IaaS, servers are abstracted away and automated scaling is possible, but no true pay-per-invocation pricing is achieved due to the IaaS pricing model. Examples of this category are OpenFaaS and the Fn Project.


Fn Project in Public Clouds (aka Serverless on IaaS?)

Fn in Public Clouds (IaaS)

The Fn project is a cloud-agnostic FaaS platform, and a common question is how to use Fn in public clouds. Similar to the local installation that we used in the Oracle blog posting (link soon), it can also be installed on any public cloud IaaS. For most IaaS offerings it is enough to pass the installation commands directly to the creation of a compute instance as so-called user data: commands that are executed when the instance is provisioned. When running Fn in a public cloud, don't forget to enable an access rule for the Fn server allowing port 8080 – depending on your requirements – either from your own IP only or from all public IP addresses.
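
As an illustration, here is a user-data sketch that installs Docker and the Fn CLI and then starts the Fn server; the Docker installation step is an assumption and depends on the image your cloud provider offers:

#!/bin/bash
# install Docker (assumes a distribution supported by get.docker.com)
curl -fsSL https://get.docker.com | sh
# install the Fn CLI
curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh
# start the Fn server in the background, listening on the default port 8080
nohup fn start > /var/log/fn.log 2>&1 &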

Once the Fn server is running at your favourite cloud provider, you can deploy the recommendation engine mock example mentioned in the posting above in two different ways.

Deploy Your Fn Function in the Cloud

# example 1 (for teaching purposes only; in production use the approach below)
# note: run these commands on the cloud instance

$ fn apps create advtravel
$ fn routes create advtravel /fn-recommend DOCKER_ID/recommend:0.0.2


Another, probably even more useful way to deploy the function is to set the FN_API_URL environment variable locally to point to the remote cloud instance and then run the fn deploy command against it.

# example 2 (easier, what you'd do in real life)
$ export FN_API_URL=URLofRemoteCloudInstance
$ fn deploy --app advtravel 

Note that with the commands above you never had to copy the function or the container image over to the cloud instance. When the function is invoked for the first time, Fn pulls the Docker container from the registry, stores it locally, and then simply runs the function.

Test Fn in the Cloud

Once Fn is running in the cloud and your application is deployed, you can access the application from your local machine using the command line or Postman. The invocation is the same as in the local example; just replace localhost with the public IP address of your cloud instance:

$ curl -X POST --data @testdata/syd.json PUBLIC_IP:8080/r/advtravel/fn-recommend 

Real FaaS?

Obviously, when running the Fn project on IaaS you do not get the true pay-per-invocation benefit of a FaaS implemented by the cloud provider as PaaS. You do get automated scalability to some degree, since it is built into the Fn load balancer (Fn LB). In the end, running Fn on IaaS is serverless only from the user's perspective.

It will be interesting to see if a cloud platform (most likely Oracle, since the Fn Project is largely driven by Oracle) will provide a proper FaaS service with pay-per-invocation pricing and automated scalability that is compatible with the open source Fn Project.

Fn Cloud Demo

A recorded live demo from the Devoxx conference about deploying Fn on IaaS can be seen here.

Get the demo app used in the webcast here.