How I passed the Google Associate Cloud Engineer Exam, May 2020 (Mindmap)

I passed the Google Associate Cloud Engineer exam a fortnight ago, and I actually enjoyed preparing for it! To do so, I browsed the courses on Coursera and Linux Academy, but in the end I learned most by sitting in the sun at the Englischer Garten and speed-reading the Google book. Yes, I like (real) books 🙂

Frank Munz, GCP certified

While going through the book I created my own notes. The exam asks about CLI commands, so I compiled a mind map with the most essential and common ones.

I compiled this mind map rather quickly, so let me know if you spot any typos and I will fix them. It's not meant to be complete (check the docs for the gsutil and gcloud CLIs).
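To give a flavor of the kind of commands the mind map covers, here are a few common gcloud and gsutil invocations. The project ID, zone, machine type, bucket, and cluster names below are all placeholders, not values from the mind map itself:

```shell
# Select the project and default zone (placeholder values)
gcloud config set project my-project-id
gcloud config set compute/zone europe-west1-b

# Compute Engine: list and create instances
gcloud compute instances list
gcloud compute instances create my-vm --machine-type=e2-medium

# Cloud Storage: create a bucket and copy a file into it
gsutil mb gs://my-unique-bucket-name
gsutil cp ./backup.tar.gz gs://my-unique-bucket-name/

# GKE: create a cluster and fetch credentials for kubectl
gcloud container clusters create my-cluster --num-nodes=3
gcloud container clusters get-credentials my-cluster
```

The exam expects you to recognize which tool (gcloud, gsutil, kubectl) and which subcommand family a task belongs to, more than exact flag spellings.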

Altogether it was challenging but not too difficult for someone with just a bit of working knowledge of Google Cloud like me (I am a tech evangelist for Amazon Web Services).

Fun fact: I am quite good with containers and K8s, but the question that made me sweat the most was about pod isolation in GKE. I am not going to spoil the question or the answer (you didn't come here for a GCP certification brain dump, did you?), but please make sure to read about containerd images and gVisor on GKE.

Englischer Garten, (c) image Frank Munz

Basic Load Balancing with Kubernetes

Howto: Getting Started with Kubernetes and Basic Load Balancing

This post describes how to deploy a service, expose it via a NodePort, scale it to three replicas, and observe basic load balancing. Enjoy!

Run a microservice on Kubernetes

First, create a deployment, which also creates a pod. This image has been used at several conferences to demonstrate Docker features, and it is a suitable container for exploring load balancing in Kubernetes.

$ kubectl run micro --image=fmunz/micro --port=80

deployment "micro" created


$ kubectl get pods

NAME                           READY     STATUS    RESTARTS   AGE
micro-7b99d94476-9tqx5         1/1       Running   0          5m
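Note that `kubectl run` only generated a deployment in older kubectl versions; since Kubernetes 1.18 it creates a bare pod instead. On a recent cluster, a rough equivalent of the step above would be:

```shell
# Modern equivalent: create the deployment explicitly instead of via kubectl run
kubectl create deployment micro --image=fmunz/micro --port=80

# Verify the pod came up
kubectl get pods
```

The rest of the walkthrough (expose, scale, NodePort) works the same either way.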

Expose the micro service

Before you can access the service from the outside, it has to be exposed:

$ kubectl get deployments

NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
micro         1         1         1            1           7m


$ kubectl expose deployment micro --type=NodePort

service "micro" exposed
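If you prefer declarative manifests, the `kubectl expose` step corresponds roughly to a Service like the following sketch (the selector assumes the `run: micro` label that the old `kubectl run` put on the deployment's pods; verify with `kubectl get pods --show-labels`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: micro
spec:
  type: NodePort
  selector:
    run: micro        # assumed label; check your pods' actual labels
  ports:
    - port: 80
      targetPort: 80
```

Apply it with `kubectl apply -f micro-service.yaml`; Kubernetes allocates the NodePort automatically unless you pin one.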

Find out its port number

$ kubectl describe service micro | grep NodePort
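Alternatively, the port can be extracted directly with jsonpath, which is handier in scripts (assuming the service is named micro as above):

```shell
# Print only the allocated NodePort (by default a number in the 30000-32767 range)
kubectl get service micro -o jsonpath='{.spec.ports[0].nodePort}'
```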

Scale service to 3

$ kubectl scale --replicas=3 deployment/micro

deployment "micro" scaled

$ kubectl get deployments
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
micro         3         3         3            3           8m

Explore the load balancing

Now you have three pods running the micro service. Access the service in your browser at the following URL, replacing NODE_PORT with the port you found above (use a node's IP instead of localhost if your cluster is not running locally):

http://localhost:NODE_PORT

Refresh the page a few times. You will see that different pods serve your requests, because a different pod IP is returned each time.
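Instead of a browser, you can script the check with curl. The port value below is a placeholder for the NodePort found above, and the loop assumes the micro image replies with the serving pod's IP, as described in this post:

```shell
# Placeholder: substitute the NodePort reported by 'kubectl describe service micro'
NODE_PORT=30080

# Hit the service repeatedly; responses should rotate across the 3 pod IPs
for i in 1 2 3 4 5 6; do
  curl -s http://localhost:$NODE_PORT
done
```

Seeing all three pod IPs may take more than three requests, since kube-proxy balances per connection rather than strictly round-robin.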

Another View: Your Data Center’s Degree of Cloud

Here is another view that I developed to describe a customer's level of cloud adoption. I use this view because a public cloud is often regarded as just an "outsourced data center"; however, there are many steps between those two extremes. Sometimes it's easier to approach cloud computing from the classical perspective of virtualization.

Anything you would like to add?

[Screenshot: table of data center cloud-adoption levels]

I recommend comparing what the Amazon, Google, or Oracle public clouds offer, based on the table above, with your on-premises data center.

12 Public Cloud Benefits and Features You Should Know

Over the last few years I have spent quite a bit of time talking about public clouds. When I published my cloud computing book, public clouds were still mostly considered hype. Availability, security, persistence of data, and much more were questioned.

Today only a few IT professionals are stuck in this old-school thinking. The major public clouds are a superset of what classical data centers provide.

What features would you check for when looking at a potential cloud provider? Does your data center offer every feature and service listed below?

  1. All IT in the cloud is software. There is an API for everything. The whole data center is a set of APIs. This includes load balancers, servers, storage, databases, application servers, API gateways, firewalls, etc.
  2. Short term capacity is very cheap.
  3. Since capacity is cheap, typically you don’t update or redeploy in the cloud, instead you spin up new immutable servers.
  4. Changing your hardware costs nothing. If you find out or assume that your application will run better on high-CPU instances instead of high-memory instances you can simply swap.
  5. Availability comes with no extra cost. You can place two instances in two fully redundant data centers for the same cost as placing two instances in the same data center.
  6. Parallelism also comes at no extra cost. Using 1,000 instances for one hour costs as much as using one instance for 1,000 hours. You have a massively parallel supercomputer at your fingertips.
  7. You save time on capacity planning, since capacity is available on demand.
  8. Capacity planning still makes sense for predicting future costs.
  9. Procurement happens within seconds or minutes.
  10. You don’t pay for unused resources. Scaling down reduces your costs.
  11. You can put IT resources close to the customer location where they are needed since the public cloud will be globally available.
  12. The cost of cloud resources has historically dropped by around 30% every year, so long-running projects with constant resource usage cost less every year.
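The parallelism claim in item 6 is just linear pricing arithmetic, which a quick sketch makes concrete. The hourly rate below is a made-up illustrative number, not a real cloud price:

```shell
RATE=10  # hypothetical cents per instance-hour (illustrative only)

# 1 instance for 1,000 hours vs. 1,000 instances for 1 hour
serial=$((1 * 1000 * RATE))
parallel=$((1000 * 1 * RATE))

echo "serial=$serial parallel=$parallel"  # prints: serial=10000 parallel=10000
```

Both jobs consume 1,000 instance-hours, so under pure per-hour pricing they cost exactly the same; the parallel run just finishes 1,000 times sooner.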

You care to disagree?