What is Kubernetes All About?-A Brief Intro to Kubernetes

Recap: Part 1 – The Era of Containerization

Part 2 – A Brief Intro to Kubernetes

The top trending container orchestration infrastructure.

The name Kubernetes originates from Greek, meaning helmsman or pilot, and is the root of governor and cybernetic. K8s is an abbreviation derived by replacing the 8 letters “ubernete” with “8”. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Originally developed by Google, Kubernetes simply aimed to manage distributed container instances across a cluster of nodes. As of April 2018, Kubernetes is at version 1.10+, supporting a variety of deployment patterns, automations, container network interfaces (CNI), and storage integrations.

Before introducing powerful tools and utilities provided by Kubernetes, it is important for us to understand its fundamental infrastructure. Across a Kubernetes cluster, there are basically two types of components: K8s Master Components and K8s Node Components.

【Figure 1】Master Server & Components

【Figure 2】Node Server & Components

Masters are in charge of the API service, cluster resource scheduling, controller management, and etcd storage. Nodes are the actual workers, where Pods exist and run. Looking at the two types of components, consistency and high availability matter most for Masters, while Nodes demand relatively more attention to computing resources. I highly recommend reading Julia Evans’ hand-drawn cartoons on how Masters and Nodes work together in the system.

Now, let’s talk about what K8s brings to the table.

As mentioned at the end of Part 1, containers can be a charm to execute but a pain to manage in the long run. Well, Kubernetes takes care of it all! It not only automates your container deployments but also fully orchestrates highly available, distributed, and load-balanced applications.

Several perks with Kubernetes are as follows:

  1. Cluster Scalability
  2. High Availability for Management
  3. Wide Variety of Storage Class Supported
  4. Multi-tenancy
  5. Custom Resource Definitions
  6. Rolling Updates
  7. Service Scalability
  8. Service Discovery and Load Balancing
  9. Service Self-healing
  10. Network Policies (with specific CNI Plugins)
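To make a few of these perks concrete, here is a minimal manifest sketch: a Deployment that demonstrates service scalability (replicas), rolling updates, and self-healing, paired with a Service for discovery and load balancing. All names (`demo-app`, the `nginx` image) are illustrative, not from any particular production setup — substitute your own containerized app.

```yaml
# demo-app.yaml -- illustrative sketch; names and image are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3                  # Service Scalability: K8s keeps 3 Pods running
  strategy:
    type: RollingUpdate        # Rolling Updates: Pods are replaced gradually
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: web
        image: nginx:1.13      # any containerized application image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app               # Service Discovery: resolvable as "demo-app" in-cluster
spec:
  selector:
    app: demo-app              # Load Balancing: traffic spread across matching Pods
  ports:
  - port: 80
```

Applied with `kubectl apply -f demo-app.yaml`, the Deployment controller continuously reconciles toward 3 replicas — if a Pod or its Node dies, a replacement is scheduled automatically (self-healing) — and `kubectl scale deployment demo-app --replicas=5` scales the service on the fly.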

It is not surprising that an open-source container orchestration project with such powerful functionality has become extremely popular among tech communities. And it doesn’t stop there. Besides the open-source community edition of Kubernetes, several enterprise-supported solutions are available; paid, of course. Some examples are below.

  • Public Cloud Solutions:
      • Google Kubernetes Engine (GKE)
      • Amazon Elastic Container Service for Kubernetes (EKS)
      • Azure Kubernetes Service (AKS)
  • Cloud Infrastructure Solutions:
      • SUSE Container as a Service (CaaS) Platform
      • Pivotal Container Service (PKS)
      • IBM Cloud Private (ICP)
  • Cloud Platform Solutions:
      • Rancher
      • Red Hat OpenShift

All of the above are production-ready solutions, as is plain community-edition Kubernetes. Though we won’t dive into any of the enterprise solutions this time, inwinSTACK has worked with many of them in practice, and I’m sure we’ll have a chance to talk about each use case shortly.

In Part 3, we will discuss how operations and development teams can benefit from Kubernetes on their premises, with a few scenarios I have encountered in practice, either myself or with our clients.

Read More:
Part 3 – Kubernetes on Ops’ Perspectives
Part 4 – Deploy Your Own Kubernetes Cluster!

by 陳逸凡, Solutions Architect at inwinSTACK (迎棧科技)
