What is Kubernetes All About? – Kubernetes from the Ops' Perspective


Previously:
Part 1 – The Era of Containerization
Part 2 – A Brief Intro to Kubernetes

Part 3.1 – Kubernetes from the Ops' Perspective

In our previous post, we listed quite a few advantages of using Kubernetes, including automated container deployment and built-in load balancing. This list alone should already appeal to any cloud operator:

  1. Cluster Scalability
  2. High Availability for Management
  3. Wide Variety of Supported Storage Classes
  4. Multi-tenancy
  5. Custom Resource Definitions
  6. Rolling Updates
  7. Service Scalability
  8. Service Discovery and Load Balancing
  9. Service Self-healing
  10. Network Policies (with specific CNI Plugins)

Maintaining a cloud infrastructure isn't easy at all. While services may require high availability, they may also need to scale up for certain events and scale back down afterwards. A classic example comes from the gaming industry:

Mainstream mobile games usually run in-game events where players are offered valuable gifts or discounts. It's not hard to imagine large numbers of players flooding online when such an event kicks off. Traditionally, our operators would have to launch new service instances by hand as requests piled up, but with Kubernetes' auto-scaling feature, the Horizontal Pod Autoscaler (HPA), life is much, much easier. When HPA is set up correctly, it not only takes the tedious scale-up work off operators' hands but also optimizes cloud resources by scaling back down whenever the number of requests cools off. A minimal sketch of such an HPA is shown below.
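
As a rough sketch only: the manifest below defines an HPA that scales a Deployment between 2 and 20 replicas to keep average CPU utilization around 60%. The Deployment name game-api is purely hypothetical, and depending on your cluster version the API group may be autoscaling/v2 or one of its beta predecessors.

```yaml
# Hypothetical HPA for the event-traffic scenario described above:
# scales the "game-api" Deployment between 2 and 20 replicas,
# targeting ~60% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: game-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: game-api          # placeholder Deployment serving the game traffic
  minReplicas: 2            # baseline capacity outside of events
  maxReplicas: 20           # upper bound during event spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```

Once applied, the HPA both adds replicas as the event traffic ramps up and removes them again when the load cools down, which is exactly the scale-up/scale-back cycle described above.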


Part 3.2 – Kubernetes from the Devs' Perspective

As we all understand by now, containers can be deployed quickly and easily, which is why we conclude they are the best way to ship microservices. This fits neatly with the concept of DevOps (well, at least a great part of it). From there, developers took advantage and built CI/CD pipelines like never before; and, of course, teams practicing extreme programming began deploying their containers on Kubernetes as well.

Shipping containers through a fast, automated pipeline is intuitive. However, it can be difficult to manage a pile of testing and in-development containers with multiple versions in flight at the same time. This is where Kubernetes steps in as the continuous delivery destination. Kubernetes' Service and Ingress resources provide simple yet secure access to every running application, complete with static or dynamic domain name routing. Working with operators, developers can maintain an optimal testing environment to polish their products; a sketch of such a Service and Ingress pair follows.
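
As an illustration only, the manifests below expose a hypothetical in-development build of a service through a Service and an Ingress. The names, labels, and hostname are placeholders, and the Ingress assumes an ingress controller (such as ingress-nginx) is already running in the cluster.

```yaml
# Placeholder Service selecting pods of the in-development build.
apiVersion: v1
kind: Service
metadata:
  name: game-api-dev
spec:
  selector:
    app: game-api
    track: dev               # label on the pods running the dev version
  ports:
    - port: 80               # port exposed inside the cluster
      targetPort: 8080       # container port of the application
---
# Placeholder Ingress routing a test hostname to the Service above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: game-api-dev
spec:
  ingressClassName: nginx    # assumes an nginx ingress controller
  rules:
    - host: dev.game.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: game-api-dev
                port:
                  number: 80
```

With a pair like this per version under test, developers get a stable hostname for each build while operators keep control over how, and to whom, those builds are exposed.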

Read More:
Part 4 – Deploy Your Own Kubernetes Cluster!

by 陳逸凡, Solutions Architect at 迎棧科技
