Knative Fundamentals and Basic Concepts (Part 2)


Eventing

Eventing provides building blocks for producing and consuming events, implemented to adhere to the CloudEvents specification. One objective of this component is to abstract events away so that developers do not have to focus on backend details; they no longer need to worry about which message queue system to use.
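As a concrete illustration, here is a minimal sketch of delivering a CloudEvents-formatted event over HTTP. The attribute names follow the CloudEvents 1.0 specification; the endpoint URL, event type, source, and payload are hypothetical values for illustration only:

# Hypothetical example: POST a CloudEvents 1.0 event (structured JSON mode)
# to a channel endpoint; the URL and values are illustrative only
$ curl -X POST \
  -H 'Content-Type: application/cloudevents+json' \
  --data '{
      "specversion": "1.0",
      "type": "dev.knative.example.event",
      "source": "https://github.com/example/repo",
      "id": "a234-1234-1234",
      "data": {"msg": "Hello, World!"}
    }' \
  http://my-channel.default.svc.cluster.local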

Knative Eventing also utilizes Kubernetes Custom Resource Definitions (CRDs) to define a set of new resources, which are used for producing and consuming events. These resources mainly consist of the following components:

Channels

  • Channel. A Publish/Subscribe (Pub/Sub) topic to which publishers send messages; channels can therefore be viewed as the catalog of places to get or put events.
  • Bus. The backing provider for channels; this is the messaging platform that carries the events, such as Google Cloud Pub/Sub, Apache Kafka, or NATS.
apiVersion: channels.knative.dev/v1alpha1
kind: Bus
metadata:
  name: kafka
spec:
  dispatcher:
    args:
    - -logtostderr
    - -stderrthreshold
    - INFO
    env:
    - name: KAFKA_BROKERS
      valueFrom:
        configMapKeyRef:
          key: KAFKA_BROKERS
          name: kafka-bus-config
    image: gcr.io/knative-releases/github.com/knative/eventing/pkg/buses/kafka/dispatcher@sha256:d925663bb965001287b042c8d3ebdf8b4d3f0e7aa2a9e1528ed39dc78978bcdb
    name: dispatcher
  • Subscription. Specifies the Knative Service (application or function) to which messages from a channel are delivered; it is the entry point into the application or function. A hedged sketch of a Channel and Subscription follows this list.
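For reference, here is a hedged sketch of a Channel backed by the Kafka bus above, together with a Subscription pointing at a service. It uses the same v1alpha1 API group as the Bus example, but field names such as clusterBus and subscriber changed between early Knative releases, so treat this as illustrative only:

# Hypothetical Channel/Subscription sketch; field names varied across early releases
apiVersion: channels.knative.dev/v1alpha1
kind: Channel
metadata:
  name: github-events
spec:
  clusterBus: kafka
---
apiVersion: channels.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: github-events-sub
spec:
  channel: github-events
  subscriber: my-service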

Feeds: An abstraction layer for provisioning external data sources and routing their events into the cluster. A Feed attaches an individual event type from an event source to an action.

  • EventSource or ClusterEventSource is a Kubernetes resource that describes an external system which may generate events of one or more EventTypes.
apiVersion: feeds.knative.dev/v1alpha1
kind: EventSource
metadata:
  name: github
  namespace: default
spec:
  image: gcr.io/knative-releases/github.com/knative/eventing/pkg/sources/github@sha256:a5f6733797d934cd4ba83cf529f02ee83e42fa06fd0e7a9d868dd684056f5db0
  source: github
  type: github
  • EventType and ClusterEventType are also Kubernetes resources, used to describe the different event types that an EventSource supports.
apiVersion: feeds.knative.dev/v1alpha1
kind: EventType
metadata:
  name: pullrequest
  namespace: default
spec:
  description: notifications on pullrequests
  eventSource: github
  • Flows: A Flow binds an event to a route or service, and lets you choose the channel or bus through which the event is routed.
apiVersion: flows.knative.dev/v1alpha1
kind: Flow
metadata:
  name: k8s-event-flow
  namespace: default
spec:
  serviceAccountName: feed-sa
  trigger:
    eventType: dev.knative.k8s.event
    resource: k8sevents/dev.knative.k8s.event
    service: k8sevents
    parameters:
      namespace: default
  action:
    target:
      kind: Route
      apiVersion: serving.knative.dev/v1alpha1
      name: read-k8s-events

Next, you are going to use Minikube to try out the features that Knative delivers.

Minikube Fundamentals

This section introduces how to use Minikube to deploy Knative.

Prerequisites

  • On your testing host, install the Minikube binary. See the Minikube Releases page for downloads.
  • On your testing host, install Oracle VirtualBox to provide virtual machines for Minikube.
  • Instead of VirtualBox, you may choose any other supported virtualization tool, such as KVM or xhyve.
  • Download kubectl, the Kubernetes CLI tool. You can verify the tools as shown after this list.
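Once everything is installed, you can quickly confirm the tools are on your PATH (the version numbers on your host will differ):

# Sanity-check the installed tools
$ minikube version
$ kubectl version --client
$ VBoxManage --version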

Start Minikube

First of all, use Minikube to start a VM and deploy a single-node Kubernetes cluster. Because Knative consumes a considerable amount of system services and resources, you had better allocate generous VM resources beforehand.

$ minikube start --memory=8192 --cpus=4 \
  --kubernetes-version=v1.10.5 \
  --extra-config=apiserver.admission-control="LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook"

After it finishes, check the node status with kubectl:

$ kubectl get no
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    1m        v1.10.5

Deploy Knative

Since Knative is built on top of Istio, you first need to deploy Istio with kubectl:

$ curl -L https://storage.googleapis.com/knative-releases/serving/latest/istio.yaml \
  | sed 's/LoadBalancer/NodePort/' \
  | kubectl apply -f -

# Configure the namespace for sidecar injection
$ kubectl label namespace default istio-injection=enabled

It takes a little while to download the images and start Istio. Once finished, the pods should look like this:

$ kubectl -n istio-system get po
NAME                                       READY     STATUS      RESTARTS   AGE
istio-citadel-7bdc7775c7-jn2bw             1/1       Running     0          5m
istio-cleanup-old-ca-msvkn                 0/1       Completed   0          5m
istio-egressgateway-795fc9b47-4nz7j        1/1       Running     0          6m
istio-ingress-84659cf44c-pvqd5             1/1       Running     0          6m
istio-ingressgateway-7d89dbf85f-tgm24      1/1       Running     0          6m
istio-mixer-post-install-lvrjv             0/1       Completed   0          6m
istio-pilot-66f4dd866c-zmbv5               2/2       Running     0          6m
istio-policy-76c8896799-cqmdn              2/2       Running     0          6m
istio-sidecar-injector-645c89bc64-9mdwx    1/1       Running     0          5m
istio-statsd-prom-bridge-949999c4c-qhdgf   1/1       Running     0          6m
istio-telemetry-6554768879-b6vss           2/2       Running     0          6m

Then, deploy the Knative components to the Kubernetes cluster. The official website provides a YAML file, release-lite.yaml, to help establish a lean testing environment. Deploy it directly with kubectl:

$ curl -L https://storage.googleapis.com/knative-releases/serving/latest/release-lite.yaml \
  | sed 's/LoadBalancer/NodePort/' \
  | kubectl apply -f -
  • This deploys a Prometheus-based monitoring system, as well as Knative Serving and Build.
  • Please refer to Knative Install if you want to target a particular Kubernetes platform.

Similarly, downloading and starting the related services takes a little while. Once completed, the result will look as follows:

# Monitoring
$ kubectl -n monitoring get po
NAME                                  READY     STATUS    RESTARTS   AGE
grafana-798cf569ff-m8w9c              1/1       Running   0          4m
kube-state-metrics-77597b45f8-mxhxv   4/4       Running   0          1m
node-exporter-8wbxd                   2/2       Running   0          4m
prometheus-system-0                   1/1       Running   0          4m
prometheus-system-1                   1/1       Running   0          4m

# Knative build
$ kubectl -n knative-build get po
NAME                                READY     STATUS    RESTARTS   AGE
build-controller-5cb4f5cb67-bs94k   1/1       Running   0          6m
build-webhook-6b4c65546b-fzffg      1/1       Running   0          6m

# Knative serving
$ kubectl -n knative-serving get po
NAME                          READY     STATUS    RESTARTS   AGE
activator-869d7d76c5-fngdm    2/2       Running   0          7m
autoscaler-65855c89f6-pmzhr   2/2       Running   0          7m
controller-5fbcf79dfb-q8cb8   1/1       Running   0          7m
webhook-c98c7c654-lpnjj       1/1       Running   0          7m

Now the Knative component installation is complete. The following example demonstrates how Knative works.

Deploy Knative Application

After completing the above steps, you can start to deploy Knative applications or functions. Here you will use a simple HTTP server integrated with Slack to implement sending messages to a channel. Along the way, you will use several resources, including Build, BuildTemplate, and Knative Service. Before starting, clone the example project with git; the following steps use its Kubernetes deployment files:

$ git clone https://github.com/kairen/knative-slack-app
$ cd knative-slack-app

This example utilizes Kaniko to build the application's container image and automatically push it to Docker Hub. To ensure the push to your Docker Hub account succeeds, create a Secret and a Service Account that provide your Docker ID and password to Knative Serving:

$ export DOCKER_ID=$(echo -n "username" | base64)
$ export DOCKER_PWD=$(echo -n "password" | base64)
$ cat deploy/docker-secret.yml | \
  sed "s/BASE64_ENCODED_USERNAME/${DOCKER_ID}/" | \
  sed "s/BASE64_ENCODED_PASSWORD/${DOCKER_PWD}/" | \
  kubectl apply -f -
$ kubectl apply -f deploy/kaniko-sa.yml
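For reference, deploy/docker-secret.yml presumably follows the common Knative Build basic-auth pattern, with the placeholders above substituted into its data fields. A hypothetical sketch (the actual file in the repository may differ):

# Hypothetical sketch of deploy/docker-secret.yml; the real file may differ
apiVersion: v1
kind: Secret
metadata:
  name: docker-secret
  annotations:
    build.knative.dev/docker-0: https://index.docker.io/v1/
type: kubernetes.io/basic-auth
data:
  username: BASE64_ENCODED_USERNAME
  password: BASE64_ENCODED_PASSWORD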

Then, deploy a Secret that stores Slack information, such as the token and channel ID, for the Slack app:

$ export SLACK_TOKEN=$(echo -n "slack-token" | base64)
$ export SLACK_CHANNEL_ID=$(echo -n "slack-channel-id" | base64)
$ cat deploy/slack-secret.yml | \
  sed "s/BASE64_ENCODED_SLACK_TOKEN/${SLACK_TOKEN}/" | \
  sed "s/BASE64_ENCODED_SLACK_CHANNEL_ID/${SLACK_CHANNEL_ID}/" | \
  kubectl apply -f -
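Again, a hypothetical sketch of deploy/slack-secret.yml; the key names are assumptions, while the placeholders match the sed substitutions above:

# Hypothetical sketch of deploy/slack-secret.yml; key names are assumptions
apiVersion: v1
kind: Secret
metadata:
  name: slack-secret
type: Opaque
data:
  token: BASE64_ENCODED_SLACK_TOKEN
  channel_id: BASE64_ENCODED_SLACK_CHANNEL_ID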

After that, deploy the Kaniko BuildTemplate so the Knative Service can use it:

$ kubectl apply -f deploy/kaniko-buildtemplate.yml
$ kubectl get buildtemplate
NAME      CREATED AT
kaniko    7s
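The template itself is essentially the standard Kaniko BuildTemplate published in the knative/build-templates repository; a sketch of its shape (the repository's copy may differ in details):

# Sketch of the standard Kaniko BuildTemplate; details may differ
apiVersion: build.knative.dev/v1alpha1
kind: BuildTemplate
metadata:
  name: kaniko
spec:
  parameters:
  - name: IMAGE
    description: The name of the image to push
  steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor
    args:
    - --dockerfile=/workspace/Dockerfile
    - --destination=${IMAGE}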

After finishing, simply deploy the Knative Service and an Istio ServiceEntry, which expose the application and allow its Pods to access the Slack HTTPS API:

$ kubectl apply -f deploy/slack-https-sn.yml
$ kubectl apply -f deploy/slack-app-service.yml
$ kubectl get po -w
NAME                                          READY     STATUS            RESTARTS   AGE
slack-app-00001-9htqm                         0/1       Init:2/3          0          8s
slack-app-00001-9htqm                         0/1       Init:2/3          0          8s
slack-app-00001-9htqm                         0/1       PodInitializing   0          3m
slack-app-00001-9htqm                         0/1       Completed         0          4m
slack-app-00001-deployment-75f7f8dd8c-tskq8   0/3       Pending           0          0s
slack-app-00001-deployment-75f7f8dd8c-tskq8   0/3       Pending           0          0s
slack-app-00001-deployment-75f7f8dd8c-tskq8   0/3       Init:0/1          0          0s
slack-app-00001-deployment-75f7f8dd8c-tskq8   0/3       PodInitializing   0          7s
slack-app-00001-deployment-75f7f8dd8c-tskq8   2/3       Running           0          33s
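The ServiceEntry in deploy/slack-https-sn.yml presumably whitelists outbound HTTPS traffic to Slack; a hypothetical sketch of such an Istio resource (the host list and resource name are assumptions):

# Hypothetical Istio ServiceEntry allowing egress to the Slack API
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: slack-ext
spec:
  hosts:
  - slack.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL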

The first deployment takes longer to execute, since the Knative Build-related images must be downloaded. Once it completes, use the following commands to check the service status:

$ export IP_ADDRESS=$(minikube ip):$(kubectl get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
$ export DOMAIN=$(kubectl get services.serving.knative.dev slack-app -o=jsonpath='{.status.domain}')

# Access the app via cURL using the GET method
$ curl -X GET -H "Host: ${DOMAIN}" ${IP_ADDRESS}
<h1>Hello slack app for Knative!!</h1>

# Send a msg via cURL using the POST method
$ curl -X POST \
  -H 'Content-type: application/json' \
  -H "Host: ${DOMAIN}" \
  --data '{"msg":"Hello, World!"}' \
  ${IP_ADDRESS}
success

If the response is "success", check the Slack channel to see whether the message arrived.

Finally, since Knative Serving is request-driven, it scales its replicas down to zero after receiving no requests for a while, and activates them again when a new request arrives. Knative uses Prometheus to collect monitoring metrics, which lets you observe these changes.
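You can also watch the scale-to-zero behavior directly. This assumes the pods carry the serving.knative.dev/service label, which may vary by Knative release:

# Watch the slack-app pods terminate after the idle window and come back
# when a new request arrives (the label name is an assumption)
$ kubectl get po -l serving.knative.dev/service=slack-app -w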

Use kubectl to obtain Grafana NodePort information:

$ kubectl -n monitoring get svc
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
...
grafana                NodePort   10.96.197.116   <none>        30802:30326/TCP   1h
prometheus-system-np   NodePort   10.99.64.228    <none>        8080:32628/TCP    1h
...

Then open a browser and go to http://minikube_ip:port to see the result.
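Alternatively, Minikube can look up the NodePort and open the service in your browser for you:

# Open the Grafana service directly from Minikube
$ minikube service grafana -n monitoring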

You can also check the HTTP request status there.
