Services in Google Kubernetes Engine


Google Kubernetes Engine (GKE) is a cloud-based, managed Kubernetes solution. GKE works just like Kubernetes but comes with the advantages of the cloud, such as high availability, scalability, and affordability. Just create a cluster of the type of your choice and deploy your containers. Here's how.


Types of Clusters & Modes

  • Autopilot Mode: This is the newer, recommended mode for creating clusters. You only define the location and deploy your workloads; Google takes care of the rest, from node configuration to cluster security.
  • Standard Mode: This mode gives you room to decide the configuration yourself and deploy your application while managing the underlying infrastructure as per your needs. (A command-line sketch of both modes follows this list.)
  • Private Cluster: Use this when you don't want to expose your nodes, pods, or deployments to external traffic.
  • Public Cluster: This type of cluster is created by default.
  • Zonal Cluster: Zonal clusters come in two flavours, single-zone and multi-zonal. A single-zone cluster runs its control plane (master) and worker nodes in just one zone, whereas a multi-zonal cluster runs a single replica of the control plane in one zone with worker nodes spread across multiple zones.
  • Regional Cluster: Regional clusters run multiple replicas of the control plane, as well as worker nodes, across multiple zones within a region.
  • Alpha Cluster: These clusters have all Kubernetes alpha APIs enabled. Alpha clusters are not supported for production workloads, cannot be upgraded, and expire within 30 days.
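
As a rough sketch, both modes can also be created from the command line; the cluster names, region, zone, and machine type below are placeholders, not values from this tutorial:

    # Autopilot cluster: Google manages nodes, scaling, and security (regional by design).
    gcloud container clusters create-auto demo-autopilot --region us-central1

    # Standard zonal cluster: you choose the machine type and node count yourself.
    gcloud container clusters create demo-standard \
        --zone us-central1-a \
        --machine-type e2-medium \
        --num-nodes 3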

What are pods?

Pods are the smallest, most basic deployable objects in Kubernetes. A pod represents a single instance of a running process in your cluster. A pod generally runs a single container, but it can run multiple containers.
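
For illustration only, a minimal single-container pod manifest might look like the sketch below; the pod name, label, and nginx image are placeholders rather than anything used later in this article:

    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod        # placeholder name
      labels:
        app: demo           # labels are discussed in the services section below
    spec:
      containers:
      - name: web
        image: nginx        # any container image that listens on port 80
        ports:
        - containerPort: 80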

What are nodes?

Nodes are machines, whether virtual or physical. A node is the computational resource on which pods are scheduled. In GKE you create node pools, where each pool contains nodes of a similar kind.
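
As a sketch, a node pool with a uniform machine type can be added to an existing cluster with a command along these lines (the pool name and machine type are placeholders):

    gcloud container node-pools create demo-pool \
        --cluster <Name-of-cluster> \
        --zone us-central1-a \
        --machine-type e2-medium \
        --num-nodes 2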

Each node, regardless of machine family or type, runs at least these two components:

  1. The kubelet, which is responsible for communication between the Kubernetes control plane and the node.
  2. A container runtime, such as containerd or the Docker daemon, which is responsible for pulling the container image from a registry, unpacking the container, and running the application.
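
Both components are reported per node. For example, the wide output of kubectl get nodes includes the kubelet version and the container runtime for each node:

    # VERSION shows the kubelet version; CONTAINER-RUNTIME shows the runtime in use.
    kubectl get nodes -o wide

    # More detail about a single node, including its system info and allocated resources.
    kubectl describe node <node-name>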

Why services?

Kubernetes assigns private, ephemeral IP addresses to pods as soon as they are created. A pod can shut down or go under maintenance due to multiple factors, so you cannot hand out a pod's IP address to end users, and you cannot hard-code pod IP addresses in dependent services either, because the address is not stable: it may stay the same or it may change.

To solve this IP address issue, services were introduced: a service provides a static IP address for end users. It adds another layer of abstraction to the Kubernetes model; pods are still used under the hood, but now you communicate with the service first, which in turn forwards requests to the pods it has been assigned through labels and selectors.

As soon as you create a service and attach pods to it, two things are created:

  1. IP Address: This is the static IP address that will be used to access the application.
  2. DNS Record: Services are assigned a DNS A or AAAA record, depending on the IP family of the service, for a name of the form <service-name>.<namespace>.svc.cluster.local. This resolves to the cluster IP of the service (see the lookup sketch below).
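
A quick way to check both, once a service exists, is a throwaway pod doing a DNS lookup plus a look at the endpoints behind the service; the service and namespace names below are placeholders:

    # Resolve the service name from inside the cluster (busybox is used here only for nslookup).
    kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 \
      -- nslookup <service-name>.<namespace>.svc.cluster.local

    # List the pod IP addresses currently backing the service.
    kubectl get endpoints <service-name>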

In general, there are five types of services in Kubernetes:

  • ClusterIP
  • NodePort
  • LoadBalancer
  • ExternalName
  • Headless

How will the service know which pod to contact?

The answer is selectors and labels.

Labels are key/value pairs that are attached to objects, such as pods. Labels are intended to be used to identify attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to organise and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key/value labels defined. Each Key must be unique for a given object.

Labels do not provide uniqueness; in general, many objects can carry the same labels. Label selectors are the core grouping primitive in Kubernetes. Users use them to select a set of objects.
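
For example, once pods carry a label such as company=cloudinstitute (the label used in the tutorial below), a selector query returns exactly that subset; the second command is only an illustration of adding a label after creation:

    # List only the pods whose labels match the selector.
    kubectl get pods -l company=cloudinstitute

    # Labels can also be added or changed at any time after an object is created.
    kubectl label pod <pod-name> tier=frontend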

Here is how services flow:

  1. Create pods by defining the labels.
  2. Create a service and configure the selector to match labels mentioned in the pods.
  3. While the service is being created, it looks for the pods that match the required key-value pairs.
  4. As soon as the service is created, it gets a static IP address.
  5. Any hit on the specific IP address leads traffic to any of the attached pods.

Service Types

Let’s discuss each service type first and then implement it in Google Cloud.

  • ClusterIP: This is the default service type. It exposes the service on a cluster-internal IP, making the service reachable only from within the cluster. It provides a load-balanced IP address: traffic sent to that IP address is forwarded to one or more pods that match the label selector.
  • NodePort: This exposes the service on each node's IP at a static port. A ClusterIP service, to which the NodePort service routes, is created automatically, so the service can also be reached from outside the cluster. It is one of the most common ways of exposing an application to external traffic. The service is accessible using the IP address of any node along with the nodePort value. If you forget to supply the nodePort value, Kubernetes assigns one itself, and you can check it by running kubectl get service -o yaml.

    We have three ports in this type of service (see the ports sketch after this list):
    • Port: It exposes the Kubernetes service on the specified port within the cluster. Other pods within the cluster communicate with the service on this port.
    • TargetPort: It is the port to which the service forwards requests, i.e. the port your pod is listening on. The application inside the container must listen on this port.
    • NodePort: It exposes the service outside the cluster on every node's IP address at this port. If you don't specify it, Kubernetes assigns a port from the 30000-32767 range.
  • LoadBalancer: This service type was specifically designed for cloud workloads; it provisions an external load balancer (in GKE, a Google Cloud load balancer) and exposes the service on an external IP address. To know more about this service type, head over to the Google Cloud documentation.
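
To make the three port fields concrete, here is a hedged sketch of a NodePort service's ports section; the names and the nodePort value are illustrative only (omit nodePort to let Kubernetes pick one from the 30000-32767 range):

    apiVersion: v1
    kind: Service
    metadata:
      name: example-nodeport    # illustrative name only
    spec:
      type: NodePort
      selector:
        app: demo               # must match the labels on your pods
      ports:
      - port: 80                # port other pods use inside the cluster
        targetPort: 80          # port the container actually listens on
        nodePort: 30080         # port opened on every node's IP (optional)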

 

How to Implement Each GKE Service Type

  1. Login to your Google Cloud account.
  2. Search Kubernetes Engine in the search panel and select the Kubernetes Engine service.
  3. Now we will create a cluster. Choose Standard mode for this tutorial.
  4. Enter the desired configuration, such as the name, number of nodes, boot disk size, and machine type, as per your requirements, and create the cluster.
  5. Creating a cluster may take around 5-6 minutes. After it is created successfully, you will see a green checkmark.
  6. After the creation of the cluster we need to connect to the cluster from the shell. Click on the Cloud Shell icon from the top window.
  7. Paste the command to connect to the kube-apiserver.
    gcloud container clusters get-credentials <Name-of-cluster> --zone us-central1-a --project <Project-ID>
  8. Create a directory using the mkdir gke-services command where we will be storing our YAML files. Switch to the created directory using the cd gke-services command.
  9. As we will be using the kubectl command for most of the steps, it’s always beneficial to create an alias using the below command.
    alias k=kubectl
  10. It's time to create a deployment using the wordpress image with 2 replicas. Create a file named ci-deployment.yaml using the nano or vim editor. Paste the code below and save it.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ci-deployment
      labels:
        company: cloudinstitute
    spec:
      replicas: 2
      selector:
        matchLabels:
          company: cloudinstitute
      template:
        metadata:
          labels:
            company: cloudinstitute
        spec:
          containers:
          - name: wordpress
            image: wordpress
            ports:
            - containerPort: 80
  11. We need to create a deployment from the above file by firing the k apply -f ci-deployment.yaml command.
  12. Wait for a few seconds and run the k get deploy command to list the deployments and confirm that both replicas are ready.
  13. Let’s get the list of the pods by using the k get pods command.
  14. We have successfully created a deployment; it's time to attach a service to it.
  15. Create a file for the ClusterIP service named ci-cluster-ip-service.yaml using any editor of your choice; this is where our YAML code will be stored.
  16. Paste the code for the service. Note that the selector matches the labels defined in our deployment.
    apiVersion: v1
    kind: Service
    metadata:
      name: ci-cluster-ip-service
    spec:
      selector:
        company: cloudinstitute
      type: ClusterIP
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80

  17. Create the service by entering the k apply -f ci-cluster-ip-service.yaml command.
  18. Get the list of services by firing the k get svc command.
  19. We have successfully created a ClusterIP service. Now let's create a NodePort service; first delete the ClusterIP service by running k delete svc <Name-of-Cluster-IP-Service>.
  20. Enter the k get svc command to confirm the deletion of the service.
  21. Now create one more file with the name ci-nodeport-service.yaml which will store the code of our NodePort Service.
  22. Paste the code and fire the k apply -f ci-nodeport-service.yaml command to get the service created.
    apiVersion: v1
    kind: Service
    metadata:
      name: ci-nodeport-service
    spec:
      selector:
        company: cloudinstitute
      type: NodePort
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80

  23. Enter the same k get svc command again and you will see the list of services.
  24. Let's create the third type of service, i.e. LoadBalancer, but before that delete the NodePort service by entering k delete svc <Name-of-NodePort-Service>.
  25. Enter the k get svc command to confirm the deletion of the service.
  26. We can create the LoadBalancer service using a YAML file, but an easier way is to fire the following simple command:
    k expose deployment <Name-of-Deployment> --type=LoadBalancer --port 8080 --target-port 80

  27. Wait for a few minutes and enter the k get svc command to get the details of the service; once the load balancer is provisioned, its external IP appears in the EXTERNAL-IP column (a quick verification sketch follows these steps).
  28. If you want to create the service by using a YAML file, you can perform similar steps with the below code.
    apiVersion: v1
    kind: Service
    metadata:
      name: ci-loadbalancer-service
    spec:
      selector:
        company: cloudinstitute
      ports:
      - port: 8080
        targetPort: 80
      type: LoadBalancer
  29. You have successfully created all 3 types of services.
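
If you want to verify the LoadBalancer service end to end, a simple HTTP request against its external IP and port should return a response from WordPress; the address below is a placeholder for the value in the EXTERNAL-IP column:

    # Replace the placeholder with the EXTERNAL-IP reported by `k get svc`.
    curl -I http://<EXTERNAL-IP>:8080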

NOTE: Make sure to delete the created resources, such as the GKE cluster and the directories, to prevent additional costs.
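
As a rough cleanup sketch, assuming the names and zone used in this tutorial (adjust them to your setup):

    # Delete the Kubernetes objects created above.
    k delete svc <Name-of-LoadBalancer-Service>
    k delete deploy ci-deployment

    # Delete the GKE cluster itself to stop incurring node costs.
    gcloud container clusters delete <Name-of-cluster> --zone us-central1-a

    # Remove the working directory from Cloud Shell.
    cd .. && rm -rf gke-services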
