
In this tutorial, we are going to learn how to deploy a Kubernetes cluster and containers on Google Cloud with ease.
Please perform the steps below:
Confirm that the needed APIs are enabled
1. In the GCP Console, on the Navigation menu, click APIs & Services.
2. Scroll down the list of enabled APIs and confirm that both of these APIs are enabled:
- Kubernetes Engine API
- Container Registry API
If either API is missing, click Enable APIs and Services at the top, search for the APIs above by name, and enable each one for your current project.
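If you prefer the command line, you can also enable both APIs with gcloud from Cloud Shell (which you will open in step 3); the command below assumes the standard service identifiers for these two APIs:
gcloud services enable container.googleapis.com containerregistry.googleapis.com
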
Start a Kubernetes Engine cluster
3. In the GCP Console, on the top-right toolbar, click the Open Cloud Shell button, then click Continue. This opens a Cloud Shell window.
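Optionally, you can confirm which project Cloud Shell is working against before continuing:
gcloud config list project
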
4. Now set up your zone. Run a command like the one below:
export MY_ZONE=us-central1-a

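If you would rather not pass --zone to every gcloud command, you can also make this zone the default for your configuration:
gcloud config set compute/zone $MY_ZONE
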
5. Start a Kubernetes cluster managed by Kubernetes Engine. Name the cluster webfrontend and configure it to run 2 nodes:
gcloud container clusters create webfrontend --zone $MY_ZONE --num-nodes 2

It takes several minutes to create a cluster as Kubernetes Engine provisions virtual machines for you.
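Once the command finishes, you can verify that the cluster exists and shows a RUNNING status:
gcloud container clusters list
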
6. After the cluster is created, check your installed version of Kubernetes using the kubectl version command:
kubectl version

7. View your running nodes in the GCP Console. On the Navigation menu, click Compute Engine > VM Instances. You will see 2 VM instances running.

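The same nodes can also be listed from Cloud Shell if you prefer the command line:
kubectl get nodes
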
Your Kubernetes cluster is now ready for use.
Run and deploy a container
Now that we have configured a Kubernetes cluster, let's deploy an Nginx container.
8. From your Cloud Shell prompt, launch a single instance of the Nginx container:
kubectl create deploy nginx --image=nginx:1.17.10

In Kubernetes, all containers run in pods. This use of the kubectl create command caused Kubernetes to create a deployment consisting of a single pod containing the Nginx container.
A Kubernetes deployment keeps a given number of pods up and running even in the event of failures among the nodes on which they run.
In this command, you launched the default number of pods, which is 1.
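You can also look at the deployment object itself and confirm that its single replica is ready:
kubectl get deployments
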
9. View the pod running the Nginx container.
kubectl get pods

10. Expose the Nginx container to the Internet:
kubectl expose deployment nginx --port 80 --type LoadBalancer

Kubernetes created a service and an external load balancer with a public IP address attached to it.
The IP address remains the same for the life of the service. Any network traffic to that public IP address is routed to pods behind the service: in this case, the Nginx pod.
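To inspect the service Kubernetes just created, including its port mapping and (once assigned) the load balancer address, you can run:
kubectl describe service nginx
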
11. View the new service:
kubectl get services

You can use the displayed external IP address to test and contact the Nginx container remotely. It may take a few seconds before the EXTERNAL-IP field is populated for your service.
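For example, you can test it from Cloud Shell with curl; replace EXTERNAL_IP below with the address reported by kubectl get services (it is only a placeholder here):
curl http://EXTERNAL_IP
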
12. Open a new web browser tab and paste your cluster's external IP address into the address bar. The default home page of the Nginx server is displayed.

13. Scale up the number of pods running on your service:
kubectl scale deployment nginx --replicas 3

Scaling up a deployment is useful when you want to increase available resources for an application that is becoming more popular.
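If you want to wait until all three replicas are available before checking, one option is to watch the rollout:
kubectl rollout status deployment nginx
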
14. Confirm that Kubernetes has updated the number of pods:
kubectl get pods

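If other workloads are running in the cluster, you can narrow the list to this deployment's pods; kubectl create deployment labels them app=nginx by default:
kubectl get pods -l app=nginx
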
15. Confirm that your external IP address has not changed:
kubectl get services

16. Return to the web browser tab in which you viewed your cluster’s external IP address. Refresh the page to confirm that the Nginx web server is still responding.

This concludes our tutorial on configuring a Kubernetes cluster in GCP and deploying a container.