Google Cloud: Configuring an Internal TCP Load Balancer


Google Cloud offers Internal Load Balancing for your TCP/UDP-based traffic. Internal Load Balancing enables you to run and scale your services behind a private load balancing IP address that is accessible only to your internal virtual machine instances.

In this lab, you create two managed instance groups in the same region. Then you configure and test an internal load balancer with the instance groups as the backends, as shown in this network diagram:

Objectives

In this lab, you learn how to perform the following tasks:

  • Create internal traffic and health check firewall rules
  • Create a NAT configuration using Cloud Router
  • Configure two instance templates
  • Create two managed instance groups
  • Configure and test an internal load balancer

Task 1. Configure internal traffic and health check firewall rules

Configure firewall rules to allow internal traffic from sources in the 10.10.0.0/16 range. This rule allows incoming traffic from any client located in either subnet of the my-internal-app network.

Health checks determine which instances of a load balancer can receive new connections. In Google Cloud load balancing, the health check probes to your load-balanced instances come from addresses in the ranges 130.211.0.0/22 and 35.191.0.0/16. Your firewall rules must allow these connections.

Explore the my-internal-app network

The network my-internal-app with subnet-a and subnet-b and firewall rules for RDP, SSH, and ICMP traffic has been configured for you.

  • In the Cloud Console, on the Navigation menu, click VPC network > VPC networks. Notice the my-internal-app network with its subnets: subnet-a and subnet-b.
  • Each Google Cloud project starts with the default network. In addition, the my-internal-app network has been created for you as part of your network diagram.
  • You will create the managed instance groups in subnet-a and subnet-b. Both subnets are in the us-central1 region because an internal load balancer is a regional service. The managed instance groups will be in different zones, making your service immune to zonal failures. (A quick gcloud check of these subnets follows this list.)
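If you prefer the command line, you can confirm the pre-created subnets from Cloud Shell (assuming gcloud is configured for the lab project):

gcloud compute networks subnets list \
    --network=my-internal-app \
    --filter="region:us-central1"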

Create the firewall rule to allow traffic from any sources in the 10.10.0.0/16 range

Create a firewall rule to allow traffic in the 10.10.0.0/16 subnet.

  1. On the Navigation menu, click VPC network > Firewall. Notice the app-allow-icmp and app-allow-ssh-rdp firewall rules.

These firewall rules have been created for you.

2. Click Create Firewall Rule.

3. Specify the following, and leave the remaining settings as their defaults:

4. Click Create.
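For reference, the same rule can be created from Cloud Shell. The rule name and protocol list below are illustrative; use the values from the lab's settings table:

# Illustrative name and protocols; take the real values from the lab table.
gcloud compute firewall-rules create app-allow-internal \
    --network=my-internal-app \
    --source-ranges=10.10.0.0/16 \
    --allow=tcp,udp,icmp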

Create the health check rule

Create a firewall rule to allow health checks.

  1. On the Navigation menu, click VPC network > Firewall.

2. Click Create Firewall Rule.

3. Specify the following, and leave the remaining settings as their defaults:

4. For tcp, specify port 80.

5. Click Create.
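A similar gcloud sketch for this rule (the rule name is again illustrative; the source ranges are the documented health check probe ranges):

# Illustrative name; the probe ranges below are Google's documented ones.
gcloud compute firewall-rules create app-allow-health-check \
    --network=my-internal-app \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --allow=tcp:80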

Task 2. Create a NAT configuration using Cloud Router

The Google Cloud VM backend instances that you set up in Task 3 will not be configured with external IP addresses.

Instead, you will set up the Cloud NAT service so that these VM instances send outbound traffic only through Cloud NAT and receive inbound traffic through the load balancer.

Create the Cloud Router instance

  1. In the Cloud Console, on the Navigation menu, click Network services > Cloud NAT.

2. Click Get started.

3. Specify the following, and leave the remaining settings as their defaults:

4. Click Cloud Router, and select Create new router.

5. For Name, type nat-router-us-central1.

6. Click Create.

7. In Create a NAT gateway, click Create.

Wait until the NAT Gateway Status changes to Running before moving on to the next task.
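The console wizard creates two resources behind the scenes: a Cloud Router and a NAT configuration on it. A rough gcloud equivalent looks like this (the NAT name nat-config is illustrative):

gcloud compute routers create nat-router-us-central1 \
    --network=my-internal-app \
    --region=us-central1

# Illustrative NAT name; maps all subnet ranges through auto-allocated IPs.
gcloud compute routers nats create nat-config \
    --router=nat-router-us-central1 \
    --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges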

Task 3. Configure instance templates and create instance groups

A managed instance group uses an instance template to create a group of identical instances. You will use managed instance groups to create the backends of the internal load balancer.

Configure the instance templates

An instance template is an API resource that you can use to create VM instances and managed instance groups. Instance templates define the machine type, boot disk image, subnet, labels, and other instance properties. Create an instance template for both subnets of the my-internal-app network.

  1. On the Navigation menu, click Compute Engine > Instance templates.

2. Click Create instance template.

3. For Name, type instance-template-1.

4. Under Machine configuration, for Series, select N1.

5. For Machine type, select f1-micro (1 vCPU).

6. Click Management, security, disks, networking, sole tenancy.

7. Click Management.

8. Under Metadata, specify the following:

The startup-script-url metadata key specifies a script that is executed when instances start. This script installs Apache and changes the welcome page to include the client IP and the name, region, and zone of the VM instance.
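The lab's script itself is not reproduced here, but a minimal sketch of a startup script that does something similar might look like this (hypothetical and simplified; the real script also reports the client IP):

#!/bin/bash
# Install Apache and publish a page identifying this backend VM.
apt-get update
apt-get install -y apache2
NAME=$(curl -s -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)
ZONE=$(curl -s -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/zone)
echo "<h1>Served by $NAME in $ZONE</h1>" > /var/www/html/index.html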

9. Click Networking.

10. For Network interfaces, specify the following, and leave the remaining settings as their defaults:

The network tag backend-service ensures that the firewall rule allowing traffic from sources in the 10.10.0.0/16 range and the health check firewall rule apply to these instances.

11. Click Create. Wait for the instance template to be created.

Create another instance template for subnet-b by copying instance-template-1:

12. Select instance-template-1 and click Copy.

13. Click Management, security, disks, networking, sole tenancy.

14. Click Networking.

15. For Network interfaces, select subnet-b as the Subnet.

16. Click Create.
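Both templates can also be created with gcloud. The flags below mirror the console settings described above; STARTUP_SCRIPT_URL is a placeholder for the script URL from the lab's settings table, and the second template's name is illustrative:

gcloud compute instance-templates create instance-template-1 \
    --machine-type=f1-micro \
    --region=us-central1 \
    --network=my-internal-app \
    --subnet=subnet-a \
    --no-address \
    --tags=backend-service \
    --metadata=startup-script-url=STARTUP_SCRIPT_URL

# The second template differs only in its name and subnet:
gcloud compute instance-templates create instance-template-2 \
    --machine-type=f1-micro \
    --region=us-central1 \
    --network=my-internal-app \
    --subnet=subnet-b \
    --no-address \
    --tags=backend-service \
    --metadata=startup-script-url=STARTUP_SCRIPT_URL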

Create the managed instance groups

Create a managed instance group in subnet-a (us-central1-a) and subnet-b (us-central1-b).

  1. On the Navigation menu, click Compute Engine > Instance groups.
  2. Click Create Instance group.
  3. Specify the following, and leave the remaining settings as their defaults:

Managed instance groups offer autoscaling capabilities that allow you to automatically add or remove instances from a managed instance group based on increases or decreases in load. Autoscaling helps your applications gracefully handle increases in traffic and reduces cost when the need for resources is lower. Just define the autoscaling policy, and the autoscaler performs automatic scaling based on the measured load.

4. Click Create.

Repeat the same procedure for instance-group-2 in us-central1-b:

5. Click Create Instance group.

6. Specify the following, and leave the remaining settings as their defaults:

7. Click Create.
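For reference, here is a gcloud sketch of one group plus an autoscaling policy. The group size and autoscaling targets come from the lab's settings table, so treat these values as placeholders:

gcloud compute instance-groups managed create instance-group-1 \
    --template=instance-template-1 \
    --size=1 \
    --zone=us-central1-a

# Placeholder autoscaling values; use the lab's actual policy.
gcloud compute instance-groups managed set-autoscaling instance-group-1 \
    --zone=us-central1-a \
    --min-num-replicas=1 \
    --max-num-replicas=5 \
    --target-cpu-utilization=0.8

Repeat with instance-group-2, the copied template, and us-central1-b.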

Verify the backends

Verify that VM instances are being created in both subnets, and create a utility VM to access the backends' HTTP sites.

  1. On the Navigation menu, click Compute Engine > VM instances. Notice the two instances whose names start with instance-group-1 and instance-group-2.

These instances are in separate zones, and their internal IP addresses are part of the subnet-a and subnet-b CIDR blocks.

2. Click Create Instance.

3. Specify the following, and leave the remaining settings as their defaults:

4. Click Management, security, disks, networking, sole tenancy.

5. Click Networking.

6. For Network interfaces, click the pencil icon to edit.

7. Specify the following, and leave the remaining settings as their defaults:

8. Click Done.

9. Click Create.

Note that the internal IP addresses for the backends are 10.10.20.2 and 10.10.30.2.

If these IP addresses are different, replace them in the two curl commands below.
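An optional Cloud Shell check to look up those addresses:

# List the backend instances with their zones and internal IPs.
gcloud compute instances list \
    --filter="name ~ ^instance-group" \
    --format="table(name, zone, networkInterfaces[0].networkIP)"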

10. For utility-vm, click SSH to launch a terminal and connect. If you see the Connection via Cloud Identity-Aware Proxy Failed popup, click Retry.

11. To verify the welcome page for instance-group-1-xxxx, run the following command:

curl 10.10.20.2

The output should look like this:

12. To verify the welcome page for instance-group-2-xxxx, run the following command:

curl 10.10.30.2

The output should look like this:

This will be useful when verifying that the internal load balancer sends traffic to both backends.

13. Close the SSH terminal to utility-vm.

Task 4. Configure the internal load balancer

Configure the internal load balancer to balance traffic between the two backends (instance-group-1 in us-central1-a and instance-group-2 in us-central1-b), as illustrated in the network diagram:

Start the configuration

  1. In the Cloud Console, on the Navigation menu, click Network Services > Load balancing.
  2. Click Create load balancer.
  3. Under TCP Load Balancing, click Start configuration.

4. For Internet facing or internal only, select Only between my VMs.

Choosing Only between my VMs makes this load balancer internal. This choice requires the backends to be in a single region (us-central1) and does not allow offloading TCP processing to the load balancer.

5. Click Continue.

6. For Name, type my-ilb.

Configure the regional backend service

The backend service monitors instance groups and prevents them from exceeding configured usage.

  1. Click Backend configuration.

2. Specify the following, and leave the remaining settings as their defaults:

3. Click Done.

4. Click Add backend.

5. For Instance group, select instance-group-2 (us-central1-b).

6. Click Done.

7. For Health Check, select Create a health check.

8. Specify the following, and leave the remaining settings as their defaults:

Health checks determine which instances can receive new connections. This HTTP health check polls instances every 10 seconds, waits up to 5 seconds for a response, and uses a healthy threshold of 2 successful attempts and an unhealthy threshold of 3 failed attempts.

9. Click Save and Continue.

10. Verify that there is a blue check mark next to Backend configuration in the Cloud Console. If there isn’t, double-check that you have completed all the steps above.
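The backend-side resources can also be sketched in gcloud. The health check name is illustrative, and the thresholds match the values described above:

# Illustrative health check name; thresholds as described in this task.
gcloud compute health-checks create http my-ilb-health-check \
    --region=us-central1 \
    --port=80 \
    --check-interval=10s \
    --timeout=5s \
    --healthy-threshold=2 \
    --unhealthy-threshold=3

gcloud compute backend-services create my-ilb \
    --load-balancing-scheme=INTERNAL \
    --protocol=TCP \
    --region=us-central1 \
    --health-checks=my-ilb-health-check \
    --health-checks-region=us-central1

gcloud compute backend-services add-backend my-ilb \
    --instance-group=instance-group-1 \
    --instance-group-zone=us-central1-a \
    --region=us-central1

# Repeat add-backend for instance-group-2 in us-central1-b.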

Configure the frontend

The frontend forwards traffic to the backend.

  1. Click Frontend configuration.

2. Specify the following, and leave the remaining settings as their defaults:

3. Specify the following, and leave the remaining settings as their defaults:

4. Click Reserve.

5. For Ports, type 80.

6. Click Done.
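The frontend corresponds to an internal forwarding rule. A hedged gcloud sketch follows; the rule name is illustrative, and the address 10.10.30.5 assumes the reserved IP in subnet-b that the test task uses:

# Illustrative rule name; address/subnet assume this lab's values.
gcloud compute forwarding-rules create my-ilb-forwarding-rule \
    --load-balancing-scheme=INTERNAL \
    --network=my-internal-app \
    --subnet=subnet-b \
    --region=us-central1 \
    --ip-protocol=TCP \
    --ports=80 \
    --backend-service=my-ilb \
    --address=10.10.30.5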

Review and create the internal load balancer

  1. Click Review and finalize.
  2. Review the Backend and Frontend.
  3. Click Create. Wait for the load balancer to be created before moving to the next task.

Task 5. Test the internal load balancer

Verify that the my-ilb IP address forwards traffic to instance-group-1 in us-central1-a and instance-group-2 in us-central1-b.

Access the internal load balancer

  1. On the Navigation menu, click Compute Engine > VM instances.
  2. For utility-vm, click SSH to launch a terminal and connect.

3. To verify that the internal load balancer forwards traffic, run the following command:

curl 10.10.30.5

The output should look like this:

4. Run the same command a couple of times:

You should be able to see responses from instance-group-1 in us-central1-a and instance-group-2 in us-central1-b. If not, run the command again.
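A short shell loop makes it easy to send several requests in a row and watch the responses alternate between the two backends:

# Send six requests to the load balancer's frontend IP.
for i in {1..6}; do
    curl -s 10.10.30.5
done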

Happy Learning!!!

