
Network load balancing

1. Network load balancing

Network Load Balancers are Layer 4 load balancers that can handle TCP, UDP, or other IP protocol traffic and distribute it to backends located either in a single region or across multiple regions. These load balancers are available as either proxy load balancers or passthrough load balancers, and you can pick one depending on the needs of your application and the type of traffic that it needs to handle.

Choose a proxy Network Load Balancer if you want to configure a reverse proxy load balancer with support for advanced traffic controls and backends on-premises and in other cloud environments. Choose a passthrough Network Load Balancer if you want to preserve the source IP address of the client packets, you prefer direct server return for responses, or you want to handle a variety of IP protocols, such as TCP, UDP, ESP, GRE, ICMP, and ICMPv6. Let's explore these in more detail.

Proxy Network Load Balancers are Layer 4 reverse proxy load balancers that distribute TCP traffic to virtual machine instances in your Google Cloud VPC network. Traffic is terminated at the load balancing layer and then forwarded to the closest available backend by using TCP. Proxy Network Load Balancers can be deployed externally or internally depending on whether your application is internet-facing or internal. We will discuss internal proxy Network Load Balancers later in this module.

External proxy Network Load Balancers are Layer 4 load balancers that distribute traffic that comes from the internet to backends in your Google Cloud VPC network, on-premises, or in other cloud environments. These load balancers are built on either Google Front Ends (GFEs) or Envoy proxies, and they can be deployed in one of three modes: global, regional, or classic. Proxy Network Load Balancers are intended for TCP traffic only, with or without SSL.
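As a rough sketch of how such a deployment is wired together, the gcloud commands below create a global external proxy Network Load Balancer with a target TCP proxy. All resource names, the zone, and the port are illustrative placeholders, and the instance group is assumed to already exist:

```shell
# Health check and global backend service for TCP traffic
gcloud compute health-checks create tcp my-tcp-hc --port 110

gcloud compute backend-services create my-tcp-bes \
    --protocol TCP \
    --health-checks my-tcp-hc \
    --global

# Attach an existing instance group as a backend (assumed to exist)
gcloud compute backend-services add-backend my-tcp-bes \
    --instance-group my-ig \
    --instance-group-zone us-central1-a \
    --global

# The target TCP proxy terminates client connections at the load balancing layer
gcloud compute target-tcp-proxies create my-tcp-proxy \
    --backend-service my-tcp-bes

# Reserve a global IP address and forward client traffic to the proxy
gcloud compute addresses create my-lb-ip --global --ip-version IPV4

gcloud compute forwarding-rules create my-tcp-fr \
    --global \
    --target-tcp-proxy my-tcp-proxy \
    --address my-lb-ip \
    --ports 110
```

Because these commands provision cloud resources in a project, treat them as a configuration outline rather than a runnable script.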
For HTTP(S) traffic, we recommend that you use an Application Load Balancer instead. Depending on the type of traffic your application needs to handle, you can configure an external proxy Network Load Balancer with either a target TCP proxy or a target SSL proxy.

This network diagram illustrates an external proxy Network Load Balancer configured with a target TCP proxy. In this example, traffic from users in Iowa and Boston is terminated at the global external proxy Network Load Balancer layer. From there, a separate connection is established to the closest backend instance. As in the target SSL proxy example on the next slide, the user in Boston would reach the us-east region, and the user in Iowa would reach the us-central region, if there is enough capacity. The traffic between the proxy and the backends can use SSL or TCP, and we recommend using SSL here as well.

This network diagram illustrates an external proxy Network Load Balancer configured with a target SSL proxy. In this example, traffic from users in Iowa and Boston is terminated at the global external proxy Network Load Balancer layer. From there, a separate connection is established to the closest backend instance. In other words, the user in Boston would reach the us-east region, and the user in Iowa would reach the us-central region, if there is enough capacity. The traffic between the proxy and the backends can use SSL or TCP; we recommend using SSL. For HTTP(S) traffic, we recommend that you use an external Application Load Balancer.

Passthrough Network Load Balancers are Layer 4, regional, passthrough load balancers. These load balancers distribute traffic among backends in the same region as the load balancer. They are implemented by using Andromeda virtual networking and Google Maglev. These load balancers are not proxies.
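Before moving on to passthrough load balancers, the target SSL proxy configuration can be sketched with gcloud as follows. The resource names and certificate files are illustrative placeholders, and the global backend service is assumed to already exist:

```shell
# SSL certificate presented to clients at the proxy layer (placeholder files)
gcloud compute ssl-certificates create my-cert \
    --certificate my-cert.pem \
    --private-key my-key.pem

# The target SSL proxy terminates client SSL sessions at the load balancer
gcloud compute target-ssl-proxies create my-ssl-proxy \
    --backend-service my-tcp-bes \
    --ssl-certificates my-cert

# Forwarding rule sends client traffic on port 443 to the SSL proxy
gcloud compute forwarding-rules create my-ssl-fr \
    --global \
    --target-ssl-proxy my-ssl-proxy \
    --ports 443
```

Whether the second leg, from the proxy to the backends, uses SSL or plain TCP is controlled by the backend service's protocol, independently of the client-facing proxy type.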
Load-balanced packets are received by backend VMs with the packet's source and destination IP addresses, protocol, and, if the protocol is port-based, the source and destination ports unchanged. Load-balanced connections are terminated at the backends, and responses from the backend VMs go directly to the clients, not back through the load balancer. The industry term for this is direct server return (DSR).

These load balancers are deployed in two modes, depending on whether the load balancer is internet-facing or internal. We will discuss internal passthrough Network Load Balancers later in this module. External passthrough Network Load Balancers are built on Maglev. Clients can connect to these load balancers from anywhere on the internet, regardless of their Network Service Tier. The load balancer can also receive traffic from Google Cloud VMs with external IP addresses or from Google Cloud VMs that have internet access through Cloud NAT or instance-based NAT.

Backends for external passthrough Network Load Balancers can be deployed using either a backend service or a target pool, and the architecture of the load balancer depends on which one you use. For new deployments, we recommend using backend services. New network load balancers can be created with a regional backend service that defines the behavior of the load balancer and how it distributes traffic to its backend instance groups.

Backend service-based external passthrough Network Load Balancers support IPv4 and IPv6 traffic; multiple protocols (TCP, UDP, ESP, GRE, ICMP, and ICMPv6); managed and unmanaged instance group backends; zonal network endpoint group backends with GCE_VM_IP endpoints; fine-grained traffic distribution controls; and failover policies. They also let you use non-legacy health checks that match the type of traffic (TCP, SSL, HTTP, HTTPS, or HTTP/2) that you are distributing.
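A backend service-based external passthrough Network Load Balancer can be sketched with gcloud roughly as follows. Resource names, the region, and the zone are placeholders, and the instance group is assumed to already exist:

```shell
# Regional TCP health check for the backends
gcloud compute health-checks create tcp my-nlb-hc \
    --region us-central1 \
    --port 80

# Regional EXTERNAL backend service defines the load balancer's behavior
gcloud compute backend-services create my-nlb-bes \
    --load-balancing-scheme EXTERNAL \
    --protocol TCP \
    --health-checks my-nlb-hc \
    --health-checks-region us-central1 \
    --region us-central1

gcloud compute backend-services add-backend my-nlb-bes \
    --region us-central1 \
    --instance-group my-ig \
    --instance-group-zone us-central1-a

# Forwarding rule passes packets straight through to the backends,
# preserving the client's source IP address
gcloud compute forwarding-rules create my-nlb-fr \
    --load-balancing-scheme EXTERNAL \
    --region us-central1 \
    --ports 80 \
    --backend-service my-nlb-bes
```

Note that, unlike the proxy examples, every resource here is regional: the backend service, health check, and forwarding rule all live in the same region as the backends.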
You can also transition an existing target pool-based network load balancer to use a backend service instead. But what is a target pool resource? A target pool is the legacy backend supported with external passthrough Network Load Balancers. A target pool resource defines a group of instances that receive incoming traffic from forwarding rules. When a forwarding rule directs traffic to a target pool, the load balancer picks an instance from the target pool based on a hash of the source IP address and port and the destination IP address and port.

Target pools can only be used with forwarding rules that handle TCP and UDP traffic. Each project can have up to 50 target pools, and each target pool can have only one health check. Also, all the instances of a target pool must be in the same region, which matches the regional scope of the passthrough Network Load Balancer itself.
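For comparison, a legacy target pool-based deployment can be sketched as follows. The names, region, zone, and instance names are placeholders, and since target pools support only legacy health checks, a legacy HTTP health check is used:

```shell
# Legacy HTTP health check (the only kind a target pool can use)
gcloud compute http-health-checks create my-legacy-hc --port 80

# Target pool groups the instances that receive forwarded traffic
gcloud compute target-pools create my-pool \
    --region us-central1 \
    --http-health-check my-legacy-hc

# Add existing instances (assumed to exist) to the pool
gcloud compute target-pools add-instances my-pool \
    --instances vm-1,vm-2 \
    --instances-zone us-central1-a

# Forwarding rule picks an instance by hashing the source and
# destination IP addresses and ports
gcloud compute forwarding-rules create my-pool-fr \
    --region us-central1 \
    --ports 80 \
    --target-pool my-pool
```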

2. Let's practice!
