1. Application Load Balancing
An Application Load Balancer configured for HTTPS has the same basic structure as one configured for HTTP, but differs in the following ways: it uses a target HTTPS proxy instead of a target HTTP proxy, it requires at least one signed SSL certificate installed on the target HTTPS proxy, and client SSL sessions terminate at the load balancer.

An Application Load Balancer also supports the QUIC transport layer protocol. QUIC allows faster client connection initiation, eliminates head-of-line blocking in multiplexed streams, and supports connection migration when a client's IP address changes. For more information on the QUIC protocol, refer to the documentation link in the Course Resources.

To use HTTPS, you must create at least one SSL certificate that can be used by the target proxy for the load balancer. You can configure the target proxy with up to 15 SSL certificates. For each SSL certificate, you first create an SSL certificate resource, which contains the SSL certificate information. SSL certificate resources are used only with load balancing proxies, such as a target HTTPS proxy or a target SSL proxy, which we will discuss later in this module.

Backend buckets allow you to use Cloud Storage buckets with Application Load Balancing. An external Application Load Balancer uses a URL map to direct traffic from specified URLs to either a backend service or a backend bucket. One common use case is to send requests for dynamic content, such as data, to a backend service, and requests for static content, such as images, to a backend bucket. In this diagram, the load balancer sends traffic with a path of /love-to-fetch/ to a Cloud Storage bucket in the europe-north region. All other requests go to a Cloud Storage bucket in the us-east region.
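As a sketch of the configuration described above, the following gcloud commands create an SSL certificate resource, two backend buckets, and a URL map that routes /love-to-fetch/ traffic separately. All resource, bucket, and file names here are hypothetical placeholders, not part of the course materials; this assumes the Cloud Storage buckets and certificate files already exist.

```shell
# Create an SSL certificate resource from an existing certificate and key
# (hypothetical file names; a target HTTPS proxy can hold up to 15 of these).
gcloud compute ssl-certificates create fetch-cert \
    --certificate=cert.pem --private-key=key.pem --global

# Create backend buckets pointing at existing Cloud Storage buckets
# (hypothetical bucket names).
gcloud compute backend-buckets create eu-fetch-backend \
    --gcs-bucket-name=love-to-fetch-eu
gcloud compute backend-buckets create us-fetch-backend \
    --gcs-bucket-name=static-content-us

# Create a URL map: default traffic goes to the us-east bucket,
# /love-to-fetch/ paths go to the europe-north bucket.
gcloud compute url-maps create fetch-url-map \
    --default-backend-bucket=us-fetch-backend
gcloud compute url-maps add-path-matcher fetch-url-map \
    --path-matcher-name=fetch-paths \
    --default-backend-bucket=us-fetch-backend \
    --backend-bucket-path-rules="/love-to-fetch/*=eu-fetch-backend"

# Attach the URL map and certificate to a target HTTPS proxy,
# where client SSL sessions terminate.
gcloud compute target-https-proxies create fetch-https-proxy \
    --url-map=fetch-url-map --ssl-certificates=fetch-cert
```

Running these requires an active Google Cloud project and credentials; they are shown here only to make the proxy, certificate, and URL-map relationships concrete.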
After you configure the load balancer with the backend buckets, requests to URL paths that begin with /love-to-fetch/ are sent to the europe-north Cloud Storage bucket, and all other requests are sent to the us-east Cloud Storage bucket.

A network endpoint group, or NEG, is a configuration object that specifies a group of backend endpoints or services. A common use case for this configuration is deploying services in containers. You can also distribute traffic in a granular fashion to applications running on your backend instances. You can use NEGs as backends for some load balancers and with Traffic Director.

Zonal and internet NEGs define how endpoints should be reached, whether they are reachable, and where they are located. Unlike these NEG types, serverless NEGs don't contain endpoints. A zonal NEG contains one or more endpoints that can be Compute Engine VMs or services running on the VMs; each endpoint is specified by either an IP address or an IP:port combination. An internet NEG contains a single endpoint that is hosted outside of Google Cloud, specified by either an FQDN:port or an IP:port combination. A hybrid connectivity NEG points to Traffic Director services running outside of Google Cloud. A serverless NEG points to Cloud Run, App Engine, or Cloud Run functions services residing in the same region as the NEG. For more information on using NEGs, please refer to the Network endpoint groups overview page.

2. Let's practice!
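To make the NEG types above concrete, here is a minimal gcloud sketch creating a zonal, an internet, and a serverless NEG. The names, zone, region, and service are hypothetical placeholders, and the commands assume an existing project, network, and Cloud Run service.

```shell
# Zonal NEG: endpoints are VMs (or services on VMs) in one zone,
# each identified by IP or IP:port.
gcloud compute network-endpoint-groups create my-zonal-neg \
    --network-endpoint-type=GCE_VM_IP_PORT \
    --zone=us-east1-b --network=default --subnet=default

# Add a VM endpoint (hypothetical instance name) as IP:port.
gcloud compute network-endpoint-groups update my-zonal-neg \
    --zone=us-east1-b \
    --add-endpoint="instance=my-vm,port=8080"

# Internet NEG: a single endpoint outside Google Cloud,
# specified as FQDN:port (or IP:port).
gcloud compute network-endpoint-groups create my-internet-neg \
    --network-endpoint-type=INTERNET_FQDN_PORT --global

# Serverless NEG: no endpoints; it points at a serverless service
# (here a hypothetical Cloud Run service) in the same region.
gcloud compute network-endpoint-groups create my-serverless-neg \
    --region=us-east1 \
    --network-endpoint-type=serverless \
    --cloud-run-service=my-service
```

These commands require Google Cloud credentials to run; they are shown only to illustrate how each NEG type is declared.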