
Compute options

1. Compute options

Now that you have completed the lab, let's dive deeper into the compute options that are available to you in Google Cloud, focusing on CPU and memory. You have three options for creating and configuring a VM: the Google Cloud console, as you did in the previous lab; the Cloud Shell command line; or the RESTful API. If you'd like to automate very complex configurations, you might want to configure them programmatically through the RESTful API by defining all the different options for your environment. If you plan on using the command line or RESTful API, I recommend that you first configure the instance through the Google Cloud console and then ask Compute Engine for the equivalent REST request or command line, as shown in the demo earlier. This way you avoid typos and get dropdown lists of all the available CPU and memory options.

Speaking of CPU and memory options, let's look at the different machine types that are currently available. When you create a VM, you select a machine type from a machine family that determines the resources available to that VM. There are several machine families to choose from, and each machine family is further organized into machine series and predefined machine types within each series. A machine family is a curated set of processor and hardware configurations optimized for specific workloads. When you create a VM instance, you choose a predefined machine type from your preferred machine family. Alternatively, you can create a custom machine type, which lets you specify the number of vCPUs and the amount of memory for your instance.

There are four Compute Engine machine families: general-purpose, compute-optimized, memory-optimized, and accelerator-optimized. Let's look at each in more detail. The general-purpose machine family has the best price-performance with the most flexible vCPU-to-memory ratios, and provides features that target most standard and cloud-native workloads. 
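To make the console/CLI/REST equivalence concrete, here is a minimal sketch of how the same instance shape appears in both forms. The project, zone, and instance names are illustrative placeholders; the REST body's `machineType` field uses a zone-scoped resource path.

```python
def machine_type_uri(project: str, zone: str, machine_type: str) -> str:
    # Resource path used in the "machineType" field of a Compute Engine
    # instances.insert REST request body.
    return f"projects/{project}/zones/{zone}/machineTypes/{machine_type}"

def gcloud_create_command(name: str, zone: str, machine_type: str) -> str:
    # Equivalent gcloud invocation for the same instance shape.
    return (f"gcloud compute instances create {name} "
            f"--zone={zone} --machine-type={machine_type}")

print(machine_type_uri("my-project", "us-central1-a", "e2-standard-4"))
print(gcloud_create_command("web-vm-1", "us-central1-a", "e2-standard-4"))
```

The console's "Equivalent REST" and "Equivalent command line" views generate strings of exactly this kind for you, which is why starting from the console is the safest route.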
The E2 machine series is suited for day-to-day computing at a lower cost, especially where there are no application dependencies on a specific CPU architecture. E2 VMs provide a variety of compute resources for the lowest price on Compute Engine, especially when paired with committed use discounts. You simply pick the amount of vCPU and memory you want, and Google provisions it for you. Standard E2 VMs have between 2 and 32 vCPUs with a ratio of 0.5 GB to 8 GB of memory per vCPU. They are a great choice for web servers, small to medium databases, development and test environments, and many applications that don't have strict performance requirements. They offer a performance baseline compatible with the current N1 VMs for those of you who have been using them.

The E2 machine series also contains shared-core machine types that use context switching to share a physical core between vCPUs for multitasking. Different shared-core machine types sustain different amounts of time on a physical core. In general, shared-core machine types can be more cost-effective for running small, non-resource-intensive applications than standard, high-memory, or high-CPU machine types. Shared-core E2 machine types have 0.25 to 1 vCPU with 0.5 GB to 8 GB of memory.

The N2 and N2D series are the next generation following the N1 VMs, offering a significant performance jump. N2 and N2D are the most flexible VM types and provide a balance between price and performance across a wide range of VM shapes, including enterprise applications, medium-to-large databases, and many web- and app-serving workloads. Committed use and sustained use discounts are supported. N2 VMs support the latest second-generation Intel Xeon Scalable processors with up to 128 vCPUs and 0.5 GB to 8 GB of memory per vCPU. Cascade Lake is the default processor for machine types with up to 80 vCPUs; for larger machine types, Ice Lake is the default processor in specific regions and zones. N2D VMs are AMD-based general-purpose VMs. 
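Predefined machine type names encode the series, the class, and usually the vCPU count (for example, e2-standard-8 is an E2 standard type with 8 vCPUs, while shared-core names like e2-micro carry no vCPU suffix). A minimal parser, assuming this common series-class-vCPU naming pattern:

```python
def parse_machine_type(name: str):
    # Split a name like "e2-standard-8" into (series, class, vCPU count).
    # Shared-core names such as "e2-micro" have no numeric suffix, so the
    # vCPU count is returned as None.
    parts = name.split("-")
    if len(parts) == 3 and parts[2].isdigit():
        return parts[0], parts[1], int(parts[2])
    return parts[0], "-".join(parts[1:]), None

print(parse_machine_type("e2-standard-8"))  # ('e2', 'standard', 8)
print(parse_machine_type("e2-micro"))       # ('e2', 'micro', None)
```

Reading names this way makes it easy to see at a glance which series and resource class a VM belongs to when scanning an instance list.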
They leverage the EPYC Milan and EPYC Rome processors and provide up to 224 vCPUs per node.

Tau T2A and Tau T2D VMs are optimized for cost-effective performance of demanding scale-out workloads. T2D VMs are built on third-generation AMD EPYC processors and offer full x86 compatibility. They are suited to scale-out workloads including web servers, containerized microservices, media transcoding, and large-scale Java applications. T2D VMs come in predefined VM shapes with up to 60 vCPUs per VM and 4 GB of memory per vCPU. The Tau T2A machine series is the first machine series in Google Cloud to run on Arm processors. It runs on a 64-core Ampere Altra processor with an Arm instruction set and an all-core frequency of 3 GHz. If you have containerized workloads, Tau VMs are supported by Google Kubernetes Engine to help optimize price-performance. You can add T2D nodes to your GKE clusters by specifying the T2D machine type in your GKE node pools.

The compute-optimized machine family has the highest performance per core on Compute Engine and is optimized for compute-intensive workloads. C2 VMs are the best-fit VM type for compute-intensive workloads, including AAA gaming, electronic design automation, and high-performance computing across simulations, genomic analysis, or media transcoding. They might also suit applications that have very expensive per-core licensing and thus would benefit from higher per-core performance. Powered by high-frequency Intel Xeon Scalable processors (Cascade Lake), C2 machine types offer up to 3.8 GHz sustained all-core turbo and provide full transparency into the architecture of the underlying server platforms, enabling advanced performance tuning. The C2 series comes in different machine types ranging from 4 to 60 vCPUs and offers up to 240 GB of memory. You can also attach up to 3 TB of local storage to these VMs for applications that require higher storage performance. 
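The workload-to-series pairings named so far can be collected into a simple lookup. This is purely illustrative: the series names are real, but reducing machine-series selection to a dictionary is a simplification, and real sizing involves many more factors (memory ratios, region availability, pricing).

```python
# Illustrative mapping from workload labels to the machine series the
# passage above associates with them. Not an official selection tool.
SERIES_FOR_WORKLOAD = {
    "web server": "E2",
    "dev/test environment": "E2",
    "enterprise application": "N2",
    "medium-to-large database": "N2",
    "scale-out microservices": "T2D",
    "AAA gaming": "C2",
    "high-performance computing": "C2",
}

def suggest_series(workload: str) -> str:
    # E2 is a reasonable general-purpose default for anything unlisted.
    return SERIES_FOR_WORKLOAD.get(workload, "E2")

print(suggest_series("AAA gaming"))  # C2
```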
The C2D machine series provides the largest VM sizes within the compute-optimized family and is best suited for high-performance computing. The C2D series also has the largest available last-level cache per core. The C2D machine series comes in different machine types ranging from 2 to 112 vCPUs and offers 4 GB of memory per vCPU. You can also attach up to 3 TB of local storage to these machine types for applications that require higher storage performance. C2D VMs are available on the third-generation AMD EPYC Milan platform. The H3 series offers 88 cores and 352 GB of DDR5 memory, and is available on the Intel Sapphire Rapids CPU platform and Google's custom Intel Infrastructure Processing Unit.

The memory-optimized machine family provides the most compute and memory resources of any Compute Engine machine family offering. These VMs are ideal for workloads that require higher memory-to-vCPU ratios than the high-memory machine types in the general-purpose machine family. The M1 machine series has up to 4 TB of memory, while the M2 machine series has up to 12 TB of memory. These machine series are well suited for large in-memory databases such as SAP HANA, as well as in-memory data analytics workloads. Both the M1 and M2 machine series offer the lowest cost per GB of memory on Compute Engine, making them a great choice for workloads that use higher memory configurations with low compute resource requirements. Additionally, they offer up to 30% sustained use discounts and are also eligible for committed use discounts, bringing savings of greater than 60% for three-year commitments. M3 VMs offer up to 128 vCPUs with up to 30.5 GB of memory per vCPU, and are available on the Intel Ice Lake CPU platform. These machines are well suited for memory-intensive applications such as genomic modeling, electronic design automation, and high-performance computing. 
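The discount percentages above apply against the on-demand rate, so their effect is easy to sketch. The hourly rate below is a made-up placeholder, not a real M1/M2 price:

```python
def effective_hourly_cost(on_demand_rate: float, discount: float) -> float:
    # Apply a fractional discount (e.g. 0.30 for a 30% sustained use
    # discount) to an on-demand hourly rate.
    return on_demand_rate * (1.0 - discount)

rate = 10.00  # placeholder on-demand $/hour, not an actual price
print(round(effective_hourly_cost(rate, 0.30), 2))  # 7.0 with sustained use
print(round(effective_hourly_cost(rate, 0.60), 2))  # 4.0 with a 3-year commitment
```

Sustained use discounts apply automatically as a VM runs through the month, whereas committed use discounts require an up-front one- or three-year commitment, which is why the latter reach the deeper savings quoted above.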
The accelerator-optimized machine family is ideal for massively parallelized Compute Unified Device Architecture (CUDA) compute workloads, such as machine learning and high-performance computing. This family is the optimal choice for workloads that require GPUs. The A2 series has 12 to 96 vCPUs and up to 1,360 GB of memory. Each A2 machine type has a fixed number (up to 16) of NVIDIA Ampere A100 GPUs attached. An A100 GPU provides 40 GB of GPU memory, which is ideal for large language models, databases, and high-performance computing. G2 VMs offer 4 to 96 vCPUs and up to 432 GB of memory, and are available on the Intel Cascade Lake CPU platform. These machines are well suited for CUDA-enabled ML training and inference, video transcoding, and remote visualization workstations. Additional information, including the latest specs for currently available VM machine types, can be found in the machine types documentation.

If none of the predefined machine types match your needs, you can independently specify the number of vCPUs and the amount of memory for your instance. Custom machine types are ideal for the following scenarios: when you have workloads that are not a good fit for the available predefined machine types, or when you have workloads that require more processing power or more memory but don't need all of the upgrades that are provided by the next larger predefined machine type. It costs slightly more to use a custom machine type than an equivalent predefined machine type, and there are still some limitations on the amount of memory and vCPUs you can select: only machine types with 1 vCPU or an even number of vCPUs can be created; memory must be between 1 GB and 8 GB per vCPU; and the total memory of the instance must be a multiple of 256 MB. Selected custom machine types can allow up to 8 GB of memory per vCPU; however, this might not be enough memory for your workload. At an additional cost, you can get more memory per vCPU beyond the 8 GB limit. 
This is referred to as extended memory, and you can learn more about this in the link provided in the module PDF located in Course Resources.
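The custom machine type constraints described above can be expressed as a small validity check. A minimal sketch, covering only the three rules stated in this section:

```python
def is_valid_custom_shape(vcpus: int, memory_mb: int) -> bool:
    # Rules from the section above: 1 vCPU or an even number of vCPUs;
    # total memory a multiple of 256 MB; and 1-8 GB of memory per vCPU.
    # Extended memory (beyond 8 GB per vCPU at extra cost) is not modeled.
    if vcpus != 1 and vcpus % 2 != 0:
        return False
    if memory_mb % 256 != 0:
        return False
    per_vcpu_mb = memory_mb / vcpus
    return 1024 <= per_vcpu_mb <= 8192

print(is_valid_custom_shape(4, 16384))  # True: 4 GB per vCPU
print(is_valid_custom_shape(3, 8192))   # False: odd vCPU count
print(is_valid_custom_shape(2, 20480))  # False: 10 GB per vCPU exceeds 8 GB
```

Running a quick check like this before submitting a request is a cheap way to catch shapes the API would reject.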

2. Let's practice!
