
1. Lab Review: VPC Networking

In this lab, you explored the default network and determined that you cannot create VM instances without a VPC network. You then created a new auto mode VPC network with subnets, routes, firewall rules, and two VM instances, and tested connectivity for those VM instances. Because auto mode networks aren't recommended for production, you converted the auto mode network to a custom mode network. Next, you created two more custom mode VPC networks with firewall rules and VM instances, using the GCP console and the gcloud command line. Then you tested the connectivity across VPC networks, which worked when you pinged external IP addresses, but not when you pinged internal IP addresses. VPC networks are, by default, isolated private networking domains. Therefore, no internal IP address communication is allowed between networks unless you set up mechanisms such as VPC peering or a VPN connection.

You can stay for a lab walkthrough. But remember that GCP's user interface can change, so your environment might look slightly different.

All right. So here I am in the GCP console. The first thing I'm going to do is explore the default network. If I click on the navigation menu on the left-hand side and scroll down to VPC network, we see that this project has a default network. Every project has a default network, unless you have an organizational policy that prevents this default network from being created. But essentially, all the projects used through Qwiklabs will always have this. In here, we can see a subnet in each of the different regions; all of these are private IP addresses. I can also go to the routes, which are established automatically with the network. So we can see routes between subnets, as well as the default route to the Internet. We can even look at the firewall rules. The default network comes with some preset firewall rules to allow ICMP traffic from anywhere,
RDP traffic, as well as SSH. It also allows all protocols and ports from within the network's own range; so we allow all traffic from within the network itself. So let's go ahead and actually delete these firewall rules. I can just check them all right here and delete them. Let's assume that we want to get rid of everything that's been created for us and create our own network instead. So I'm going to go ahead and delete these. I can watch the status up here; we can see that all four are being deleted, and it'll update as each one is deleted. Once that is done, which is now, I can head back to the networks, select the default network, and we're also going to delete that entire network. Once we delete this network, we should see that there are no routes left, because without a network there's no use for them. So let's just wait for the network to be deleted, and then we'll verify that. We can, again, see the progress up here; you can also hit refresh, and this should just take a couple of seconds. You can see that as I'm refreshing, some of the subnets are disappearing. It's actually deleting all the subnets first, and then it gets rid of the network as a whole, because the network is really nothing more than a combination of subnets. So all these subnets have to be deleted. There we go, they're all gone now; it's just the network itself that remains. If I go to routes, we should see that all the routes are already gone, because without the subnets there's no need for the routes. If I go back to the networks, we should see that any moment now the network itself also disappears. There we go.

All right. So without a VPC network, we shouldn't be able to create any VM instances, containers, or App Engine applications. Let's verify that. I'm going to go to the navigation menu, go to Compute Engine, and let's just try to create an instance by clicking Create.
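As an aside, the cleanup we just did in the console can also be scripted from Cloud Shell. A sketch, assuming an authenticated session in the lab project and the four preset rules under their standard names:

```shell
# Delete the default network's preset firewall rules, then the network
# itself (its auto-created subnets are removed along with it).
gcloud compute firewall-rules delete default-allow-icmp \
    default-allow-internal default-allow-rdp default-allow-ssh --quiet

# Only succeeds once no instances or other resources use the network.
gcloud compute networks delete default --quiet
```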
I'm going to leave everything at its default. If I go under Networking, we should see that it's going to complain here. If I click on Networking, we see that there's actually no network available. But let's just click Create and see what happens. It does indeed give us an error and points out that this tab has an issue. So we clearly cannot create an instance, because again, these instances live in networks, and without a network, we can't create one. So let's hit Cancel, and what we're going to do now is create our own auto mode network. I'm going to head back to VPC networks. You can pin the services, by the way, so I'm just going to pin VPC network and Compute Engine, because we're going to be going back and forth between these. Then, within VPC network, we're going to create our own network. I can give it a name; I'm going to use the same name that's in the lab instructions, which is mynetwork. Now I have the option of creating a custom or an automatic network. Let's start off by creating an automatic network. That's going to preset all the different subnets for us in all the regions that are available. You can scroll through and see all of those in here. They each have a preset CIDR range; you can expand that CIDR range later, but as an auto mode network, you don't define the actual IP address range yourself. There are also firewall rules available. What's interesting here is that there's a deny-all ingress and an allow-all egress firewall rule. These are here by default, and they're actually fixed; you can't even uncheck them. They come with all networks that you create, and you can see that they have the highest priority integer, which really means the lowest priority. So by default, all ingress traffic is denied, and all egress traffic is allowed, unless we create other firewall rules that say differently.
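Creating the same auto mode network from the command line would look roughly like this (the name mynetwork follows the lab instructions):

```shell
# Create an auto mode VPC network: one subnet per region is
# generated automatically with preset CIDR ranges.
gcloud compute networks create mynetwork --subnet-mode=auto
```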
So if I check all these boxes, we're now allowing ingress traffic for these IP ranges and these protocols and ports. Let's go ahead and click Create, and we'll wait for that network to be created. Then we're going to look at the IP addresses for two of the different regions, create instances in those regions, and verify that they take IP addresses from those ranges. Here, you can see the subnets are already all populated. I can monitor the progress up here too, but this is really done any second now. I'm actually going to start heading over to Compute Engine to create our instances. So let's click Create. I'm going to give it the name mynet-us-vm. This is going to be in us-central1, specifically zone us-central1-c. I don't really need a big machine; we're just doing some testing here. So let me just select a micro machine type, which reduces the cost a little bit, and now click Create. Then we're going to repeat the same workflow and create an instance in Europe; I can close this panel over here. I'm going to grab the name from the lab instructions for that, select the europe-west1 region, specifically zone c, again a micro machine, which is just a shared-core machine, and click Create for that as well. We can see the us-central1-c machine is already up. We also see the internal IP address that has been assigned. Again, there are some reserved IP addresses: the .0 is reserved, as well as the .1. So in both of these ranges, the .2 is the first available address. Now, we can verify that these are part of the right subnet. If I click on nic0, I go to the network interface details. Here, we can see it's part of the subnetwork; the subnetwork, in this case, has the same name as the network, because this is an auto mode network. And we can see that it's part of this range, 10.128.0.0/20. Let's verify that, and that is correct.
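The two instances created above can also be sketched as gcloud commands; the f1-micro machine type is my assumption for the "micro" machine mentioned in the video:

```shell
# One micro instance per region, both attached to the auto mode network.
gcloud compute instances create mynet-us-vm \
    --zone=us-central1-c --machine-type=f1-micro --network=mynetwork

gcloud compute instances create mynet-eu-vm \
    --zone=europe-west1-c --machine-type=f1-micro --network=mynetwork
```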
We are in there with the .2, and let's verify that the other one should be in 10.132.0.0/20. So again, click on nic0, go to the subnetwork, and we can see that's true. You can also see here that the .1 address is reserved for the gateway. So that's why the .2 was really the first usable address within that range. Now, these instances are on the same network, so let's verify some connectivity between them. I'm going to grab the internal IP address of mynet-eu-vm, copy that, and then SSH to the other instance. Again, these instances are in two separate regions but in the same network, so we should be able to ping these addresses. If I ping three times using the internal address, we can see that this works. It works because we have that allow-internal firewall rule that we selected earlier. I can actually repeat the same thing using the name of the instance. You can see that it resolves that name; it actually shows the fully qualified domain name here and just uses the IP address for it. So VPC networks have an internal DNS service that allows you to address instances by their DNS names instead of their internal IP addresses. That's very useful because, well, this internal IP address could change, right? But the name is not going to change. So it's always good to be aware that you can use the fully qualified domain name to ping those.

All right. Now we can try this whole thing the other way around. Let me exit this instance, grab the internal IP address of the instance in the US, and SSH to the instance in Europe. We're also going to ping the internal IP address here, and we can see that works. We could even try to ping the external IP address; that's, in my case, 34.67.18.18, and that works as well. The reason I'm able to ping the external address is that I have a firewall rule that allows ICMP from external sources. I can verify those again.
If I click on the network interface details, I can see all of the firewall rules here, with their filters, protocols, and ports. So this all works fine. Now let's assume that this workflow has worked for us, but we have decided that we want to convert our auto mode network to a custom mode network. Let's go ahead and do that. We're going to go to "VPC networks", click on "mynetwork", click "Edit", change the subnet creation mode from auto to custom, and hit "Save". Now we can navigate back. You can see that this is in progress up here; the mode still says "Auto". We could have also flipped that here. Let's wait for this to refresh, and now we can see that this network is in custom mode. So let's say that this has worked so far, and now we've realized that we need a couple more networks. There's a network diagram in the lab that has two other networks, as well as some instances and so on. Let's go ahead and create those. We're going to go to "Create a VPC network". We're going to create the management network, and rather than starting with automatic and converting, we're just going to start with a custom mode network. For that, we have to define each of the subnets. The minimum information we need to provide is a name, the region, so let's select us-central1, and the IP address range, and then we can click "Done". Now I could add another subnet if I wanted to. But the other very interesting thing here is that I'm creating this through the GCP console right now, but you can also create networks, as well as subnets, from the command line using Cloud Shell. If I click down here on "command line", I'm actually shown the commands to do that. The first one just creates the network itself. You don't have to use the project flag here.
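Incidentally, the auto-to-custom conversion can also be done with a single command; a sketch, assuming the network is named mynetwork:

```shell
# Convert an auto mode network to custom mode. Note this is one-way:
# a custom mode network cannot be switched back to auto mode.
gcloud compute networks update mynetwork --switch-to-custom-subnet-mode
```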
So we could just say gcloud compute networks create, the name of the network, and the fact that this network uses custom subnet mode. Similarly, we then create the subnets with gcloud compute networks subnets create, the name of the subnet, the name of the network, the region, and the range. So again, that's the minimal information. Let's just click "Close" and "Create"; we'll create the other network from the command line. So it's creating that network, and in parallel I can now activate Cloud Shell by clicking up here in the right corner. Yes, I want to start using Cloud Shell. I'm just going to make this a little bit bigger, and once this is up, we're going to use those commands that we just saw to create a network first. This is going to be privatenet, which also uses custom subnet mode. Once we have that, we're going to create two subnets within that network. You can see in the console that the other network was created. privatenet is being created right now here, and once that is ready, we can add the two subnets to it. So there we go, there it is. It's also telling us: this is a new network, you don't have any firewall rules, and here are some commands if you want to create some firewall rules. We'll do that in a second. Let's just create these subnets first. We're going to create one in the US, and then we're also going to create one in Europe. If you wanted to speed this up, you could actually launch another Cloud Shell session; now that the network is up, you could create these subnets in parallel. But we're just going to wait for this to complete, and then we'll paste the next command in. You can monitor all of this in the console. If we click refresh, there we see it. It's also completed; it just returns, essentially, "I've done exactly what you told me." Let's create the other one. It didn't copy the command correctly; there we go. This is now in Europe, specifically europe-west1. Refresh.
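Put together, the privatenet commands look roughly like this (subnet names and CIDR ranges are taken from the lab instructions; treat them as assumptions if your lab differs):

```shell
# Custom mode network: no subnets are created automatically.
gcloud compute networks create privatenet --subnet-mode=custom

# Add one subnet per region, each with an explicit CIDR range.
gcloud compute networks subnets create privatesubnet-us \
    --network=privatenet --region=us-central1 --range=172.16.0.0/24

gcloud compute networks subnets create privatesubnet-eu \
    --network=privatenet --region=europe-west1 --range=172.20.0.0/20
```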
You can see it's already being created there. We can definitely display all of those in the GCP console. If you click the button over here in Cloud Shell, you can actually open this in a new window. It opens in a new tab, and that way you preserve your screen real estate: you can keep focusing on the console, as well as on Cloud Shell. So let me create some real estate by just clearing this, and then paste the command to list all the networks, which is just gcloud compute networks list. So we can see the three networks; they're all custom mode. We can dig deeper by also listing the subnetworks, using the sort-by flag to sort these by network. Now we see that mynetwork has a lot of subnets, because it used to be in auto mode; then managementnet has the one subnet we created, and privatenet has two subnets.

So now we're going to create some firewall rules. Let's click on "Firewall rules" up here. You can see the ones that are already there. Click "Create firewall rule". We'll repeat the same process we did earlier: we'll first create one using the console, and then we'll repeat the firewall rule for a different network using Cloud Shell. So let me give it a name, and make sure I select the right network that the firewall rule applies to. Let's just apply it to all instances, and for the source IP ranges, select all addresses. I'm allowing, in this case, ICMP, SSH, and RDP. So let me define ICMP, and then 22 for SSH and 3389 for RDP. Now, down here, I can click on "command line", and we can see this as one long command. Again, you don't need to define the project flag: it's gcloud compute firewall-rules create, the name of the rule, the fact that it's an ingress rule (that's actually the default, so we could leave it out), and, importantly, the name of the network. The action, allow, is the default too, and then the rules as well as the source ranges. So let's create that in the console, and we'll grab the command from the lab instructions to do the same for the other network.
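The equivalent firewall rule command for privatenet looks roughly like this (the rule name follows the lab instructions; --direction and --action are spelled out even though they are the defaults):

```shell
# Allow ICMP, SSH (tcp:22), and RDP (tcp:3389) ingress from anywhere.
gcloud compute firewall-rules create privatenet-allow-icmp-ssh-rdp \
    --direction=INGRESS --network=privatenet --action=ALLOW \
    --rules=icmp,tcp:22,tcp:3389 --source-ranges=0.0.0.0/0
```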
So here you can see, we paste that in, and that should now create the other firewall rule for us. We can monitor the firewall rules in the console, as well as in Cloud Shell, so we'll run a command to list all the firewall rules in a second. So they're all created. If we list them, we can see them all here, and if I refresh this, we can also see them right here.

So now it's time to create some more instances and then explore the connectivity. Let's head back to Compute Engine. I'm going to create instances in these new networks I created. Let me click "Create instance"; I'm actually going to close Cloud Shell for now, or just make it smaller. We're going to provide a name and us-central1-c. A small machine is fine. Now, importantly, I need to expand this option down here to select the right network. We have three options right now, and it has actually pre-selected the network we want; that's just because of the ordering, it's listed at the top, but it is the correct one. So let's click "Done", and there's again a command line version. There's a lot of information in there that we don't need, like the boot disk; we're selecting a lot of standard options. You'll see that in a second when we run our own command. So let's just hit "Create", and let's pull the command from the lab that creates the same thing in a different network. That's gcloud compute instances create, the name of the instance, the zone, the machine type, and the subnet. That is the bare minimum we need to provide. So let's run that. You can see the other instance has already been created. I can refresh this and see that the other instances are already coming up too, and once Cloud Shell is done, we can list all the instances. Let's do that here. We can sort them by zone, or we could sort them by network. So we can see that in one zone here we have one instance, and in another zone we have three instances.
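The instance-creation and listing commands from this step can be sketched as follows (machine type again assumed to be f1-micro):

```shell
# Create the instance in privatenet by naming its subnet; the network
# is implied by the subnet.
gcloud compute instances create privatenet-us-vm \
    --zone=us-central1-c --machine-type=f1-micro --subnet=privatesubnet-us

# List all instances, sorted by zone.
gcloud compute instances list --sort-by=ZONE
```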
Keep in mind that these instances are in different networks, and we can display that if we go to Columns and check "Network". You can see that, with the exception of the mynet instances, which are on the same VPC network, these instances are all in different networks. That leads into the connectivity that we're going to explore now. We're going to try to ping IP addresses again, both external and internal, and see what works. So let me grab the managementnet-us-vm external IP address, and we're going to SSH to mynet-us-vm. They are in the same zone, but they're in different networks. So let's see if we can ping the external IP address, and then we'll try the internal one. So external works; that's because we set up the firewall rules for that. I can also do the same for privatenet. I can plug in that external IP address, which is 35.188.20.220, and that works as well. So you can ping those, even though they are in different networks. Now, from an internal perspective, I should only be able to ping mynet-eu-vm, which we actually tried earlier already. So let me just try the other ones. I'm going to try 10.130.0.2, and we can see that's not getting anywhere; we get 100 percent packet loss. Then we'll try the same for the other one, 172.16.0.2, and we can see that isn't working either. So even though this instance is in the same zone as these other instances I'm trying to ping, the fact that they are in different networks does not allow me to ping them on their internal IP addresses, unless we set up other mechanisms such as VPC peering or a VPN. That's the end of the lab.
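The connectivity checks at the end boil down to these pings, run from an SSH session on mynet-us-vm (the IP addresses are the ones from my session; yours will differ):

```shell
# External IPs of VMs in other VPC networks: reachable, thanks to
# the allow-icmp firewall rules we created.
ping -c 3 35.188.20.220

# Internal IPs of VMs in other VPC networks: 100% packet loss,
# because VPC networks are isolated private domains by default.
ping -c 3 10.130.0.2
ping -c 3 172.16.0.2
```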

2. Let's practice!
