
Linear regression

1. Linear regression

Now that you understand how to construct loss functions, you're well-equipped to start training models. We'll do that for the first time in this video with a linear regression model.

2. What is a linear regression?

So what is a linear regression model? We can answer this with a simple illustration. Let's say we want to examine the relationship between house size and price in the King County housing dataset. We might start by plotting the size in square feet against the price in dollars. Note that we've actually plotted the relationship after taking the natural logarithm of each variable, which is useful when we suspect that the relationship is proportional. That is, we might expect an x% increase in size to be associated with a y% increase in price.
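As a rough sketch, the log-log scatter plot might be produced along the following lines. The file name and column names ('kc_house_data.csv', 'sqft_living', 'price') are assumptions about the King County dataset rather than details taken from the video.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Load the King County housing data (assumed file and column names).
housing = pd.read_csv('kc_house_data.csv')

# Take the natural logarithm of both variables, so that a straight-line fit
# corresponds to a proportional (percentage) relationship.
log_size = np.log(housing['sqft_living'])
log_price = np.log(housing['price'])

plt.scatter(log_size, log_price, alpha=0.2)
plt.xlabel('log(size in square feet)')
plt.ylabel('log(price in dollars)')
plt.show()
```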

3. What is a linear regression?

A linear regression model assumes that the relationship between these variables can be captured by a line. That is, two parameters--the line's slope and intercept--fully characterize the relationship between size and price.

4. The linear regression model

In our case, we've assumed that the relationship is linear after taking natural logarithms. Training the model will involve recovering two parameters: the slope of the line and the intercept, the point where the line crosses the vertical axis. Once we have trained the intercept and slope, we can take a house's size and predict its price. The difference between the predicted price and the actual price is the error, which can be used to construct a loss function. The example we've shown is a univariate regression, which has only one feature, size. A multiple regression has several features, such as size and location.
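To make this concrete, here is a toy calculation of one prediction and its error under the log-log model. The parameter values and house figures below are invented for illustration, not taken from the dataset.

```python
# Hypothetical trained parameters of the log-log model (invented values).
intercept, slope = 2.0, 1.4

# One house, on the log scale (also invented values).
log_size = 7.6           # natural log of size in square feet
log_price_actual = 12.8  # natural log of the actual sale price

# Prediction: intercept plus slope times the feature.
log_price_predicted = intercept + slope * log_size

# The error is the gap between prediction and truth; squaring and averaging
# such errors over many houses gives the mean squared error loss.
error = log_price_predicted - log_price_actual
print(log_price_predicted, error)  # about 12.64 and -0.16
```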

5. Linear regression in TensorFlow

Let's look at some code to see how this can be implemented. We will first define our target variable, price, and our feature, size. We also initialize the intercept and slope as trainable variables. After that, we define the model, which we'll use to make predictions by multiplying size and slope and then adding the intercept. Again, remember that we can do this using the addition and multiplication symbols, since these are overloaded operators and intercept and slope are TensorFlow variables. Our next step is to define a loss function. This function will take the model's parameters and the data as inputs. It will first use the model to compute the predicted values. We then set the function to return the mean squared error loss. We could, of course, have selected a different loss.
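A minimal sketch of those steps, assuming TensorFlow 2.x, is shown below. The small arrays standing in for the log sizes and log prices are placeholder values rather than the actual King County data, and the exact code on the slide may differ.

```python
import numpy as np
import tensorflow as tf

# Placeholder data standing in for the log sizes (feature) and log prices (target).
size = np.array([6.9, 7.2, 7.6, 8.0, 8.3], np.float32)
price = np.array([12.1, 12.5, 12.9, 13.4, 13.8], np.float32)

# Initialize the intercept and slope as trainable variables.
intercept = tf.Variable(0.1, dtype=tf.float32)
slope = tf.Variable(0.1, dtype=tf.float32)

# The model: predictions are the intercept plus size times slope.
# The overloaded + and * operators work here because intercept and slope
# are TensorFlow variables.
def linear_regression(intercept, slope, features=size):
    return intercept + features * slope

# The loss function takes the parameters and the data, computes the
# predicted values, and returns the mean squared error.
def loss_function(intercept, slope, targets=price, features=size):
    predictions = linear_regression(intercept, slope, features)
    return tf.reduce_mean(tf.square(targets - predictions))
```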

6. Linear regression in TensorFlow

With the loss function defined, the next step is to define an optimization operation. We'll do this using the Adam optimizer. For now, you can ignore the choice of optimization algorithm; we will discuss the selection of optimizers in greater detail later. For our purposes, it is sufficient to understand that executing this operation will change the slope and intercept in a direction that lowers the value of the loss. We will next perform minimization on the loss function using the optimizer. Notice that we've passed the loss function as a lambda function to the minimize operation. We also supplied a variable list, which contains intercept and slope, the two variables we defined earlier. We will execute our optimization step 1000 times. Printing the loss, we'll see that it tends to decline, moving closer to the minimum value with each step. Finally, we print the intercept and the slope. This is our linear model, which enables us to predict the value of a house given its size.
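Continuing the previous sketch, a training loop along these lines might look as follows. The learning rate is an assumed value, and the loss is printed only every 100 steps here to keep the output short.

```python
# Define the optimization operation with the Adam optimizer.
# The learning rate of 0.5 is an assumption, not a value from the video.
opt = tf.keras.optimizers.Adam(learning_rate=0.5)

# Execute the optimization step 1000 times; each call to minimize nudges
# intercept and slope in a direction that lowers the loss.
for j in range(1000):
    opt.minimize(lambda: loss_function(intercept, slope),
                 var_list=[intercept, slope])
    if j % 100 == 0:
        print(loss_function(intercept, slope).numpy())

# Print the trained intercept and slope: the fitted linear model.
print(intercept.numpy(), slope.numpy())
```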

7. Let's practice!

You now have all of the tools you'll need to train a linear model in TensorFlow, so let's try that in an exercise!
