Train a linear model
In this exercise, we will pick up where the previous exercise ended. The variables intercept and slope have been defined and initialized. Additionally, a function, loss_function(intercept, slope), has been defined that computes the loss using the data and model variables.
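For reference, a minimal sketch of the setup this exercise assumes is given below. The data and initial values here are purely illustrative; the course environment may define the variables and loss differently.

import tensorflow as tf

# Trainable model variables (assumed initialized in the previous exercise)
intercept = tf.Variable(0.1, dtype=tf.float32)
slope = tf.Variable(0.1, dtype=tf.float32)

# Hypothetical feature and target tensors standing in for the exercise data
features = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0])
targets = tf.constant([2.0, 4.0, 6.0, 8.0, 10.0])

def loss_function(intercept, slope):
    # Predictions of a univariate linear model
    predictions = intercept + slope * features
    # Mean squared error between targets and predictions
    return tf.keras.losses.mse(targets, predictions)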
You will now define an optimization operation, opt, and then train a univariate linear model by minimizing the loss to find the optimal values of intercept and slope. Note that the opt operation moves closer to the optimum with each step, but requires many steps to find it. Thus, you must execute the operation repeatedly.
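Each call to the optimization operation performs only a single update. For intuition, one such step under TensorFlow 2 eager execution is roughly equivalent to the sketch below (the optimizer handles this internally; the snippet assumes opt, loss_function, intercept, and slope are defined as in this exercise):

# One explicit optimization step, written out for intuition only
with tf.GradientTape() as tape:
    loss = loss_function(intercept, slope)

# Compute gradients of the loss with respect to the model variables
gradients = tape.gradient(loss, [intercept, slope])

# Apply one update; repeating this drives the loss toward a minimum
opt.apply_gradients(zip(gradients, [intercept, slope]))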
This exercise is part of the course Introduction to TensorFlow in Python.
Exercise instructions
- Initialize an Adam optimizer as opt with a learning rate of 0.5.
- Apply the .minimize() method to the optimizer.
- Pass loss_function() with the appropriate arguments as a lambda function to .minimize().
- Supply the list of variables that need to be updated to var_list.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
# Initialize an Adam optimizer
opt = keras.optimizers.____(0.5)

for j in range(100):
    # Apply minimize, pass the loss function, and supply the variables
    opt.____(lambda: ____(____, ____), var_list=[____, ____])

    # Print every 10th value of the loss
    if j % 10 == 0:
        print(loss_function(intercept, slope).numpy())

# Plot data and regression line
plot_results(intercept, slope)
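For comparison, one possible completion of the scaffold is sketched below. It assumes keras refers to tf.keras (or the standalone keras package) and that intercept, slope, loss_function(), and plot_results() are already defined in the exercise environment.

# A possible completion (assumes keras, intercept, slope, loss_function,
# and plot_results exist as described above)
# Initialize an Adam optimizer with a learning rate of 0.5
opt = keras.optimizers.Adam(0.5)

for j in range(100):
    # Apply minimize: the loss is passed as a zero-argument lambda, and the
    # variables to update are supplied through var_list
    opt.minimize(lambda: loss_function(intercept, slope), var_list=[intercept, slope])

    # Print every 10th value of the loss
    if j % 10 == 0:
        print(loss_function(intercept, slope).numpy())

# Plot data and regression line
plot_results(intercept, slope)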