Train a linear model
In this exercise, we will pick up where the previous exercise ended. The model's intercept and slope have been defined and initialized as the variables intercept and slope. Additionally, a function, loss_function(intercept, slope), has been defined, which computes the loss using the data and model variables.
You will now define an optimization operation as opt. You will then train a univariate linear model by minimizing the loss to find the optimal values of intercept and slope. Note that the opt operation will try to move closer to the optimum with each step, but will require many steps to find it. Thus, you must repeatedly execute the operation.
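For reference, a minimal sketch of what the pre-defined variables and loss_function() might look like is given below. The initial values, the names features and targets, and the use of a mean squared error loss are assumptions for illustration; the exercise does not show its setup code.

import tensorflow as tf
from tensorflow import keras

# Trainable model variables (already defined in the exercise environment)
intercept = tf.Variable(0.1, dtype=tf.float32)
slope = tf.Variable(0.1, dtype=tf.float32)

# Hypothetical data; the actual exercise supplies its own features and targets
features = tf.constant([1.0, 2.0, 3.0, 4.0], tf.float32)
targets = tf.constant([2.1, 3.9, 6.2, 8.0], tf.float32)

def loss_function(intercept, slope):
    # Univariate linear model: predicted target = intercept + slope * feature
    predictions = intercept + slope * features
    # Mean squared error between observed and predicted targets
    return keras.losses.mse(targets, predictions)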
This exercise is part of the course
Introduction to TensorFlow in Python
Exercise instructions
- Initialize an Adam optimizer as opt with a learning rate of 0.5.
- Apply the .minimize() method to the optimizer.
- Pass loss_function() with the appropriate arguments as a lambda function to .minimize().
- Supply the list of variables that need to be updated to var_list.
Hands-on interactive exercise
Give this exercise a try by completing the sample code below.
# Initialize an Adam optimizer
opt = keras.optimizers.____(0.5)
for j in range(100):
    # Apply minimize, pass the loss function, and supply the variables
    opt.____(lambda: ____(____, ____), var_list=[____, ____])

    # Print every 10th value of the loss
    if j % 10 == 0:
        print(loss_function(intercept, slope).numpy())
# Plot data and regression line
plot_results(intercept, slope)
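For comparison, one way of completing the template is sketched below. It assumes keras refers to tensorflow.keras with the TF 2.x optimizer API used in this course, where .minimize() accepts a zero-argument callable returning the loss, and that intercept, slope, loss_function(), and plot_results() are already available in the environment; plot_results() is a plotting helper provided by the exercise, not a standard library function.

# Initialize an Adam optimizer with a learning rate of 0.5
opt = keras.optimizers.Adam(0.5)

for j in range(100):
    # Apply minimize: pass the loss as a lambda and the variables to update
    opt.minimize(lambda: loss_function(intercept, slope), var_list=[intercept, slope])

    # Print every 10th value of the loss
    if j % 10 == 0:
        print(loss_function(intercept, slope).numpy())

# Plot data and regression line
plot_results(intercept, slope)

Because each call to opt.minimize() takes only a single small step toward the optimum, the loop runs 100 iterations, and the printed loss values should decrease as intercept and slope approach their optimal values.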