Compile and fit the model
Now that you have a model with 2 outputs, compile it with 2 loss functions: mean absolute error (MAE) for 'score_diff' and binary cross-entropy (also known as logloss) for 'won'. Then fit the model with 'seed_diff' and 'pred' as inputs. For outputs, predict 'score_diff' and 'won'.
This model can use the scores of the games to make sure that close games (small score diff) have lower win probabilities than blowouts (large score diff).
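For context, here is a minimal sketch of the kind of two-output model this exercise assumes. The actual model was built in an earlier exercise; the exact layers below are an assumption, but wiring the win-probability output off the score-difference output is one way to get the behavior described above.

from keras.layers import Input, Dense
from keras.models import Model

# Assumed architecture (illustrative only): one 2-column input for seed_diff and pred
input_tensor = Input(shape=(2,))
# Regression head: predicted score difference for team_1
score_diff_output = Dense(1)(input_tensor)
# Classification head: win probability derived from the predicted score difference
won_output = Dense(1, activation='sigmoid')(score_diff_output)
model = Model(inputs=input_tensor, outputs=[score_diff_output, won_output])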
The regression problem is easier than the classification problem because MAE punishes the model less for a loss due to random chance. For example, if score_diff is -1 and won is 0, that means team_1 had some bad luck and lost by a single free throw. The data for the easy problem helps the model find a solution to the hard problem.
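To see this in numbers, take the example above: score_diff is -1 and won is 0. If the model predicts a score difference near -1 but still gives team_1 roughly a 45% chance of winning (the predicted values here are made up for illustration), the MAE term is tiny while the logloss term is not:

import numpy as np

# Hypothetical predictions for a one-point loss by team_1
true_score_diff, pred_score_diff = -1.0, -1.0
true_won, pred_won_prob = 0.0, 0.45

# Mean absolute error: ~0, because the score prediction was essentially right
mae = abs(true_score_diff - pred_score_diff)

# Binary cross-entropy: larger, because the model gave a 45% win chance to the losing team
logloss = -(true_won * np.log(pred_won_prob) + (1 - true_won) * np.log(1 - pred_won_prob))

print(mae)      # 0.0
print(logloss)  # ~0.60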
This exercise is part of the course Advanced Deep Learning with Keras.
Exercise instructions
- Import Adam from keras.optimizers.
- Compile the model with 2 losses: 'mean_absolute_error' and 'binary_crossentropy', and use the Adam optimizer with a learning rate of 0.01.
- Fit the model with the 'seed_diff' and 'pred' columns as the inputs and the 'score_diff' and 'won' columns as the targets.
- Use 10 epochs and a batch size of 16384.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
# Import the Adam optimizer
____
# Compile the model with 2 losses and the Adam optimizer with a higher learning rate
model.____(loss=['____', '____'], optimizer=____)
# Fit the model to the tournament training data, with 2 inputs and 2 outputs
model.____(games_tourney_train[['____', '____']],
           [games_tourney_train[['____']], games_tourney_train[['____']]],
           epochs=____,
           verbose=True,
           batch_size=____)
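For reference, one possible completion of the blanks that follows the instructions above (note: older standalone Keras versions spell the learning-rate argument lr rather than learning_rate):

# Import the Adam optimizer
from keras.optimizers import Adam

# Compile the model with 2 losses and the Adam optimizer with a higher learning rate
model.compile(loss=['mean_absolute_error', 'binary_crossentropy'],
              optimizer=Adam(learning_rate=0.01))

# Fit the model to the tournament training data, with 2 inputs and 2 outputs
model.fit(games_tourney_train[['seed_diff', 'pred']],
          [games_tourney_train[['score_diff']], games_tourney_train[['won']]],
          epochs=10,
          verbose=True,
          batch_size=16384)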