Exercise

# Compile and fit the model

Now that you have a model with 2 outputs, compile it with 2 loss functions: mean absolute error (MAE) for `'score_diff'` and binary cross-entropy (also known as logloss) for `'won'`. Then fit the model with `'seed_diff'` and `'pred'` as inputs. For outputs, predict `'score_diff'` and `'won'`.

This model can use the scores of the games to make sure that close games (small score diff) have lower win probabilities than blowouts (large score diff).

The regression problem is easier than the classification problem because MAE punishes the model less for a loss due to random chance. For example, if `score_diff` is -1 and `won` is 0, that means `team_1` had some bad luck and lost by a single free throw. The data for the easy problem helps the model find a solution to the hard problem.
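To see why a one-point loss is "easier" for the regression head, compare the two losses on that game directly. This is a hypothetical illustration (the prediction values `1` and `0.55` are made up, not from the exercise data): a model that slightly favored `team_1` pays a small MAE penalty but a full logloss penalty.

```python
import numpy as np

# Mean absolute error for a single regression prediction
def mae(y_true, y_pred):
    return abs(y_true - y_pred)

# Binary cross-entropy (logloss) for a single probability prediction
def logloss(y_true, p):
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# Close game: score_diff = -1, won = 0. Suppose the model predicted
# team_1 to win by 1 point (score_diff head) with probability 0.55 (won head).
mae_penalty = mae(-1, 1)           # off by only 2 points -> small penalty
ll_penalty = logloss(0, 0.55)      # counted as a fully wrong classification

print(mae_penalty)   # → 2
print(ll_penalty)    # roughly 0.8, larger than logloss for a confident correct call
```

The score margin tells the model the miss was marginal; the win/loss label alone cannot.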

Instructions

**100 XP**

- Import `Adam` from `keras.optimizers`.
- Compile the model with 2 losses: `'mean_absolute_error'` and `'binary_crossentropy'`, and use the Adam optimizer with a learning rate of 0.01.
- Fit the model with the `'seed_diff'` and `'pred'` columns as the inputs and the `'score_diff'` and `'won'` columns as the targets.
- Use 10 epochs and a batch size of 16384.