
Decode your predictions

Your LSTM model has already been trained (details are in the previous exercise's success message), so you don't have to wait. It's time to define a function that decodes its predictions. The trained model will be passed to this function as a default parameter.

Since you are predicting with a model whose output layer uses the softmax function, NumPy's argmax() can be used to obtain the index of the most probable next word from the output vector of probabilities.
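As a quick illustration of how argmax() picks the winning index, here is a minimal sketch using a made-up probability vector (not actual model output):

import numpy as np

# Hypothetical softmax output over a 4-word vocabulary (illustration only)
probs = np.array([[0.10, 0.70, 0.15, 0.05]])

# argmax(axis=1) returns the column index of the largest value in each row;
# [0] extracts that index for the first (and only) prediction
print(probs.argmax(axis=1)[0])  # prints 1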

The tokenizer you previously created and fitted is loaded for you. You will make use of its internal index_word dictionary to turn the model's next-word prediction (an integer index) into the actual word it represents.
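For example, a tokenizer's index_word dictionary maps each integer index back to its word. This is a minimal sketch with a toy corpus, assuming the tensorflow.keras API; it is not the tokenizer fitted for this exercise:

from tensorflow.keras.preprocessing.text import Tokenizer

# Toy corpus, used only to illustrate index_word
tokenizer = Tokenizer()
tokenizer.fit_on_texts(['the cat sat on the mat'])

print(tokenizer.index_word)     # {1: 'the', 2: 'cat', 3: 'sat', 4: 'on', 5: 'mat'}
print(tokenizer.index_word[2])  # 'cat'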

You're very close to experimenting with your model!

This exercise is part of the course

Introduction to Deep Learning with Keras


Instructions

  • Use texts_to_sequences() to turn the test_text parameter into a sequence of numbers.
  • Get the model's next-word prediction by passing in test_seq. The index of the word with the highest probability is obtained by calling .argmax(axis=1)[0] on the NumPy array of predictions.
  • Return the word that maps to the prediction using the tokenizer's index_word dictionary.

Hands-on interactive exercise

Try this exercise by completing this sample code.

def predict_text(test_text, model = model):
  if len(test_text.split()) != 3:
    print('Text input should be 3 words!')
    return False
  
  # Turn the test_text into a sequence of numbers
  test_seq = tokenizer.texts_to_sequences([____])
  test_seq = np.array(test_seq)
  
  # Use the model passed as a parameter to predict the next word
  pred = ____.predict(____).argmax(axis=1)[0]
  
  # Return the word that maps to the prediction
  return tokenizer.____[____]
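For reference, here is one way the completed function could look. This is a sketch that assumes, as in the exercise, that model, tokenizer, and np (NumPy) are already loaded in the environment:

def predict_text(test_text, model=model):
  if len(test_text.split()) != 3:
    print('Text input should be 3 words!')
    return False

  # Turn the test_text into a sequence of numbers
  test_seq = tokenizer.texts_to_sequences([test_text])
  test_seq = np.array(test_seq)

  # Use the model passed as a parameter to predict the next word
  pred = model.predict(test_seq).argmax(axis=1)[0]

  # Return the word that maps to the prediction
  return tokenizer.index_word[pred]

Calling predict_text('meet me at') (the three-word input here is just an illustration) would then return the single word the model considers most likely to follow.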