Exercise

Defining the encoder

Here you'll take your first step towards creating a machine translation model: implementing the encoder. The encoder you will implement is very simple compared to the complex models used in real-world applications such as the Google machine translation service. But don't worry: though the model is simple, the concepts are the same as in those complex models.

You will see that en_vocab is set to a smaller value (150) than the actual vocabulary size (228) found earlier. Making the vocabulary smaller reduces the memory footprint of the model. Trimming the vocabulary slightly is fine because doing so removes only the rarest words, and for machine translation tasks rare words usually carry less value than common words.
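Capping a vocabulary at its most frequent words can be done with a simple frequency count. The sketch below is a hypothetical illustration (the toy corpus and the cap of 150 are assumed values, not from the exercise data):

```python
from collections import Counter

# Assumed toy corpus; the real exercise uses a larger dataset
corpus = [
    "new jersey is sometimes quiet during autumn",
    "california is usually quiet during march",
]

# Count word frequencies across all sentences
word_counts = Counter(w for sentence in corpus for w in sentence.split())

# Keep only the en_vocab most frequent words; rarer words are dropped
en_vocab = 150
vocabulary = [word for word, _ in word_counts.most_common(en_vocab)]
```

With a real corpus, words beyond the 150 most common ones would simply be excluded from `vocabulary` and typically mapped to an unknown-word token.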

Instructions
100 XP
  • Define an Input layer for an input which has a vocabulary size en_vocab and a sequence length en_len, using the shape argument.
  • Define a keras.layers.GRU layer that has hsize hidden units and returns its state in addition to its output.
  • Get the outputs from the GRU layer by feeding in en_inputs, assigning the output to en_out and the GRU state to en_state.
  • Define a keras.models.Model whose input is en_inputs and the output is the en_state and print the model summary.
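Once the steps above are complete, the encoder can be sketched as follows. This is a minimal sketch, not the exercise's official solution: the values of en_len, en_vocab, and hsize are assumed for illustration, and the input is taken to be onehot-encoded words of shape (en_len, en_vocab).

```python
from tensorflow.keras.layers import Input, GRU
from tensorflow.keras.models import Model

# Assumed values for illustration
en_len, en_vocab, hsize = 15, 150, 48

# Input layer: a sequence of en_len onehot-encoded words
en_inputs = Input(shape=(en_len, en_vocab))

# GRU layer with hsize hidden units that also returns its final state
gru = GRU(hsize, return_state=True)
en_out, en_state = gru(en_inputs)

# Encoder model mapping the input sequence to the final GRU state
encoder = Model(inputs=en_inputs, outputs=en_state)
encoder.summary()
```

The final state `en_state` is what a decoder would later consume as its initial state, which is why the model's output is `en_state` rather than `en_out`.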