
Reversing sentences

Here you will learn how to reverse sentences for the encoder model. As discussed, reversing source sentences helps to form a strong initial connection between the encoder and the decoder, which boosts the performance of the model. However, always remember that the benefit depends on the two languages you are translating between: as long as they share the same subject, verb, and object order, reversing will benefit the model.
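For instance, reversing a batch of padded ID sequences along the time axis can be done with NumPy's ::-1 slice syntax. The snippet below is a small standalone illustration with made-up word IDs, not part of the exercise itself:

import numpy as np

# A toy batch of two post-padded word-ID sequences (shape: batch x time)
seqs = np.array([[4, 12, 7, 0, 0],
                 [9,  3, 0, 0, 0]])

# Reverse every sequence on the time dimension; the first ID becomes the last
rev = seqs[:, ::-1]
print(rev)
# [[ 0  0  7 12  4]
#  [ 0  0  0  3  9]]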

In this exercise you will modify the sents2seqs() function so that it can reverse sentences when needed. The user can specify a boolean keyword argument reverse which controls whether the text is reversed.

This exercise is part of the course Machine Translation with Keras.

Exercise instructions

  • Write the sents2seqs() function signature by adding a new keyword argument reverse which defaults to False.
  • Reverse the returned sequence IDs on the time dimension (using ::-1 syntax), so that the first word ID becomes the last.
  • Call sents2seqs() to reverse the given sentences, keeping all other parameter values at their defaults.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

sentences = ["california is never rainy during july ."]
# Add new keyword parameter reverse which defaults to False
def ____(input_type, sentences, onehot=False, pad_type='post', ____=____):     
    encoded_text = en_tok.texts_to_sequences(sentences)
    preproc_text = pad_sequences(encoded_text, padding=pad_type, truncating='post', maxlen=en_len)
    if reverse:
        # Reverse the text using numpy axis reversing
        preproc_text = preproc_text[:, ____]
    if onehot:
        preproc_text = to_categorical(preproc_text, num_classes=en_vocab)
    return preproc_text
# Call sents2seqs to get the padded and reversed sequence of IDs
pad_seq = ____('source', ____, ____=____)
rev_sent = [en_tok.index_word[wid] for wid in pad_seq[0][-6:]] 
print('\tReversed: ',' '.join(rev_sent))
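For reference, here is one possible completed version of the exercise. This is a sketch rather than the official solution: it assumes en_tok (the English Keras Tokenizer), en_len (the padded sequence length), and en_vocab (the vocabulary size) are already defined in the exercise environment, as in earlier exercises, and that pad_sequences and to_categorical come from Keras.

from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical

sentences = ["california is never rainy during july ."]

# New keyword parameter reverse which defaults to False
def sents2seqs(input_type, sentences, onehot=False, pad_type='post', reverse=False):
    encoded_text = en_tok.texts_to_sequences(sentences)
    preproc_text = pad_sequences(encoded_text, padding=pad_type, truncating='post', maxlen=en_len)
    if reverse:
        # Reverse the sequences on the time dimension using numpy axis reversing
        preproc_text = preproc_text[:, ::-1]
    if onehot:
        preproc_text = to_categorical(preproc_text, num_classes=en_vocab)
    return preproc_text

# Call sents2seqs to get the padded and reversed sequence of IDs
pad_seq = sents2seqs('source', sentences, reverse=True)
rev_sent = [en_tok.index_word[wid] for wid in pad_seq[0][-6:]]
print('\tReversed: ', ' '.join(rev_sent))

Because padding is applied at the end ('post') and the sequence is then reversed, the pad IDs end up at the front, so slicing the last six positions recovers the actual words in reverse order.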