
Generate sentence with context

In this exercise, you are going to experiment with a pre-trained model for text generation. The model is already loaded in the environment as the model variable, together with the initialize_params() and get_next_token() functions.

The latter uses the pre-trained model to predict the next character and returns three variables: the next character next_char, the updated sentence res, and the shifted text seq that will be used to predict the one after it.

You will define a function that receives as inputs a pre-trained model and a string that will be the start of the generated sentence. This is good practice for generating text with context. The limit of 100 characters is just an example; you can use other limits (or no limit at all) in your applications.

This exercise is part of the course

Recurrent Neural Networks (RNNs) for Language Modeling with Keras


Exercise instructions

  • Pass the initial_text variable to the initialize_params() function.
  • Create conditions to stop the loop when the counter reaches 100 or a dot (r'.') is found.
  • Pass the initial values res, seq to the get_next_token() function to obtain the next char.
  • Print the example phrase generated by the defined function.

Hands-on interactive exercise

Try this exercise by completing the sample code.

def generate_phrase(model, initial_text):
    # Initialize variables  
    res, seq, counter, next_char = initialize_params(____)
    
    # Loop until stop conditions are met
    while counter < ____ ____ next_char != r'.':
        # Get next char using the model and append to the sentence
        next_char, res, seq = get_next_token(model, ____, ____)
        # Update the counter
        counter = counter + 1
    return res
  
# Create a phrase
print(____(model, "I am not insane, "))
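
For reference, here is a minimal sketch of one possible completion. The real model, initialize_params(), and get_next_token() are provided by the exercise environment; the stand-ins defined below are hypothetical stubs (a random character picker instead of the Keras model) so the sketch can run on its own.

import random
import string

model = None  # placeholder for the pre-trained Keras model from the environment

def initialize_params(initial_text):
    # Stub: start the result and the seed sequence with the initial text,
    # with the counter at zero and no character generated yet
    return initial_text, initial_text, 0, ''

def get_next_token(model, res, seq):
    # Stub: pick a random character instead of querying the model
    next_char = random.choice(string.ascii_lowercase + ' .')
    return next_char, res + next_char, (seq + next_char)[1:]

def generate_phrase(model, initial_text):
    # Initialize variables
    res, seq, counter, next_char = initialize_params(initial_text)

    # Loop until 100 characters are generated or a dot ends the sentence
    while counter < 100 and next_char != '.':
        # Get the next char using the model and append it to the sentence
        next_char, res, seq = get_next_token(model, res, seq)
        # Update the counter
        counter = counter + 1
    return res

# Create a phrase
print(generate_phrase(model, "I am not insane, "))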