
Requesting moderation

Aside from text and chat completion models, OpenAI provides models with other capabilities, including text moderation. OpenAI's text moderation model is designed to evaluate prompts and responses and determine whether they violate OpenAI's usage policies, such as by inciting hate speech or promoting violence.

In this exercise, you'll test OpenAI's moderation functionality on a sentence that traditional word-detection algorithms might have flagged as containing violent content.
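To make the idea concrete, here is a minimal sketch of a moderation request (not part of the exercise; the example sentence is made up, and it assumes the openai Python library v1+ with the API key available in the OPENAI_API_KEY environment variable). A naive keyword filter might trip on the word "killing", whereas the moderation model typically reports a very low violence score:

from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Ask the Moderations endpoint to evaluate an innocuous sentence
response = client.moderations.create(
    input="The mosquitoes this summer are killing me."
)

result = response.results[0]
print(result.flagged)                   # typically False for this sentence
print(result.category_scores.violence)  # typically a score close to 0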

This exercise is part of the course

Multi-Modal Systems with the OpenAI API


Exercise instructions

  • Check if "My favorite book is To Kill a Mockingbird." violates OpenAI’s policies using the Moderations endpoint.
  • Print the category scores to see the results.

Hands-on interactive exercise

Try out this exercise by completing this sample code.

client = OpenAI(api_key="")

# Create a request to the Moderations endpoint
response = client.____.____(
    ____="My favorite book is To Kill a Mockingbird."
)

# Print the category scores
print(response.results[0].____)
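For reference, one possible completion of the scaffold above (a sketch assuming the openai Python library v1+; here the API key is read from the OPENAI_API_KEY environment variable rather than being passed as an empty string):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Create a request to the Moderations endpoint
response = client.moderations.create(
    input="My favorite book is To Kill a Mockingbird."
)

# Print the category scores
print(response.results[0].category_scores)

Each category score is a value between 0 and 1, with higher values indicating greater confidence that the input falls into that category.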