
Requesting moderation

Aside from text and chat completion models, OpenAI provides models with other capabilities, including text moderation. OpenAI's text moderation model evaluates prompts and responses to determine whether they violate OpenAI's usage policies, such as those prohibiting hate speech and the promotion of violence.

In this exercise, you'll test OpenAI's moderation functionality on a sentence that a traditional word-detection algorithm might have flagged as containing violent content.
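For contrast, here is a toy keyword filter (a minimal sketch, not any particular library) showing why simple word matching produces false positives on this sentence: it flags the word "kill" regardless of context.

```python
# Toy keyword filter illustrating how word matching flags text out of context.
# The blocklist below is made up for demonstration purposes.
BLOCKLIST = {"kill", "attack", "bomb"}

def naive_flag(text: str) -> bool:
    # Flags the text if any blocklisted word appears, ignoring context
    return any(word in BLOCKLIST for word in text.lower().split())

print(naive_flag("My favorite book is To Kill a Mockingbird."))  # True
```

A context-aware moderation model, by contrast, can recognize that this sentence refers to a book title and is harmless.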

This exercise is part of the course

Multi-Modal Systems with the OpenAI API


Exercise instructions

  • Check if "My favorite book is To Kill a Mockingbird." violates OpenAI’s policies using the Moderations endpoint.
  • Print the category scores to see the results.

Hands-on interactive exercise

Try this exercise by completing the sample code.

# Import the OpenAI client
from openai import OpenAI

client = OpenAI(api_key="")

# Create a request to the Moderations endpoint
response = client.moderations.create(
    input="My favorite book is To Kill a Mockingbird."
)

# Print the category scores
print(response.results[0].category_scores)
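The `category_scores` object holds one probability-like score per moderation category. A common next step is to find the highest-scoring category. The sketch below uses made-up scores in a plain dictionary (the real endpoint returns an object whose attributes match these category names), so it runs without an API key.

```python
# Illustrative category scores shaped like those from the Moderations
# endpoint; the numeric values here are invented for demonstration.
sample_scores = {
    "harassment": 0.00001,
    "hate": 0.00002,
    "self-harm": 0.00001,
    "sexual": 0.00003,
    "violence": 0.00452,  # made-up value: still far below any flagging level
}

# Find the category with the highest score
top_category = max(sample_scores, key=sample_scores.get)
print(top_category, sample_scores[top_category])
```

Even the highest score here is tiny, which is how the model signals that the sentence is harmless despite containing the word "Kill".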