
Product review moderation

You're building a content moderation system for user reviews on an e-commerce platform. Your goal is to define and implement a function that uses Claude to detect inappropriate content based on varying levels of strictness.

The boto3 and json libraries, as well as the bedrock client and model_id, have been preloaded.
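
Outside the exercise environment, the preloaded objects could be set up roughly as follows (a minimal sketch; the region and model ID below are illustrative choices, not values specified by the exercise):

import json
import boto3

# Create the Bedrock runtime client used to invoke models
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Example Claude model ID on Bedrock; the exercise preloads its own model_id
model_id = "anthropic.claude-3-haiku-20240307-v1:0"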

This exercise is part of the course Introduction to Amazon Bedrock.

Exercise instructions

  • Define a moderate_content() function that accepts text and a strictness level, with "medium" as the default.

  • Use a dictionary to set the instruction based on the strictness level: "high", "medium", or "low".

  • Add a temperature of 0.2 to keep the response consistent.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Define the function
def ____(____, strictness_level="____"):
    # Define the dictionary of moderation instructions
    instruction = {"____": "Strictly analyze for inappropriate content. ",
                   "____": "Check for obviously toxic language. ",
                   "____": "Check the tone. "}

    request_body = json.dumps({"anthropic_version": "bedrock-2023-05-31", "max_tokens": 50,
                               # Add a low temperature
                               "temperature": ____,
                               "messages": [{"role": "user", "content": f"{instruction[strictness_level]}\n{text}"}]})
    
    response = bedrock.invoke_model(body=request_body, modelId=model_id)
    response_body = json.loads(response.get('body').read().decode())
    return response_body
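
For reference, this is one possible completion of the sample code. The function name, default strictness level, dictionary keys, and temperature follow the instructions above; the preloaded bedrock client and model_id are assumed to be available:

# Define the function with "medium" as the default strictness level
def moderate_content(text, strictness_level="medium"):
    # Map each strictness level to a moderation instruction
    instruction = {"high": "Strictly analyze for inappropriate content. ",
                   "medium": "Check for obviously toxic language. ",
                   "low": "Check the tone. "}

    request_body = json.dumps({"anthropic_version": "bedrock-2023-05-31", "max_tokens": 50,
                               # A low temperature keeps the response consistent
                               "temperature": 0.2,
                               "messages": [{"role": "user", "content": f"{instruction[strictness_level]}\n{text}"}]})

    response = bedrock.invoke_model(body=request_body, modelId=model_id)
    response_body = json.loads(response.get('body').read().decode())
    return response_body

A call such as moderate_content("This product broke after one day!", strictness_level="high") returns the parsed response body; with Claude's Messages format, the generated text is typically found under response_body["content"][0]["text"].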