Get startedGet started for free

Mitigating prompt injection

Your are developing a customer service chatbot for online banking and have been advised that one of the concerns of the team is that malicious actors might use prompt injection to extract sensitive information from the user. The chatbot you're developing should only answer high level questions and not go into any details about personal information.

Which of the following are techniques to mitigate the risk of prompt injection from malicious actors?

This exercise is part of the course

Developing AI Systems with the OpenAI API

View Course

Hands-on interactive exercise

Turn theory into action with one of our interactive exercises

Start Exercise