Mitigating prompt injection
Your are developing a customer service chatbot for online banking and have been advised that one of the concerns of the team is that malicious actors might use prompt injection to extract sensitive information from the user. The chatbot you're developing should only answer high level questions and not go into any details about personal information.
Which of the following are techniques to mitigate the risk of prompt injection from malicious actors?
Questo esercizio fa parte del corso
Developing AI Systems with the OpenAI API
Esercizio pratico interattivo
Passa dalla teoria alla pratica con uno dei nostri esercizi interattivi
Inizia esercizio