Adversarial attack classification

Imagine you're a Data Scientist on a mission to safeguard machine learning models from malicious attacks. To do that, you need to know the different kinds of attacks you and your model could encounter: understanding these vulnerabilities is what lets you defend your models against adversarial threats.
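
The exercise content itself isn't shown here, but as a rough idea of what one class of adversarial attack looks like in practice, below is a minimal sketch of an evasion-style attack using the Fast Gradient Sign Method (FGSM) on the embedding layer of a toy PyTorch text classifier. The model (`TinyTextClassifier`), the random input, and the `epsilon` value are illustrative assumptions, not part of the course material.

```python
# Illustrative sketch only: FGSM perturbation of a text classifier's embeddings.
import torch
import torch.nn as nn

class TinyTextClassifier(nn.Module):
    """Hypothetical bag-of-embeddings classifier, used only for illustration."""
    def __init__(self, vocab_size=1000, embed_dim=32, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward_from_embeddings(self, embedded):
        # Average-pool over the token dimension, then classify.
        return self.fc(embedded.mean(dim=1))

    def forward(self, token_ids):
        return self.forward_from_embeddings(self.embedding(token_ids))

model = TinyTextClassifier()
loss_fn = nn.CrossEntropyLoss()

token_ids = torch.randint(0, 1000, (1, 10))  # one toy "sentence" of 10 token ids
label = torch.tensor([1])

# FGSM: nudge the continuous embeddings in the direction that increases the loss.
embedded = model.embedding(token_ids).detach().requires_grad_(True)
loss = loss_fn(model.forward_from_embeddings(embedded), label)
loss.backward()

epsilon = 0.1  # attack strength (assumed value)
adversarial_embedded = embedded + epsilon * embedded.grad.sign()

with torch.no_grad():
    clean_pred = model.forward_from_embeddings(embedded).argmax(dim=1)
    adv_pred = model.forward_from_embeddings(adversarial_embedded).argmax(dim=1)
print("clean prediction:", clean_pred.item(), "| adversarial prediction:", adv_pred.item())
```

With a trained model, a small, carefully chosen perturbation like this can flip the prediction even though the change is imperceptible at the input level; recognizing this kind of vulnerability is the point of classifying adversarial attacks in the first place.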

This exercise is part of the course Deep Learning for Text with PyTorch.

