Adversarial attack classification
Imagine you're a Data Scientist on a mission to safeguard machine learning models from malicious attacks. To do so, you need to be aware of the different attacks that you and your model could encounter. Knowing these vulnerabilities will allow you to protect your models against adversarial threats.
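To make the idea of an adversarial attack concrete, here is a minimal sketch of one well-known technique, the Fast Gradient Sign Method (FGSM), applied to the embedding layer of a toy text classifier. The model, tensors, and epsilon value are illustrative assumptions, not part of this exercise.

```python
# Illustrative sketch (assumed setup): FGSM perturbation of text embeddings
import torch
import torch.nn as nn

# Toy classifier: mean-pooled embeddings followed by a linear layer
class ToyTextClassifier(nn.Module):
    def __init__(self, vocab_size=100, embed_dim=16, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward_from_embeddings(self, embedded):
        return self.fc(embedded.mean(dim=1))

model = ToyTextClassifier()
criterion = nn.CrossEntropyLoss()

token_ids = torch.randint(0, 100, (1, 10))   # one sequence of 10 tokens
label = torch.tensor([1])

# Embed the tokens and track gradients with respect to the embeddings
embedded = model.embedding(token_ids).detach().requires_grad_(True)
loss = criterion(model.forward_from_embeddings(embedded), label)
loss.backward()

# FGSM: nudge the embeddings in the direction that increases the loss
epsilon = 0.05
adversarial_embedded = embedded + epsilon * embedded.grad.sign()
```

The key design choice is perturbing the continuous embeddings rather than the discrete tokens, since gradients are only defined for continuous inputs; other attack families (such as data poisoning or model extraction) work quite differently and do not rely on input gradients at all.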
This exercise is part of the course Deep Learning for Text with PyTorch.