Platform
Desktop
Audience
Python developers working on machine learning systems
Preparedness
General machine learning and Python development
Standards and references
CWE and Fortify Taxonomy
Group size
12 participants
Outline
What you will learn
Description
Your machine learning application works as intended, so you are done, right? But did you consider somebody poisoning your model by training it with intentionally malicious samples? Or sending specially-crafted input to your model – indistinguishable from normal input – that gets completely misclassified? Or feeding in oversized samples – a 16 GB image, for example – to crash the application? Because that's what the bad guys will do. And the list is far from complete.
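To make the second of these attacks concrete, here is a minimal sketch of one well-known evasion technique, the Fast Gradient Sign Method (FGSM). The PyTorch model, input tensor, labels, and epsilon value are illustrative assumptions, not course material:

```python
# Minimal FGSM sketch: nudge an input in the direction that most
# increases the model's loss, so it still looks normal to a human
# but gets misclassified. `model`, `x`, and `y` are assumed to be
# a trained PyTorch classifier, an input batch, and its true labels.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One step along the sign of the gradient maximizes the loss
    # increase per pixel while keeping the change imperceptibly small.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # stay in valid pixel range
```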
As a machine learning practitioner, you need to be just as paranoid as any other developer out there. Interest in attacking machine learning solutions is gaining momentum, so protecting against adversarial machine learning is essential. This requires not only awareness but also specific skills to protect your ML applications. The course helps you gain these skills by introducing cutting-edge attack and protection techniques from the ML domain.
Machine learning is software, after all. That's why this course also teaches common secure coding skills and discusses the security pitfalls of the Python programming language. Both the adversarial machine learning and the core secure coding topics come with plenty of hands-on labs and real-life stories, all to foster strong emotional engagement with security and to substantially improve code hygiene.
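As one example of such a Python pitfall (an illustrative sketch, not taken from the course outline): deserializing an untrusted model file with pickle executes arbitrary code, because a pickle payload can carry its own instructions.

```python
# Why loading untrusted model files with pickle is dangerous:
# unpickling can invoke arbitrary callables via __reduce__.
import pickle

class Exploit:
    def __reduce__(self):
        import os
        # Tells pickle to call os.system("echo pwned") on load.
        return (os.system, ("echo pwned",))

malicious_model = pickle.dumps(Exploit())
pickle.loads(malicious_model)  # prints "pwned": arbitrary code ran
```

Mitigations follow standard secure coding practice: never unpickle data from untrusted sources, and prefer integrity-checked or format-specific model serialization where available.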
So that you are prepared for the forces of the dark side.
So that nothing unexpected happens.
Nothing.