Platform: Desktop
Audience: Python developers working on machine learning systems
Preparedness: General machine learning and Python development
Standards and references: CWE and Fortify Taxonomy
Group size: 12 participants
Outline
What you will learn
Description
The course bridges the two worlds of cybersecurity and machine learning. Starting from core cybersecurity principles, it highlights how ML systems are exposed to threats: both pre-existing threats from the world of software security that affect these systems in unexpected ways, and entirely new kinds of threats that require a deeper understanding of adversarial machine learning.
The first step in understanding the security of ML is to analyze the relevant threats. We synthesize a threat model (the assets to protect, the security requirements, the attack surface, potential attacker profiles, and the threats themselves, represented via attack trees) based on the existing threat models of NIST, Microsoft, BIML, and OWASP. We then explore the relationship between security and ML, from ML-driven static analysis tools and intrusion detection systems to a brief glimpse at ML-assisted attack tools used by hackers today. We also look at the most significant threats against Large Language Models (LLMs), following the OWASP LLM Top 10 2025 (among others).

The bulk of the course deals with adversarial machine learning: a detailed discussion of the four main attack types (evasion, poisoning, model inversion, and model stealing) as well as the practical aspects of these attacks. Various labs on adversarial attack techniques (model editing, poisoning, evasion, transfer attacks, model inversion, model extraction) offer practical insights into the vulnerabilities, while a discussion of defense techniques such as adversarial training, certified robustness, and gradient masking presents the possible countermeasures.
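To give a flavour of the evasion labs, the sketch below shows the Fast Gradient Sign Method (FGSM), one of the simplest evasion techniques. It is only an illustrative example under stated assumptions: the PyTorch classifier `model`, the input batch `x`, the labels `y`, and the perturbation budget `epsilon` are hypothetical placeholders, not the actual lab material.

    import torch
    import torch.nn.functional as F

    def fgsm_evasion(model, x, y, epsilon=0.03):
        """Craft adversarial examples with the Fast Gradient Sign Method (FGSM)."""
        # Hypothetical example: model, x, y, and epsilon are placeholders.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)  # loss the attacker wants to increase
        loss.backward()                          # gradient of the loss w.r.t. the input
        # Take one step that maximally increases the loss within an L-infinity
        # budget of epsilon, then clamp back to the valid pixel range [0, 1].
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

A single gradient-sign step like this is often enough to flip the prediction of an undefended classifier, which is exactly the kind of weakness that defenses such as adversarial training aim to reduce.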
In the rest of the course, we discuss some common software security weakness categories, such as input validation, improper use of security features, time and state, error handling, and using vulnerable components, putting them in the context of machine learning wherever relevant. Finally, participants are equipped with a solid foundation in cryptography, covering the essential knowledge and skills every developer should have, as well as techniques of special interest for machine learning such as multiparty computation, differential privacy, and fully homomorphic encryption.
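As a taste of the privacy techniques mentioned above, the sketch below shows the Laplace mechanism, a basic building block of differential privacy. The query answer, sensitivity, and epsilon values are illustrative assumptions, not course content.

    import numpy as np

    def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
        """Release a numeric query answer with epsilon-differential privacy."""
        # Hypothetical example values; sensitivity is the query's maximum change
        # when one individual's data is added or removed.
        rng = rng or np.random.default_rng()
        scale = sensitivity / epsilon  # noise scale grows as the privacy budget shrinks
        return true_answer + rng.laplace(loc=0.0, scale=scale)

    # Example: a counting query (sensitivity 1) answered with a privacy budget of 0.5
    noisy_count = laplace_mechanism(true_answer=42, sensitivity=1.0, epsilon=0.5)

The smaller the epsilon, the stronger the privacy guarantee and the noisier the released answer.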