From predicting medical outcomes to managing retirement funds, we place a great deal of trust in machine learning (ML) and artificial intelligence (AI) systems, even though we know they are vulnerable to attack and can sometimes fail us completely. In this course, instructor Diana Kelley draws on real-world examples from the latest ML research and walks through the ways ML and AI can fail, offering pointers on how to design, build, and maintain resilient systems.
Learn about intentional failures caused by attacks and unintentional failures caused by design flaws and implementation issues. Security threats and privacy risks are serious, but with the right tools and preparation you can reduce them. Diana explains some of the most effective approaches and techniques for building robust and resilient ML systems, such as dataset hygiene, adversarial training, and API access control.
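To give a flavor of one of those techniques, the sketch below shows what adversarial training can look like in code, using PyTorch and the fast gradient sign method (FGSM). This is purely illustrative and not material from the course: the model, random data, epsilon value, and function names are placeholders chosen for this example.

```python
# A minimal sketch of FGSM-based adversarial training in PyTorch.
# All names, hyperparameters, and data here are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.1):
    """Craft FGSM adversarial examples: one signed-gradient step on the inputs."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Nudge each input in the direction that increases the loss, then clamp to a valid range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """One training step on a 50/50 mix of clean and adversarial examples."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Tiny stand-in classifier and random data, just so the sketch runs end to end.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.rand(32, 1, 28, 28)      # fake "images" scaled to [0, 1]
    y = torch.randint(0, 10, (32,))    # fake class labels
    for step in range(3):
        print(f"step {step}: loss = {adversarial_training_step(model, optimizer, x, y):.4f}")
```

The idea is that by training on inputs deliberately perturbed to maximize the loss, the model becomes less sensitive to small, attacker-chosen changes at inference time.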