Fundamentals of AI algorithms in 90 minutes
The need for explainable AI
It seems like AI is everywhere these days, yet few people trust this technology, due in large part to black box algorithms, which prevent people—including the systems’ creators—from knowing and understanding why such algorithms make recommendations and decisions. Left unchecked, this lack of transparency can lead to biased outcomes that put people and businesses at risk. Luckily, there’s an answer to this problem: explainable AI—machine learning that can be understood by and explained to those outside engineering.
Join expert Lauren Maffeo for a 90-minute dive into explainable AI. You’ll learn how explainable AI’s greater transparency decreases bias, increases trust in AI, and leads teams toward interpretability by helping them predict how algorithms will behave. Whether you're trying to incorporate AI into your business or want to understand how algorithms make decisions, this course will help you quickly learn the core tenets of explainable AI, the cases for and against it, and how to use it in your own business.
What you'll learn and how you can apply it
By the end of this live online course, you’ll understand:
- What explainable AI is
- How explainable AI differs from black box algorithms
- When explainable AI is necessary
- Who should be involved in designing and using explainable AI
- How to define transparency requirements for data sources and algorithms
And you’ll be able to:
- Understand the trade-offs between fairness and accuracy
- Conquer the interpretability problem
- Know why bias testing is important
This training course is for you because...
- You’re a business person looking to use AI in your organization and are concerned about bias and security.
- You’re a product manager, software developer, or data scientist who wants to better understand AI.
- You want to increase trust in your products by using techniques that your consumers can understand.
Prerequisites
- Basic familiarity with machine learning, classifiers, and other aspects of the datasets used to train AI-powered products
- No coding or data science experience required
Recommended preparation
- Take Artificial Intelligence: An Overview of AI and Machine Learning (live online training course with Alex Castrounis)
- Read Can We Solve AI’s ‘Trust Problem’? (book)
About your instructor
Lauren Maffeo is an associate principal analyst at GetApp (a Gartner company), where she covers the impact of emerging tech like AI and blockchain on small and midsize business owners. She’s also a distinguished speaker with the Association for Computing Machinery and a community moderator for OpenSource.com. Lauren started her career as a freelance journalist in London covering tech trends for the Guardian and The Next Web and has since continued to report on and work within the global technology sector. She’s been cited by sources such as Information Management, TechTarget, CIO Online, DevOps Digest, the Atlantic, Entrepreneur, and Inc.com, and her writing on technology has also been cited by researchers at Cornell Law School, Northwestern University, and the University of Cambridge. She’s spoken at global events including Gartner’s Symposium in Florida, the World Web Forum in Zurich, Open Source Summit North America in Vancouver, and DrupalCon in Seattle.
In 2017, Lauren was named to The Drum’s “50 under 30” list of women worth watching in digital. That same year, she helped organize Women Startup Challenge Europe, which was the continent’s largest venture capital competition for women-led startups. She’s served as a mentor for Girls in Technology’s Maryland chapter, and DCA Live included her in its 2018 list of “the NEW Power Women of Tech.” Lauren was also shortlisted for the Future Stars of Tech Award in AI and Machine Learning by Information Age in 2019. She holds an MSc from the London School of Economics and a certificate in artificial intelligence implications for business strategy from MIT’s Sloan School of Management. She was interviewed for ACM’s Ubiquity journal in 2019 and has served as a guest speaker for ACM’s Washington, DC, speaker series.
Schedule
The timeframes are only estimates and may vary according to how the class is progressing.
Intro to explainable AI (30 minutes)
- Lecture: The history of explainable AI; the rise of black box algorithms
- Group discussion: Consider explainable AI in the context of your business. What types of decisions would it make? Who might ask you to explain its decision-making process?
- Hands-on exercise: Determine how an algorithm that makes biased decisions could negatively impact your business
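To make the bias exercise above concrete, here is a minimal sketch (not part of the course materials) of one common bias test: a disparate-impact check comparing approval rates across two groups. The decision data, group labels, and the 0.8 rule of thumb are illustrative assumptions, not course content.

```python
# Illustrative sketch: a simple disparate-impact check on hypothetical
# approval decisions. All data here are invented for illustration.

def disparate_impact(decisions, groups, privileged):
    """Ratio of approval rates: lowest unprivileged group / privileged group.
    A common (though debated) rule of thumb flags ratios below 0.8."""
    approved = {g: 0 for g in set(groups)}
    total = {g: 0 for g in set(groups)}
    for decision, group in zip(decisions, groups):
        total[group] += 1
        approved[group] += decision
    rates = {g: approved[g] / total[g] for g in total}
    unprivileged = [g for g in rates if g != privileged]
    return min(rates[g] for g in unprivileged) / rates[privileged]

# Hypothetical decisions (1 = approved) and group labels
decisions = [1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact(decisions, groups, privileged="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.67: below 0.8, worth a closer look
```

A ratio this far below 1.0 doesn't prove the algorithm is biased, but it is exactly the kind of signal that should trigger the deeper investigation the exercise asks for.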
Explainability versus accuracy (30 minutes)
- Lecture: The case against explainable AI; how to summarize a model’s inputs and outputs
- Group discussion: Do you believe explainability should be a prerequisite for using AI? Why or why not?
- Hands-on exercise: Determine whether an explainable or accurate AI model is the best fit for a given scenario
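As a rough companion to the exercise above, the explainability-versus-accuracy decision can be sketched as a simple scoring heuristic. The scenario fields and weights below are invented for illustration; they are not the course's framework.

```python
# Illustrative sketch: a toy decision aid for choosing between an
# explainable model and an accuracy-optimized one. The fields and
# weights are invented assumptions, not course material.

def recommend_model(scenario):
    """Favor an explainable model when decisions are regulated,
    affect individuals, or must be justified to those affected."""
    score = 0
    if scenario.get("regulated"):
        score += 2  # e.g. credit, hiring, healthcare
    if scenario.get("affects_individuals"):
        score += 1
    if scenario.get("needs_justification"):
        score += 1
    return "explainable" if score >= 2 else "accuracy-optimized"

loan_approval = {"regulated": True, "affects_individuals": True,
                 "needs_justification": True}
ad_ranking = {"regulated": False, "affects_individuals": False,
              "needs_justification": False}
print(recommend_model(loan_approval))   # explainable
print(recommend_model(ad_ranking))      # accuracy-optimized
```

The point of the sketch is the shape of the reasoning, not the specific weights: high-stakes, regulated decisions push toward interpretable models even at some cost in accuracy.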
Writing your explainable AI road map (30 minutes)
- Lecture: How to design explainable AI systems
- Group discussion: Which stakeholders should you involve? How will you keep them informed?
- Hands-on exercise: Write a tech spec identifying transparency requirements for data sources and algorithms
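As a hedged illustration of the tech-spec exercise above, transparency requirements for a data source and an algorithm could be captured as structured metadata. Every field name and value below is an invented example, not a template from the course.

```python
# Illustrative sketch: recording transparency requirements as structured
# metadata that a tech spec could include. All fields are hypothetical.

transparency_spec = {
    "data_source": {
        "name": "customer_transactions",       # hypothetical dataset
        "provenance": "internal CRM export",
        "collection_consent": True,
        "known_gaps": ["no records before 2015"],
    },
    "algorithm": {
        "type": "logistic regression",
        "inputs_documented": True,
        "explanation_method": "per-feature coefficients",
        "bias_tests": ["approval-rate parity by region"],
    },
}

def missing_fields(spec, required=("provenance", "collection_consent")):
    """Return required transparency fields absent from the data-source entry."""
    return [field for field in required if field not in spec["data_source"]]

print(missing_fields(transparency_spec))  # []
```

Writing requirements down in a checkable structure like this makes it easy for the stakeholders identified in the group discussion to audit whether each data source and algorithm meets the bar before launch.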