Live Online Training

Fundamentals of AI Algorithms in 90 minutes

The Need for Explainable AI

Lauren Maffeo

AI seems to be everywhere these days, yet few people trust it. As a result, too few businesses are using it in their strategies.

This distrust is due in large part to black box algorithms, which prevent people, including the systems' creators, from understanding why those algorithms make the recommendations and decisions they do. Left unchecked, this lack of transparency can lead to biased outcomes that put people and businesses at risk. The answer is explainable AI.

This 90-minute course introduces you to explainable AI: machine learning whose behavior can be understood by, and explained to, people outside engineering. That transparency helps reduce bias and build trust in AI. It also moves teams toward interpretability by helping them predict how their algorithms will behave.
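To make "explainable" concrete, here is a minimal sketch of a glass box model, one whose full decision logic can be printed as plain if/then rules. It assumes Python with scikit-learn and its bundled iris dataset; neither is named in the course materials.

    # A minimal sketch of an interpretable ("glass box") model.
    # Assumes scikit-learn and its bundled iris dataset; neither is
    # part of the course materials.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()

    # A shallow decision tree can be read end to end, unlike a black box.
    model = DecisionTreeClassifier(max_depth=2, random_state=0)
    model.fit(iris.data, iris.target)

    # export_text renders the learned rules as plain if/then statements
    # that someone outside engineering can follow.
    print(export_text(model, feature_names=list(iris.feature_names)))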

Whether you're trying to incorporate AI into your business or want to understand how algorithms make decisions, this course will guide you through the core tenets of explainable AI, explain the cases for and against it, and show you how to use it in your own business.

What you'll learn and how you can apply it

By the end of this live, hands-on, online course, you’ll understand:

  • What explainable AI is
  • How explainable AI differs from black box algorithms
  • When explainable AI is necessary
  • Who should be involved in using/designing explainable AI
  • How to define transparency requirements for data sources and algorithms

And you’ll be able to:

  • Understand the trade-offs between fairness and accuracy
  • Tackle the interpretability problem
  • Know why bias testing is important (a toy illustration follows this list)
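As a preview of the bias-testing skill above, the toy sketch below compares a hypothetical model's approval rates across two groups. All data and numbers are invented for illustration; the course does not prescribe this exact check.

    # Toy bias test: compare decision rates across groups.
    # All data here is invented for illustration.
    approvals = {
        # group label -> (number approved, number of applicants)
        "group_a": (80, 100),
        "group_b": (55, 100),
    }

    rates = {g: ok / n for g, (ok, n) in approvals.items()}

    # Demographic parity difference: the gap between the highest and
    # lowest approval rates. A large gap is a prompt to investigate,
    # not a verdict on its own.
    gap = max(rates.values()) - min(rates.values())

    for group, rate in rates.items():
        print(f"{group}: approval rate {rate:.0%}")
    print(f"demographic parity difference: {gap:.0%}")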

This training course is for you because...

  • You’re a business person looking to use AI in your organization and are concerned about bias and security
  • You’re a product manager, software developer, or data scientist who wants to better understand AI
  • You want to increase trust in your products by using techniques that your consumers can understand

Prerequisites

  • Attendees should be interested in machine learning, classifiers, and other aspects of the datasets used to train AI-powered products. No coding or data science experience is necessary.


About your instructor

  • Lauren Maffeo has reported on and worked within the global technology sector. She started her career as a freelance journalist covering tech trends for The Guardian and The Next Web from London.

    Today, Lauren works as an associate principal analyst at GetApp (a Gartner company), where she covers the impact of emerging tech like AI and blockchain on small and midsize business owners. She is also a distinguished speaker with the Association for Computing Machinery and a community moderator for OpenSource.com.

    Lauren has been cited by sources such as Information Management, TechTarget, CIO Online, DevOps Digest, The Atlantic, Entrepreneur, and Inc.com. Her writing on technology has also been cited by researchers at Cornell Law School, Northwestern University, and the University of Cambridge. She has spoken at global events including Gartner’s Symposium in Florida, The World Web Forum in Zurich, Open Source Summit North America in Vancouver, and DrupalCon in Seattle.

    In 2017, Lauren was named to The Drum’s 50 Under 30 list of women worth watching in digital. That same year, she helped organize Women Startup Challenge Europe, which was the continent’s largest venture capital competition for women-led startups. She has served as a mentor for Girls in Technology’s Maryland chapter, and DCA Live included her in its 2018 list of “The NEW Power Women of Tech”. Lauren was also shortlisted for the Future Stars of Tech Award in AI and Machine Learning by Information Age in 2019.

    Lauren holds an MSc from The London School of Economics and a certificate in Artificial Intelligence: Implications for Business Strategy from MIT’s Sloan School of Management. She was interviewed for ACM’s Ubiquity journal in 2019 and has served as a guest speaker for ACM’s Washington, DC Speaker Series.

Schedule

The timeframes are only estimates and may vary according to how the class is progressing.

Intro to Explainable AI (30 minutes)

  • Presentation: The history of explainable AI
  • Discussion: Consider explainable AI in the context of your business. What types of decisions would it make? Who might ask you to explain its decision-making process?
  • Presentation: The rise of black box algorithms
  • Exercise: The presenter will share a case study and ask learners to imagine a scenario where an algorithm makes a biased decision. How would this negatively impact your business?
  • Q&A

Explainability vs. Accuracy (30 minutes)

  • Presentation: The case against explainable AI
  • Presentation: How to summarize a model’s inputs and outputs (a brief sketch follows this session outline)
  • Exercise: More AI case studies: Is an explainable or accurate AI model the best fit for each scenario?
  • Q&A
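One common way to summarize a model's inputs and outputs is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below assumes Python with scikit-learn; the session does not name a specific technique or library.

    # Hedged sketch: summarizing which inputs drive a model's outputs
    # via permutation importance. Assumes scikit-learn; the session
    # does not prescribe this technique.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    iris = load_iris()

    # A random forest is a typical black box: accurate, but hard to read.
    model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

    # Shuffle each feature in turn and record the drop in accuracy.
    result = permutation_importance(model, iris.data, iris.target,
                                    n_repeats=10, random_state=0)
    for name, score in zip(iris.feature_names, result.importances_mean):
        print(f"{name}: {score:.3f}")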

Writing Your Explainable AI Roadmap (30 minutes)

  • Presentation: How to design explainable AI systems
  • Discussion: Which stakeholders should you involve? How will you keep them informed?
  • Exercise: Write a tech spec identifying transparency requirements for data sources and algorithms.
  • Q&A