Catherine Yeo: Fairness in AI and Algorithms - Machine Learning Engineered

Episode 5

Catherine Yeo is a Harvard undergraduate studying Computer Science. She has previously worked at Apple, IBM, and MIT CSAIL in AI research and engineering roles. She writes about machine learning in Towards Data Science and in her new publication, Fair Bytes.

Learn more about Catherine:

Read Fair Bytes:

Want to level up your skills in machine learning and software engineering? Subscribe to our newsletter:

Take the Giving What We Can Pledge:

Subscribe to ML Engineered:

Follow Charlie on Twitter:


(02:48) How she was first exposed to CS and ML

(07:06) Teaching a high school class on AI fairness

(10:12) Definition of AI fairness

(16:14) Adverse outcomes if AI bias is never addressed

(22:50) How do "de-biasing" algorithms work?

(27:42) Bias in Natural Language Generation

(36:46) State of AI fairness research

(38:22) Interventions needed?

(43:18) What can individuals do to reduce model bias?

(45:28) Publishing Fair Bytes

(52:42) Rapid Fire Questions


Defining and Evaluating Fair Natural Language Generation

Man is to Computer Programmer as Woman is to Homemaker?

Gender Shades

GPT-3 Paper: Language Models are Few-Shot Learners

How Biased is GPT-3?

Reading List for Fairness in AI Topics

Machine Learning’s Obsession with Kids’ TV Show Characters

About the Podcast

Machine Learning Engineered
Helping you bring ML out of the lab and into products that people love.

About your host

Charlie You

Charlie currently works as a Machine Learning Engineer at Workday. He graduated from RPI with a B.S. in Computer Science and taught himself ML through online courses and projects. In his free time, Charlie enjoys investing and trading, playing poker, and training Brazilian Jiu-Jitsu.