Alleviating Bias in AI Systems with Data Profiling and Synthetic Data Sets

Posted May 18, 2021 | Technology |
Developing neural networks and other machine learning (ML) models typically requires large amounts of data for training and testing. Because much of this data is historical, there is a chance that artificial intelligence (AI) models will learn existing prejudices pertaining to gender, race, age, sexual orientation, and other attributes. This Advisor explores these and other data issues that can contribute to bias and inaccuracy in ML algorithms.
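One common way such historical prejudice surfaces is as unequal outcome rates across demographic groups in the training data, which basic data profiling can surface before any model is trained. Below is a minimal sketch using a small, entirely hypothetical loan-approval data set (the records, groups, and rates are illustrative, not from the Advisor); it computes per-group approval rates and a disparate-impact ratio, which practitioners often compare against the "four-fifths" (0.8) rule of thumb as a warning sign of bias.

```python
from collections import Counter

# Hypothetical historical loan-approval records: (gender, approved).
# In a real profiling workflow these would come from the training data.
records = [
    ("F", 1), ("F", 0), ("F", 0), ("F", 0),
    ("M", 1), ("M", 1), ("M", 1), ("M", 0),
]

# Profile approval counts and totals per group.
totals, approvals = Counter(), Counter()
for group, approved in records:
    totals[group] += 1
    approvals[group] += approved

# Per-group approval rates.
rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # {'F': 0.25, 'M': 0.75}

# Disparate-impact ratio: lowest rate divided by highest rate.
# Values below ~0.8 (the "four-fifths rule") commonly flag potential bias.
ratio = min(rates.values()) / max(rates.values())
print(round(ratio, 2))  # 0.33
```

A profiling step like this is typically a precursor to remediation, such as generating synthetic records to rebalance under-represented groups, which is the approach the Advisor's title alludes to.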
About The Author
Curt Hall
Curt Hall is a Cutter Expert and a member of Arthur D. Little’s AMP open consulting network. He has extensive experience as an IT analyst covering technology and application development trends, markets, software, and services. Mr. Hall's expertise includes artificial intelligence (AI), machine learning (ML), intelligent process automation (IPA), natural language processing (NLP) and conversational computing, blockchain for business, and customer…