Developing neural networks and other machine learning (ML) models typically requires large amounts of data for training and testing. Because much of this data is historical, artificial intelligence (AI) models risk learning existing prejudices pertaining to gender, race, age, sexual orientation, and other attributes. This Advisor explores these and other data issues that can contribute to bias and inaccuracy in ML algorithms.
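As a minimal sketch of how historical bias can surface in training data, the plain-Python example below (using a hypothetical toy dataset, not one from this Advisor) computes per-group selection rates; a skewed ratio like this would be absorbed by any model trained on the records:

```python
# Toy historical outcome records: (group, positive_outcome).
# Hypothetical data for illustration only.
records = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

def selection_rate(records, group):
    """Fraction of positive outcomes for one group."""
    outcomes = [y for g, y in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(records, "A")
rate_b = selection_rate(records, "B")

# Disparate-impact ratio: values well below 1.0 flag a historical
# skew that a model trained on these records would tend to learn.
ratio = rate_b / rate_a
print(rate_a, rate_b, ratio)
```

Auditing training data with simple group-level statistics like these, before any model is fit, is one common first step toward catching the biases the text describes.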