Neural networks and other ML models are typically developed using large amounts of data for training and testing. Because much of this data is historical, there is a risk that AI models will learn existing prejudices pertaining to gender, race, age, sexual orientation, and other attributes. This Advisor explores how the Data & Trust Alliance consortium created an initiative to help end-user organizations evaluate vendors offering AI-based solutions on their ability to detect, mitigate, and monitor algorithmic bias over the lifecycle of their products.
December 14, 2021 | Authored By: Curt Hall
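
To make the kind of bias detection at issue concrete, below is a minimal Python sketch of one widely used check: demographic parity difference, the gap in positive-prediction rates between a privileged group and everyone else. The function name and the synthetic data are assumptions for this illustration only; they are not part of the Data & Trust Alliance's published criteria, and real vendor assessments would cover the full product lifecycle rather than a single metric.

    # Minimal, illustrative check for one form of algorithmic bias:
    # demographic parity difference (gap in positive-prediction rates).
    # Names and data below are synthetic assumptions for this sketch.

    def demographic_parity_difference(predictions, groups, privileged):
        """Positive-prediction rate of the privileged group minus that of
        all other groups; a value near 0.0 suggests parity on this metric."""
        priv = [p for p, g in zip(predictions, groups) if g == privileged]
        others = [p for p, g in zip(predictions, groups) if g != privileged]
        rate = lambda xs: sum(xs) / len(xs) if xs else 0.0
        return rate(priv) - rate(others)

    # Synthetic hiring-style predictions (1 = positive outcome) for two groups.
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    print(demographic_parity_difference(preds, groups, privileged="A"))  # ~0.2

In practice, a vendor would monitor several such metrics on an ongoing basis, since no single statistic captures fairness on its own.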