Cutter Members-only Peer-to-Peer

We’re experiencing the third wave of AI — this one fueled by machine learning (ML). How do machines learn? By being continuously fed ever-larger data sets: the more data they’re fed, the smarter they get. But could ML learn the wrong lessons? How would you know if this was happening in your ML application? How dire might the consequences be?

Join this Cutter Consortium members-only peer-to-peer conversation, moderated by Cutter Fellow Lou Mazzucchelli, to discuss the efficacy of machine learning. Topics on the table include:

  • How can the negative consequences of AI be prevented? 
  • How can the fairness of algorithms be ensured? Is it even possible?
  • How can organizations prepare themselves for any possible fallout from AI/ML? What steps should be taken?

We place a lot of trust in ML systems — systems that are increasingly the decision-makers in scenarios such as whom to hire, how Barbie listens and responds to a child, how to allocate money among a variety of financial products, which patients will respond to a new drug therapy, and other decisions with life-and-death consequences. Getting ahead of the issues around ML from the start is easier than dealing with negative consequences after the fact. Participate in this peer-to-peer session to share your experiences and discover how others are dealing with the risks machine learning can pose.
