
Caution! AI Consequences Ahead — An Introduction

Posted October 16, 2019
By Lou Mazzucchelli

AI: The Third Time Is Not the Charm …

Let’s begin with some disclosure: I have always been fascinated with the concept of artificial intelligence (AI). As a college undergraduate, I immersed myself in the fundamental technologies that defined the field at the time and designed an independent study program that resulted in the first bachelor’s degree granted in AI in 1977.

I did not aspire to academia, however, and one of my better career moves was not pursuing AI in industry after graduation. Nonetheless, I did keep up with the technology over the years, wondering if the possibilities explored at school might ever be instantiated. Since college, I have experienced three AI “waves.” Let’s take a closer look at them before delving into the thoughtful articles of this issue of Cutter Business Technology Journal (CBTJ).

Wave 1: LISP

The first wave, in the late 1970s, was driven by the emergence of LISP machines. For the uninitiated, LISP was the AI programming language of choice back in the day (and remains a personal guilty pleasure), but its runtime performance was notoriously slow, partially because the language’s grasp exceeded the reach of general-purpose hardware. Lisp Machines, Inc., and, later, Symbolics both attempted to create markets for specialized hardware that would run LISP code faster than general-purpose machines and, therefore, enable the creation of “real” AI applications and systems.

I remember talking with Patrick Winston at Symbolics, who blithely remarked that, “in the future, every computer will be a LISP machine.” (He was right, but only because general-purpose architectures later got fast enough to run LISP at scale.) One of my industry scars (or, perhaps, a badge of courage) came when Cadre Technologies, a global supplier of software design tools and environments that I founded back in 1982, purchased a Symbolics computer, at my insistence, to facilitate a joint project with General Electric (GE) to commercialize a LISP-based AI system for Ada software development. Like many projects of the period, this one was ambitious but met with limited success; the only road to commercialization was to reimplement the system in another, more portable, language.

In the wake of the visible and costly failures of LISP machines, the first AI wave subsided. This does not mean that AI disappeared; rather, it merely went offstage for a bit while people worked on it in the wings.

Wave 2: Natural Language

Natural language recognition ushered in the second AI wave. A drastic increase in CPU performance per dollar allowed designers to create devices that could do a reasonable job of handling voice in ever-less-restricted domains. With further enhancement, this technology became the harbinger of the third, and current, AI wave.

Wave 3: Machine Learning

Today, machine learning (ML) leads the way for the third AI wave. ML systems generate responses using directed pattern-matching and feedback to satisfy a goal, like “detect an edge,” or “signify whether this is a picture of a cat,” or “indicate whether this human is a felon.” These systems gain “skill” by ingesting ever-larger data sets (“training”), and we keep feeding them ever-larger data sets to improve that skill. We have observed some spectacular results (e.g., world-class Go-playing systems “grown” in weeks). However, we are at a loss to explain, at a micro level, exactly how these systems make decisions.
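To make that pattern-matching-and-feedback loop concrete, here is a minimal sketch of a perceptron trained on a toy version of the “detect an edge” goal. It is a hypothetical illustration in Python; the data, learning rate, and goal are my own assumptions, not code from any system discussed in this issue:

    # Toy "goal": label a point 1 if it lies above the line y = x, else 0.
    # The system learns this by pattern-matching plus feedback; it is
    # never told the rule itself.
    import random

    random.seed(42)

    def make_point():
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        return x, y, 1 if y > x else 0  # (features..., label)

    data = [make_point() for _ in range(200)]  # the "training" set

    w1, w2, b = 0.0, 0.0, 0.0  # initial conditions: weights start at zero

    for _ in range(20):  # repeated passes of feedback over the data
        for x, y, label in data:
            pred = 1 if (w1 * x + w2 * y + b) > 0 else 0
            error = label - pred   # feedback: how wrong was the guess?
            w1 += 0.1 * error * x  # nudge the weights toward the goal
            w2 += 0.1 * error * y
            b += 0.1 * error

    # The trained system now "works," but these three numbers are the
    # entire explanation it can offer for any decision it makes.
    print(w1, w2, b)

Even at this toy scale, the “why” of a single decision is buried in numbers; scale three weights up to millions and you have the micro-level opacity problem.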

In the human world, accepting a decision without questioning the facts leading up to it is one definition of “trust.” By that definition, we are placing a lot of trust in ML systems that are increasingly running the joint, from Google pet searches to hiring decisions. This raises several important questions:

  1. Where is the record of initial training approaches for any ML system? We cannot be sure that bias has not been introduced into the system without knowing the initial conditions and weights of the data sets. (A sketch of such a record follows this list.)

  2. Where is the record of changes in response to training input, and how is that input supplied or collected? Who gets to decide?

  3. Where does liability lie for ML mischaracterizations?
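Questions 1 and 2 are, at bottom, record-keeping questions. As a purely hypothetical sketch of what a minimum training-run record might capture (the fields and names here are my own assumptions, not any standard):

    # A hypothetical training-run record addressing questions 1 and 2;
    # every field name here is an illustrative assumption.
    import hashlib
    import json
    from datetime import datetime, timezone

    def training_record(dataset_bytes, initial_weights, seed, objective, approved_by):
        """Capture the initial conditions of a training run so that
        questions about bias can at least be investigated later."""
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "dataset_sha256": hashlib.sha256(dataset_bytes).hexdigest(),
            "initial_weights": list(initial_weights),
            "random_seed": seed,
            "objective": objective,      # e.g., "detect an edge"
            "approved_by": approved_by,  # who got to decide (question 2)
        }

    record = training_record(
        dataset_bytes=b"...the raw training data...",  # placeholder
        initial_weights=[0.0, 0.0, 0.0],
        seed=42,
        objective="signify whether this is a picture of a cat",
        approved_by="model-governance board",  # hypothetical role
    )
    print(json.dumps(record, indent=2))

Records like these do not answer question 3, but liability is hard even to argue about without them.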

While we can laugh at stories about ML misidentifying members of the US Congress as criminals (an easy mistake?), life becomes more difficult for someone denied a job or promotion by an opaque ML-based system.

These thoughts, among others, prompted my interest in an issue of CBTJ that would explore the state of AI from a mostly nontechnical perspective, focusing on emerging ethical challenges that we will face, or ignore. While the issue was being prepared, I noticed others raising similar questions. For example, Rich Caruana and his team at Microsoft Research are focusing on “intelligible, interpretable, and transparent” machine learning, something our first author, Cutter Consortium Fellow Lynne Ellyn, also points out. Moreover, leading data analytics firm SAS has recently released a white paper, “Machine Learning Model Governance,” that describes a process to manage some of the issues we explore in this issue. However, while the white paper asks “what, when, and how” ML models are used, it omits the question of “why?”

In This Issue

The contributions in this issue of CBTJ will help us get up to speed with the current state of AI and think about some of the issues raised when we look beyond systems that appear to work as intended. Our contributors span industry and academia, and their commentary provides a good way to gain an overview of the problem.

We begin the issue with an article by Lynne Ellyn in which she recounts her experiences with AI technology in the real world, surveys the current landscape, and identifies key nontechnical issues that companies are likely to face when deploying AI-based systems.

From Ellyn’s on-the-ground view, we then go to outer space (well, low Earth orbit, actually) to examine the issues around AI (in its ML incarnation) employed in a NASA system to track orbital debris. In his article, William Jolitz, the creator of 386BSD (the open source port of the Berkeley Software Distribution to Intel processors), makes the case for organization-wide awareness and alignment around ML and suggests that, like security, transparency cannot be bolted on later; it must be addressed at a project’s origin.

Experienced IT practitioners know that errors will occur. A big part of building and managing complex systems is risk management, which includes identification and mitigation strategies. This is hard enough when documentation and source code exist. But the current state of ML-based AI tends to produce opaque black boxes, which make this activity, um, challenging. This brings us to our next article, by David Biros, Madhav Sharma, and Jacob Biros, who explore the implications for organizations and their processes.

One way of getting an off-course system (or person) back on track is by nudging, a concept that can be particularly useful in goal-directed systems. But, to reiterate, errors will occur. In his article, Richard Veryard describes technologically mediated nudging and its possible unintended consequences, along with the need to consider the planning, design, testing, and operation of a system for robust and responsible nudging.

As AI becomes more visible as a corporate strategic tool, organizations will have to address the issues surrounding it as part of corporate strategy. Pavankumar Mulgund and Sam Marrazzo help us by providing a framework for developing an AI strategy. The authors discuss the “minimum viable model” approach to developing the underlying AI/ML models, along with the platform on which those models run and the inevitable tradeoffs. They conclude their piece by examining some best practices for the successful implementation of AI initiatives.

In the closing article, aptly entitled “Who Knew THAT Would Happen?,” Cutter Consortium Senior Consultant Paul Clermont describes some of the impact AI has had at the boundaries of commercial organizations and public policy. Those of us who have experienced the unintended consequences of other technologies will be tempted to answer “anybody,” but we should remind ourselves that not everyone carries the memory of prior years, and that hindsight is perfect. Clermont explores how to identify possible unintended consequences in advance and proposes countermeasures to negative ones in the form of design principles and public policies.

It’s my hope that this collection of papers can help you “set the table” for further exploration, discussion, and policy development in your organizations. I firmly believe that either creators and implementers will get ahead of these issues, or regulators will act after the fact. Or, perhaps, the free-market rush to AI dominance will lead our society implicitly to the same place that communist China is heading explicitly. Is that the future we want?

Perhaps the most dangerous aspect of the third wave of AI is its apparent efficacy. As Young Frankenstein taught us, putting a monster in a tuxedo may make it appear less threatening but likely only masks other problems.



About The Author
Lou Mazzucchelli
Lou Mazzucchelli is a Fellow of Cutter Consortium and a member of Arthur D. Little's AMP open consulting network. He provides advisory services to technology and media companies. He will lend his broad expertise (and considerable wit) to Cutter Summit 2022 as its Moderator. Recently, Mr. Mazzucchelli was the coordinator of Bryant University’s Entrepreneurship Program, where he retooled and taught senior-level entrepreneurship courses.