
The Role of Ethics in Algorithm Design -- Opening Statement

Posted June 2, 2016 | Amplify

Robert N. Charette

We are rushing headlong into the robotics revolution without consideration for the many unforeseen problems lying around the corner. It is time now to step back and think hard about the future of the technology before it sneaks up and bites us when we are least expecting it.

Noel Sharkey
Foundation for Responsible Robotics

 

If we consider recent events, it’s clear that we had better heed Sharkey’s plea to step back and think hard about the robotics revolution very soon. For in April, China announced the details of its plan to triple the country’s robot production over the next five years. A Financial Times story on the implications of China’s robot initiative stated that China has already bought more industrial robots each year than any other country since 2013, including a quarter of the world’s total supply last year alone. By the end of this year, the FT notes, China will surpass Japan to become the largest operator of industrial robots in the world.

There are several driving factors behind China’s wholehearted embrace of robots. One is that China’s working-age population is predicted to fall over the next three decades (due in part to its former one-child policy), and industrial automation is aimed at filling the expected labor shortfall. Another is that other manufacturing countries are investing heavily in industrial robots as a way to undercut China’s current competitive edge in manufacturing. For instance, last year Japan’s government announced a major initiative to create a “robotic revolution” that would “spread the use of robotics from large-scale factories to every corner of our economy and society.” South Korea’s government responded immediately, announcing a US $2.69 billion investment in its local robotics industry to stay competitive. A third factor is AI development: China doesn’t want to be left behind in the emerging “AI arms race” that the US and Russia, as well as 80 other countries, seem prepared to embark on, and that has Sharkey and other leading technologists and scientists concerned.

There may be good reason for their concern. In a recently published report titled “Autonomous Weapons and Operational Risk,” Paul Scharre, a former US Department of Defense official who helped establish US policy on autonomous weapons, warns that such weapons could lead not only to significant civilian casualties and fratricide, but also to “unintended escalation” during a precarious international political confrontation if their software algorithms malfunction or their security is compromised by intruders. An open ethical question is whether such autonomous weapon systems can — and will — be programmed to follow the international rules governing modern warfare. For example, what would an autonomous fighting robot in a war zone do if a tall child approached carrying a bucket? Would the robot comprehend that it was a child and that the bucket was full of mud and not explosives? Or what if that same child approached but attempted to cover the robot’s sensors and cameras with the mud from the bucket? The worry is that autonomous weapon systems will be deployed before these risks have been, to echo Sharkey’s words, thought about long and hard.

The question of ethical algorithms doesn’t just affect autonomous robotic operations, either. As ever more devices are connected into an “Internet of Things,” how, when, and why should the gathered information be used, and to whom should it be made available? For example, as home appliances, heating and electrical systems, and home security systems interconnect to form a “smart home,” will insurance companies intent on reducing their risk profiles begin to penalize homeowners who don’t have these smart devices in place, or decide, based on the types of devices owned, that a homeowner can afford to pay more for insurance? As a recently published Obama administration study titled “Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights” makes abundantly clear, the “potential of encoding discrimination” in algorithmically driven decisions is a real and growing risk.

Then there are the more “common” ethical situations in IT systems, as highlighted by Volkswagen’s use of software to cheat emissions tests or by the host of banks that conspired to manipulate global benchmark interest rates. Software, given its invisibility, provides tempting opportunities for unethical behavior.

WHAT IS COMPUTER ETHICS?

It should be readily apparent that information systems and technology pose a wide range of thorny ethical questions. And yet the generally agreed definition of computer ethics is still the one developed in 1985 by philosophy professor James Moor, which states that it is:

... the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology.

One notices quickly that this definition focuses on the “ethical use” of IT, as Moor makes the strong case that what is or is not “ethical” cannot be universally prescribed in advance. Indeed, what is ethical is highly context-sensitive.

What Moor’s definition is really trying to highlight is that computing systems and related technologies create choices — and resulting ethical questions or dilemmas — that did not arise before computers. For instance, it is entirely legal for US banks to reorder their depositors’ withdrawals and deposits so that withdrawals are paid before deposits are credited, rather than processing them in chronological order. The banks claim they are doing their depositors a favor by ensuring the timely payment of important bills like mortgages and car payments. However, the rearrangement also increases the likelihood that depositors will incur multiple overdraft fees, which can earn a bank millions of dollars. Few (other than the banks themselves) defend the practice as entirely ethical. Furthermore, without the aid of their computer systems, banks wouldn’t be able to efficiently process deposits and withdrawals in any order other than chronological. Nor would it be possible for a criminal to physically steal 80 million paper-based health records all at once, a theft that becomes relatively easy when those records are digitized.
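To make the overdraft arithmetic in the bank example concrete, here is a minimal sketch of the two processing orders. The starting balance, transaction amounts, and $35 fee are hypothetical illustrations, not any actual bank’s rules:

    # Hypothetical illustration of how processing order alone changes fees.
    OVERDRAFT_FEE = 35  # assumed per-item fee, for illustration only

    def total_fees(balance, transactions):
        """Apply transactions in the given order; charge a fee per overdraft."""
        fees = 0
        for amount in transactions:  # negative = withdrawal, positive = deposit
            balance += amount
            if balance < 0:
                fees += OVERDRAFT_FEE
        return fees

    day = [+500, -20, -45, -400]  # a paycheck deposit, then three withdrawals

    print(total_fees(100, day))          # chronological order: 0 in fees
    print(total_fees(100, sorted(day)))  # withdrawals first, largest first: 105 in fees

Identical activity and the same end-of-day balance, yet three overdraft fees instead of none, purely because of the ordering.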

EVER-INCREASING ETHICAL CHOICES

Over the past 70 years, computing technology and systems have increasingly changed what decisions humans are able to make and how they make them. As important, they have changed the perceived significance and value of the decisions and activities we want to undertake. We are now entering a period in which computing systems will increasingly take decision making away from humans because, frankly, those systems will be better than humans at:

  • Uncovering more decision alternatives
  • Quickly choosing the one(s) with the highest likelihood of success using the fewest resources (sketched below), and
  • Implementing them
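
In skeletal form, that selection step might look something like the following sketch. The alternatives, success probabilities, and resource costs are invented placeholders; a real system would estimate them from data:

    # Hypothetical sketch of machine decision making: enumerate alternatives,
    # score each by likelihood of success per unit of resource, act on the best.
    alternatives = [
        {"name": "plan_a", "p_success": 0.90, "resource_cost": 40},
        {"name": "plan_b", "p_success": 0.75, "resource_cost": 10},
        {"name": "plan_c", "p_success": 0.60, "resource_cost": 5},
    ]

    def score(alt):
        # A value judgment hidden in plain sight: how to trade success against cost.
        return alt["p_success"] / alt["resource_cost"]

    best = max(alternatives, key=score)
    print("chosen:", best["name"])  # -> plan_c under this particular scoring rule

Note that even this toy example embeds a value judgment in its scoring rule, and that judgment, not the loop around it, is where the ethical questions live.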

However, as has often been said, with great power comes great responsibility: in this case, responsibility to ethically manage the risks that power can create if misused deliberately or accidentally, or through a failure to understand its unintended consequences. The growing power of computing algorithms and the technology that supports them may have massive impacts on the employability of hundreds of millions of people worldwide over the coming decades. One study has predicted, for example, that half of current US jobs — some 60 to 70 million — could theoretically be automated by 2035. The same prospect of job destruction faces every country that is investing heavily in automation. Even the Chinese government acknowledges that its investment in industrial robotics in manufacturing might begin eliminating the traditional path out of poverty that factory jobs have offered tens of millions of its citizens from the countryside. It is an ethical imperative for governments to help those likely to be affected by automation, who for the most part will be individuals already at the bottom of the economic ladder.

Similarly, as government services become ever more automated, it is critically important to ensure that the automation works properly, because when it doesn’t, those least able to afford it again end up suffering the most. For example, when the state of North Carolina decided in 2013 to go live with its new $484 million benefits system, NC FAST, before it was fully tested, tens of thousands of food-assistance recipients were not able to receive their benefits for weeks, and some for months, until the system’s flaws were adequately fixed. While government officials apologized for the “inconvenience” caused, they never perceived their behavior as possibly being unethical.

IN THIS ISSUE

In this issue of Cutter IT Journal, we have assembled four articles that address different choices created by information systems, along with the many ethical questions raised by the algorithms that underpin them. Our first article is by Cutter Senior Consultant Paul Clermont, who explores “the boundary between machine capabilities and what once seemed uniquely human.” Clermont provides clarity on the areas where computers and algorithms seemingly have the edge over humans and those where humans are likely to be needed for a long time yet. For example, humans are able to apply common sense when an unexpected situation arises and can be held accountable for their unethical activities.

Next, Darren Dalcher focuses on the need to think deeply about what ethics, trust, and responsibility mean in an age of smart machines. For instance, what does it mean to trust a robot? Should the same level of ethical behavior be expected from a robot as a human? What are the responsibilities of a robot’s designers in ensuring that the robot acts in a safe (i.e., reliable, responsible, and ethical) manner? How should policy makers react to smart machines? Do they need to define what is illegal or unethical for smart machines as we do currently for people? Dalcher examines these various questions and more and discusses the societal risk-reward tradeoffs that arise.

Our third article is by Hal Berghel, who takes a look at what he calls “bad faith technology.” Technology is generally viewed as ethically “neutral” — the way it is used defines the ethics involved. For example, a kitchen knife is neither good nor bad in itself, but it can be used for either good or bad purposes. Berghel then asks a provocative question: “Is it possible to design a technology with unethical use in mind from the start?” With this as a starting point, he looks at what would characterize bad faith technology and how we can recognize and prevent it.

We conclude the issue with an article by Jesse Feiler, who discusses making ethical considerations a required part of software system development. He describes multiple opportunities for injecting ethical thinking into system development, such as when stakeholders are initially defining the system or, just as importantly, during the maintenance of existing software systems. Feiler also offers insight into the types of ethical issues that should be considered and practical ways to address them.

I hope you will enjoy the articles in this issue. I think you’ll find them especially thought-provoking.

About The Author
Robert Charette
Robert N. Charette is a Cutter Fellow and a member of Arthur D. Little’s AMP open consulting network. He is also President of ITABHI Corporation and Managing Director of the Decision Empowerment Institute. With over 40 years’ experience in a wide variety of technology and management positions, Dr. Charette is recognized as an international authority and pioneer regarding IT and systems risk management. Along with being a Contributing Editor to…