
Let’s Hope 2016 Is the Year of the Ethical Algorithm

Posted January 27, 2016 | Amplify

CUTTER IT JOURNAL VOL. 29, NO. 1


One cannot have superior science and inferior morals. The combination is unstable and self-destroying.

— Arthur C. Clarke

 

The late futurist and science fiction writer Arthur C. Clarke’s observation has long been a staple theme of the genre, especially in stories involving smart machines and whether the algorithms making their decisions would serve humankind or destroy it. As artificial intelligence (AI) and robotics research has progressed alongside growth in computing power, that programming question has steadily moved out of the realm of science fiction and into the technical computing community over the past decade. This has been especially true in the military establishment, where the use of robotics has expanded far beyond even the most optimistic projections made only a decade ago.

Yet the widespread use of robotics in war, and the implications for programming smart machines, didn’t really penetrate the general public’s consciousness. P.W. Singer, the author of Wired for War, a book that delves into the military’s use of robotic warriors, told me he was amazed that even senior military leadership didn’t seem to fully understand what transitioning to smart machines involved. While they acknowledged that robotics-enabled warfare was “revolutionary,” they didn’t fully comprehend that “technologies are revolutionary not only because of the incredible new capabilities they offer you, but because of the incredible new questions they force you to ask — questions about what’s possible that was never possible before and also new questions about what’s proper, what’s right or wrong that you didn’t have to think about before.”

RISE OF THE ROBOTIC KILLING MACHINES

In 2015, the question of what’s right or wrong when using ever smarter machines started to filter into more mainstream public discussion. One reason has been the very public statements from Elon Musk, Stephen Hawking, and other scientists warning that AI technology has reached a point of practicality where there is a real threat of a “global AI arms race.” They worry that countries are already vying to build ever more capable autonomous weapons that can select and engage targets without human intervention. 2015 also saw more urgent calls to formally ban such weapons as part of current military treaties prohibiting the use of inhumane weapons. As a response to what they perceive as a potential “existential threat” to humanity, Musk and several other venture capitalists pledged over US $1 billion to set up a nonprofit AI research center in hopes of ensuring that AI is used for the benefit of humankind.

AUTONOMOUSLY DRIVEN VEHICLES BECOME A REALITY

Another force bringing the question of increasing algorithmic control of our lives to the forefront of the public’s mind has been the extraordinary improvement in autonomously driven vehicles since 2005, to the point that all major car manufacturers and many technology firms have promised to have such cars available for sale by 2020 if not before. The sticking point to their introduction has been less technological and more legal and political. California’s Department of Motor Vehicles (DMV), for example, recently published draft regulations on self-driving cars requiring that all such cars in California have a steering wheel and operating pedals, and that a licensed driver with an “autonomous vehicle operator certificate” be present to take control in case of an emergency.

The contentious legal arguments over defining acceptable smart machine operation have involved not only autonomously driven vehicles but commercial unmanned aerial vehicles (UAVs) as well. The US Federal Aviation Administration (FAA) issued regulations in December 2015 mandating that nearly all UAVs sold in the US (including those already sold) be registered with the federal government. The rationale for the regulation is the worrying increase in reported UAV-aircraft near misses and in UAV crashes that have resulted in property damage and personal injuries. It is a first step toward future regulations the FAA is drafting for UAVs flown beyond the operator’s line of sight, such as those some companies want to use for package delivery.


AUTOMATED DUPLICITY, RAPACITY, AND PERFIDY

A third force driving the question of what should be considered acceptable smart machine practice has been the increasing number of incidents of companies using software to defraud the government, their customers, or both.

For example, in September 2015 Volkswagen admitted that, since 2008, it had used engine control software to illegally help some 11 million of its vehicles sold around the world pass emissions tests. The company has set aside $7.2 billion to cover the anticipated costs of governmental fines and consumer lawsuits. In another case, Barclays Bank agreed to pay $635 million in penalties in 2015 for twice manipulating its electronic foreign exchange trading, one instance relating to manipulating spot market trading and the other involving its “Last Look” trade system. In the latter instance, which began in 2011, Barclays dishonestly rejected client trades whenever they would cost the bank money and then lied to its clients about the real reasons their trades were rejected. And in a third case, both Honda and Fiat Chrysler were fined $70 million apiece by the US National Highway Traffic Safety Administration for failing for more than a decade to report vehicle safety issues to the government as required. Both automakers blamed inadvertent “computer programming errors” for the oversight, a claim that few found credible.

THE LAW AND ETHICS IN THE ERA OF SMART TECHNOLOGY

It is a truism that policy makers and legislators struggle to keep up with the societal implications of emerging technologies. On one hand, there is the desire not to stifle technological innovation and the benefits it brings; on the other, it is important to protect the public from harm. In this nascent era of smart machines, where technology is moving from artifacts that we merely use to artifacts that we interact with, the law is falling further and further behind in providing needed guidance. For instance, US states such as Texas have no legal constraints on self-driving cars, and state officials indicate that they won’t impose any anytime soon. That could set up an interesting situation in which a self-driving car that is “street legal” in Texas would be illegal to drive in California.

There are even more vexing issues to consider. For example, what path should a self-driving car be programmed to take if it finds itself in a situation where it must crash either into a bus stop full of schoolchildren or into a single adult pedestrian nearby? The “trolley problem,” as it is called, is just one ethical dilemma that the designers of smart machines now have to confront, solve, and defend to politicians, the public, and their lawyers. Similar trolley problems are arising as smart machines are developed for use in healthcare, aviation, finance, law enforcement, and so on.
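
To make the point concrete, here is a minimal, purely hypothetical sketch (in Python) of how such a dilemma turns into an explicit design decision. Every class name, function, and weight below is invented for illustration; no real vehicle uses logic this simple, and choosing the numbers is precisely the ethical judgment the designers would have to defend.

from dataclasses import dataclass

# Hypothetical illustration only: the ethical trade-off becomes literal code.
@dataclass
class PathOption:
    name: str
    expected_injuries: float  # estimated harm to bystanders on this path
    occupant_risk: float      # estimated risk to the vehicle's own occupants

def choose_path(options, occupant_weight=1.0):
    """Pick the path with the lowest weighted expected harm.

    The single parameter occupant_weight encodes a value judgment:
    how strongly the car favors its occupants over bystanders.
    """
    return min(options, key=lambda p: p.expected_injuries + occupant_weight * p.occupant_risk)

# The dilemma from the text, in (invented) numeric form.
options = [
    PathOption("swerve toward the bus stop", expected_injuries=5.0, occupant_risk=0.1),
    PathOption("swerve toward the lone pedestrian", expected_injuries=1.0, occupant_risk=0.1),
    PathOption("brake hard and stay in lane", expected_injuries=0.5, occupant_risk=2.0),
]
print(choose_path(options).name)  # with these numbers, prints "swerve toward the lone pedestrian"

Change occupant_weight, or add terms for liability or the age of those at risk, and the answer changes, which is why such choices end up in front of regulators, juries, and the public rather than staying buried in code.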

TURNING POINT

When the law doesn’t exist, our only guidance for programming the algorithms of our smart machines is the ethics or value judgments we hold regarding what does and does not constitute acceptable behavior. But the question is — as it has been for thousands of years — in whose ethics and whose interests should we ultimately place our trust? I predict 2016 will be a year when that question becomes a topic of major public interest.

One reason is that, according to a Bloomberg News story, the year 2015 marked a turning point for AI. Significant progress has been made on a host of difficult problems, ranging from image recognition to machine learning. Furthermore, we can expect improvements in AI to start accelerating in 2016, making the question of whether the algorithms being used to control smart machines are ethical — with all that word signifies — more important than ever.

If 2016 is not the year that we start paying attention to the ethics of algorithms, it had better be soon. For as Noel Sharkey, cofounder and executive chair of the newly formed Foundation for Responsible Robotics, has observed:

We are rushing headlong into the robotics revolution without consideration for the many unforeseen problems lying around the corner. It is time now to step back and think hard about the future of the technology before it sneaks up and bites us when we are least expecting it.

About The Author
Robert Charette
Robert N. Charette is a Cutter Fellow and a member of Arthur D. Little’s AMP open consulting network. He is also President of ITABHI Corporation and Managing Director of the Decision Empowerment Institute. With over 40 years’ experience in a wide variety of technology and management positions, Dr. Charette is recognized as an international authority and pioneer regarding IT and systems risk management. Along with being a Contributing Editor to…