Robots, Algorithms, Ethics, and the Human Edge

Paul Clermont

Cutter IT Journal, Vol. 29, No. 5 | June 2, 2016
Robots have fascinated people ever since the Czech author Karel Čapek introduced the word in his 1920 play R.U.R., so worrying about robots and how they’ll interact with (or replace) us is nothing new. Early conceptions of robots were eerily anthropomorphic, but the first practical robots were anything but. Introduced to factories in the 1980s to do specific repetitive tasks like welding or painting, they replaced muscle power and resembled disembodied arms with multiple elbows and wrists. Their “intelligence” was provided by algorithms their “masters” — programmers — coded. (Household robots can now vacuum floors and mow grass.)

An early concern about robots was their ability to replace human labor, leaving the question of what unemployed welders and painters would do, a question as old as the first time a beast of burden carried a load that would have required several people. That remains important from a societal perspective (more on this later), but it has recently been eclipsed by bigger questions as the related field of artificial intelligence (AI) has, in essence, begun to robotize tasks requiring brain power applied in prescribed and structured ways, such as lending decisions, legal research, and medical diagnoses. (I have taken the liberty of extending the concept of “robotizing” to the use of algorithms to execute non-physical tasks once the exclusive province of people.)

Computer scientists in labs and universities have pursued AI for more than half a century, making advances apace with the ever-growing power of computers. They made headlines in 1997 when IBM’s Deep Blue defeated the world’s greatest chess player. Language translation has evolved from the butt of humor to something serviceable like Google Translate. The latest headline accomplishment was the AlphaGo computer beating the world’s best Go player about 10 years before experts believed that would be possible. AlphaGo succeeded by incorporating heuristics — not even today’s supercomputers have the horsepower to evaluate exhaustively the myriad combinations by brute force — and these heuristics had the ability to improve themselves through trial and error and pattern recognition without human intervention. One somewhat disquieting aspect of AlphaGo’s victory is that it made some very unusual, almost unorthodox, moves that would not occur to most human Go masters but which proved highly effective. AlphaGo’s designer was himself surprised.

Much sensationalist (as well as thoughtful) press has appeared about computers exhibiting such seemingly superhuman capabilities in human pursuits. Impressive as AlphaGo and its ilk are, however, we need not yet fear robots and computers taking over the world. Go, like chess, is well suited to AI. It involves complex and rigorous “thought” processes that are natural to computers but make most people’s heads hurt. One hundred percent of the relevant information is available. There is no possibility of cheating. The goal is unambiguous and morally and ethically neutral. In other words, it’s not like much of real life.

We should be scared — very scared — if and when a computer starts beating Las Vegas poker champions or even our buddies from a friendly game. Poker is only peripherally about cards and odds; it’s really a psychological contest in which players try to read their opponents’ “tells” while concealing or falsifying their own. In other words, it’s like real life distilled down to its most stressful intensity. Such a computer would pass the Turing test with an A+!

THE EMERGENCE OF ROBOETHICS

The purpose of this article is to explore the boundary between machine capabilities and what once seemed uniquely human. That boundary has certainly moved over the years, justifying concerns that the relatively new field of roboethics addresses. Roboethics goes beyond job losses and looks at the impact of robotization on society as a whole; that is the major topic here. (I will address job losses at the end.)

An algorithm can be unethical in both obvious and subtle ways. It could be illegal, as may have been the case with Volkswagen’s engine management algorithms for its “clean” diesel engines. It could be unethical in the sense that it violates a sense of fair play, as I discuss in the “Overdraft Handling” sidebar.

OVERDRAFT HANDLING

Say you unintentionally write a number of checks that hit your bank on the same day, when the available funds are insufficient to clear them all. Your account might be with any of three generically named banks. Tenth National’s algorithm processes the checks in a sequence that minimizes the number of overdrafts, each incurring a typical $35 fee, usually accompanied by the payee’s $25 fee for a bounced check. Fifth State processes them in no particular order. Third City processes them in the order that maximizes the number of overdrafts and thus the fees it collects.

A strong argument could be made that Tenth National’s algorithm is ethical; it extracts an appropriate penalty but reflects a level of decency. Third City’s algorithm, by that standard, is unethical by design; it takes advantage of an opportunity to increase its profits by piling fees onto what are probably its least well-off depositors. Fifth State’s algorithm is not by design either ethical or unethical, but when ethics could be designed in, the bank’s failure to do so borders on unethical.
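To make the sidebar’s mechanics concrete, here is a minimal sketch in Python of how posting order alone changes the number of overdrafts. It is illustrative only, not any real bank’s posting logic; the balance, check amounts, and fee assumptions are hypothetical:

```python
def count_overdrafts(balance, checks, order):
    """Post checks in the given order; return how many bounce.

    Simplification: a bounced check leaves the balance unchanged
    (fees are billed separately), which is enough to show the effect.
    """
    if order == "smallest_first":    # Tenth National: minimizes bounces
        checks = sorted(checks)
    elif order == "largest_first":   # Third City: maximizes bounces
        checks = sorted(checks, reverse=True)
    # "as_received" (Fifth State): leave the arrival order untouched

    overdrafts = 0
    for amount in checks:
        if amount <= balance:
            balance -= amount        # check clears
        else:
            overdrafts += 1          # $35 bank fee plus payee's $25 fee
    return overdrafts

checks = [100, 950, 75, 25, 50]      # five checks hitting on the same day
for order in ("smallest_first", "as_received", "largest_first"):
    print(order, "->", count_overdrafts(1000, checks, order), "overdrafts")
# smallest_first -> 1, as_received -> 1, largest_first -> 3
```

The same depositor, the same checks, and the same shortfall yield one overdraft or three depending purely on the bank’s choice of posting sequence, which is exactly the design decision the sidebar calls ethical or unethical.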

 

More subtly, an algorithm could take on decision-making roles that a human is better equipped to play, thereby yielding unethical results. While algorithms are better at minimizing stereotyping and personal prejudice in decision making, and they can apply the available data thoroughly and consistently, people still offer critical strengths. I call these the “human edge”:

  • Nuanced judgment based on circumstances and context, differentiating between situations that are the same only technically or on the surface
     
  • Emotional intelligence and empathy
     
  • Plain old common sense applied when the algorithm produces absurd or unjustifiable results
     
  • Intuition, imagination, and creativity
     
  • A sense of fairness, decency, and the golden rule — the essence of ethics — and the ability to apply it when an algorithm would violate that sense based on data that a human could recognize as stray, incorrect, or irrelevant
     
  • Being accountable for results without the defense that “the algorithm made me do it”
     

I use the term “edge” in two senses, as both a boundary and an advantage, and I suggest that the boundary will prove robust for a very long time.

A corollary of this is that algorithmic approaches don’t necessarily involve computers and AI. Consider, for example, mandatory sentencing rules that take over part of the traditional role of judges.

Algorithms are also only part of this emerging discussion, because most algorithms depend on data about the situation at hand, plus knowledge developed from large volumes of statistics related to that situation. Data don’t just appear; they have to be collected, primarily from us, often without our knowledge. The sheer power of IT to collect, store, transmit, analyze, and distribute exabytes (a billion billion bytes) of data — all of these capabilities growing exponentially — has raised possibilities for abuse and misuse only now imaginable and well outside the scope of laws and regulations developed to address yesterday’s issues. Today’s data collection can provide real and important benefits to individuals and society as a whole, but we must not ignore the potential for data misuse and abuse (a subject that merits an article of its own).

ALGORITHMS USED AND MISUSED

The following examples show how robotizing activities that call for nuanced thinking, fine judgment, common sense, and/or plain decency is not just unethical, but often absurd and abusive. Some of the examples have nothing to do with computers but still represent robotizing.

Home Mortgages: Then and Now

  • In 1969 and 1978, my wife and I were granted mortgages based not just on verifiable data, but on coming across to the banker who interviewed us as serious and responsible young people. The officer had to be careful, since the loan would stay in the bank’s portfolio for multiple years. It’s possible our results would have been different — even if everything else were the same — had our skin been darker or had my wife been instead a male “housemate.”
     
  • In 1993, despite a large down payment and my wife’s great job, my self-employment status raised an algorithmic red flag (a false negative in this case). As a result, we had to jump through numerous hoops, some of them ridiculous, and engage a mortgage broker to advocate for us. We never met a single employee of the bank that ultimately granted the mortgage, but then why would they have bothered, given that they sold it off in a few months?
     
  • The collapse of the housing bubble was largely the fault of false positives — granting huge mortgages to people who had little chance of paying them off. This was driven in part by the extraordinary profitability of subprime loans if they were repaid, a very big “if” as it turned out. The incentive structure virtually guaranteed the horrendous days of reckoning we experienced, which almost collapsed the world’s financial system.
     

The $400 Hammer

A number of years ago, a US defense contractor was publicly pilloried by a Senate committee for charging the Air Force $400 for a hammer. By chance, I was consulting with that contractor at the time, and their CFO showed me in detail how that figure was developed using the Defense Department’s own algorithms for calculating the price. Strictly speaking, nothing unethical happened, but when an algorithm can create the appearance of unethical behavior, it makes sense for a human to override it, as the contractor thereafter did.
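The article doesn’t reproduce the Defense Department’s actual formula, but one common way such a figure can arise is when contract-wide support costs are allocated evenly across line items regardless of each item’s direct cost. A purely hypothetical sketch of that mechanism:

```python
# Hypothetical illustration only; the DoD's actual pricing algorithm is
# not given in the article. Here, contract-wide support costs (engineering,
# testing, administration) are spread evenly across line items, so a $10
# hammer absorbs the same overhead share as an expensive avionics unit.

line_items = {               # direct cost of each deliverable, in dollars
    "avionics unit": 250_000,
    "test harness": 40_000,
    "hammer": 10,
}
contract_overhead = 1_170    # assumed support costs to be allocated

share = contract_overhead / len(line_items)   # equal split: $390 per item
for item, direct_cost in line_items.items():
    print(f"{item}: billed price = ${direct_cost + share:,.0f}")
# hammer: billed price = $400 (nothing illegal, but terrible optics)
```

Under an allocation rule like this, every number is defensible in the aggregate, yet the cheapest item on the contract ends up looking like a scandal, which is why a human override made sense.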

Criminal Justice

During the 1970s and 1980s, when US crime rates were soaring, concerns were raised that some judges were “coddling criminals” by turning them loose with nominal or suspended sentences. In response, both US state and federal legislatures passed mandatory minimum sentencing laws, in effect imposing algorithms on judges.

Laws are blunt instruments that address offenses, not individual offenders. The traditional role of judges has been to exercise — yes — judgment, recognizing that two people convicted of the same offense can pose drastically different risks to society if not locked up. The result of these laws is a far higher rate of incarceration in the US than ever in the past or than in any other developed country, largely for nonviolent offenses. The toll of lives ruined by excessive algorithmic sentences fairly screams “unethical.”

Sex Offender Registries

Few crimes are more heinous than the torture, rape, or murder of children. Sometimes perpetrators, after finishing even long sentences, do it again. The emotional outcry when this happens is understandable: “How can we protect our children when we don’t even know whether these monsters might be lurking right next to a school?” Laws get passed, often named for the victim, requiring that sex offenders no longer in prison register their whereabouts and live a safe distance from possible prey. Unfortunately, in the implementation, some jurisdictions have used very expansive definitions of sex offenses to include consensual sex between minors and even children playing “doctor.” A recent New Yorker article cited a number of cases where behavior of minors that’s at worst undesirable or unwise gets classified as a sex crime, casting a permanent pall over the lives of these minors when their names show up on publicly accessible registries. Even when appeals to common sense have expunged the names from the official registry, there is no requirement for private websites that have copied public registries to update their copies. Even some of the most vocal proponents of registry laws have decried the gross injustices — aka unethical algorithms — built into their implementation.

Speed Traps

Soon after automobiles were introduced, it was clear that they could go a lot faster than was safe in the circumstances, hence speed limits. After not too many years, it also became clear that catching people speeding could be a good source of non-tax revenue for local jurisdictions. The advent of radar and lasers — plus quotas — turned speed traps into an industry. Perversely, they were usually placed to catch people in locations where modest speeding would not be dangerous — a technical violation unrelated to the intent of the law. You’re caught and fined by algorithm because it’s easy, unlike detecting and dealing with driving that’s actually dangerous. While the practice is not technically unethical, the bigger picture is more questionable, as scarce police resources are diverted from public safety to collecting government revenue. The police practices uncovered in Ferguson, Missouri, crossed the ethical line unambiguously when African-Americans became a special target of enforcement.

Airport Security

As a frequent flyer with a metal hip, I get to spend a fair bit of time being examined at airports. Not only am I taken aside and 100% wanded to locate the metal and assure its harmlessness, which is perfectly reasonable, but I am also patted down, a “service” only rarely offered to people who don’t set off the metal detector. Two algorithmic approaches crash head-on here. One is avoiding even the appearance of ethnic profiling, devoting the same level of attention to all, regardless of any clues about the likelihood of being an actual terrorist (such as age and, yes, ethnicity). The other algorithm would favor efficient use of security resources, which would argue that clues be taken into account when determining the level of attention. Recent test findings showing that contraband is still getting onto flights with alarming frequency suggest the balance between the competing algorithms may be a bit off.

“Zero Tolerance”

Zero tolerance policies, often applied in schools to students based on behavior or possessions, bespeak a no-nonsense approach: no exceptions, applying equally to all, and so on. Unfortunately, this rejection of human discretion can lead to absurdities such as a five-year-old being accused of making a “terroristic threat” for talking in the bus line about her Hello Kitty bubble gun. Clearly such policies should sound an alarm to anyone who seriously ponders the ethics of algorithms. They are a flat-out denial that the human edge can add value, and they’re coming under critical scrutiny.

WHAT TO DO

Roboethics owes its existence as a new discipline to robots and algorithms, but these are not themselves the real ethical threat. Rather, the threat comes from robotic and algorithmic approaches to situations where the human edge is critical to ensuring results that are fair and beneficial to individuals and society at large. Computers may or may not be involved; it’s the approach that matters. Addressing the threats needs to happen at multiple levels.

Public Policy

  • Only legislation or judicial decisions can deal with existing laws such as mandatory minimum sentences or overinclusive definitions of sex offenses. This means recognizing that justice is not the same as law enforcement. No matter how necessary or well intentioned, a statute cannot make the fine distinctions that justice calls for if lives are not to be unnecessarily blighted.
     
  • Governments need to embrace the notion that fines should be levied as punishment for infractions with the goal of minimizing occurrence of those infractions — not as a source of predictable revenue. (Good luck with this!)
     
  • Unethical algorithms need to be exposed and dealt with by, for example, consumer protection agencies.
     
  • New laws should better protect whistle-blowers who call out ethical issues with algorithms.
     
  • New laws should mandate that third-party repositories of official data keep their copies of that data up to date when the official source changes, with penalties for failure to do so.
     

Media, Watchdog, and Advocacy Groups

Such organizations can play a constructive role by highlighting laws that result in unethical outcomes so as to generate popular support for change. They can also play a part in naming and shaming businesses that deploy unethical algorithms, such as Third City’s handling of overdrafts, with the goal of getting such practices banned. By building awareness, such publicity makes it worthwhile for better-behaved companies like Tenth National to incorporate their “code of ethics” into their marketing. (Refer back to the “Overdraft Handling” sidebar.)

Businesses and Governments

Businesses and governments need to remove robotic algorithms from jobs where the human edge matters. Algorithms can be tremendously helpful in decision making, up to and including making recommendations, but they should not actually decide in cases where the human edge plays an important role in ensuring fairness and applying common sense. Explicit liability for bad robotic decisions is needed.

These entities also need to recognize that as algorithms become more sophisticated, they may generate unpredictable results à la AlphaGo. This suggests a need for the equivalent of the nuclear industry’s containment vessels to keep algorithms from going out of control, as they may have done in the home mortgage meltdown of 2007-2008.
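As a minimal sketch of what such a “containment vessel” might look like in software, an algorithm can be allowed to act on its own only inside pre-set sanity bounds, with anything surprising routed to a human for review. The names, thresholds, and bounds below are hypothetical, not a prescribed design:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str           # what the algorithm wants to do
    confidence: float     # the algorithm's own confidence, 0 to 1
    exposure: float       # dollars at risk if the decision is wrong

# Hypothetical containment bounds; in practice these would be set by the
# business and revisited as the algorithm evolves.
MAX_AUTO_EXPOSURE = 50_000.0
MIN_AUTO_CONFIDENCE = 0.9

def contain(decision: Decision) -> str:
    """Let routine decisions through; escalate surprising ones to a human.

    The algorithm acts alone only inside bounds where errors are cheap
    and expected; everything else becomes a recommendation for review.
    """
    if decision.exposure > MAX_AUTO_EXPOSURE:
        return "escalate to human: exposure beyond containment bounds"
    if decision.confidence < MIN_AUTO_CONFIDENCE:
        return "escalate to human: algorithm itself is unsure"
    return f"execute: {decision.action}"

print(contain(Decision("approve $20k loan", 0.95, 20_000)))    # executes
print(contain(Decision("approve $900k loan", 0.97, 900_000)))  # escalates
```

The point of the design is that the algorithm recommends everywhere but decides only where errors are cheap and expected; everything else lands on a human’s desk with accountability attached.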

Military Policy

To the extent that autonomous weapons replace physically present soldiers who can see the scene clearly enough (the “fog of war” notwithstanding) to exercise judgment, common sense, and decency, robots as soldiers would be another example of the unethical use of algorithms. Robots fighting robots sounds like the stuff of video games and films.

IT Practitioners Need a Code of Ethics

Under an IT code of ethics, practitioners would:

  • Refuse to participate in illegal IT (e.g., VW’s emission test–cheating software)
     
  • Call out attempted misuse based on robotizing activity where the human edge is critical
     
  • Call out algorithms that offend standards of human decency in pursuit of profit (e.g., Third City’s overdraft handling)
     
  • Establish “containment vessel” processes for recognizing unpredictable and possibly erroneous algorithmic outcomes in time to enable human intervention
     
  • Avoid premature public release of applications when the likelihood of problems adversely affecting users is more than very low (unless accompanied by explicit warnings and waivers)
     
  • Ensure the security of sensitive personal data
     

Is this idealistic? Of course. Companies are not democracies. IT professionals have mortgages to pay and children to educate, making pressure to build something of dubious ethics extremely difficult to resist. When whistle-blowers reveal that they were asked — or, more accurately, told — to do something illegal, they may have the satisfaction of knowing they did the right thing, but too often at great cost to their careers and their families.

Just Because We Can

… should we build it? Concerns over new technologies can be overblown by the media and politicians, but they should not be reflexively dismissed. Yes, such concerns can slow down innovation, but that is not necessarily a bad thing. DDT and thalidomide did their intended jobs beautifully — but then we saw their devastating side effects. The pressure to move fast is particularly intense in IT, where speed to market is critical and tech executives with a libertarian bent want governments and public interest groups to stay out of the way. That doesn’t mean every idea should proceed at full throttle, though, assuming nasty flaws will be kind enough not to materialize. When members of the public could be adversely affected by things going wrong, prudence and caution are in order.

WHAT ABOUT THE JOBS?

Every technology in human history has destroyed some jobs. Each has also created new jobs, requiring more skill, for people who improve, manage, and operate the technology. With respect to jobs — not social impact — robots and AI are simply the latest manifestation of this phenomenon. In this article, I argue that we are far, perhaps a bridge too far, from computers that truly mimic the capabilities I’ve lumped together as the human edge. What we can expect is that jobs requiring no element of the human edge (e.g., welding and painting on an assembly line) will continue to disappear, along with even more skilled work (e.g., a lot of legal research and basic accounting) as clever technologists figure out how to do it by machine. But jobs won’t disappear entirely. Some may simply never be economical to robotize, like a lot of housework. (Who knows, we may see a resurgence in the use of live-in domestic servants, in one stroke filling up the McMansions and relieving the housing shortage in affluent areas!)

Jobs requiring the human edge will not disappear and in fact could increase as people take back work that has been overly delegated to algorithms. Naturally, the more that jobs require multiple elements of the human edge, the more interesting and emotionally (and financially) rewarding they become, as the element of scut work is minimized by the use of algorithms. For example, in the financial industry, the labor-saving benefit of algorithms that analyze large volumes of data will be offset by time spent doing more analyses and improving the algorithms.

Concerns about job displacement are legitimate, and the labor market left to its own devices may not be a match for the dislocation. Society as a whole has to ensure fair and responsible solutions, but that’s above the pay grade of technologists.

IN CONCLUSION

Where technologists need to play a real role, both in terms of actions and advocacy, is in dealing with the ethical issues we face as IT ever more thoroughly infiltrates our whole lives, not just our jobs. Future technologies — perhaps prodded by laws and regulations and social pressure — may help address some of the ethical concerns we have about today’s technologies, but they will likely add new concerns. Responding to them requires clear thinking about the encroachment of computers and algorithms into the realm of the human edge. In this article, I have tried to contribute to that clarity.

ENDNOTES

Go is an ancient game from Asia played on a 19×19 grid. Players place black or white stones to mark territory and capture their opponent’s stones. It seems simple until you try to play it!

I owe the containment vessel analogy to my former Nolan Norton colleague Bruce Rogow.

Just as I got senior enough to delegate spreadsheet work, VisiCalc came on the market!

About The Author
Paul Clermont
Paul Clermont is a Cutter Expert. He has been a consultant in IT strategy, governance, and management for 40 years and is a founding member of Prometheus Endeavor, an informal group of veteran consultants in that field. His clients have been primarily in the financial and manufacturing industries, as well as the US government. Mr. Clermont takes a clear, practical view of how information technology can transform organizations and what it takes…