
AGI and the Ethical Challenges Ahead for 2018

Posted February 5, 2018
Alexandre Rodrigues

CUTTER BUSINESS TECHNOLOGY JOURNAL, VOL. 31, NO. 1

As a follow-up to his article from last year’s trends issue, Cutter Senior Consultant Alexandre Rodrigues expands on his commentary on artificial general intelligence (AGI). AGI capabilities continue to play an influential role in the business arena, but, as Rodrigues points out, there are two issues companies need to firmly address: (1) setting realistic expectations for the technology and (2) various ethical challenges (along with the need for more legislation and regulation).

The phenomenon of artificial general intelligence (AGI) will continue, in many ways, to influence how organizations do business and how societies organize themselves. The process has become irreversible. I addressed this matter in my predictions for 2017, and, given its continued relevance, the subject is worth pursuing as we look into 2018.

AGI capabilities continue to develop and to play an increasingly influential role in the business arena, in applications ranging from war drones and self-driving cars to the optimization of sales processes. But there are two aspects we should address now: (1) setting realistic expectations as to how far AGI can go, and in what time frame, and (2) acknowledging the important ethical challenges that lie ahead.

From a science-fiction perspective, it seems quite reasonable to anticipate that AGI-based robots that look and behave like us will ultimately outmaneuver human beings due to their superior intellectual and physical capabilities. But is such a scenario likely to occur anytime soon? Will it ever happen? Why would humans build robots capable of overtaking our own species, unless it were accidental? Do we have valid reasons to fear such a scenario, given the likely absence of human-like emotions in such robots? Should we develop a “safety” technology side by side, to ensure we can exterminate, stop, or otherwise switch off those AGI-enabled robots should they become dangerous?

On the ethics side, there are two main aspects to consider. First, we need to think carefully about the “programmed” behavioral logic we may introduce into robots. One example is the self-driving car that, in an unavoidable collision, decides based on its programmed logic whom to protect and whom potentially to injure: its passenger or the pedestrian crossing the road. Another is an AGI sales program implemented to sell useless insurance products to uninformed consumers to improve profits (note that humans have long committed fraud and deceit even without AGI).

Second, and, in my opinion, the far more relevant aspect in the short term, is the use (or abuse) of AGI devices to “hide” unethical human behavior. Examples include drones bombing the allegedly “wrong” targets, or myriad other actions in which companies, once their misdeeds are uncovered, may attempt to deflect blame onto the robots with statements along the lines of, “We apologize for the incorrect actions of our robots; we are working hard to further improve the ethical rules we incorporate into our AGI devices.” Or consider operating systems that degrade the performance of the older hardware they run on, with the aim of stimulating users to dispose of old devices and buy new ones, thereby increasing sales. Since AGI robots have the potential to exacerbate the worst tendencies in human behavior, where will we draw the line between direct human accountability and accidental, unintended AGI behavior?

We are still quite far away from AGI robots reaching the level of outmaneuvering humans and becoming autonomous beings in themselves. It is arguable whether we will ever reach such a scenario, even if it is possible. To reach the AGI point of so-called singularity, a long road of cumulative progress is required: (1) logically optimized programmed behavior (e.g., finding the fastest route between two geographical points on a map); (2) “animal-like” sustained and efficient learning (most likely based on artificial neural networks); (3) artificial self-awareness; and, finally, (4) a form of self-sustainability and/or a system to ensure continuation and evolution (the equivalent of procreation in biological beings).
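To make Step 1 concrete: the route-finding example is a solved problem in classical computing, handled by a shortest-path algorithm such as Dijkstra’s, with no learning or self-awareness involved. The Python sketch below is a minimal illustration added for this discussion, not part of the original article; the road network and node names are purely hypothetical.

import heapq

def shortest_route(graph, start, goal):
    # Dijkstra's algorithm: graph maps each node to (neighbor, distance) pairs.
    # Returns (total_distance, route), or (float("inf"), []) if no route exists.
    queue = [(0, start, [start])]  # (distance so far, current node, route taken)
    visited = set()
    while queue:
        dist, node, route = heapq.heappop(queue)
        if node == goal:
            return dist, route
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (dist + edge, neighbor, route + [neighbor]))
    return float("inf"), []

# Hypothetical road network: node names and distances (in km) are illustrative only.
roads = {
    "A": [("B", 5), ("C", 10)],
    "B": [("C", 3), ("D", 11)],
    "C": [("D", 4)],
}
print(shortest_route(roads, "A", "D"))  # -> (12, ['A', 'B', 'C', 'D'])

Behavior of this kind is fully deterministic and auditable, which is precisely what separates Step 1 from the later steps.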

While Step 1 has largely been achieved, we are still developing the technology for Step 2, and Step 3 remains under philosophical discussion. In my opinion, humans are more likely to use technology and AGI to extend our own capabilities and life expectancy (in other words, to introduce AGI technology into our evolutionary path) than to create new “AGI beings” from scratch capable of taking over humankind (which would be Step 4). In my view, no major breakthroughs toward singularity are expected in the short term.

The ethical dimension, on the other hand, is becoming increasingly relevant, not so much because of the accidental emergence of “unethical” behavior in AGI-enabled robots, but because of the unethical use of AGI by humans in the business world and in social affairs (e.g., the selling of useless insurance products or the misuse of drones in warfare). These are the areas where crucial issues exist in the short term, and they require deliberate consideration of the ethical ramifications.

In fact, I believe that the dangers and risks posed by a self-sufficient, AGI-enabled device that acts “unethically” because it may lack (benign) “human emotions” are far smaller than the dangers and risks of humans taking advantage of AGI devices to pursue their own agendas, often focused on obtaining immense benefits for a few individuals at the expense of great loss for vast segments of society.

Areas where concerns can be immediately identified include the use of drones (mainly, but not only, in warfare) and the use of AGI in security, privacy, and sales. The main concerns relate to the current lack of legislation and regulation, not only about what can be done with AGI devices (e.g., airports have only recently been supported by legislation restricting private drones that dangerously interfere with landing planes), but also about what kind of AGI can be developed in the first place. Should it be legal to develop software that deliberately markets profitable but unsuitable insurance products to the detriment of clients? Or to program an AGI war drone to identify its targets based on racial or religious prejudice, for example?

The potential for the development of “unethical” AGI-enabled devices is immense, and the financial and social interests at stake are very high. Indeed, I believe these stakes are so high that humans will not easily resist the temptation to exploit this potential. Furthermore, unethical actions perpetrated by AGI devices will make it very difficult, if not impossible, to trace accountability to a specific individual or group of individuals.

In summary, I anticipate that AGI is an arena of rapidly emerging issues, triggering concerns and the need for a more proactive approach to legislation and regulation (which so far has been primarily reactive). The label “We use AGI ethically and responsibly” should be honestly used and rapidly promoted as a marketing asset. Ethical and responsible use of AGI will benefit society as a whole. Unregulated and unethical use will plunge society further into socioeconomic and environmental problems, conflicts, and social inequality, all of which have increased globally at an unprecedented rate over the last few years.

About The Author
Alexandre Rodrigues
Alexandre Rodrigues is a Cutter Expert and a member of Arthur D. Little's AMP open consulting network. He is Executive Partner of PMO Projects Group, delivering advanced project management training and consulting services to clients worldwide. Dr. Rodrigues has more than 25 years’ experience in project management and has led the implementation of earned value management (EVM) systems for multibillion-dollar projects and control programs…