
What’s Really Ahead for Generative AI?

Posted August 28, 2023 | Technology | Amplify

AMPLIFY, VOL. 36, NO. 8
ABSTRACT
Cutter Expert Paul Clermont takes a down-to-earth look at what we can expect from AI in the near term. For one thing, he says, we’re still in the garbage-in, garbage-out phase with LLMs; for another, it’s nowhere close to artificial general intelligence. There are, of course, ethical and social implications, including the fact that AI puts what we don’t like about today’s Internet (disinformation, loss of privacy, and more) on steroids. A host of new legal issues also needs attention, Clermont notes, which may lead to governments playing a role in the evolution of AI usage that they did not assume with the advent of the computer or the Internet.

 

The term “artificial intelligence” (AI) was coined in 1956, when a group of mathematicians and computer scientists (surely inspired by Alan Turing, who died in 1954) conceived of a computer that could not only solve mathematical and combinatorial problems but could learn to become progressively better at doing so.1 AI stayed well out of the public eye, confined to academic and corporate laboratories until 1997, when Deep Blue, an IBM supercomputer, beat the world’s best chess player. Other notable successes followed, including Watson (another IBM supercomputer) winning Jeopardy! in 2011 and AlphaGo (software developed by DeepMind, a British company subsequently bought by Google) beating the world’s best Go player in 2016. Mass media took note. Today, AI news and opinions are daily fare, driven in part by the emergence of generative AI (GAI), which can generate original text and pictures about almost anything with minimal human instruction.

This is far more impressive than winning games with clear objectives and strict rules. New publicly accessible software has inspired both awe and fear as chatbots and illustrators (e.g., DALL-E) appeared and began improving, seemingly by the day. The speed of innovation has stoked fears that GAI will lead to terrible consequences. Governments in the US, UK, EU, and China are scrambling to develop AI regulations in an effort to minimize potential harm.

GAI promises (threatens?) to be a conceptual breakthrough on the level of automobiles, TV, and personal computers. Although none of these were very useful or universally embraced at first, they did not face active hostility as they worked their way into our daily lives. GAI faces a more skeptical public:

  • The long honeymoon that Big Tech enjoyed is over.

  • Less endearing aspects of the Internet have come to the fore (more on this later).

  • Some tech gurus and pundits have raised the specter of super-intelligent, amoral machines taking over and destroying humankind.

  • Tech industry moguls are calling for government regulation, a first for a group in which libertarians are well represented.

Anxiety, expressed in words like “trustworthiness” and “harms,” has become prominent in the public discourse and, more importantly, in government documents, such as proposed laws and requests for public comment.

Current Capabilities & Limitations

Before we can talk meaningfully about GAI, we should think about what the word “intelligence” conveys. When we describe people as intelligent, we may be referring to intellectual capabilities; for example:

  • Ability to command a great deal of information, both general and specific to a person’s career and interests

  • Ability to see nonobvious patterns and connections

  • Ability to identify and methodically evaluate a rich set of options before making a decision

  • Ability to see beyond conventional wisdom when problem solving

  • Ability to express oneself in clear, coherent, grammatically correct language

All of these have been successfully replicated on computers. The last item can be seen in GAI tools like GPT-4 and DALL-E, which can create high-quality, readable, and understandable text, pictures, and graphics for a variety of purposes:

  • It can help with research by finding and summarizing relevant published content.

  • It can find and summarize relevant legal cases and statutes.

  • It can do not-very-creative writing, including boilerplate and form letters. It can draft documents for people to finish (e.g., proposals and business letters), a boon for those spooked by a blank screen.

  • It can create pictures or modify existing pictures from a text description. Uses can be commercial or personal.

  • It can help software developers find and incorporate segments of proven open source code.

  • It can simplify and accelerate extracting and summarizing relevant information from large numbers of responses to open-ended questions.2

As we’ll see, fact-checking is still needed when the stakes are high and/or when there may be legal considerations or possible copyright infringement. Nevertheless, it’s enough to scare folks. If a computer can do all that and get better at it almost by the day …

How Scared Should We Be?

On the spectrum between Alfred E. Neuman (the “What, me worry?” kid of Mad Magazine fame) and Chicken Little (“The sky is falling!”), I suggest leaning toward Alfred. Lots of technologists and other pundits disagree, so I’ll try to make the case for relative calm as clear as possible.

However impressive the GAI we have seen thus far is, it falls well short of being a reliable tool. It is still in the stunt (or, more politely, the proof-of-concept) stage of development. For example, partners in a law firm were recently fined US $5,000 for using ChatGPT to write a legal brief. The output looked great, citing decisions in relevant cases that supported the brief’s arguments. The problem was that no such cases or decisions existed. The tool made them up, and the partners did not check its work as they would have checked a paralegal’s or junior associate’s.3

This is not an isolated case. In April, the Washington Post reported on an incident in which an AI chatbot generated a list of legal scholars who had sexually harassed someone and included a law professor who had never been accused of sexual harassment.4 The AI term for this is “hallucination”: confidently producing non-facts and highly plausible fakery. Large language model (LLM) mavens haven’t yet figured out why this happens or how to prevent it but are presumably working feverishly on the problem. A text-generating tool whose products require thorough fact-checking before use may be worse than no tool at all.

On a positive note, GAIs write rather well. An LLM uses vast amounts of written material to learn which words tend to follow other words, much like the next-word suggestions we see when composing text on a smartphone. The quantity of material used for this training is the paramount consideration, assuming that material is grammatical and uses words correctly. Judging by current LLMs, this part of the training works well.

However, training an LLM in the subjects it may be asked to write about requires concentrating on the quality (not quantity) of the training data. LLMs cannot currently recognize obsolete information or detect subtle biases in training materials, and if those flaws are there, they’ll make their way into the written products: GIGO (garbage in, garbage out), as always. (The people tasked with cleaning up training data aren’t infallible either.)
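To make the next-word-prediction idea concrete, here is a deliberately tiny sketch in Python. It is nothing like a production LLM (those use neural networks trained on enormous corpora of sub-word tokens), but it illustrates both points above: the model learns only which words follow which in its training text, and whatever that text contains, biases and obsolete facts included, is exactly what comes back out. The corpus is invented for illustration.

```python
import random
from collections import defaultdict

# A toy "language model": count which words follow which in the training
# text, then generate by repeatedly sampling a plausible successor.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

successors = defaultdict(list)  # word -> every observed next word
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Extend `start` by sampling an observed next word at each step."""
    words = [start]
    for _ in range(length):
        candidates = successors.get(words[-1])
        if not candidates:
            break  # never seen this word: the model can only echo its data
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # output mirrors the training text, good or bad
```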

In the meantime, GAI images are getting better. When DALL-E first came out, I asked for pictures of a smiling woman with a martini. It returned crude, almost grotesque images as though the intelligence itself had been dulled by more than a few drinks. I tried again last week and got high-quality, believable pictures. Since images of good-looking young women are plentiful, I asked for the smiling woman to be in her 80s — again, high-quality and realistic images were returned.

Generally speaking, current design concepts appear robust enough to permit more sophistication, assuming the engineers tackling problems like hallucinations succeed. The quality of subject-area training data can, and will, be improved as AIs focus on more limited knowledge pools. AI learns autonomously and can increasingly participate in the campaign to eradicate obsolete and biased training data, especially when augmented by human reinforcement. Caveats and favorable assumptions notwithstanding, generative AI is an impressive achievement, on a par with the first modern digital computer and the Internet.

But how close is it to artificial general intelligence?

Although GAI shows some impressive signs of human intelligence, it is still an idiot savant.5 It far exceeds human capability at tasks computers are good at (e.g., instant recall from a gargantuan photographic memory, rapid computation, and pattern recognition), and it will tirelessly and monomaniacally pursue an objective without being distracted by anything.

But there are hallmarks of human intelligence that are less easily defined (though recognizable when you see them) and vital to making the world work. Examples include judgment (pragmatic, ethical, moral); intuition; reading between the lines; sensing implicit questions; dealing with nuance; creating truly original artifacts (stories, images, music) out of one’s imagination; seeing subtle connections and analogies and distinguishing them from coincidence; sensitivity to context; plain old common sense; and the panoply of people skills, not least persuasiveness. People given an objective can sense implicit constraints that matter when their effort might otherwise conflict with something more important.

To be truly artificially generally intelligent, the technology must reflect these additional hallmarks to some degree. In this endeavor, we are essentially nowhere, which should surprise no one. The history of AI is littered with the erroneous assumption that because a complex intellectual feat has been achieved (champion-level checkers in 1962, plus the milestones cited earlier), less intellectually demanding tasks would be relatively easy.

In fact, the opposite has proven true. The more “primitive” the mental activity, the kind humans were engaging in for thousands of years before chess, the more difficult it is to capture in software.6 It may be impossible in some cases. Ironically, some of the most intelligent people in terms of the “softer” hallmarks are not the most brilliant in the “harder” ones, and vice versa.7 Games like chess reward a computer’s strengths; real life rewards uniquely human strengths, supplemented with appropriate computer strengths.

We should also bear in mind that the concepts and techniques underlying GAI were developed years ago and could have been brought to market much sooner but for the lack of sufficiently fast processors and sufficiently massive storage. The fast progress of the past few months does not represent a steady-state development rate.

This is not to say that we will never build machines with artificial general intelligence. We may do so eventually (it’s a mistake to underestimate the creative power of amazingly clever people setting their minds to an intellectually challenging task), but we ought not to hold our breath.

Some very prestigious people, including tech executives, are clamoring for AI regulation out of concern that artificial general intelligence (or even superintelligence) will wreak havoc in the not-too-distant future. A more cynical interpretation is that this is a deliberate diversion from having governments worry about the plentiful harms that today’s limited AI can (and probably will) do in the short term.8 Warning: this argument applies only to generative AI (see sidebar “Potential Dangers of Non-Generative AI”).

Potential Dangers of Non-Generative AI

AI built into highly networked systems that control physical facilities like the power grid presents far more opportunity for near-apocalyptic, intermediate-term mischief. Such systems may seem easier to build, since they don’t require artificial general intelligence, but doing them right is far from easy. It requires building in the software equivalents of the guardrails and containment vessels needed to respond to all that can go wrong. Unless multilevel constraints are built in, an AI given an objective will pursue that objective relentlessly, in both reasonable and crazy ways, and any attempt it makes to work around those constraints must be recognized and thwarted. Common sense is not an AI feature. Obviously, security against intrusion and hacking is of paramount importance. There’s also a case to be made for not pursuing the last iota of efficiency in choosing what to automate and whether to use AI. Overoptimization is a trap: it assumes everything will work correctly all the time. Those whose supply chains were massively disrupted by the pandemic or the Fukushima nuclear meltdown have presumably learned this lesson.
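To illustrate the guardrail idea, here is a minimal Python sketch, with all names and limits invented for illustration: whatever action an AI optimizer proposes, an independent hard-constraint check stands between the proposal and execution, and a violation is refused rather than trusted.

```python
from dataclasses import dataclass

# Minimal sketch of a software "guardrail": the constraint check is kept
# separate from (and cannot be overridden by) the optimizing AI itself.
# GridAction, the line IDs, and the 500 MW limit are all hypothetical.

@dataclass
class GridAction:
    line_id: str
    load_mw: float  # megawatts the optimizer wants routed over this line

MAX_LINE_LOAD_MW = 500.0  # hypothetical engineering limit

def within_guardrails(action: GridAction) -> bool:
    """Hard constraint check, independent of the optimizer's objective."""
    return 0.0 <= action.load_mw <= MAX_LINE_LOAD_MW

def execute(action: GridAction) -> None:
    if not within_guardrails(action):
        # Refuse and escalate rather than trusting the proposal.
        raise ValueError(
            f"Guardrail violation on {action.line_id}: "
            f"{action.load_mw} MW exceeds limits"
        )
    print(f"Routing {action.load_mw} MW over {action.line_id}")

execute(GridAction("L-42", 350.0))    # allowed
# execute(GridAction("L-42", 900.0))  # would be refused, not executed
```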

Ethical & Social Implications

In the short term, it’s widely understood that AI can be misused to do real harm to individuals and society. That was not a consideration for computers, per se. It wasn’t for the Internet either, but in retrospect it should have been, as we now wrestle with loss of privacy; advertising revenue–driven surveillance capitalism;9 an algorithm-enhanced attention economy sending people down informational rabbit holes; scammers; screen addiction (particularly among children and teens); trolling; cyberbullying to the point of driving some to suicide; and the proliferation of fake news, disinformation, conspiracy theories, and outright lies that can polarize societies. Unfortunately, AI puts what we don’t like about today’s Internet on steroids: mass production of it all with minimal human labor. Implications to consider include:

  • Chatbots hosted by large tech companies are likely to reflect honest efforts to make them as good as possible because the stakes are high, but that’s not necessarily true for smaller players.10

  • AI facilitates undesirable (and sometimes already illegal) hyperrealistic deepfakes: pictures, videos, or audio that purport to show someone doing or saying something they never did or said, in a place they may never have been, perhaps in the company of people they never met.11 The potential for this to sway elections and destroy careers, reputations, and marriages is obvious.

  • Face recognition without laws to restrain its use could lead to a level of privacy loss that would make the world of George Orwell’s 1984 seem benign.

Freedom from bias is a goal that must be pursued assiduously, but it cannot be an a priori requirement. Essentially all potential training data other than hard science reflects some bias (conscious or unconscious). No effort to “clean” it can be 100% effective, but bias must still be remediated when detected. Fortunately, a side industry of think tanks, both independent and within major players, has emerged to address AI ethics.12 We need them to be not only smart, but skillful in ensuring they’re heard and heeded.

Additionally, a host of new legal issues needs attention:

  • Compensating the owners of the data used for training. (Will LLMs like the one behind ChatGPT that hoovered up training data from everywhere without regard to copyright have to be replaced?)

  • Footnoting and citing reference sources.

  • Limits on free speech when producing and disseminating assertions known to be false (e.g., “the Earth is flat”) or overwhelmingly discredited by numerous reputable bodies (e.g., “Trump won in 2020”).

  • Classifying AI applications by level of risk to guide the scope and depth of regulation as the EU is doing with four categories: forbidden, strictly regulated, loosely regulated, and unregulated, based on the potential to do harm.

  • Identification of material substantively generated by AI (not just improved in grammar and style) as an AI product.

  • Treatment of libelous material when generated by AI rather than humans.

  • Liability of AI app users versus providers of the LLMs they rely on.

  • Licensing and auditing AI applications.

  • Penalties for malicious deepfakes.

There is also the issue of jobs. Every wave of industrialization and automation has spawned predictions of massive job loss. So far, the lost jobs have been replaced by new ones.13 The lump-of-labor fallacy has proven durable, though, and the repeated false alarms recall the boy who cried wolf.14 Will generative AI be a real wolf at last? Probably not, but there are no guarantees.

What Can We Expect in the Near Term?

First, generative AI will gradually become less controversial, although some will continue to decry it, emphasizing its problems and potential for harm. Some early critics will find ways to use it to good effect. For example, many teachers who initially saw only the problem of high-tech plagiarism have now embraced AI as a way to teach critical reading and thinking, including lessons on wording requests to get the results you seek (so-called “prompt engineering”). AI will get better, and we’ll see cautious (very cautious if the hallucination problem isn’t substantially mitigated) uptake in areas where its value is clear.
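As a concrete, purely illustrative sketch of prompt engineering, compare two ways of asking an LLM the same question. Both prompts are invented, and ask_llm is a hypothetical stand-in for whichever chatbot API a reader actually uses; the point is that the second prompt constrains audience, length, format, and sourcing, and even asks the model to admit uncertainty rather than hallucinate.

```python
# Two prompts for the same underlying question. The first invites a vague,
# unsourced ramble; the second pins down audience, length, format, and
# sourcing, and tells the model to admit uncertainty rather than invent.
vague_prompt = "Tell me about the causes of the French Revolution."

engineered_prompt = (
    "You are a history tutor for high-school students. In no more than "
    "150 words, list the three most important causes of the French "
    "Revolution as numbered points. For each cause, name one primary "
    "source a student could read. If you are not sure a source exists, "
    "say so rather than inventing one."
)

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chatbot API call."""
    raise NotImplementedError("wire this up to the chatbot of your choice")

# In practice: print(ask_llm(engineered_prompt))
```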

Second, barring major fiascos, building in AI will become the latest corporate fad, with almost every large company claiming to use it, no matter how trivial the application. (Remember reengineering?)

Third, governments will play a role in the evolution of AI usage that they did not play in the advent of the computer or the Internet. The EU and the UK government are devising and enacting laws. China is making rules as well, untrammeled by messy democratic processes. The US government (both Congress and the Executive Branch) is gathering testimony and commentary from industry executives and thought leaders, but the path to laws and regulations will likely be circuitous. Of necessity, US companies doing business overseas will need to conform to local regulations. GAI is sufficiently new and different, with effects on everyone, that attempting to regulate it by tweaking existing laws and regulations would be futile. Instead, legislators and bureaucrats need to approach the job from the ground up, based on principles that fully account for GAI’s uniqueness.15

Fourth, AI’s potential value in waging war will provide an impetus for rapid development wholly apart from business and personal use.

Fifth, any further talk of moratoriums on AI development will be just talk, no matter how eminent the people who call for it. In a field as exciting as AI in an industry as fiercely competitive as high tech, the idea that super-motivated inventors and engineers will simply take an extended vacation is ludicrous.

Conclusion

For better or worse, GAI is here. It may be a boon, reducing drudgery, leveraging talent, increasing productivity, and simplifying life at home and at work. It may also be a bane, as I’ve suggested.

The assumption that a free market will somehow grow the good AI and suppress the bad is hopelessly naive, as our Internet experience has shown. Potential profits are too high for that. We must hope our political leaders have the fortitude to push entrepreneurs and executives to do the right thing, which will require making the wrong thing unprofitable.16 Turning good ideas into enforceable laws and regulations that capture their spirit (i.e., not riddled with loopholes) will be the challenge of the decade. Still, we have to hope.

References

1 One of them was Marvin Minsky (1927–2016), whose course in AI I took many years ago. It was irrelevant to my career until recently.

2 I got this from my son, a political pollster. Before generative AI, pollsters avoided open-ended questions despite the deeper insights they offer because of the labor required to analyze the answers.

3 Weiser, Benjamin. “ChatGPT Lawyers Are Ordered to Consider Seeking Forgiveness.” The New York Times, 22 June 2023.

4 Verma, Pranshu, and Will Oremus. “ChatGPT Invented a Sexual Harassment Scandal and Named a Real Law Prof as the Accused.” The Washington Post, 5 April 2023.

5 This is an old-fashioned term, now surely politically incorrect, used to describe some unusual people who would today be considered on the autism spectrum. They seemed to be of low intelligence but exhibited superhuman capabilities at tasks like rapidly multiplying three-digit numbers in their heads or quickly identifying the day of the week, given the date on which some long-past event occurred. Sadly, they were exploited as curiosities.

6 It has proven extremely difficult to teach a robot to run across a randomly cluttered room without bumping into things, something most one-year-olds teach themselves to do.

7 US President Franklin D. Roosevelt, generally considered among the top three presidents (with Washington and Lincoln), was described by US Supreme Court Justice Oliver Wendell Holmes as having a second-class intellect but a first-class temperament.

8 This possibility has been mentioned in several publications, including: Heaven, Will Douglas. “How Existential Risk Became the Biggest Meme in AI.” MIT Technology Review, 19 June 2023.

9 Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 2019.

10 Thompson, Stuart. “Uncensored Chatbots Provoke a Fracas Over Free Speech.” The New York Times, 3 July 2023.

11 This is not a new idea; AI just makes it easier. Photographs of Stalin’s Politburo from the 1930s were seamlessly doctored in later years to remove former members whom Stalin had subsequently defenestrated, perhaps literally.

12 Knight, Will, Khari Johnson, and Morgan Meaker. “Meet the Humans Trying to Keep Us Safe from AI.” Wired, 27 June 2023.

13 A 1964 study commissioned by the US government predicted that the automation just getting started would in 30 years result in massive unemployment. Almost 60 years on, the US is enjoying a record-low unemployment rate.

14 The assumption that if some technology made people doing X twice as productive, employers would simply sack half of them rather than use them to increase output or perform some new tasks.

15 Symons, Charles, Ben Porter, and Paul Clermont. “Twelve Principles for AI Regulation.” Prometheus Endeavor, 5 July 2023.

16 The EU seems to be on that track, levying fines stiff enough that even a multi-billion-dollar corporation feels some pain.

About The Author
Paul Clermont
Paul Clermont is a Cutter Expert. He has been a consultant in IT strategy, governance, and management for 40 years and is a founding member of Prometheus Endeavor, an informal group of veteran consultants in that field. His clients have been primarily in the financial and manufacturing industries, as well as the US government. Mr. Clermont takes a clear, practical view of how information technology can transform organizations and what it takes…