Generative AI: A Conversation with the Future — Opening Statement

Posted August 28, 2023

AMPLIFY  VOL. 36, NO. 8

The technology industry is certainly not immune to hyperbole, but the speed of development in generative artificial intelligence (GAI) over the course of this year is unprecedented. In fact, it’s mind-boggling. For quite a while, the term “AI” has been bandied about, applied to several market segments, and talked up in numerous articles — yet somehow never delivered the “wow factor” that was promised. That changed on 30 November 2022 with the release of OpenAI’s ChatGPT.

Online AI bots that mimic human conversation aren’t new, but ChatGPT seemed to operate on a different level, able to almost instantly answer complex questions and engage in philosophical debate. Most importantly, it did this in the style and syntax of a human, creating the experience of talking to a genuine artificial intelligence.

After ChatGPT catapulted GAI into the media spotlight, it became apparent that the tool was still far from perfect, and attention shifted to the copyright issues raised by a GAI model that learns by scraping the Internet for content. Within the technology industry, many believed that only major companies would have the computing power and resources necessary to train the large language models (LLMs) needed to properly run GAI apps.

That stance contrasts sharply with the open source community, which was inspired to new levels of innovation after the initial “leak” of Meta’s LLaMA in early March. Things have moved quickly since the start of the year, and it’s now possible to train, on commodity hardware, much smaller language models that can still solve very complex tasks. This is a genuine game changer in the potential application of GAI across all aspects of business and our lives online.
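To make that concrete, here is a minimal sketch of what “training on commodity hardware” often looks like in practice: parameter-efficient fine-tuning (LoRA) of a small open model, assuming the Hugging Face transformers, peft, and datasets libraries. The model name, data file, and hyperparameters are illustrative placeholders, not recommendations.

```python
# Sketch: LoRA fine-tuning of a small open model on one consumer GPU.
# Model, data file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/pythia-410m"  # small enough for commodity hardware
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # this tokenizer has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains a small set of adapter weights instead of the full model,
# which is what makes training feasible outside a data center.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         task_type="CAUSAL_LM"))
model.print_trainable_parameters()  # typically well under 1% of all weights

# "company_docs.txt" is a placeholder for whatever domain text you have.
data = load_dataset("text", data_files="company_docs.txt")["train"]
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```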

GAI will prove transformative, changing the way companies and organizations work forever and streamlining existing processes beyond recognition. Rather than working in a world in which interactions with data are rigidly rule-based and transactional, GAI will allow for questions and inquiries at a highly sophisticated level, producing in-depth, detailed answers in a format far more user-friendly than a standard data dump. Interacting with data will feel less like performing a transaction and more like holding a complex conversation.

For example, internal knowledge management is lacking at most companies: they sit on troves of information that are poorly indexed and saved in a variety of formats and languages. Imagine being able to ask your system to generate a report on your company’s experience in a particular area, including customer use cases, summarized in German. Or how about using GAI to perform primary research on the latest academic or scientific papers, or to ensure that your company’s operations remain compliant with global regulatory frameworks, no matter how often they change?
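One common pattern behind this kind of capability is retrieval-augmented generation (RAG): embed the internal documents, retrieve the ones most relevant to a question, and have a chat model answer from them. A minimal sketch, assuming the OpenAI Python client; the model names and the two toy documents are assumptions for illustration.

```python
# Minimal RAG sketch: embed internal documents, retrieve the most
# relevant ones for a question, and ask a chat model to answer from
# them (here, in German). Models and documents are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

docs = [
    "2022 case study: predictive-maintenance rollout at a rail operator.",
    "2023 case study: LLM-based contract-review pilot for a retail bank.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small",
                                    input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)
question = "What is our experience with AI in financial services?"
q_vec = embed([question])[0]

# Cosine similarity ranks documents against the question.
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1)
                             * np.linalg.norm(q_vec))
context = "\n".join(docs[i] for i in scores.argsort()[::-1][:2])

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Answer only from the provided context. Reply in German."},
        {"role": "user",
         "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```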

GAI also has the potential to transform online customer relations. Today’s online bots can lead to deeply frustrating experiences, causing unseen reputational damage. What if customers could talk to an online assistant that not only understood what they wanted (or could quickly find out) but also had access to real-time financial information, could instantly look up all known solutions to a problem, and could recommend a product based on detailed requirements? What if it could do all this in a conversational style tailored for the individual, rather than using predefined scripts?
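The pattern behind such an assistant is usually called tool use or function calling: the model decides which backend function it needs, the application executes it, and the result is fed back for a conversational reply. A language-agnostic sketch in plain Python, where call_model() and the lookup function are hypothetical stand-ins for a real chat-model API and real back-office systems:

```python
# Sketch of the tool-use loop behind a customer-service assistant.
# call_model() and get_order_status() are hypothetical stand-ins.
import json

def get_order_status(order_id: str) -> dict:
    return {"order_id": order_id, "status": "shipped"}  # stand-in lookup

TOOLS = {"get_order_status": get_order_status}

def call_model(messages):
    """Hypothetical chat-model call. A real API would decide, from the
    conversation so far, whether to request a tool or answer directly."""
    if messages[-1]["role"] == "tool":  # tool result available: answer
        data = json.loads(messages[-1]["content"])
        return {"content": f"Your order {data['order_id']} has "
                           f"{data['status']}."}
    return {"tool": "get_order_status", "args": {"order_id": "A-1042"}}

def assist(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    reply = call_model(messages)
    while "tool" in reply:  # model asked for live data before answering
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
        reply = call_model(messages)
    return reply["content"]

print(assist("Where is my order A-1042?"))
```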

The potential applications for GAI are almost limitless, saving companies enormous amounts of time and money compared to current processes, whether they relate to internal knowledge sharing and exploitation or external market analysis and customer service.

We are still at the very beginning of this revolutionary curve, and if we are to fully enjoy its advantages, there are important issues to be resolved in areas such as intellectual property (IP) protection, regulation, security, and environmental impact. This Amplify takes a look at these issues — and more.

In This Issue

We begin with a fascinating dive into data on key GAI trends. Cutter Expert and frequent Amplify contributor Curt Hall examines findings from a Cutter survey of more than 100 organizations worldwide. So many respondents are already using GAI tools that Hall calls the rate of adoption “amazing.” Most companies are still using basic tools, but many report they’re open to using a range of tools, including domain-specific ones. Hall’s article looks at enterprise adoption of LLMs, strategy and oversight for GAI adoption and usage, and enterprise experience with GAI to date. In addition to graphs showing the survey results, Hall highlights direct quotes from respondents, including this one from a communications executive:

We are doing comparisons that show promise in amplifying the creativity and productivity of our teams in significant ways, which vary depending on the type of job. This is already showing that the use of these tools will lead to significant savings.

Next, Cutter Fellow Stephen J. Andriole presents a no-holds-barred discussion of the predictions and fearmongering swirling around GAI. Clearly, Andriole says, we should stop panicking and start thinking about how to optimize GAI. We should also acknowledge that some form of regulation is necessary. Andriole turns to ChatGPT and Bard (who else?) for advice on potential regulation, looks closely at what other countries and regions are doing in this area, and highlights the importance of addressing IP infringement issues. He concludes by saying that regulatory decisions should not be anchored in technology capabilities, pointing out that social, political, and economic concerns about the impact of regulation will exert as much, if not more, influence on the regulatory scenarios that emerge.

In our third piece, my Arthur D. Little (ADL) colleagues Michael Papadopoulos, Nicholas Johnson, Philippe Monnot, Foivos Christoulakis, and Greg Smith and I debunk the idea that security concerns about LLMs are entirely new. We examine each concern to show that these issues are merely new manifestations of existing security threats — and thus manageable. “LLMs highlight and stress test existing vulnerabilities in how organizations govern data, manage access, and configure systems,” we assert. The article concludes with a list of 10 specific ways to improve LLM-adoption security.

Our fourth article comes from Ryan Abbott and Elizabeth Rothman, who believe we must address the legal, ethical, and economic implications of AI-generated output if we want to foster innovation, promote the responsible use of AI, and ensure an equitable distribution of the benefits arising from AI-generated works. The authors look at the complicated relationship between AI and IP, then discuss the Artificial Inventor Project, which filed two patent applications for AI-generated inventions back in 2018 in the UK and Europe. The project aims to promote dialogue about the social, economic, and legal impact of frontier technologies like AI and generate stakeholder guidance on the protectability of AI-generated output. Clearly, say Abbott and Rothman, AI systems challenge our existing IP frameworks and necessitate a thorough rethinking of what rules will result in the greatest social value.

Next, ADL’s Greg Smith, Michael Bateman, Remy Gillet, and Eystein Thanisch scrutinize the environmental impact of LLMs. Specifically, they compare the carbon dioxide equivalent (CO2e) emissions of LLMs with those of everyday activities: using appliances such as electric ovens and kettles, streaming videos, flying from New York City to San Francisco, and mining Bitcoin. The authors then look at how fit-for-purpose LLMs and increased renewable energy usage could help LLM operators reduce their carbon footprint. Finally, this ADL team points out the relationship between smaller LLMs and responsible, democratized AI.

Finally, Cutter Expert Paul Clermont takes a down-to-earth look at what we can expect from AI in the near term. For one thing, he says, we’re still in the garbage-in, garbage-out phase with LLMs; for another, they’re nowhere close to artificial general intelligence. There are, of course, ethical and social implications, including the fact that AI puts what we don’t like about today’s Internet (disinformation, loss of privacy, and more) on steroids. A host of new legal issues also needs attention, Clermont notes, which may lead to governments playing a role in the evolution of AI usage that they did not play at the advent of the computer or the Internet.

We hope the articles in this issue of Amplify offer insightful ways to examine the potential of GAI going forward and reinforce the importance of viewing this complicated technology with an objective eye.

About the Author
Michael Eiden
Michael Eiden is a Cutter Expert, Partner and Global Head of AI & ML at Arthur D. Little (ADL), and a member of ADL’s AMP open consulting network. Dr. Eiden is an expert in machine learning (ML) and artificial intelligence (AI) with more than 15 years’ experience across different industrial sectors. He has designed, implemented, and productionized ML/AI solutions for applications in medical diagnostics, pharma, biodefense, and consumer…