
Generative AI: Love, Hate, Ignore, or Just Regulate?

Posted August 28, 2023 | Technology | Amplify
AMPLIFY, VOL. 36, NO. 8

ABSTRACT
Cutter Fellow Stephen J. Andriole presents a no-holds-barred discussion of the predictions and fearmongering swirling around GAI. Clearly, Andriole says, we should stop panicking and start thinking about how to optimize GAI. We should also acknowledge that some form of regulation is necessary. Andriole turns to ChatGPT and Bard (who else?) for advice on potential regulation, looks closely at what other countries and regions are doing in this area, and highlights the importance of addressing IP infringement issues. He concludes by saying that regulatory decisions should not be anchored in technology capabilities, pointing out that social, political, and economic concerns about the impact of regulation will exert as much, if not more, influence on the regulatory scenarios that emerge.

 

We’re on the verge of creating the smartest assistants in history, ones that can help us cure cancer, plan cities, improve the legal system, and manage environmental disasters, among other tasks that humans have had difficulty performing.

Large language models (LLMs) distribute power to individuals who have been trying to optimize intelligent systems for years. They provide conversational connectivity to old, newly created, and real-time knowledge that can help solve problems humans have avoided or just plain botched. And who wouldn’t want to connect to what venture capitalist Rob Toews describes in Forbes as “the world’s total stock of usable text data.… This includes all the world’s books, all scientific papers, all news articles, all of Wikipedia, all publicly available code, and much of the rest of the Internet”?1

There are some legitimate critics of the nature of the “intelligence” that LLMs reflect, but it’s safe to say that the impact of LLMs and their access platforms will be enormous. It’s not a matter of “if” but “when” this impact will be felt across all industries and within every household in the world that opts into its potential.

Note that this “intelligence” will initially take the form of “assistants” but will soon advance to “partners,” and in some cases “bosses.” Again, no one knows when these promotions will occur, but they will selectively happen across tasks, domains, industries, and even households. There’s no question about the outcome — so we can fight it or welcome the inevitability.

Stupidity, Panic & Pauses

Every new technology has critics, skeptics, and those just plain terrified of what the technology can do. US President Benjamin Harrison and his wife were afraid to turn on the light switches in the White House.2 There were critics of bicycles, cars, nail polish, talkies, laptops, answering machines, and even cheeseburgers.3 

Artificial intelligence (AI), machine learning (ML), and their accelerant, generative AI (GAI), have triggered a new round of debates, with some contemporary heavyweights like Elon Musk sounding yet another attention-seeking alarm. But is the concern justified, or is this another case of technology panic?

The recent proposal by AI experts to “pause” research in AI, ML, and GAI is, of course, silly.4 Does anyone really believe the proposal will have any impact at all, other than some publicity?5 Will the US Congress actually do something? Most likely, the idea is to force people to think about the longer-term implications of handheld access to massive digital intelligence.

Good & Bad, but Who’s Counting?

LLMs can be extremely helpful, yet the current conversation (which will no doubt morph into something else in six months) is more negative than positive. We should stop panicking about GAI and start thinking about how to optimize its use. Due diligence is appropriate: “good” and “bad” should be assessed as objectively as possible.

Here’s a short list of “good”:6

  • Chatbot/virtual assistants

  • Fraud detection

  • Translation

  • Content creation/research

  • Sentiment analysis

Here are some good questions to ask: Could GAI cut healthcare costs or develop new cancer drugs?7 Could it disrupt healthcare completely?8 Could it disrupt city planning?9 What about climate change?10 Product design?11 Customer service?12

We asked ChatGPT about all this.

Question: What are the good things you can do?

Answer (all text in italics has been written by AI):

  • Creative content generation

  • Personalization

  • Healthcare

  • Education

  • Environmental science

  • Language translation

  • Gaming

Question: What are some bad things you can do?

Answer:

  • Create fake news or propaganda

  • Generate fake images or videos

  • Cybersecurity threats

  • Deepfakes

  • Unintended biases
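
Exchanges like these are simple to reproduce. The following is a minimal sketch, assuming the OpenAI Python client as it existed in mid-2023 (openai v0.x); the model name and placeholder API key are illustrative assumptions, not a description of how this article’s answers were obtained:

    # Minimal sketch: posing the questions above to ChatGPT via the API.
    # Assumes the openai Python package (v0.x, mid-2023); the model name
    # and placeholder API key are illustrative assumptions.
    import openai

    openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the reader

    questions = [
        "What are the good things you can do?",
        "What are some bad things you can do?",
    ]

    for question in questions:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[{"role": "user", "content": question}],
        )
        print(question)
        print(response.choices[0].message.content, "\n")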

Make no mistake, technology assassinates jobs — which is its raison d’être. Efficiency, cost savings, profitability, and market share are always the targets of technology, which is why we love (and hate) it. Those who own old business models hate it; those creating new ones love it. “New entrants” are praised for their creativity and ingenuity. It all depends on where you sit. It remains to be seen whether job displacement rates a 7 or a 10. Every company on the planet is trying to measure the impact that AI, ML, and GAI will have on it, its competitors, and the structure of its industry. At home, everyone will eventually make some new digital friends.

GAI can also distort reality in many ways, a capability that makes it a close friend of liars, cheats, and tyrants. GAI can rewrite cybersecurity algorithms, invade our privacy, and make us believe we’re not seeing what we’re seeing or hearing what we’re hearing — even as it develops lifesaving vaccines.

Love, Hate, Ignore, or Regulate?

This depends on where one sits. Those who use AI, ML, and GAI to make money (no matter how it’s made) love GAI. Those who worry about truth and reality may hate what GAI can do. (No one should plead ignorance.) Facebook’s algorithms, which threaten the mental well-being of young women, are a perfect example of how ML can increase advertising dollars. GAI will improve these algorithms.

Some form of regulation is clearly necessary. Even ChatGPT and Google Bard agree. Some countries are on their way to legislative control.13 At the very least, this technology (like all technologies) must be “managed.” 

When asked about this regulation, ChatGPT knows exactly what to do:

  • Establish clear ethical guidelines. Governments, industries, and institutions should collaborate to establish ethical guidelines to ensure the responsible development and use of generative AI. These guidelines should address issues such as privacy, security, transparency, and accountability.

  • Require transparency. Generative AI systems should be designed to be transparent, meaning that their decision-making processes should be clear and understandable. Developers should be required to provide explanations of how their systems work and how they make decisions.

  • Implement data privacy laws. Generative AI systems rely on vast amounts of data to learn and improve. Regulations should be in place to protect individuals’ privacy rights and ensure that data is collected and used ethically and legally.

  • Create oversight agencies. There should be agencies responsible for overseeing the development and deployment of generative AI systems. These agencies should have the authority to review and approve or reject new systems and to investigate any issues or complaints that arise.

  • Encourage collaboration. The development and use of generative AI should be a collaborative effort between government, industry, academia, and the public. These stakeholders should work together to identify and address potential ethical, social, and legal issues.

  • Ensure accountability. Generative AI developers and users should be held accountable for any negative consequences that result from their systems. There should be a system in place to monitor and regulate the use of these systems, with appropriate penalties for noncompliance.

Bard agrees:

  • Content moderation. Governments could require companies that use generative AI to moderate the content that is generated. This could involve filtering out harmful or offensive content or requiring users to verify their identity before generating content.

  • Transparency. Governments could require companies that use generative AI to be transparent about how their systems work. This would allow users to understand how their data is being used and to make informed decisions about whether or not to interact with generative AI systems.

  • Education. Governments could educate the public about the potential risks and benefits of generative AI. This would help people understand how to use generative AI responsibly and to be aware of the potential harms that it could cause.

  • Research. Governments could fund research into the development of generative AI that is safe and ethical. This would help to ensure that generative AI is used in a way that benefits society and does not harm individuals or groups.
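
Bard’s first suggestion, content moderation, is already technically feasible at a basic level. The following is a minimal sketch of screening generated text before publication, assuming OpenAI’s moderation endpoint (openai v0.x, mid-2023); the simple flag-and-block policy is an illustrative assumption, not a regulatory standard:

    # Minimal sketch: screening GAI output before publication using
    # OpenAI's moderation endpoint (openai v0.x, mid-2023). The
    # flag-and-block policy here is an illustrative assumption.
    import openai

    openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the reader

    def is_publishable(generated_text: str) -> bool:
        """Return False if the moderation endpoint flags the text."""
        result = openai.Moderation.create(input=generated_text)
        return not result["results"][0]["flagged"]

    draft = "Some model-generated text awaiting review."
    print("publishable:", is_publishable(draft))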

Is anyone listening to these guys?14

Those who want to “pause” have their list, too:

  • “Mandate robust third-party auditing and certification for specific AI systems.

  • Regulate organizations’ access to computational power.

  • Establish capable AI agencies at the national level.

  • Establish liability for AI-caused harms.

  • Introduce measures to prevent and track AI model leaks.

  • Expand technical AI safety research funding.

  • Develop standards for identifying and managing AI-generated content and recommendations.”15

Progress or Paralysis?

Regulatory lists are everywhere, but who’s actually regulating what?

In April 2023, the Chinese government released a draft set of regulations for GAI.16 These regulations would require providers of GAI services to take several steps to ensure that their products are used responsibly, including:

  • Obtaining user consent before using their data to train GAI models

  • Taking steps to prevent the generation of harmful or misleading content

  • Implementing security measures to protect user data

The US government has not yet implemented any specific regulations on GAI, but there is a growing debate about the need for such regulations. Some experts argue that GAI poses myriad risks, such as its potential to be used to generate deepfakes or spread disinformation. Others argue that it has the potential to be used for good, such as to create educational content or help people with disabilities.

The EU has implemented a number of regulations that could impact the development and use of GAI. For example, the General Data Protection Regulation (GDPR) requires companies to obtain user consent before collecting or using their personal data. The GDPR also requires companies to take steps to protect user data from unauthorized access or use.

A post on New York University’s law blog notes that the Italian Data Protection Authority’s orders against OpenAI’s operation of ChatGPT in Italy highlighted tensions between the EU’s GDPR and GAI infrastructures trained on massive data sets involving both personal and nonpersonal data.17 The emergence of GAI infrastructures has prompted a rethinking of the EU’s proposed Artificial Intelligence Act, which aims for comprehensive, risk-based, product safety–based AI regulation. National agencies, including the Cyberspace Administration of China, are exploring new regulatory measures in this area. In regulation, licensing, contracts, and litigation, the allocation of risk and responsibility along the GAI supply chain is vigorously contested.

The US faces several distinctive regulatory challenges, ranging from the technological ignorance of lawmakers to lobbyists who own much of the legislative process, not to mention partisan politics and the relationships many US lawmakers have with the companies and industries they’re expected to regulate.

Although it’s impossible to predict whether the US will meaningfully regulate AI, ML, and GAI, there are signs that progress is at least possible. ChatGPT notes that:

The Algorithmic Accountability Act, which was reintroduced in Congress in 2022, would require companies to conduct impact assessments for certain high-risk AI systems, including generative AI, to identify and mitigate potential harms … the bipartisan Artificial Intelligence Initiative Act, introduced in 2021, would provide funding for research and development of AI, including studies on the ethical, legal, and social implications of AI.

US states like California and New York may take the lead. It’s possible that a bottom-up regulatory approach will be more effective than a federal top-down approach, but that remains to be seen. State-by-state regulations will complicate cross-border commerce, which is why a federal approach may be necessary. Partnerships with contiguous countries might offer some regulatory promise. For example, Canada introduced the Artificial Intelligence and Data Act in 2022, which could form the basis of a NAFTA-like agreement among Canada, Mexico, and the US.

There are some unusually challenging issues that may paralyze regulatory efforts simply because of their complexity. Should artists be compensated if GAI mimics their work? New challenges around copyright and intellectual property (IP) rights, including questions of compensation and ownership, are far from resolved. ChatGPT suggests that licensing payments be made to artists when works similar to their originals are created, shared, or published.

And it’s not just about copyright.

There are additional challenges that must be managed. In her article “Generative AI Is a Legal Minefield,” Axios Chief Technology Correspondent Ina Fried writes:

At issue is whether or not such training falls under a principle known as “fair use,” the scope of which is currently under consideration by the Supreme Court. Much of the early legal battles have been about this issue. Getty, for example, is suing Stable Diffusion, saying the open source AI image generator trained its engine on 12 million images from Getty’s database without getting permission or providing compensation. It’s not just about copyright. In a lawsuit against GitHub, for example, the question is also whether the CoPilot system — which offers coders AI-generated help — violates the open source licenses that cover much of the code it was trained on.18

Nor are the potential IP infringement issues limited to the data that trains such systems. Many of today’s GAI engines are prone to spitting out code, writing, and images that appear to directly copy from one specific work or several discernible ones.

The US National Institute of Standards and Technology (NIST) recently entered the regulatory picture by providing, as it always does, a set of suggestions about how to proceed with standards:

On March 30, NIST launched the Trustworthy and Responsible AI Resource Center, which will facilitate implementation of, and international alignment with, the AI RMF. On January 26, 2023, NIST released the AI Risk Management Framework (AI RMF 1.0) along with a companion NIST AI RMF Playbook, AI RMF Explainer Video, an AI RMF Roadmap, AI RMF Crosswalk, and various Perspectives.19

Technology is clearly moving faster than regulators can (or want to) move. And even as efforts are underway to regulate AI, ML, and GAI, there are also efforts to delay or avoid any kind of regulation. It’s safe to say that the world is both confused and challenged by this technology. Many regulatory drafts have been developed and shared, but nothing is final. One especially challenging aspect of regulation is enforcement. What happens when some individual, company, or country violates the regulations? 

Concluding Thoughts

Regulatory action depends on how quickly the power of GAI is revealed. We know, for example, that orders of magnitude in performance separate GPT-3 from GPT-4. What tasks and processes will GPT-5 or -6 enable? As more industries, functions, and processes yield to LLMs, there will be additional pressure to regulate at some level. Of course, if there’s sufficient coverage of GAI’s limitations and a few high-profile regulations that quell the most serious fears, broader regulatory efforts will likely collapse.

Decisions around regulation will not be completely anchored in technology capabilities. Social, political, and economic concerns about the impact of regulation will exert as much, if not more, influence on whatever regulatory scenarios emerge. This changes the game, the players, and the rules. Draft and proposed regulations must pass through several such filters, which means meaningful legislation will be slow to arrive. It’s also likely that the US will lose the regulatory game to countries whose regulatory efforts are outpacing its own.

Predictions are impossible to make in areas as complicated as the regulation of AI, ML, and GAI, but it’s safe to say there will be a lag between regulatory policy and the growing power of this technology. Regulations may lag applications for years and perhaps even permanently. This happens when technology moves as fast as intelligent systems technology is moving — and is likely to move in the future.

The old ways of treading lightly in the regulatory world will not work for GAI. This technology represents a sea change; treating it as just another incremental advance is a huge mistake. That warning aside, all of this assumes that there’s a real desire to regulate the technology. Although there may be an honest desire to regulate the technology in several countries and a few US states, it remains to be seen whether the US is capable of developing (and enforcing) impactful regulations for such a fast-moving technological target.

We cannot ignore AI, ML, and GAI. We should not love or hate them, either. The only answer is regulation, regardless of who takes the lead.

References

1 Toews, Rob. “The Next Generation of Large Language Models.” Forbes, 7 February 2023.

2 Lantero, Allison. “The History of Electricity at the White House.” US Department of Energy, 14 October 2015.

3 Edwards, Phil. “7 World-Changing Inventions People Thought Were Dumb Fads.” Vox, 29 June 2015.

4 “Pause Giant AI Experiments: An Open Letter.” Future of Life Institute, 22 March 2023.

5 Loizos, Connie. “1,100+ Notable Signatories Just Signed an Open Letter Asking ‘All AI Labs to Immediately Pause for at Least 6 Months.’” TechCrunch, 29 March 2023.

6 “5 Practical Business Use Cases for Large Language Models.” Open Data Science (ODSC), 2 March 2023.

7 Knutsson, Kurt. “How Generative AI Could Cut Health Care Costs, Develop New Cancer Drugs.” Fox News, 8 March 2023.

8 Jones, Brian, and Rod Fontecilla. “Calling Dr. GPT: The Impact of Generative AI on Healthcare.” Guidehouse, 9 February 2023.

9 Murphy, Patrick. “The Role of Generative AI in Creating More Pleasant Cities.” Maket, accessed August 2023.

10 Maharaj, Sahir. “Generative AI: A Tool to Combat Climate Change.” Medium, 7 February 2023.

11 “Generative AI in Product Design & Development — Benefits & Tips.” RedBlink Technology, 23 March 2023.

12 Afshar, Vala. “How Can Generative AI Improve the Customer Experience?” ZDNET, 1 February 2023.

13 Dans, Enrique. “Why Do We Always See New Technology as a Threat?” Medium, 13 April 2023.

14 Apparently, ChatGPT and Bard are male, but they also don’t have to be.

15 “Policy Making in the Pause.” Future of Life Institute, 12 April 2023.

16 Luo, Yan, et al. “China Proposes Draft Measure to Regulate Generative AI.” Covington, 12 April 2023.

17 “GPT, GDPR, AI Act: How (Not) to Regulate ‘Generative AI’?” The Docket, New York University School of Law, 19 April 2023.

18 Fried, Ina. “Generative AI Is a Legal Minefield.” Axios, 24 February 2023.

19 “AI Risk Management Framework.” National Institute of Standards and Technology (NIST), US Department of Commerce, accessed August 2023.

About The Author
Steve Andriole
Stephen J. Andriole is a Fellow with Cutter Consortium, a member of Arthur D. Little's AMP open consulting network, and the Thomas G. Labrecque Professor of Business Technology at Villanova University. His specialty areas include digital transformation, emerging technology trends, cloud computing, social media, technology due diligence, software IP valuation, business technology strategy, business technology management, technology governance…