What do the true implementers of generative AI technologies (i.e., CIOs) think and what do they see as reality? This perspective is sorely needed by CEOs, CMOs, and the public in general. To address this need, Cutter contributor and data business leader Myles Suer asked CIOs some important questions regarding generative AI. This Advisor shares some insights gleaned from those conversations.
From Your Vantage Point, Can Generative AI Act as a Transformational Agent?
CIOs clearly see generative AI’s potential for improving business productivity and operations. They believe it can be a force multiplier in many industries. But to be fair, I have noticed dramatic improvements in Word and Google Docs over the last year. Does generative AI have the potential to make us all awesome wordsmiths and to make our emails safer?
Pedro Martinez Puig, Head of Americas Technology at Edenred, says, “It is always great when new technology becomes mainstreamed and draws attention from all levels of the organization. Our current bet is focused on embedding generative AI in our data-powered products and improving our chatbot solutions with improved natural language capabilities.” Meanwhile, FIRST CIO Deb Gildersleeve says that she hopes generative AI “can help non-coders do basic coding. As well, it could help by being the starting point for research.”
In higher education, David Seidl, CIO at Miami University, hopes that generative AI could help level the playing field by creating a whole new realm of research and scholarship. “It has the potential to make data insights and queries more accessible and may lower the barriers by creating new tools. I think it may also help make ideation easier. For example, I don’t have a strong ability to convert thoughts into images via drawing,” says Seidl. Likewise, Mevotech CIO Martin Davis sees generative AI’s research potential: “The ability to provide adaptive, intelligent assistance to workers will enhance their productivity and capabilities. This could be a game changer for businesses by enhancing personal performance and the business’s profitability.”
Manufacturing CIO Joanne Friedman claims that “Generative AI could help manufacturers improve the time to data and subsequently accelerate decision-making and reduce the time-to-value delivery across a multitude of business processes.” Could it help deal with strained supply chains? Clearly, generative AI could affect both sides of the equation for CIOs: productivity and business models. For this reason, former CIO Isaac Sacolick claims, “Impressive tools require experimentation. I find value as a step to simplify research, a major advancement for marketing, and a new tool for software developers. But it’s easy to find generative AI and ChatGPT limitations and risks, too.”
Scott Likens, Emerging Technology and Innovation Leader at PwC, adds to the CIOs’ thoughts, saying:
At its core, generative AI has the potential to completely transform industries: it’s being used in the legal space to help attorneys handle their day-to-day workload; it’s heavily influencing how marketing agencies create all of their content (text, image, video, audio); and it’s being used by managed services providers to create business transformation strategies from scratch. We’ve seen similar transformational trends over time, but nothing has spread through the C-suite as quickly or addictively as generative AI. I’m personally excited about the deeper content, coding, and imagery that can be created by having generative AI solutions kickstart our creative process.
What Are the Risks?
So, what are generative AI’s limitations and risks? Are they real? CIOs believe generative AI does carry real risks, so I asked them to parse through those risks and determine which are most realistic. Having some fun, Puig quotes Star Wars’ Darth Vader: “Don’t be too proud of this technological terror you’ve constructed.” Some CIOs worry about the implications for data (“garbage in/garbage out”) or risks around security; clearly, this was the case with Target’s well-publicized breach a few years ago. The marketing and societal impacts matter, too, including bias, incorrect answers, and privacy.
Seidl, for example, says:
There’s a huge danger in trusting the answers [generative AI] provides. These systems are trained on the Internet. This means that relying on them to be considerate, caring, humane, or even reasonable isn’t necessarily safe. It could get ugly fast. Additionally, this may turn out to be at the bleeding edge of copyright and other issues. Just because you’ve trained your AI on a huge set of data doesn’t mean that what it creates will be fair use or not require a license. As with other emerging technologies, US courts and legislators are unprepared to deal with these issues. I had the opportunity to hear Kelly Cohen from the University of Cincinnati speak about “explainable AI” a couple of weeks ago. I think it is critical to get at how AI arrives at an answer and why that answer is correct. Something that derives from that might be useful — but could also be prone to implicit bias or other issues.
Friedman gets more specific, saying that with generative AI, “There is a lack of provenance with the training data used in machine learning or modeling, a lack of domain specificity, and a lack of understanding of the age of the data.” For this reason, Puig claims:
The big work ahead is to avoid potential biases and unintended consequences. It is our collective responsibility to ensure generative AI is developed with appropriate safeguards. Beyond the capability to generate realistic but fake content, there is the potential for generative impersonation or identity fraud. Finally, we need to avoid a lack of interpretability and accountability for supported decision-making.
To address this, Jim Russell, Manhattanville College CIO, suggests that content discernment matters: “There is a lot of content out there, and not all of it has equal value. We need to ensure its ethical use — tertiarily considering data I am a steward of and how to expose or protect that from ingestion.” To address these issues, Sacolick suggests that organizations avoid use cases that expose limitations of the technology, including:
Inability to sort out the facts
Use of copyright content without consent
Exposing company IP or sensitive data
Decision-making without additional research
Likens worries about these issues, too, saying:
Generative AI solutions can produce harmful, misleading, or inaccurate content; can struggle with common sense reasoning; and are subject to being stochastic parrots, especially when used irresponsibly. Most generative AI models are trained on pools of data that can contain human biases, which can then be amplified by the model once trained and in use. The combination of large data sets with complex algorithms can lead to inaccurate responses from a model. This becomes especially problematic when the quality or completeness of the model combines with inaccurate or unmaintained data.
Clearly, there is a lot to think about here.
How Are You Visualizing the Potential?
A reasonable portion of every CIO’s job is about helping their business visualize the potential from disruptive technology. Given this, I asked CIOs how they are helping their business conceive of the potential. Russell says, “First, by ensuring the processes and teams I’ve nurtured or deployed are working to consider and then address this latest disruption. Secondly, by increasing awareness at the top by developing SWOT awareness of this and other technology. I don’t want people building the plane in the air.”
Davis claims that CIOs should take “a balanced approach and provide a picture of the possibilities with a view of the risks. Coupling this with the enterprise attitude to risk will determine what to experiment with, when, and how.” To get people considering the potential, Puig says: “I have invited ChatGPT to the start of the year meetings and convention. I have shown responses relevant to the audience live. This has opened the eyes of those not familiar with the potential of this technology.”
Clearly, part of the potential is to move from data catalogs aimed at analysts and data scientists to catalogs aimed at business users.
What’s the Business Plan?
I asked CIOs if they were putting together a business plan for generative AI for their organization, including which functions they would prioritize. At this point, Seidl says, “This is where we all mutter things about trade secrets, right?” But, he continues, “We’re using chatbots for knowledgebase-type support for folks who want it. But I’ve seen general biases against even an outsourced, human-driven helpdesk. I wonder how long it will take organizations to accept an AI-driven one.”
Russell thinks that AI “will impact us first within existing tools, including search engines, collaboration tools, etc. AI/ML will impact how we engage users.” Meanwhile, Sacolick believes: (1) the technology will end up enabling marketing to do more, better, with less; (2) knowledge management and search will finally get a usable set of practices to combat tribal knowledge; and (3) software development will be enabled by researching code examples rather than writing code from scratch. Puig concludes, “On top of adding new features to our existing data-powered solutions and improving our chatbots, we are also applying it within our development teams in terms of code quality and effectiveness.”
Regarding generative AI, CIOs clearly see concrete promise and mostly manageable risks. Without question, that promise will enable organizations to increase effectiveness and business productivity. If carefully constructed, generative AI’s impacts on digital offerings and marketing can be significant; that careful construction is essential to avoiding the many issues raised here. The change that ensues will be important to watch.