
Navigating the Prospects and Perils of AI — An Introduction

Posted June 16, 2021 in Cutter Business Technology Journal

Artificial intelligence (AI) as a concept has been with us for decades now, but how much of a reality is it? Over the past 10 years in particular, AI has been both promoted as the solution to a multitude of problems and decried as a threat to jobs and human potential. But what do we even mean by AI? Is it destined to be a panacea or the ultimate disruptor of society?

I have no doubt that AI is here to stay, especially given the sheer amount of progress we have made in research over the last decade. However, AI is also in danger of being reduced to a buzzword, and of being set up to fail in real-world business settings.

If we were to compare the AI revolution with the Industrial Revolution, then we’re only a few years on from the invention of the steam engine. One major challenge we face is that there are few widely recognized standards or procedures for the development and lifecycle management of AI models. If AI is to be taken seriously — and be genuinely deemed useful — it needs to be developed using best-practice methodologies. This starts with the initial selection and quality control of the input data, followed by the selection of the appropriate algorithmic concept; the parametrization and architecture; the training, validation, and deployment; and, finally, the monitoring and safeguarding processes once the model is in production. Poorly developed AI applications that fail at any of these stages can infringe on privacy or contain unintended bias — and with AI being used in myriad sensitive scenarios as diverse as credit references and medical diagnoses, this can have very serious real-world consequences.
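To make these stages a little more tangible, here is a minimal sketch of such a lifecycle for a simple tabular, binary-classification use case, written in Python with pandas and scikit-learn. The data set, model choice, and drift threshold are illustrative assumptions rather than recommendations.

```python
# Minimal sketch of the lifecycle stages described above.
# Assumes a pandas DataFrame with numeric features and a binary target column;
# all names and thresholds are illustrative.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def run_lifecycle(df: pd.DataFrame, target: str):
    # 1. Input data selection and quality control: drop unlabeled rows,
    #    flag sparsely populated columns for human review.
    df = df.dropna(subset=[target])
    sparse_cols = [c for c in df.columns if df[c].isna().mean() > 0.5]
    print(f"Columns with >50% missing values (review before training): {sparse_cols}")

    X = df.drop(columns=[target]).fillna(0)
    y = df[target]

    # 2./3. Algorithmic concept and parametrization (here: gradient boosting).
    model = GradientBoostingClassifier(n_estimators=200, max_depth=3)

    # 4. Training and validation on a held-out split before any deployment.
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
    model.fit(X_train, y_train)
    val_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"Validation AUC: {val_auc:.3f}")

    # 5. Monitoring and safeguarding in production: a crude drift check that
    #    compares incoming feature means against the training distribution.
    train_means, train_stds = X_train.mean(), X_train.std() + 1e-9

    def monitor(new_batch: pd.DataFrame, tolerance: float = 3.0):
        drift = ((new_batch.mean() - train_means).abs() / train_stds) > tolerance
        return drift[drift].index.tolist()  # features that have shifted too far

    return model, monitor
```

Real pipelines would add experiment tracking, fairness and bias checks, and approval gates, but even this skeleton makes the stages, and the places where they can fail, explicit.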

It’s Still Early

Although tremendous progress has been made in the field of AI research, we are still at the beginning stages when it comes to bringing AI into real-world business applications. Deep learning has produced models with superhuman performance in certain fields of application, such as natural language processing and computer vision, but is it really the go-to approach to solve real-world business problems? It obviously depends on the circumstances, but in many business contexts, extremely large data sets aren’t always available. On the contrary, data sets are often small and sparsely populated.

Even more importantly, data sets are quite often intrinsically incomplete (i.e., they only partially describe a more complex system in the real world that isn’t fully observable). Deep learning approaches are clearly not applicable here. Instead, probabilistic machine learning (ML) coupled with reinforcement learning provides a more promising approach in such settings, particularly where transparency is also a key requirement.
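As a toy illustration of the probabilistic half of that idea, the snippet below uses a simple Bayesian update (in plain NumPy) on a handful of observations from a partially observed process; instead of a single point estimate, it returns a full posterior that makes the remaining uncertainty explicit. The prior and the numbers are assumptions chosen purely for illustration.

```python
# Toy example: Bayesian inference on sparse observations keeps uncertainty explicit.
# Pure NumPy; the prior and observation counts are illustrative.
import numpy as np

# Suppose a process succeeded 7 times in only 10 observed trials, and the
# underlying system is not fully observable, so we care about the whole
# distribution of the success rate rather than the 0.7 point estimate.
successes, trials = 7, 10

# Uniform Beta(1, 1) prior updated with binomial evidence gives a Beta posterior.
alpha_post = 1 + successes
beta_post = 1 + (trials - successes)

samples = np.random.beta(alpha_post, beta_post, size=100_000)
low, high = np.percentile(samples, [2.5, 97.5])
print(f"Posterior mean success rate: {samples.mean():.2f}")
print(f"95% credible interval: [{low:.2f}, {high:.2f}]")  # wide, reflecting the small sample
```

The interval stays wide until more evidence arrives, which is exactly the kind of transparency about what a model does and does not know that small, incomplete data sets demand.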

Another concept gaining more and more traction in dealing with complex real-world problems is coupling ML to graph representations. Graphs are highly scalable, fully transparent, and human-readable representations of systems of interest. This approach allows for easier human intervention and interrogation — and also facilitates an elegant expansion to different use cases as well as the straightforward integration of additional data sets.
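As a rough sketch of what this coupling can look like, the snippet below builds a small, human-readable graph with networkx, lets a domain expert interrogate it directly, and derives simple graph features that could feed a downstream ML model; the entities, relationships, and file name are fictitious.

```python
# Sketch of coupling ML to a graph representation: relationships live in a
# transparent, queryable graph, and graph-derived features feed a model.
# Uses networkx; all entities and attributes are fictitious.
import networkx as nx

# Human-readable graph of a (fictional) supplier network.
G = nx.Graph()
G.add_edges_from([
    ("PlantA", "Supplier1"), ("PlantA", "Supplier2"),
    ("PlantB", "Supplier2"), ("PlantB", "Supplier3"),
    ("Supplier3", "RawMaterialX"),
])

# The representation can be interrogated directly by a domain expert...
print("Who supplies PlantA?", list(G.neighbors("PlantA")))

# ...and turned into features for a downstream model (degree, centrality, etc.).
centrality = nx.betweenness_centrality(G)
features = {node: (G.degree(node), round(centrality[node], 3)) for node in G.nodes}
print(features["Supplier2"])  # a supplier bridging two plants scores highly

# Integrating an additional data set is just adding nodes and edges, e.g.:
G.add_edge("Supplier1", "RawMaterialY", source="shipments_2021.csv")
```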

More importantly, though, it also means that domain experts can be brought into the loop when models are actually being built. While rare at the moment, it is vital that this becomes a standard industry practice going forward, not only to improve the efficiency and accuracy of AI systems, but also to increase trust in such systems from the people who use them. For models to actually do the job for which they were intended, they must be subject to domain expert scrutiny before being released, and they need to be regularly reassessed as new information emerges.

Going Forward

How will AI transform the jobs we do? It’s difficult to say with any precision, but my vision is of a future where AI helps to make smart people smarter. It’s not about taking agency away from experts; rather, it’s about augmenting the decision-making process, particularly when those decisions require the consideration of multiple dimensions. As the COVID-19 crisis has amply demonstrated, even the cleverest humans aren’t always good at thinking exponentially and identifying the right course of action when so many different factors are in play. In such scenarios, AI could help experts see the bigger picture and react to it more quickly.

The immediate future of AI isn’t about building entirely autonomous systems, but about this type of augmentation. It also makes the adaptation process for the end user easier, as inevitably there will be differing levels of resistance to using AI in the workplace. Importantly, data science has to work in tandem with “data storytelling” (i.e., illustrating how data is used to generate AI models as a part of the overall solution approach so that end users clearly understand what a model can and can’t do). True creative thinking, judgment, and the ownership of ideas are still very much in the human sphere of activity. AI is merely there to perform repetitive tasks and ensure that all possibilities are covered as decisions are made.

As a fast-evolving area, AI presents innumerable opportunities and applications that we haven’t even imagined yet. In this issue of Cutter Business Technology Journal (CBTJ), we discuss the factors and considerations surrounding AI today and take a look at where trends might be heading in the future.

In This Issue

We begin with Cutter Consortium Senior Consultant Paul Clermont diving straight into the three overarching issues related to AI. The first is unintended consequences like erosion of human skills and the scope expansion that takes us from reconnecting with old friends online to channels that broadcast “un-fact-checked ‘news.’” The second is unintended bias, especially for systems that could have life-changing consequences (think business loans, hiring, college admissions, and bail-setting). The third is privacy, which AI’s “hunger for data” takes to a new level as evidenced by deepfake photos used by trolls and recent misuse of facial recognition technology by Everalbum. Clermont offers no-nonsense advice for dealing with these issues, advocating for laws that make organizations responsible for the algorithms they use (whether bought or built) and prohibit unexplainable AI in applications that could harm people physically or affect their lives in significant ways.

AI has an important role to play in product development, in particular, says Michael Jastram in our second article. He outlines the four trends driving product complexity and explains how AI has the potential to help us overcome the limitations of current development approaches. Both systems engineering and Agile struggle to keep up with today’s exponential growth in complexity. Model-based systems engineering (MBSE) was built to address complexity but requires a large up-front investment and frequently meets with cultural resistance. Jastram advocates for AI-based solutions that offer some of the benefits of MBSE without the need for long, expensive training processes. Regardless of the exact path, he’s excited for the coming years, saying ready-to-use solutions like IBM’s Watson barely scratch the surface of what’s possible.

Next up, Cutter Consortium Senior Consultant Claude Baudoin and Clayton Pummill give us the keys to achieving trust in AI. The first step is building cross-disciplinary teams, including psychologists, ethicists, sociologists, spiritual leaders, and legislators to develop solid AI policies. Then we must impart AI with emotional intelligence, which involves not only transparency, but also explainability and accountability. Eliminating bias and ensuring fairness must, of course, be in the mix. Baudoin and Pummill explore critical details, like what the modern equivalent of Isaac Asimov’s three laws of robotics would be and whether an AI system should know how to lie (or whether your car should tell your insurance company how fast you were driving). The future is unlikely to be either an AI utopia bringing unprecedented efficiencies or a dystopia, write the authors, and right now, we have control over whether AI will be a trusted aide to humanity or a threat.

In our fourth article, Aswani Kumar Cherukuri, Annapurna Jonnalagadda, and Cutter Consortium Senior Consultant San Murugesan look at potential applications and impacts of AI on education. Although not initially embraced by the education sector, AI can help students receive personalized lessons, provide educators with deep insights into students’ learning styles, revolutionize skills improvement for professionals, and lower the cost of education. The authors present the AI technologies being applied in education and then describe the platforms and applications now available in each of eight categories: adaptive and personalized learning; content preparation; proctoring and assessment; online learning and immersive learning through augmented reality/virtual reality; language learning; coding and robotics; tutoring and mentoring; and management and scheduling.

Finally, Jayashree Arunkumar outlines how five AI trends are being put to real-world use, including graph-accelerated ML (NASA uses it to extract knowledge from its Lessons Learned database), generative AI (which helped Reuters create a fully automated sports newscast), edge AI (like cameras used to control traffic and catch criminals), artificial general intelligence (OpenNARS and OpenCog are two examples), and coding (such as finding and fixing human errors). Arunkumar then examines how AI is helping the environment by accelerating the pace of delivering on the United Nations’ Sustainable Development Goals and how it might apply similar tactics to help improve world health. The article closes with four of the most recent AI developments, including data sets and an advanced recommendation system from Facebook, an interesting self-driving car development, and language algorithms that can write a coherent article from a text prompt.

Historians may look back at this period in time as the point at which AI really started to have an impact across industry and society. AI may still be in a state of relative infancy, but one thing is for certain: it is an unstoppable wave, because anybody can access the tools to build AI systems and potentially produce world-changing applications. The downside of this is that, as we have already seen, bad actors can use AI to manipulate information that humans consume, which at its worst amounts to malign social engineering. However, AI can just as easily be used to counteract such misuse and build systems for good. It is a powerful form of technology-based democratization, the like of which has never been seen before. We hope this issue of CBTJ enlightens you on the state of AI today and helps you navigate the prospects and perils along your AI journey.

About The Author
Michael Eiden
Michael Eiden is a Senior Consultant with Cutter Consortium's Data Analytics & Digital Technologies practice. Dr. Eiden, who serves as Head of AI at Arthur D. Little, is an expert in machine learning (ML) and artificial intelligence (AI) with more than 15 years' experience across different industrial sectors. He has designed, implemented, and productionized ML/AI solutions for applications in medical diagnostics, pharma, biodefense, and…