Advisor

Going Deeper with Explainable AI

Posted July 20, 2022 | Technology | AI

Artificial intelligence (AI)-based systems work on vast amounts of structured and unstructured data to identify patterns and trends within that data. The ensuing analytics generate insights for decision makers. The systems, however, do not and cannot explain why a certain recommendation is made.

Users want to move away from this black-box model because of its lack of explainability. In fact, all AI system stakeholders, including developers and vendors of AI solutions, want visible fairness and accuracy that would increase trust in, and use of, these systems.

The underlying philosophy of AI-based systems is that they work on correlations, not causations. In other words, an AI system can analyze millions or billions of records, correlate them, and thereby arrive at the probability of a certain occurrence. The system can then present that probability in a way humans can understand and incorporate into their decision-making process. The system, however, cannot provide the cause behind that probability. AI systems do not reason or argue; they simply execute algorithms on the data provided.
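To make the correlation-versus-causation point concrete, consider a minimal sketch in Python (synthetic data and scikit-learn, purely illustrative and not tied to any particular system): the trained model returns a probability for a new record, but nothing in its output says why.

```python
# Minimal, illustrative sketch: a model trained purely on correlations in
# synthetic data returns a probability for a new record -- but no reason why.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(10_000, 5))                 # synthetic feature records
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # hidden pattern in the data

model = LogisticRegression().fit(X, y)

new_record = rng.normal(size=(1, 5))
probability = model.predict_proba(new_record)[0, 1]
print(f"Probability of occurrence: {probability:.2f}")  # a number, not an explanation
```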

Neural networks, in particular, work by correlating the vast amounts of data used to train and validate the system and test its logic. Since AI systems work on probabilities, the better the data, the better the analytics. The algorithms do not provide any “reasoning”; therefore, the recommendations and subsequent decisions lack a human touch. Consequently, when a highly complex algorithm is executed on a large data set, data bias and potential inaccuracies in the algorithm cannot be ruled out. Decisions based on poor or biased data or algorithms can lead to many social and business challenges, including legal wrangling and lawsuits, and the legal challenges arising from such unexplainable analytics have serious business and social repercussions.
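The training-validation-testing workflow just described can be pictured with a small sketch (the split proportions and network size below are illustrative assumptions only): the quality of the data at each stage bounds the quality of the resulting analytics.

```python
# Illustrative training/validation/test workflow for a small neural network.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(5_000, 10))          # synthetic records
y = (X[:, 0] - X[:, 3] > 0).astype(int)   # pattern the network should learn

# Assumed proportions: 70% training, 15% validation, 15% test
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.3, random_state=7)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=7)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=7)
net.fit(X_train, y_train)                                            # train
print("Validation accuracy:", round(net.score(X_val, y_val), 3))     # validate
print("Test accuracy:", round(net.score(X_test, y_test), 3))         # final check
```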

Enter XAI

Explainable AI (XAI) is the discipline of going deeper within the AI system, identifying the reasoning behind the recommendations, verifying the data, and making the algorithms and the results transparent. XAI attempts to make the analytical recommendations of a system understandable and justifiable — as much as possible. Such explainability reduces biases in AI-based decisions, supports legal compliance, and promotes ethical decisions.
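As one hedged illustration of what “going deeper” can look like in practice, post hoc techniques such as permutation feature importance report which inputs most influenced a model’s predictions. The sketch below uses scikit-learn on synthetic data and is only one of many possible XAI approaches, not a prescription from this Advisor.

```python
# Sketch of one post hoc XAI technique: permutation feature importance
# reports how strongly each input feature drives the model's predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(2_000, 4))
y = (2 * X[:, 1] + X[:, 3] > 0).astype(int)   # feature 1 matters most by design

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")  # feature 1 should rank highest
```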

XAI is vital for the acceptance of AI and machine learning (ML) technologies in business and society. “Explainability” of an AI-based system includes but is not limited to:

  • The need to understand the context in which decisions are made based on the system’s recommendations

  • The need to justify the recommendations suggested by the system

  • The need to avoid biases in decision making

  • The ability to provide evidence in a court of law, if required

  • The ability to modify or override the recommended decisions should the context change

  • The ability to ensure that the recommendations are ethical and moral

What Are the Risks of Lack of Explainability in AI Systems?

Decisions based on AI systems impact individuals and societies every day. Systems analyze data in a wide variety of ways, ranging from pattern descriptions and predicting a trend to providing prescriptive analytics. Business decision makers rely on these recommendations in tactical, operational, and strategic situations. And, increasingly, automated systems (e.g., autonomous driving and robotic warehouses) execute decisions as well.

The analytics are based on the systems designed and coded by the developers and owned and executed by users. The algorithms are coded to traverse large data sets and establish correlations. But there is no onus on the AI system to explain its analytics and recommendations. An understanding of the data features and the high-level system architecture is not enough to explain or justify a particular recommendation. Biased data and algorithms lead to loss of trust and confidence in the systems and, worse still, moral and legal challenges. Biased decisions can ruin individual lives and threaten communities.

These risks are becoming increasingly apparent in AI-based decision making. AI systems lack contextualization, which presents significant challenges. As Professor John H. Hull explains, “Teaching machines to use data to learn and behave intelligently raises a number of difficult issues for society.” For example, even when the data fed into the system is factual, biases in that data can lead to biases in recommendations. Feedback loops in AI decisions can exacerbate the original biases, and biases in the algorithms themselves create further challenges that are difficult to detect before multiple system executions.
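A purely illustrative simulation of such a feedback loop (the two-area scenario and all numbers are assumptions introduced here for clarity): when attention is allocated greedily based on past records, a small initial skew in the data grows steadily even though the underlying reality is identical for both areas.

```python
# Illustrative feedback loop: attention goes to whichever area has more
# recorded incidents, and more attention produces more records -- so a
# small initial skew grows even though both areas are truly identical.
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([10.0, 10.0])   # both areas have the same true incident rate
recorded = np.array([12.0, 8.0])     # historical records start slightly skewed

for step in range(5):
    # greedy allocation: the area with more past records gets most of the attention
    attention = np.where(recorded == recorded.max(), 1.5, 0.5)
    observed = rng.poisson(true_rate * attention)  # attention drives what gets recorded
    recorded = recorded + observed                 # records drive future attention
    share = recorded / recorded.sum()
    print(f"step {step}: recorded share = {share.round(2)}")
```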

AI systems grow and expand their knowledge base in an iterative and incremental manner, using large historical data sets (also called big data) for analytics. Each decision based on an AI recommendation is fed back into the system. Additionally, data is used both to train the system to make recommendations and to test the validity of the results. Incorrect, incomplete, or skewed data at any point can lead to skewed decisions. As fellow Cutter Expert Curt Hall states in a Cutter Advisor:

Because much of this data is historical, there is the risk that the AI models could learn existing prejudices pertaining to gender, race, age, sexual orientation, and other biases. Other issues with training and testing data can also contribute to bias and inaccuracies in ML algorithm development, particularly the use of incomplete data sets that are not fully representative of the domain or problem the developers seek to model.
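A lightweight, illustrative check along these lines (the group labels, synthetic rates, and the “four-fifths” rule of thumb are assumptions brought in for this sketch) is to compare the rate of favorable outcomes across groups in the historical data before it is used for training.

```python
# Illustrative pre-deployment check: compare the rate of favorable outcomes
# across groups in historical labels before they are used to train a model.
import numpy as np

rng = np.random.default_rng(3)
group = rng.choice(["group_a", "group_b"], size=5_000, p=[0.7, 0.3])
# synthetic historical labels with a built-in skew against group_b
favorable = np.where(group == "group_a",
                     rng.random(5_000) < 0.55,
                     rng.random(5_000) < 0.35)

rate_a = favorable[group == "group_a"].mean()
rate_b = favorable[group == "group_b"].mean()
ratio = rate_b / rate_a
print(f"selection rates: group_a={rate_a:.2f}, group_b={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:  # common "four-fifths" rule of thumb for disparate impact
    print("Warning: historical data is skewed -- review before training on it.")
```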

AI systems also make rapid, split-second decisions that are humanly impossible; humans cannot match their speed in crunching and correlating vast amounts of data. However, AI-based recommendations without explainability present substantial risks and challenges. Some examples include:

  1. AI systems can help with medical diagnostics such as identifying the tiniest dot on a scan as the beginning of cancer by correlating millions of data points within seconds, but cannot explain why that dot is likely to become cancer.

  2. AI-based systems have plotted COVID-19 pandemic pathways with reasonable accuracy, helping health authorities prepare ICU capacity and administer vaccines, but cannot provide reasons for the increased requirements, leaving that insight to human scientists.

  3. The systems can assist police by identifying likely crime hot spots, but cannot provide the reason behind the increased activity.

  4. In education, AI systems can predict which groups of students are most likely to drop a course, but cannot report why.

Each of these scenarios (and many more like them) carries associated risks. Decisions based on these recommendations present ethical and moral challenges that are not within the scope of AI-based systems, because the systems lack an understanding of the context in which the decisions are made. Therefore, despite the obvious advantages of speed and increasing accuracy, the vastness of the data and the complexity of its analytics lead to situations wherein the “reasoning” behind those insights remains elusive.

Automation works well for straightforward predictions based on clean data. For example, algorithms for autonomous driving, robotic processes in warehouses, or executing a stock market order can be coded relatively easily when the context has not changed and there are no exceptions. Automated processes relieve humans of “routine” decision making and free up that time for more creative use. However, as soon as the context changes and the nonroutine comes into the picture, coding and execution become challenging. The uncertainty of the context in which decisions are made presents a risk. Fully automated decisions that leave humans completely out of the loop are risky, especially if the context keeps changing. Users also have subjective interpretations of their needs, and their values keep evolving.
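A hedged sketch of that design principle (the confidence threshold and the “routine context” flag are assumptions, not a prescribed implementation): automate the routine case, and escalate to a human reviewer the moment the context shifts or confidence drops.

```python
# Illustrative human-in-the-loop guardrail: act automatically only when the
# case is routine and the model is confident; otherwise escalate to a person.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float        # model's probability for the recommended action
    routine_context: bool    # True when the case matches the context the model was built for

def decide(rec: Recommendation, threshold: float = 0.9) -> str:
    if rec.routine_context and rec.confidence >= threshold:
        return f"auto-execute: {rec.action}"
    return "escalate to human reviewer"

print(decide(Recommendation("reorder stock", 0.97, True)))    # routine case -> automated
print(decide(Recommendation("reorder stock", 0.97, False)))   # context changed -> human
```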

An explanation for an AI decision that is understandable to people is imperative for the acceptance of these systems in business and society. XAI is based on the need to provide a reason or justification for the analytics generated. The greater the adoption of AI in daily life, the greater the responsibility of these systems to explain the reasoning behind their recommendations. In addition, the analytical insights generated by AI should not violate the legal and ethical contexts of the regions in which they are executed.

[For more from the author on this topic, see: “How to Explain? Explainable AI for Business and Social Acceptance.”]

Photo by DeepMind on Unsplash

About The Author
Bhuvan Unhelkar
Bhuvan Unhelkar (BE, MDBA, MSc, PhD; FACS) is a Cutter Expert. He has decades of strategic as well as hands-on professional experience in the information and communications technologies (ICT) industry. Dr. Unhelkar is a full Professor at the University of South Florida, Sarasota-Manatee campus. As a founder of MethodScience and PLATiFi, he has demonstrated consulting and training expertise in big data (strategies), business analysis (use cases…