Advisor

Opportunities for XAI in the Financial Sector

Posted May 10, 2023 | Technology |

Explainable artificial intelligence (XAI) is becoming a critical component of operations in the financial industry. This stems from the growing sophistication of state-of-the-art AI models and the desire to deploy them in a safe, understandable manner. Responsible artificial intelligence (RAI) principles ensure that machine learning (ML) technology is applied in a transparent way while safeguarding the interests of each player in the financial ecosystem.

The need for more transparent and explainable AI methods is not limited to banking and finance. In fact, it’s so widespread and important that the US Defense Advanced Research Projects Agency (DARPA) has initiated a multiyear program solely dedicated to XAI.

Not surprisingly, banking and financial services regulators have shown an interest in adopting XAI and RAI techniques to help them meet the need for model governance, operational servicing, and compliance in the digital world.

In addition to the fundamental need for explainability, the financial sector faces sophisticated adversaries with the ability to steal or tamper with large amounts of data. This calls for robust, stable methods that can handle cybersecurity-related “noise” and persist in the face of adversarial data corruption.

XAI Opportunities

The financial sector is held to higher standards around trust and transparency than many other industries, in part because these companies are the foundation of our financial stability and economic mobility.

AI adoption in banking and financial services is underpinned by three elements: ML, nontraditional data, and automation. Although innovative deployment of these three elements holds significant promise for increasing financial services’ convenience and accessibility and reducing costs, it also has the potential to undermine the trustworthiness and responsible use of AI systems.

ML draws on concepts from statistics and probability theory and is often used to perform the type of analytical tasks that were handled by traditional statistical methods in the past. Two of the features that make ML attractive (the ability to accommodate more data and more complex relationships in the data) can create challenges in understanding how a model’s output relates to its input variables.
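As a minimal illustration of that tension (a sketch using scikit-learn and synthetic data; the feature interpretations in the comments are hypothetical), the code below contrasts a logistic regression, whose behavior is summarized by one coefficient per input, with a random forest, whose behavior is spread across thousands of split rules:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))  # hypothetical inputs: income, debt ratio, age, utilization
    y = (X[:, 0] - 2 * X[:, 1] + X[:, 2] * X[:, 3] > 0).astype(int)  # includes an interaction

    linear = LogisticRegression().fit(X, y)
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # The linear model is summarized by one coefficient per input variable...
    print("logistic coefficients:", linear.coef_.round(2))

    # ...whereas the forest's logic is spread across thousands of split rules,
    # with no single number linking any one input to the output.
    print("total split nodes in forest:",
          sum(est.tree_.node_count for est in forest.estimators_))

The point is not that flexible models are bad, but that their extra flexibility removes the direct, per-variable mapping from inputs to outputs that simpler statistical models provide.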

These difficulties in discerning how the model works are commonly referred to as “black box” or “model-opacity” problems. The World Economic Forum notes that the opacity of AI systems poses a serious risk to the use of AI in the financial sector: a lack of transparency could lead to loss of control by financial institutions, damaging consumer confidence.

Opacity can occur due to inscrutability, which arises when a model is so complex that determining the relationships between its inputs and outputs from a formal representation of the model is impractical. These types of models are opaque to everyone, even people with high levels of specialized technical knowledge.

For example, deep learning networks lack transparency by design: their millions or billions of parameters are identifiable to developers not by human-interpretable labels, but only by their placement in a complex network (e.g., the activation value of node i in layer j of network module k). Consequently, deep learning networks are not fully interpretable even for human experts and do not support attempts at causal inference. This inability to spot-check a model’s reasoning is unlikely to foster user trust in a finance application.
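A brief PyTorch sketch (with arbitrary, illustrative layer sizes) makes this concrete: the only names attached to a deep network’s parameters describe where they sit, not what they mean.

    import torch.nn as nn

    # A small multilayer network; production deep learning models have millions
    # or billions of such parameters.
    net = nn.Sequential(
        nn.Linear(64, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 1),
    )

    # Parameter names describe only position in the network (e.g., "2.weight"),
    # not anything a credit analyst could map onto a business concept.
    for name, param in net.named_parameters():
        print(name, tuple(param.shape))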

Opacity can also occur due to nonexpertise. Even models that are theoretically intelligible can be complex enough to appear opaque to anyone without a certain level of expertise. In some cases, basic statistics training may be sufficient to avoid this form of opacity; in others, understanding the model may require advanced ML expertise. For example, following a recent article in Expert Systems with Applications that evaluates model robustness for stress-scenario generation in credit scoring requires technical expertise in stochastic gradient boosting.
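For illustration only (this is not the cited study’s model, just a generic sketch with scikit-learn and synthetic data), stochastic gradient boosting fits hundreds of trees, each on a random subsample of the training data, which is exactly the kind of machinery a nonexpert reader would struggle to reason about:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    # Synthetic stand-in for a credit-scoring dataset (purely illustrative).
    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

    # subsample < 1.0 is what makes this *stochastic* gradient boosting: each
    # tree is fit on a random fraction of the training data.
    model = GradientBoostingClassifier(
        n_estimators=300, learning_rate=0.05, subsample=0.7, random_state=0
    ).fit(X, y)

    # The fitted model is intelligible in principle, but reading it means
    # reasoning about hundreds of sequentially fitted trees.
    print("boosting stages fitted:", model.n_estimators_)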

Importantly, we expect an expansion in the types of data that inform business tasks in financial services. This includes data that did not previously exist or was not accessible, as well as data that was available but went unused due to a lack of technical capabilities. For example, financial services companies should soon be able to use ML methods to analyze nontraditional structured data like financial transactions for evaluating loan applicants, profiling credit risk, and predicting mortgage delinquency.
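As a hedged sketch of what such nontraditional structured data might look like in practice (hypothetical column names, toy values, pandas for the aggregation), raw transactions would first be rolled up into applicant-level features before any credit-risk model sees them:

    import pandas as pd

    # Hypothetical raw transaction records for two loan applicants.
    transactions = pd.DataFrame({
        "applicant_id": [1, 1, 1, 2, 2],
        "amount":       [-120.0, 2500.0, -60.5, -900.0, 1800.0],
        "category":     ["groceries", "salary", "dining", "rent", "salary"],
    })

    # Roll raw transactions up into applicant-level features that a
    # credit-risk model could consume.
    features = transactions.groupby("applicant_id")["amount"].agg(
        mean_amount="mean",
        total_inflow=lambda s: s[s > 0].sum(),
        total_outflow=lambda s: s[s < 0].sum(),
    )
    print(features)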

Similarly, banks and others may soon be able to use deep learning to mine unstructured data like news content, recorded company-earnings calls, and satellite images of soil moisture (to predict stock and commodity prices), as well as to employ textual, user-generated data from social media to predict things like credit scores and potential defaults.

ML techniques can be used to discover complex relationships within data, including situations in which variables interact with each other in new ways or do not have a straight-line relationship with the predicted outcome. At the same time, most ML algorithms rely on a random seed and are hence not entirely replicable: every repetition with a different seed will lead to slightly divergent results (or substantially different results if gradient descent does not converge near an optimum).
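The replicability point is easy to demonstrate. In the sketch below (scikit-learn, synthetic data), the same algorithm trained on the same data produces slightly different accuracy depending only on the random seed, and fixing the seed is what makes a specific result reproducible:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=8, random_state=42)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

    # Same algorithm, same data, different random seeds: the scores diverge slightly.
    for seed in (0, 1):
        model = RandomForestClassifier(n_estimators=50, random_state=seed).fit(X_tr, y_tr)
        print(f"seed={seed}: accuracy={model.score(X_te, y_te):.4f}")

    # Fixing the seed is what makes a given result replicable.
    rerun = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
    print(f"replicated seed=0 accuracy: {rerun.score(X_te, y_te):.4f}")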

A recent XAI study by the Bank of England provided insights into future trends in the finance industry. The survey found that by 2025, many financial institutions will need to incorporate intelligent algorithms to meet customers’ demands and that banks must adopt AI to raise stakeholder confidence.

XAI aims to help humans understand why a machine decision has been reached and whether the outputs are trustworthy, all while maintaining high predictive performance levels. XAI is thus an important tool in increasing trust in the use of AI by the financial sector, creating a bridge between machine intelligence and human intelligence, with the goal of broadening the acceptance of AI systems by humans.
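As one deliberately simple illustration of a post-hoc, model-agnostic explanation technique (not the only, or necessarily the best, XAI method for a given application), the sketch below applies scikit-learn’s permutation importance to a synthetic credit-style classifier; the feature names are hypothetical:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance

    feature_names = ["income", "debt_ratio", "utilization", "age"]  # hypothetical labels
    X, y = make_classification(n_samples=1500, n_features=4, n_informative=3,
                               n_redundant=0, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # Shuffle one feature at a time and measure how much predictive performance
    # drops -- a rough, model-agnostic view of which inputs the model relied on.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, score in sorted(zip(feature_names, result.importances_mean),
                              key=lambda pair: -pair[1]):
        print(f"{name:12s} {score:.3f}")

Shuffling one input at a time and measuring the drop in performance gives a rough answer to “which inputs did this model rely on?” — the kind of question XAI tools are meant to help humans ask of otherwise opaque systems.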

[For more from the author on this topic, see: “Explainable & Responsible AI in Digital Banking Transformation.”]

About The Author
Cigdem Gurgur
Cigdem Z. Gurgur is Associate Professor of Decision and System Sciences at Purdue University, and a member of Arthur D. Little’s AMP open consulting network. She is a data and management science expert with experience in optimization models under uncertainty and decision support systems development with algorithmic theory design. Dr. Gurgur’s work utilizes meta-analytics, computational models, and artificial intelligence (AI) techniques for…