Beyond the Black Box: How to Practically Implement Explainability in Financial AI
Recently, while consulting on AI planning and architecture for a financial services company, I could feel how quickly this discussion is accelerating in the financial industry. Generative AI such as ChatGPT gets most of the attention these days, but the financial sector has been actively using AI for a long time. From fraud detection to credit risk management and even ultra-short-term trading strategies, AI plays a key role in many core financial tasks. However, a great deal of work remains before AI can be genuinely trusted and used ethically. The most important issue among them is explainability.

In the financial sector, the more complex an AI model becomes, the harder it is to understand how it makes decisions. This is the so-called 'black box' problem. Even if an AI model makes accurate predictions, being unable to explain the criteria and process behind those predictions is a serious problem, especially in a field like finance where trust is essential. So today, I would like to take a deep look at what this 'explainability' is and how it can be implemented in the financial sector.

Let me give you an interesting example. In 2019, the Apple Card became the subject of public controversy over allegedly gender-discriminatory credit screening. A couple with the same income and credit rating applied, but the husband's credit limit was set much higher than the wife's. People immediately criticized the decision as gender-based discrimination. However, the card issuer and the financial institutions that managed the screening algorithm could not explain exactly why the problem occurred, and their reputations suffered serious damage. This case illustrates what can go wrong when AI operates in the financial sector: AI makes decisions based on data, but if the data itself is biased or the algorithm's judgment criteria are not clearly revealed, financial institutions can face serious ethical and legal liability.

In this context, the financial sector must ask the following questions when using AI:

"Why did our AI model make that decision?"
"Are the decisions made by AI really fair?"
"Can we explain the judgment criteria of our AI models?"

Three Key Elements of AI Explainability

Explainability is more than just showing the technical details of how a model works. To properly implement explainability in financial AI, all three of the following elements must be present:

(1) Transparency
Transparency means making it clear to stakeholders how the AI model is structured, what data it was trained on, and what prerequisites or assumptions it operates under. For example, trust can be built by disclosing to customers and regulators the data sources behind a credit rating model and the reasons for selecting its assessment variables.

(2) Interpretability
The goal is to make AI decisions easily understandable to humans, with the model's behavior explained through simple algorithms or visual tools. For example, you should be able to explain why a loan application was declined in terms of specific data points ("Your loan was declined because of your high credit card utilization"); a minimal sketch of this kind of per-decision explanation follows after this list.

(3) Accountability
Accountability means establishing clear responsibility for decisions made by AI models and deciding in advance how to respond when problems arise. When a model makes a bad decision, there should be a clear process and an accountable owner so the decision can be corrected immediately and the damage remediated.
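As one way to make (2) concrete, here is a minimal sketch of a per-decision explanation using SHAP feature attributions. It assumes scikit-learn and the shap library are available; the feature names, synthetic data, and decision rule are hypothetical placeholders, not a real credit model.

```python
# A minimal sketch of per-decision explanations for a credit model,
# assuming scikit-learn and the shap library.
# Feature names, data, and the "declined" rule are hypothetical placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["credit_utilization", "income", "debt_to_income", "late_payments"]

# Hypothetical training data: each row is a loan applicant.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((500, 4)), columns=feature_names)
y = (X["credit_utilization"] + X["debt_to_income"] > 1.0).astype(int)  # 1 = declined

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes the model's output for one applicant to each feature.
explainer = shap.TreeExplainer(model)
applicant = X.iloc[[0]]
shap_values = explainer.shap_values(applicant)

# Rank features by how strongly they pushed this particular decision.
contributions = sorted(
    zip(feature_names, shap_values[0]), key=lambda kv: abs(kv[1]), reverse=True
)
for name, value in contributions:
    print(f"{name}: {value:+.3f}")
```

The signed contributions can then be translated into customer-facing reason statements such as "your high credit card utilization pushed this application toward decline", which is exactly the kind of explanation interpretability calls for.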
An integrated approach that brings all three elements together is the key to properly implementing AI explainability in finance.
- Haebom
