Contextualizing Artificial Intelligence Risk for Banks
Adopting a fact-based approach to understand the risk implications and required responses.
There is a plethora of information on Artificial Intelligence (AI) in the media, and making sense of it amidst the hype and frenzy can be very challenging. If you have decided to pause and read this article, let me express my appreciation upfront for the time and effort you have taken to do so. I sincerely hope it provides some valuable perspectives to help you shape your thoughts and views on the subject and better manage AI risk in your organization. I have structured my thoughts below as a series of topics, highlighting in each what I believe the implications for banks are. These views are based on my experience as a banker working across the lines of defense in globally systemically important banks.
AI is not new to banks and has existed for decades.
AI and its associated risks are often described as new and novel to banks. This is certainly not the case. Older generations of AI, in particular machine learning, have existed for decades and have been deployed in a limited set of use cases where banks' decision-making depends on large volumes of data. What we are now witnessing is a further evolution of AI, one that has grown exponentially in sophistication and is likely to see far more widespread and pervasive adoption.
Figure 1.
If we were to plot this evolving AI continuum on a graph, we would see that risk rises in direct proportion to uncertainty.
Implications for banks: The core capability of a bank is the management of risk. Notwithstanding the step change in risk introduced by Generative AI (genAI), banks already have entrenched governance, risk management, and compliance frameworks and practices for managing a variety of risks. AI can be treated as a new principal risk type and/or as an embedded risk; either way, it will need to be made explicit in a bank's risk management framework and managed in line with the principle of proportionality through identification, mitigation, measurement, and monitoring.
Not all AI risks are equal, and the principle of proportionality needs to be applied in risk mitigation.
The European Union AI Act, currently the world's most comprehensive piece of AI legislation, is built firmly on a risk-based approach. It identifies four risk categories for AI, as follows:
Figure 2.
A risk-based approach is vital for assessing whether risks fall within or outside the organization's appetite and, where a response is required, what that response should be.
Implications for banks: In keeping with regulatory principles, banks need to develop a risk-based approach to managing AI risks. This requires governance, oversight, and risk management to be proportionate to the level of risk being taken. It will determine which AI activities require ‘humans-in-the-loop’ and to what extent, and which governance processes apply to each use case: which use cases can be approved by individuals, and which need the approval of senior management committees and/or the board.
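To make the principle of proportionality more tangible, here is a minimal sketch in Python. It is purely illustrative: the tier names loosely mirror the Act's risk-based structure, but the use-case attributes, tiering heuristic, and approval routes are my own assumptions rather than legal definitions or any bank's actual governance rules.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers, loosely mirroring a risk-based classification."""
    UNACCEPTABLE = 4
    HIGH = 3
    LIMITED = 2
    MINIMAL = 1


# Hypothetical approval routing; in practice these routes would be defined in the
# bank's governance framework and risk appetite statement.
APPROVAL_ROUTE = {
    RiskTier.UNACCEPTABLE: "Prohibited - do not deploy",
    RiskTier.HIGH: "Senior management committee and/or board approval; human-in-the-loop required",
    RiskTier.LIMITED: "Business-line risk committee approval; periodic human review",
    RiskTier.MINIMAL: "Individual accountable-owner approval",
}


@dataclass
class AIUseCase:
    name: str
    customer_facing: bool
    affects_customer_outcomes: bool  # e.g., credit, pricing, or legal decisions
    output_explainable: bool


def assess_tier(uc: AIUseCase) -> RiskTier:
    """Deliberately simplified tiering heuristic, for illustration only."""
    if uc.affects_customer_outcomes and not uc.output_explainable:
        return RiskTier.HIGH
    if uc.customer_facing:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    inventory = [
        AIUseCase("GenAI customer-service chatbot", True, False, False),
        AIUseCase("Credit-decision scoring model", True, True, False),
        AIUseCase("Internal document summarization", False, False, True),
    ]
    for uc in inventory:
        tier = assess_tier(uc)
        print(f"{uc.name}: {tier.name} -> {APPROVAL_ROUTE[tier]}")
```

In a real bank the tiering logic would be far richer, drawing on the institution's own risk taxonomy, but the point stands: proportionality means the governance burden scales with the assessed risk, not with the enthusiasm for the use case.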
The AI risk landscape will remain uncertain and unpredictable, and banks need to brace for new and emergent risks.
Given this level of uncertainty, I remain surprised by the extent to which some commentators express certainty about current and future risks. I have found the known/unknown matrix a useful tool for communicating how we should think about and prepare for AI risks.
Figure 3.
There are areas where both understanding and awareness of the risks exist, the known knowns, e.g., model and data risks; we are best prepared to deal with these. At the opposite end of the spectrum are risks that are neither understood nor apparent, the unknown unknowns, e.g., entirely new risks that cannot yet be identified; we are likely to be least prepared for these. In between sit two intermediate scenarios. There are well-understood risks of which there is little awareness, the unknown knowns, e.g., the impact of AI on liquidity. And there are risks of which we are aware but whose behavior we understand poorly or not at all, the known unknowns, e.g., bias and hallucinations.
Implications for banks: Banks need to be aware that all of these scenarios are likely to arise: existing risks that remain unchanged, existing risks that are amplified, new risks that emerge, and existing risks that are reduced or eliminated. As AI evolves and its sophistication increases, so too will the complexity of its risks (Figure 1). Banks therefore need to remain vigilant and avoid complacency.
Risks are likely to manifest at different levels.
AI risks could manifest at different levels and be direct and/or indirect. At the most basic level, risks are likely to manifest in individual use cases. Managing risks associated with data quality, completeness, accuracy, and comprehensiveness will be key to successful use-case deployment, as will managing model risks, particularly model outputs and the risks of bias, discrimination, and hallucinations. Interestingly, regulators have in the past moved banks away from sophisticated, bespoke internal ratings-based models for calculating risk capital and back toward standardized models, owing to wide variability in outputs and their disconnection from underlying risk. It will be interesting to see how this regulatory stance plays out for AI models. AI use cases that involve customers will be considered higher risk, particularly where reputational risks could arise from harmful or flawed model outcomes and/or an inability to explain model behavior.
At the enterprise level, banks will need to aggregate AI risk exposure across business lines, countries, and subsidiaries and manage it against defined risk thresholds. The resulting metrics will need to be reported through appropriate governance structures to senior management and the board so that they can fulfill their oversight responsibilities, with reports highlighting any reporting limitations and risk exposure relative to appetite.
Most global banks have stated their intention to accelerate their AI strategies through partnerships with third parties. Third-party risk management is therefore likely to grow further in prominence as underlying risks in the third-party ecosystem are exacerbated, including cybersecurity, data privacy, fourth- and fifth-party risks, and operational resilience.
Lastly, there are the risks to financial stability. Regulators will focus on this level, monitoring trends and developments likely to increase financial stability risks, including financial market contagion. These include growing risk concentrations arising from reliance on common service providers (e.g., cloud vendors); a potential acceleration of procyclicality arising from the influence of AI on asset price declines, fire sales, deleveraging, and investment portfolio rebalancing; and connectivity risks with sectors outside banking (e.g., insurance).
Implications for banks: Banks will need to integrate AI into their enterprise risk management framework, supported by articulated risk appetite metrics, policies and procedures, and robust governance and oversight, applied in line with the principle of proportionality across the AI ecosystem (including third-party service providers). A key question banks will need to answer is whether AI risk is a separate principal risk in the enterprise risk management framework or one that is embedded in all other principal risks. Either way, an aggregated view of AI risk will be required to assess exposure relative to defined risk thresholds. Such a view will also serve banks well in considering their macroprudential impact and influence on financial stability, and in preempting possible regulatory concerns.
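To illustrate what such an aggregated view might look like, the sketch below is again purely illustrative: the business lines, exposure scores, and appetite thresholds are hypothetical assumptions, and a simple additive aggregation is used for brevity rather than any bank's actual methodology.

```python
from collections import defaultdict

# Hypothetical use-case-level AI risk exposures (scores on an arbitrary 0-100 scale).
# In practice these would come from the bank's own risk assessment methodology.
exposures = [
    {"business_line": "Retail", "country": "UK", "use_case": "GenAI chatbot", "score": 35},
    {"business_line": "Retail", "country": "UK", "use_case": "Credit scoring", "score": 60},
    {"business_line": "Markets", "country": "US", "use_case": "Trade surveillance", "score": 25},
    {"business_line": "Markets", "country": "US", "use_case": "Pricing model", "score": 40},
]

# Illustrative risk appetite thresholds per business line.
appetite_thresholds = {"Retail": 80, "Markets": 70}


def aggregate_by_business_line(items):
    """Sum use-case exposure scores per business line (simple additive aggregation)."""
    totals = defaultdict(int)
    for item in items:
        totals[item["business_line"]] += item["score"]
    return dict(totals)


def appetite_report(totals, thresholds):
    """Compare aggregated exposure with appetite and flag breaches for escalation."""
    report = []
    for line, total in totals.items():
        limit = thresholds.get(line)
        within = limit is not None and total <= limit
        status = "WITHIN APPETITE" if within else "BREACH - escalate to senior management/board"
        report.append((line, total, limit, status))
    return report


if __name__ == "__main__":
    totals = aggregate_by_business_line(exposures)
    for line, total, limit, status in appetite_report(totals, appetite_thresholds):
        print(f"{line}: exposure={total}, appetite={limit} -> {status}")
```

However the aggregation is actually performed, the design choice that matters is the same one highlighted above: exposure must be comparable with explicitly defined thresholds so that breaches trigger escalation through the governance structure rather than being discovered after the fact.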
AI has the potential to improve the effectiveness of risk and compliance programs significantly.
As much as AI is the object of risk management within banks, it has an equally important role to play as an instrument of it. As AI adoption and use grow in the first line of defense, the custodians of Risk and Compliance programs need to assess the effectiveness of their oversight capabilities and consider how these must evolve to stay ahead. Earlier forms of AI, such as machine learning, have already found application in the financial (quantitative) risk types. AI has an important role to play in improving both the effectiveness and the efficiency of risk and compliance programs, transforming every element of them, including regulatory change management, policy lifecycle management, risk and control assessments, continuous monitoring and assessment, issues remediation, advice, reporting, and training. AI can transform these programs from reactive and backward-looking to proactive and forward-looking, predicting emerging risks and vulnerabilities and acting as an enabler of business strategy and growth rather than a constraint.
Implications for banks: Banks interested in embracing AI and genAI in their risk and compliance programs, particularly those focused on non-financial (qualitative) risk types, will need to consider skills, technology, and process requirements. An AI strategy is best developed and executed against the backdrop of sound governance and an AI risk management framework designed on the principle of proportionality. Execution is best achieved through an agile, sprint-based approach, which has proven highly effective at accelerating ideas and use cases into early proofs of concept for refinement prior to scaling. As organizations implement further use cases, they move down the experience curve and can institutionalize processes for successful future execution.
I trust you found the views and perspectives shared herein valuable. Please reach out if you would like to discuss any of the topics in more detail or explore their implementation in your organization. Also, feel free to join the conversation online, share your own experiences, and share this article with those in your network who you believe could benefit.
Warm regards,
Risk and Compliance Partner
Ulysses Partners