Artificial Intelligence and risk management
Financial institutions are increasingly using or exploring AI to gain a competitive edge, offer tailored products and services, and operate in a more efficient and profitable manner.
Suchitra Nair, Director, EMEA Centre for Regulatory Strategy, Deloitte
Valeria Gallo, Senior Manager, EMEA Centre for Regulatory Strategy, Deloitte
Published on 5 November 2019
Firms are increasingly embedding Artificial Intelligence into their long-term strategies to deliver better customer service, improve operational efficiency and effectiveness, and gain a competitive advantage. While the number of live, full-scale AI use cases is still relatively small, the number of pilots and proofs-of-concept is rapidly growing.
In response to these trends, EU and international regulators have started taking an active interest in AI. Both financial services and cross-sector regulators recognise the potential benefits that AI can bring to financial markets, consumers, and their own work. However, they are also increasingly mindful of the impact of AI use on a firm’s risk profile – through the amplification of existing risks or the introduction of new ones – and its unintended consequences. Therefore, effective risk management and governance will be key to ensuring that these risks are managed in line with each firm’s risk culture and appetite.
Governing AI is often less about dealing with completely new types of risk and more about identifying, assessing, controlling and monitoring a nuanced intensification of known ones. Firms are therefore unlikely to require a complete upheaval of their existing risk management processes in order to deal with AI, but they will need to review or enhance them.
EMBEDDING AI IN YOUR RISK MANAGEMENT FRAMEWORK
The level of risk and the necessary controls will vary significantly according to the context and nature of the AI application. However, in general terms, AI is very likely to have a significant impact on a firm’s risk management frameworks.
Frequent and necessary updates to potentially complex AI models, as well as the relative immaturity of AI use in Financial Services, mean that the ways in which, and the magnitude with which, many AI risks manifest themselves may evolve rapidly over time. This could have important ramifications for firms, both from a conduct perspective (e.g., price discrimination or exclusion) and a financial stability perspective (e.g., widespread mis-selling).
Firms will need to review their governance and methodology for identifying, assessing, controlling and monitoring risks, and adopt a more comprehensive, frequent and dynamic approach to risk management. For high-risk use cases at least, this means that risk and compliance functions should become much more involved on a day-to-day basis throughout the AI system development lifecycle – from conceptual design through to implementation and monitoring.
Firms need to determine how AI risk considerations should be integrated into their existing risk management frameworks, and to what extent those frameworks need to change.
Such considerations include regulatory and ethical implications, such as algorithmic bias and the ability of AI models to make inferences from data sets without establishing a causal link. Diversity of views, technical experience, and social and cultural backgrounds will be a key control in relation to questions of data ethics, fairness, and social impact.
More generally, effective AI risk management will also require increased engagement, and sign-off, from a much wider set of stakeholders, including AI subject matter experts, risk and control functions such as technology risk and regulatory compliance, as well as representatives from the business.
Finally, it is worth noting that risk management frameworks should consider both the AI risks associated with specific AI use cases (e.g., credit risk profiling) and broader risks to the organisation (e.g., impact on employee relations and corporate culture), as well as what AI adoption means for the organisation’s human capital (e.g., loss of experience gained from hands-on decision making and judgement calls) in the short and longer term.
WHAT WILL REGULATORS FOCUS ON?
Regulators understand the potential benefits of AI, both for the industry and for themselves, and have so far been supportive and willing to engage. They have steered clear of stifling exploratory AI projects by regulating too early.
There is no prescribed overarching set of rules for AI yet, but some regulatory authorities, such as the Dutch National Bank and the UK ICO, have already issued draft guidance and frameworks to help firms interpret how existing rules and supervisory expectations would apply to AI models.
In addition to recent supervisory statements, existing rules relating to the use of algorithmic trading, supervision of internal models, the UK Senior Managers and Certification Regime (SM&CR), and the wider requirements around systems and controls, give a good indication of what regulators and supervisors are likely to expect in relation to governance and risk management around AI.
We expect the key areas of focus for AI models to be effective governance, including meaningful understanding and effective challenge of AI models from the board; clear lines of accountability; capability and engagement of control functions; effective and independent model validation; documentation and audit trails; and the ability to detect and prevent unauthorised use, or misuse, of material models.
These are not new areas, but the introduction of AI is likely to increase the speed, scale and complexity of these challenges. For example, risk management frameworks will need to balance the utility of AI models against regulatory expectations in relation to interpretability, auditability, and explainability to end users. These challenges will increase further if an AI system uses personal data, as additional GDPR requirements will also apply, or if any components or processes are outsourced or provided by a third party.
AI & DATA ETHICS
Consumer data plays an undisputed role in fuelling AI innovation in Financial Services. In the coming years, we expect the ethical debate and policy response over the extent and purpose of the use of personal data in AI solutions to intensify significantly.
This complex debate will not be resolved in 2020, but it is now a clear priority for both Financial Services and cross-sector policy makers, both at EU and national level. It will have important long-term implications for Financial Services firms’ AI & data strategies, and risk management approaches.
AI will increasingly become a core component of many Financial Services firms’ innovation strategies. An essential part of the innovation process will involve understanding the implications of AI from a risk perspective. This is a business imperative, but also a regulatory one, given how extensively regulated the Financial Services sector is.
AI will introduce some important nuances to the way familiar risks (e.g., bias) may manifest themselves, and to the speed and intensity with which they will materialise. AI-specific considerations should be integrated into existing risk management frameworks to ensure they remain fit for purpose.
Regulators are also increasingly mindful of the potential risks and unintended consequences of AI adoption in Financial Services, and the challenge of finding the right balance between supporting beneficial innovation and competition, and safeguarding customers, market integrity, and financial stability. Governance and risk practices should fully take evolving regulatory requirements and increasing risk management expectations into account.