
InsideNOW

Model risk management: building supervisory confidence in complex models and modelling

New, advanced modelling techniques such as artificial intelligence (AI) and machine learning (ML) mean the scope of application and complexity of models are growing by the day.

Authors

Andrew Bulley - Sponsoring Partner - EMEA Centre for Regulatory Strategy - Deloitte

Henry Jupe - Director - EMEA Centre for Regulatory Strategy - Deloitte

Alexander Denev - Director - Head of AI - Financial Services - Deloitte

Alexander Marianski - Associate Director - Financial Risk Measurement - Deloitte

Kate Lavrinenko - Senior Consultant - Risk Analytics - Deloitte

Published on 3 December 2019


Models serve a range of strategic purposes for financial services firms. Recently, this has expanded as progress in artificial intelligence (AI) and machine learning (ML) has enabled innovative new applications – such as investment advice chat-bot services – and has improved model performance across a range of model applications. However, these developments expose firms to new and changing model risks, and have prompted debate on how these models should be risk-managed and supervised.


Supervisory principles for model risk management (MRM) are long established[1], but remain an important framework for the industry and supervisors. The growing adoption and complexity of models demands constant re-evaluation of how these principles are applied, and supervisors continue to identify gaps in firms’ MRM.


[1] For a broader discussion of model risk management and the supervision of models, please see our white paper on Model Risk Management: Building Supervisory Confidence, at www.deloitte.co.uk/modelrisk.

Evolution of models and their supervision

AI and ML are already in use across a range of applications (for examples in the insurance sector, see the charts below). Their scope of application is still expanding fast, and adoption is likely to accelerate as analytics teams gain capacity once the European Central Bank’s Targeted Review of Internal Models (TRIM) comes to a close.

Absent other factors, we therefore expect MRM and supervisory issues around AI, ML and other complex modelling to receive increasing priority.


In a May 2019 speech, the UK Prudential Regulation Authority’s David Rule summed up regulatory concerns when he concluded that “as insurers deploy increasing numbers of models, drawing on massive amounts of data and with new analytical techniques, it is vital that they give sufficient attention to effective model risk management. Insurers need to be sufficiently confident to use their models but not so over-confident that they misuse them.”

Implications of AI and ML for model risk management and supervision

The complexity of AI and ML opens a new frontier in MRM, to which firms will need to adapt. Model risks, for example, may now interact with risk classes (for instance, conduct) that were previously largely or wholly independent. Model development and coding may also be vastly more complex, placing different demands on technology, data and governance, and demanding new skills and resources across all three lines of defense.


While many of these demands are new or different, supervisors generally consider the principles set out in “traditional” MRM guidance, as well as other requirements such as individual senior manager accountability, to remain valid. Supervisors will look at how firms have applied MRM principles, adapting them to the demands of their models and their level of model development.


In order to build supervisory confidence in their application of complex modelling, we consider it essential for firms to address the following practical risks and supervisory concerns, which we expect to see emerge in firms’ dialogues with supervisors:

Board understanding, oversight and challenge

Supervisors uniformly look to board oversight and challenge as the ultimate MRM safeguard. However, the technical implementation of AI and ML models can be highly complex, even compared to already-complex “traditional” models. This, and an expanding range of business applications, can present a number of practical oversight challenges.


Supervisors expect the board and senior management to understand the material models implemented across the business, and, just as importantly, their limitations. While the same depth of technical understanding is not expected of all board members, supervisors nonetheless expect all members to understand key strengths, limitations, assumptions and judgements, as well as how the board has satisfied itself as to the appropriateness of material models. Access to appropriate skills and expertise is therefore crucial, both in the oversight functions supporting the board’s assessment (for example, model validation), and on the board itself.

Management information and reporting

Management information and reporting should enable oversight across all stages of model development, validation, use and decision making, and should capture the complexity of the firm’s modelling activities in a useful way, proportionate to the risks posed by different models.

Data requirements and data governance

AI and ML models are likely to be more data-intensive than traditional models. Beyond the oversight challenges that this added complexity poses, such data intensity may put strain on underlying data and data governance. Supervisors are likely to view data or data governance inadequacies as key sources of model risk, unless the limitations they create are known, understood, and factored into decision making.
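
To make this concrete, the sketch below (in Python) illustrates one way a firm might operationalise basic data-governance controls: automated completeness and range checks on an input feed, with breaches recorded so that known data limitations can be factored into decision making. The field names, tolerances and records are purely hypothetical.

    # Minimal sketch of automated input-data checks run before model use,
    # so that data limitations are identified, logged and visible to model
    # owners. Field names, tolerances and records below are hypothetical.

    def check_feed(records, required_fields, ranges, max_missing_rate=0.02):
        """Return a list of human-readable data-quality findings."""
        findings = []
        n = len(records)
        for field in required_fields:
            missing = sum(1 for r in records if r.get(field) is None)
            if missing / n > max_missing_rate:
                findings.append(f"{field}: {missing}/{n} values missing")
        for field, (lo, hi) in ranges.items():
            bad = sum(1 for r in records
                      if r.get(field) is not None and not lo <= r[field] <= hi)
            if bad:
                findings.append(f"{field}: {bad} values outside [{lo}, {hi}]")
        return findings

    # Hypothetical policy records feeding a pricing model.
    records = [
        {"age": 34, "sum_insured": 250_000},
        {"age": None, "sum_insured": 180_000},
        {"age": 29, "sum_insured": -5_000},
    ]
    issues = check_feed(records,
                        required_fields=["age", "sum_insured"],
                        ranges={"age": (18, 100), "sum_insured": (0, 10_000_000)})
    for issue in issues:
        print("DATA QUALITY:", issue)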

AI ethics and consumer outcomes

Aside from the technical implementation of models, regulators and supervisors are paying increasing attention to the notion of “trustworthy AI.” This may capture a wide range of characteristics, including accountability, transparency, explainability, lack of bias, and fairness. Boards are expected to demonstrate meaningful oversight of these issues, including through validation.

More broadly, all models that directly affect consumer outcomes are likely to receive significant supervisory attention, with similar considerations applying around whether the use of models promotes fair outcomes for consumers.
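
As an illustration of the kind of quantitative check a validation function might run on consumer-facing models, the Python sketch below computes a simple disparate-impact ratio between two customer groups. The groups, decisions and four-fifths threshold are illustrative conventions only; a real fairness assessment would draw on a much broader set of metrics and qualitative review.

    # Minimal fairness check on model decisions, assuming binary
    # approve/decline outcomes and a single protected attribute.
    # Group definitions and the 0.8 ("four-fifths") threshold are
    # illustrative only.

    def approval_rate(decisions):
        """Share of positive (approve) decisions in a list of 0/1 outcomes."""
        return sum(decisions) / len(decisions)

    def disparate_impact_ratio(group_a, group_b):
        """Ratio of approval rates; values well below 1.0 suggest the
        model treats group_a less favourably than group_b."""
        return approval_rate(group_a) / approval_rate(group_b)

    # Hypothetical model decisions (1 = approve, 0 = decline) per group.
    group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # e.g. one customer segment
    group_b = [1, 1, 0, 1, 1, 0, 1, 1]   # all other customers

    ratio = disparate_impact_ratio(group_a, group_b)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Potential adverse impact: flag for review and root-cause analysis.")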

Model change and model drift

Model change management and control is a significant supervisory focus for both traditional and emerging models. For AI and ML models, change management may be increasingly complicated where models apply dynamic recalibration. Supervisors will expect robust governance over model updates, for example, guardrails, version control, comparative testing and validation.
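
By way of illustration, the Python sketch below shows one possible guardrail: a recalibrated “challenger” model is promoted over the incumbent “champion” only if comparative testing on a fixed holdout set shows no material degradation in accuracy and no outsized shift in outputs. The metrics and thresholds are hypothetical assumptions, not a prescribed control design.

    # Illustrative guardrail for promoting a dynamically recalibrated model:
    # the challenger replaces the champion only if it passes comparative
    # tests on a fixed holdout set. All thresholds are hypothetical.

    from statistics import mean

    def mean_absolute_error(predictions, actuals):
        return mean(abs(p - a) for p, a in zip(predictions, actuals))

    def passes_guardrails(champion_preds, challenger_preds, actuals,
                          max_mae_degradation=0.05, max_output_shift=0.10):
        """Return True if the challenger may be promoted.

        Two illustrative checks: error must not worsen by more than
        max_mae_degradation (relative), and the mean prediction must not
        shift by more than max_output_shift (relative), a crude stability
        guardrail against abrupt recalibrations.
        """
        champ_mae = mean_absolute_error(champion_preds, actuals)
        chall_mae = mean_absolute_error(challenger_preds, actuals)
        if chall_mae > champ_mae * (1 + max_mae_degradation):
            return False
        shift = abs(mean(challenger_preds) - mean(champion_preds))
        return shift <= abs(mean(champion_preds)) * max_output_shift

    # Hypothetical holdout data; each candidate recalibration would be
    # versioned and logged, and promoted only if the test passes.
    actuals = [1.0, 0.5, 0.8, 1.2]
    champion = [0.9, 0.6, 0.7, 1.1]
    challenger = [1.0, 0.5, 0.9, 1.2]
    print("Promote challenger:", passes_guardrails(champion, challenger, actuals))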

Supervisors have recently emphasized the risks posed by “drift” in the estimates calculated using capital models, particularly for insurers, as sufficient post-Solvency II data becomes available to identify trends. While a particular concern for regulatory capital, unidentified drift could also pose material risks for other analytical applications, such as risk assessment or pricing.
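
One widely used way to quantify drift is the population stability index (PSI), which compares the distribution of a model input or output between a baseline period and the current period. The Python sketch below is a minimal PSI calculation; the bucketing scheme and the 0.1/0.25 alert thresholds are market conventions rather than regulatory standards, and the data is hypothetical.

    # Minimal population stability index (PSI) sketch for monitoring drift
    # in a model output between a baseline and a current reporting period.

    import math

    def psi(baseline, current, cut_points):
        """PSI across buckets defined by cut_points (ascending)."""
        def shares(values):
            buckets = [0] * (len(cut_points) + 1)
            for v in values:
                i = sum(v > c for c in cut_points)  # bucket index for v
                buckets[i] += 1
            # Floor shares at a small value to avoid log(0) on empty buckets.
            return [max(b / len(values), 1e-4) for b in buckets]

        base, curr = shares(baseline), shares(current)
        return sum((c - b) * math.log(c / b) for b, c in zip(base, curr))

    # Hypothetical score distributions from two reporting periods.
    baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
    current = [0.3, 0.4, 0.5, 0.6, 0.6, 0.7, 0.8, 0.9]

    value = psi(baseline, current, cut_points=[0.25, 0.5, 0.75])
    print(f"PSI = {value:.3f}")
    # Common rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.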

What does the future hold for the regulation and supervision of complex models?

Regulatory policy has, to date, accommodated the use of models, while setting strict standards for MRM. However, regulators are keeping issues around the regulation and supervision of AI, ML and other advanced modelling under review, and supervisory guidance is likely to appear in due course. The European Insurance and Occupational Pensions Authority (EIOPA), for example, notes that it will further assess how AI and ML can be best supervised in practice, including assessing how their supervision differs from other models commonly used in insurance, and the Bank of England released a report, Machine learning in UK financial services, in October 2019.


It is a matter for debate whether the regulatory framework will, ultimately, present barriers to the application of some of the more advanced analytical techniques. For example, it could be highly challenging for regulatory capital approval processes to accommodate dynamic recalibration models. The limited interpretability of some models could also make it harder to comply with GDPR requirements.


However, it appears most likely that, absent a significant change in policy approach, regulators will accommodate AI, ML and other complex modelling developments, but with robust expectations for MRM. A challenge for firms will be to demonstrate how their deployment of complex models meets these expectations, including through formal model controls, and through the oversight, validation and challenge of models by the board.

Conclusion

  • We expect increased deployment of AI, ML and other complex modelling to continue across the financial services industry.
  • Regulators will keep under review the supervision of AI, ML and other complex models, and we expect firms to see supervisory engagement on how accepted MRM principles have been applied in new modelling contexts.
  • Firms deploying AI, ML and other complex modelling need to re-evaluate their MRM frameworks to make sure they have the necessary breadth of coverage across risk classes and functional areas, as well as depth of coverage for AI, ML and other techniques.
  • As firms implement changes to MRM, we expect access to skills and resources to be critical across all three lines of defense, including at the level of board oversight.


