Definition

Model risk management (MRM) is the organizational framework for identifying, measuring, monitoring, and mitigating risks arising from the use of quantitative models — increasingly applied to machine learning and AI systems.

Key Takeaways

  • Originated in financial regulation (SR 11-7 / OCC 2011-12) and now widely applied to AI/ML.
  • Core MRM cycle: model development → validation → approval → deployment → ongoing monitoring.
  • Independent model validation is a fundamental MRM control.
  • The EU AI Act's conformity assessment requirements for high-risk AI parallel traditional MRM obligations.

Core MRM Framework Stages

The MRM lifecycle follows a structured governance gate model:

  1. Model Development — documentation of assumptions, data sources, methodology, and known limitations.
  2. Independent Validation — review by a team separate from the model's developers, testing performance claims and stress-testing edge cases.
  3. Approval — governance committee sign-off before deployment.
  4. Deployment — version-controlled release with change management controls.
  5. Ongoing Monitoring — post-deployment tracking of performance, data drift, and the distribution of model decisions.
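To make the gate structure concrete, here is a minimal Python sketch of how the validation and approval gates might be enforced in a model registry. The names (ModelRecord, Stage, record_validation, approve) are hypothetical, invented for this example; they come from neither SR 11-7 nor any MRM tooling.

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Stage(Enum):
        """Lifecycle gates, in the order listed above."""
        DEVELOPMENT = auto()
        VALIDATION = auto()
        APPROVAL = auto()
        DEPLOYMENT = auto()
        MONITORING = auto()

    @dataclass
    class ModelRecord:
        """Hypothetical registry entry tracking one model through the gates."""
        name: str
        version: str
        developers: frozenset[str]
        stage: Stage = Stage.DEVELOPMENT
        validators: set[str] = field(default_factory=set)

        def record_validation(self, reviewer: str) -> None:
            # Independence control: a validator may not also be a developer.
            if reviewer in self.developers:
                raise ValueError(f"{reviewer} developed this model; "
                                 "validation must be independent")
            self.validators.add(reviewer)
            self.stage = Stage.VALIDATION

        def approve(self) -> None:
            # Governance gate: sign-off requires prior independent validation.
            if not self.validators:
                raise ValueError("approval requires at least one independent validation")
            self.stage = Stage.APPROVAL

    record = ModelRecord("credit-scoring", "2.1.0", developers=frozenset({"alice"}))
    record.record_validation("bob")  # passes: bob did not develop the model
    record.approve()                 # passes: an independent validation is on file

The key design choice is that the approval gate refuses to advance a model unless at least one reviewer outside the development team has signed off, mirroring the independence requirement in stage 2.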

AI MRM vs. Traditional MRM

Traditional MRM was designed for statistical and econometric models with relatively stable assumptions. Machine learning models introduce additional challenges: distributional shift (training and deployment data diverge over time), limited interpretability of opaque model internals, feedback loops (model decisions influence future training data), and rapid iteration cycles that traditional approval gates were not designed to handle. Emerging AI governance standards are extending MRM concepts to address these challenges.
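As one concrete monitoring control for distributional shift, teams commonly track the Population Stability Index (PSI) between a training-time sample and a live sample of a model input or score. The sketch below assumes NumPy; the 0.1 / 0.25 thresholds in the final comment are industry rules of thumb, not regulatory limits.

    import numpy as np

    def population_stability_index(reference: np.ndarray,
                                   live: np.ndarray,
                                   n_bins: int = 10,
                                   eps: float = 1e-6) -> float:
        """PSI between a reference (training-time) sample and a live sample:

            PSI = sum_i (p_live_i - p_ref_i) * ln(p_live_i / p_ref_i)
        """
        # Quantile bin edges from the reference sample keep every bin populated
        # (assumes a continuous feature; heavy ties can collapse edges).
        edges = np.quantile(reference, np.linspace(0.0, 1.0, n_bins + 1))

        p_ref = np.histogram(reference, bins=edges)[0] / len(reference)
        # Clip live values into the reference range so out-of-range points
        # land in the outermost bins instead of being dropped.
        p_live = np.histogram(np.clip(live, edges[0], edges[-1]),
                              bins=edges)[0] / len(live)

        # eps guards against log(0) and division by zero in empty bins.
        p_ref = np.clip(p_ref, eps, None)
        p_live = np.clip(p_live, eps, None)
        return float(np.sum((p_live - p_ref) * np.log(p_live / p_ref)))

    # Simulated drift: the live population's mean and spread have moved.
    rng = np.random.default_rng(0)
    train_scores = rng.normal(0.0, 1.0, 10_000)
    live_scores = rng.normal(0.3, 1.1, 10_000)
    psi = population_stability_index(train_scores, live_scores)
    print(f"PSI = {psi:.3f}")  # rule of thumb: < 0.1 stable, 0.1-0.25 moderate, > 0.25 significant

A monitoring job that computes PSI per feature on a schedule, and escalates breaches to the governance committee, connects stage 5 of the lifecycle back to the approval gate.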