Building a Testing Framework for Model Assurance
The Challenge
Many insurers using hx Renew face the same challenge: how to ensure model updates are correct before releasing them into production. Small logic changes or data updates can affect thousands of policies, and manual checking is slow, inconsistent, and difficult to evidence for audit.
Our client wanted a robust, repeatable way to compare model outputs between versions — to detect differences, understand their cause, and clearly communicate them to underwriters before go-live.
Our Approach
We developed an automated testing framework that runs batch tests of existing policies through both the “current” and “new” model versions. The framework compares all rating results, flags discrepancies, and quantifies the impact at both policy and portfolio level.
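To make the mechanics concrete, the sketch below shows the shape of such a comparison loop in Python (the language hx Renew models are built in). It is illustrative only: the `rate_current` and `rate_new` callables, the `policy_id` field, and the 0.01 tolerance are assumptions standing in for the client's actual model interfaces and materiality thresholds.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical stand-ins for the two model versions under test. In the
# real framework these would wrap the "current" and "new" hx Renew model
# builds; here they are simply functions from a policy record to a premium.
RatingFn = Callable[[Dict], float]

@dataclass
class PolicyDiff:
    """One flagged policy: its premium under each model version."""
    policy_id: str
    current_premium: float
    new_premium: float

    @property
    def delta(self) -> float:
        return self.new_premium - self.current_premium

def run_batch_comparison(
    policies: List[Dict],
    rate_current: RatingFn,
    rate_new: RatingFn,
    tolerance: float = 0.01,  # assumed materiality threshold
) -> List[PolicyDiff]:
    """Re-rate every policy under both model versions and flag any
    whose premiums differ by more than the tolerance."""
    flagged = []
    for policy in policies:
        old = rate_current(policy)
        new = rate_new(policy)
        if abs(new - old) > tolerance:
            flagged.append(PolicyDiff(policy["policy_id"], old, new))
    return flagged

def portfolio_impact(diffs: List[PolicyDiff]) -> float:
    """Aggregate premium movement across all flagged policies."""
    return sum(d.delta for d in diffs)
```

Comparing against a tolerance rather than exact equality keeps floating-point rounding noise from drowning out genuine rating changes, and the per-policy records roll up directly into the portfolio-level impact figure.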
Where differences are intended, the framework produces a clear explanation and evidence set that can be shared with underwriters and actuaries. Where differences are unexpected, they can be investigated immediately — before any release takes place.
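One simple way to draw that line between intended and unexpected differences is to register each intended change with a predicate and an explanation, then triage flagged policies against that register. The `ExpectedChange` structure below (reusing `PolicyDiff` from the previous sketch) is an illustrative assumption, not the framework's actual design.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class ExpectedChange:
    """A registered, intended model change: a predicate identifying the
    policies it should affect, plus an explanation for the evidence set."""
    description: str
    applies_to: Callable[[PolicyDiff], bool]

def triage(
    diffs: List[PolicyDiff],
    expected: List[ExpectedChange],
) -> Tuple[List[Tuple[PolicyDiff, ExpectedChange]], List[PolicyDiff]]:
    """Split flagged differences into explained (matched to a registered
    intended change) and unexplained (to be investigated before release)."""
    explained: List[Tuple[PolicyDiff, ExpectedChange]] = []
    unexplained: List[PolicyDiff] = []
    for diff in diffs:
        match = next((c for c in expected if c.applies_to(diff)), None)
        if match is not None:
            explained.append((diff, match))
        else:
            unexplained.append(diff)
    return explained, unexplained
```

Anything landing in the unexplained list holds up the release until it has been investigated, while each explained difference carries its registered description straight into the evidence set shared with underwriters and actuaries.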
This framework not only supports model testing but also creates a structured audit trail for compliance purposes. Internal and external auditors can see that model releases are verified, reconciled, and documented, all within a consistent, automated process.
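As a sketch of what such an audit artefact might look like, each run could be serialised to a timestamped, checksummed JSON record. The field names and layout here are assumptions for illustration, not the client's actual schema.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_audit_record(
    run_id: str,
    current_version: str,
    new_version: str,
    explained_count: int,
    unexplained_count: int,
    portfolio_delta: float,
    out_dir: Path,
) -> Path:
    """Persist an evidence record for one comparison run, plus a SHA-256
    checksum so auditors can verify the record has not been altered."""
    record = {
        "run_id": run_id,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "model_versions": {"current": current_version, "new": new_version},
        "differences": {
            "explained": explained_count,
            "unexplained": unexplained_count,
        },
        "portfolio_premium_impact": portfolio_delta,
    }
    payload = json.dumps(record, indent=2, sort_keys=True)
    record_path = out_dir / f"{run_id}.json"
    record_path.write_text(payload, encoding="utf-8")
    checksum = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    (out_dir / f"{run_id}.json.sha256").write_text(checksum, encoding="utf-8")
    return record_path
```

Writing the checksum alongside the record gives reviewers a cheap way to confirm the evidence has not been edited after the fact.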
The Result
Transparency: Underwriters receive clear, quantified summaries of rating changes before release.
Speed: Model testing cycles run faster and are fully repeatable, with no reliance on manual comparisons.
Assurance: Actuarial and governance teams have evidence-ready outputs for internal audit and model validation reviews.
Reusability: The framework is modular, allowing future models to be added with minimal setup.
Our client now runs these tests routinely before every model release — turning what used to be a reactive, manual process into a transparent, automated control that builds trust between developers, actuaries, and underwriters.