Technical Actuarial Guidance: Models (July 2023)

The FRC's purpose is to serve the public interest by setting high standards of corporate governance, reporting and audit and by holding to account those responsible for delivering them. The FRC sets the UK Corporate Governance and Stewardship Codes and UK standards for accounting and actuarial work; monitors and takes action to promote the quality of corporate reporting; and operates independent enforcement arrangements for accountants and actuaries. As the competent authority for audit in the UK the FRC sets auditing and ethical standards and monitors and enforces audit quality.

The FRC does not accept any liability to any party for any loss, damage or costs howsoever arising, whether directly or indirectly, whether in contract, tort or otherwise from any action or decision taken (or not taken) as a result of any person relying on or otherwise using this document or arising from an omission from it.

© The Financial Reporting Council Limited 2023

The Financial Reporting Council Limited is a company limited by guarantee.

Registered in England number 2486368. Registered Office: 8th Floor, 125 London Wall, London EC2Y 5AS.

1. Introduction

Purpose

1.1.The FRC issues guidance for a number of purposes, for example to support compliance with requirements, or for interpretive, explanatory, contextual or educational purposes to support the use of judgement in applying principles-based standards. The overall purpose of the FRC's guidance is to improve the quality of technical actuarial work. The guidance is persuasive not prescriptive, and compliance is encouraged.

1.2.Following our consultation on the revision of TAS 100 and associated stakeholder engagement, practitioners expressed a desire for additional guidance in relation to the Models Principle in TAS 100. Areas cited for clarification include cases where the technical actuarial work involves third-party proprietary models or where modelling work is undertaken by other functions, and how model governance (including validation and change control) activities are taken into account across key functions in an organisation.

1.3.The purpose of this guidance is to help practitioners in complying with Principle 5 Models of 'TAS 100 (General Actuarial Standards)' which states:

Practitioners must ensure models used in their technical actuarial work are fit for purpose and subject to sufficient controls and testing, so that the intended user can rely on the resulting actuarial information.

1.4.Principle 5 comprises five provisions:

  • P5.1 Practitioners must ensure they understand the models used in their technical actuarial work, including intended uses and limitations.
  • P5.2 Practitioners must ensure that the models they use for technical actuarial work have in place an appropriate level of model governance.
  • P5.3 Practitioners must identify the extent of any material biases within the models that are used.
  • P5.4 Where material limitations exist in models or methodologies used, the practitioner must consider the implications of those material limitations.
  • P5.5 Where key stakeholders such as management, sponsors, trustees and regulators require the model to incorporate effects of material actions, practitioners must consider the implications of these actions.

1.5.Our regulatory expectations on model use and bias are further set out in three application statements within TAS 100 (see Appendix 3).

Intended audience

1.6.The guidance is aimed at practitioners who develop, use, validate and / or own models for technical actuarial work. Intended users of actuarial work may also find it useful.

2. Models in scope of TAS 100

2.1.A successful application of Principle 5 requires practitioners to determine which models fall within the scope of TAS 100. This section guides practitioners in determining this. Appendix 2 provides illustrative examples to further aid understanding.

Model definition

2.2.The TAS 100 glossary defines a model as follows:

A simplified representation of some aspect of the world. The model produces a set of outputs from inputs in the form of data, assumptions and parameters. Inputs and outputs may be qualitative or quantitative. The model is defined by a specification that describes the matters that should be represented, the inputs, and the relationships between the inputs, and the resulting outputs. The model is implemented through a set of mathematical formulae and algorithms (e.g., a computer program).

2.3.Other model definitions are in use, notably the following in the PRA's 'SS1/23 - Model risk management principles for banks':

'A model is a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into output. The definition of a model includes input data that are quantitative and / or qualitative in nature or expert judgement-based, and output that are quantitative or qualitative.'

2.4.These definitions have in common three elements: 1) inputs; 2) outputs; and 3) a quantitative approach to processing the inputs into outputs.

Scope - models used in technical actuarial work

2.5.Within any given entity (as defined in TAS 100) there may be a very large number of models, but the scope of Principle 5 is models used in technical actuarial work1. Principle 5 therefore applies to all models (as defined in paragraph 2.2) used to carry out technical actuarial work, regardless of whether the models themselves use principles or techniques of actuarial science or whether judgement is exercised within them.

2.6.Practitioners may apply proportionality (see Section 7 and Appendix 1) when considering the efforts to be expended in their compliance with Principle 5 in respect of each model.

Scope - indirect use of models

2.7.Practitioner A carrying out technical actuarial work may use information provided by a second practitioner (B) as input to practitioner A's model. Practitioner B may have used a model (model B) to generate the information which is provided to practitioner A. We refer to this as 'indirect' use of practitioner B's model by practitioner A.

2.8.Practitioner A's TAS 100 responsibility starts once they use the information from practitioner B's model in their technical actuarial work:

  • If the information received from practitioner B is used as a data input in practitioner A's model, then TAS 100 Principle 3 applies i.e. practitioner A needs to ensure that the data is accurate, complete and appropriate.
  • If the information received from practitioner B is used as an assumption in practitioner A's model, then TAS 100 Principle 4 applies, and practitioner A needs to ensure that the assumption is appropriate.

2.9.Practitioner A does not bear responsibility for practitioner B's model's compliance with Principle 5. That responsibility rests with practitioner B, if their work to generate the information for practitioner A meets the definition of technical actuarial work.

Scope - third party model or code

2.10.It is common for practitioners to use a model developed by a third party for technical actuarial work. Also, practitioners may be involved in the development of models for use by a third party. The third party may be within the same entity or an external entity.

2.11.From the perspective of a practitioner A providing the third-party model, if their work meets the technical actuarial work definition, then practitioner A must comply with TAS 100.

2.12.From the perspective of a practitioner B using the third-party model, when the practitioner uses the third-party model for their technical actuarial work to provide output to their intended users, then practitioner B must comply with TAS 100 in respect of their technical actuarial work for their intended users, and the third-party model comes within the scope of Principle 5. Practitioner B must therefore ensure the model used is fit for purpose and subject to sufficient controls and testing. They may exercise judgement on the extent to which reliance may be placed on practitioner A's TAS 100 compliance and evidence, where available, when complying with Principle 5.

2.13.Practitioners may also utilise open-source code. This is code that is designed by a third party to be publicly accessible. Where practitioners use such code for their technical actuarial work, the code falls within the scope of Principle 5 and the responsibility for compliance rests with the practitioner.

Scope - artificial intelligence and machine learning

2.14.The use of Artificial Intelligence (AI), including Machine Learning (ML), continues to grow. We consider models which use AI and ML techniques to fall within the scope of TAS 100 Principle 5 if they meet the model definition and are used in technical actuarial work as set out above.

3. Model understanding

3.1.TAS 100 P5.1 requires practitioners to ensure they understand the models used in their technical actuarial work, including intended uses and limitations. We cover model limitations in detail in Section 5, so cover them only briefly in this section.

3.2.An understanding of a model's intended use is essential to avoid using the model in ways that are inconsistent with its original intent or capabilities without considering the risks of doing so.

3.3.Practitioners may wish to understand the use(s) of the model intended by the model design. This includes understanding:

  • The purpose (e.g., regulatory capital, pricing, valuation, assumption setting, setting funding and investment strategy for scheme journey-planning).
  • The use by country, legal entity, product / liability / asset types, and customers / clients.
  • The environment in which a model performs reliably, for example, economic and market circumstances (including whether the model has sufficient regard to extreme events or outliers per TAS 100 A5.1).

3.4.Practitioners using models in their technical actuarial work may wish to have a good understanding of the operation of the model, including its user controls (to mitigate misuse risk), the methodology underpinning the model, the model's intended use and the model's limitations.

3.5.Practitioners may also wish to understand their responsibilities and those of others in relation to the models. In addition, practitioners may wish to understand the material judgements within the model, the model outputs, and the model's governance framework.

3.6.Beyond this, the level and type of model understanding required may depend on the nature of the technical actuarial work. For example:

  • A practitioner whose technical actuarial work is model development (responsible for designing, developing, testing and documenting a model) may need to understand the detailed specification and the source code.
  • A practitioner whose technical actuarial work is model validation (responsible for validating that a model performs as expected) may similarly need to understand the detailed specification but also the design and effectiveness of the model control framework.

3.7.Gaining an understanding of models where AI / ML techniques are used could be more challenging than for more traditional models, especially where the inner workings cannot be explained in a way that can be easily understood or accessed.

3.8.The mechanism of achieving this understanding may need to vary if a traditional approach is not available. For example, practitioners may wish to perform analysis to help explain how a model has generated a particular result to aid understanding of a model. This can be done by creating data sets which are fed into the model, with the subsequent outputs recorded and analysed to develop an understanding of the process occurring within the model. If a practitioner is unable to achieve an understanding of the model due to the AI / ML techniques used, then they are unable to comply with TAS 100 P5.1 and will wish to reconsider the use of the model in their work and / or their role in carrying out the work.
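
The following Python sketch illustrates one way the probing approach described in paragraph 3.8 might be implemented; the stand-in function, input names and ranges are hypothetical, and in practice the opaque AI / ML model would be called in their place.

```python
# Minimal sketch: probing an opaque model by varying one input at a time.
# The function below is a hypothetical stand-in for an AI / ML model whose
# inner workings cannot be inspected directly; names and ranges are illustrative.

def opaque_model(age: float, sum_assured: float, duration: float) -> float:
    """Placeholder for a black-box prediction (e.g., an expected claim cost)."""
    return 0.002 * sum_assured * (1.04 ** (age - 30)) / max(duration, 1.0)

baseline = {"age": 45.0, "sum_assured": 100_000.0, "duration": 10.0}
probe_ranges = {
    "age": [25.0, 35.0, 45.0, 55.0, 65.0],
    "sum_assured": [50_000.0, 100_000.0, 200_000.0],
    "duration": [1.0, 5.0, 10.0, 20.0],
}

base_output = opaque_model(**baseline)
for name, values in probe_ranges.items():
    for value in values:
        output = opaque_model(**dict(baseline, **{name: value}))
        # Record how the output moves relative to the baseline run; analysing
        # these movements helps build an understanding of the model's behaviour.
        print(f"{name}={value:>12,.1f}  output={output:12.2f}  "
              f"change vs baseline={output - base_output:+12.2f}")
```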

3.9.Where a practitioner uses models indirectly, by virtue of receiving output / information from those models for use in their own models, then the practitioner is the intended user of that model output. In these circumstances it may be important for the practitioner to understand the model governance of those models, the material judgements in the models and their limitations, but an understanding beyond this may not be necessary.

3.10.Practitioners may exercise judgement in deciding the necessary level of understanding to satisfy themselves that the model they are using in their technical actuarial work is fit for purpose.

3.11.A greater understanding may be needed the more significant a model is in terms of its potential impact on the decisions made by the intended users, and / or the higher the degree of model complexity or judgement within the model.

4. Model risk and model governance

4.1.The following sections describe the sources of model risk, and how model governance can mitigate this risk, in order to assist practitioners in determining an appropriate level of model governance for their models.

Model risk

4.2.The TAS 100 glossary defines model risk as:

The risk that models are either incorrectly implemented (with errors) or make use of assumptions that cannot be justified rigorously, or assumptions that do not hold true in a particular context.

4.3.Some examples of common areas of model risk are set out below:

  • Data: inaccurate, incomplete and / or inappropriate.
  • Assumptions: incorrect and / or inappropriate.
  • Methodology: unsuitable model design choices (e.g., features captured / omitted, use of proxies, choice of statistical method, non-compliance).
  • Specification: inaccurate specification, or specification not reflective of the intended methodology.
  • Coding: source code incorrect and not reflective of the intended specification, and / or coding design sub-optimal and detrimental to performance.
  • Misuse: model inputs entered incorrectly and / or model outputs extracted incorrectly.
  • Misapplication: model used in circumstances not intended (e.g., for products, entities / structures, geographies, markets or time horizons not intended, or in economic circumstances not envisaged, or for pension increase caps and collars not designed).
  • People: insufficient skills or knowledge or resources to implement and / or operate the model.

4.4.Additionally, the IT infrastructure may give rise to model risk if it is not stable and / or at risk of becoming obsolete (and unsupported by the IT function or external vendor) and / or not aligned to the IT strategy.

4.5.Model risk may increase with the volume of data inputs, the number of and uncertainty in underlying assumptions, the complexity of the model design, and the materiality of the model. Model risk may also increase when assessed in aggregate as interactions / dependencies between models or reliance on common inputs or methodologies or code may adversely impact several models at the same time.

Model governance

4.6.TAS 100 P5.2 requires practitioners to ensure that the models they use for technical actuarial work have in place an appropriate level of model governance. Model governance is defined in the TAS 100 glossary as:

A set of activities, policies and procedures for identifying, managing and mitigating model risks. Actions to mitigate model risks include clear model ownership and responsibilities, documentation, model validation, a change control process including for example, appropriate checks to ensure the stability of model outputs.

4.7.The term model governance is sometimes used to refer only to model roles, responsibilities, and approval and oversight mechanisms (e.g., committees / forums). For TAS 100, a broader definition applies, including additionally model risk identification and management, and model risk mitigants such as model documentation, model change controls, and model validation.

4.8.TAS 100 P5.2 requires practitioners to ensure there is an appropriate level of model governance in place for their models. This does not mean that practitioners are required to be responsible for managing or overseeing all model governance activities, policies, and procedures. To ensure there is an appropriate level of model governance in place, it could be sufficient for practitioners to have sight of, and be satisfied with, the model governance for their models, including the model governance for models provided by third parties.

4.9.In this section we describe key elements of model governance. However, what is 'appropriate' may have regard to proportionality, and we expect different levels of model governance across different models. The concept of proportionality, as applied to all TAS 100 provisions and regulatory expectations, is covered in 'TAS 100 Guidance – Proportionality'. Proportionality, as applied to Principle 5 in particular, is covered in Section 7 and Appendix 1 of this guidance.

Model risk identification and management

4.10.Risk identification refers to the process for detecting and assessing sources of potential model risk. Paragraph 4.3 of this guidance gives some examples of common areas of model risk, but it is important for practitioners to consider all potential sources. Having a standardised model risk taxonomy (ideally across the entity as defined in TAS 100) may assist in the consistent identification and classification of model risks across all models.

4.11.An assessment of the identified risks involves analysis and measurement. It is preferable for model risks to be assessed consistently across models (again, ideally across the entity and at group level where applicable) and this may be facilitated by setting out possible measurement approaches alongside a model risk taxonomy. For example:

  • How recently the model was reviewed without reliance on previous assessment (also known as baselining the model).
  • The number and magnitude of out-of-model adjustments2 (OOMAs), to address model limitations, by category (e.g., data, assumptions, methodology, misuse, misapplication).
  • The results of backtesting the model by using historical data and comparing the output to past results.
  • The results of sensitivity analysis and scenario testing to reveal the material assumptions and the reliable range for the model.
  • The number of run failures, model outages and the trend in runtimes.
  • The number of model risk incidents recorded (e.g., specification / coding errors identified post implementation, incorrect outputs provided to intended user, policy / standards / regulatory compliance breaches).

4.12.Risk identification processes reveal the exposure to model risk. Model risk management is then about managing the identified model risk exposures. To aid this, it is helpful to define and agree how much model risk exposure is acceptable for the entities or models in the context of the intended user and intended use of the model, perhaps as part of a wider risk management framework.

4.13.A common way of setting boundaries is to establish the model risk appetite and tolerances, and associated triggers. Appetite is frequently expressed as a high-level qualitative statement of the attitude to risk (e.g., 'no appetite' or 'limited appetite' or 'some appetite'). These are then implemented through risk tolerances and triggers, using quantitative measures where possible, aligned to the model risk taxonomy and measurement approaches outlined in paragraphs 4.10 and 4.11 above.
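
The following Python sketch illustrates one way quantitative measurements might be monitored against triggers and tolerances of this kind; the measure names and limits are hypothetical and would in practice be aligned to the entity's model risk taxonomy.

```python
# Minimal sketch: comparing model risk measurements against illustrative
# triggers and tolerances. Measure names and limits are hypothetical and
# would in practice follow the entity's model risk taxonomy.

tolerances = {
    # measure: (trigger level, tolerance level) - breaching the trigger
    # prompts a review; breaching the tolerance requires escalation.
    "ooma_count": (5, 10),
    "incidents_last_quarter": (1, 3),
    "months_since_baseline": (24, 36),
}

measurements = {
    "ooma_count": 7,
    "incidents_last_quarter": 0,
    "months_since_baseline": 40,
}

for measure, (trigger, tolerance) in tolerances.items():
    value = measurements[measure]
    if value >= tolerance:
        status = "outside tolerance - escalate"
    elif value >= trigger:
        status = "trigger breached - review"
    else:
        status = "within appetite"
    print(f"{measure:<25} {value:>4}  {status}")
```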

4.14.The successful identification of model risks and the monitoring of their exposures against risk tolerances and appetite are key elements of model risk management. Practitioners may wish to consider these elements for their models when assessing an 'appropriate level of model governance', allowing for proportionality which is further discussed in Section 7 of this guidance.

Model documentation

4.15.TAS 100 sets out the requirements for documentation of technical actuarial work in general, requiring documentation on: judgements; data; assumptions; model use, limitations and how it is fit for purpose; model governance and controls / testing; and material modelled actions.

4.16.Good documentation is a mitigant for model risk and may include the following:

  • Policies and procedures, which establish minimum requirements and actions in areas such as model development and testing, data quality, risk identification, monitoring and reporting, model change and model validation.
  • Model documentation (e.g., model inventory and / or documentation inventory, model methodology and technical specification, user guide, model success criteria and results).
  • A control framework which sets out the risks in the design and operation of the model(s) and the controls in place to mitigate them (e.g., the risk of user input error, mitigated by a do / review control for all inputs into the model(s)).
  • Management information on the ongoing adequacy of the model(s) (e.g., performance against model success criteria, risk measurements and tolerances / triggers, findings from model validation and remediation status).

4.17.Practitioners will wish to apply judgement when considering the necessary documentation:

  • For a material model used to generate information for published communications, a comprehensive suite of documentation may be appropriate.
  • For a simple model or an immaterial model informing local, internal decisions, reduced documentation may be more suitable. For example, commentary within the model itself summarising the model purpose and use, roles, description, limitations, version history and operating instructions, together with an articulation of the model controls (either within the model itself or documented outside the model).

Model change controls

4.18.TAS 100 defines a change control process as:

A process that: (i) only allows authorised changes to the model; (ii) documents any changes made, testing carried out, and any material impact on the model or its outputs; and (iii) allows any changes to be reversed.

4.19.Practitioners may exercise judgement when considering the adequacy of the model change process, including the appropriate level of authorisation for changes, considering both the materiality of the model and the materiality of the changes.

4.20.For material models, it may be appropriate to have a trigger framework (setting out triggers for review / change and materiality thresholds for change approvals) and regular monitoring and reporting of those triggers. A material change to a material model may require a formal validation of the change together with a formal governance process.

4.21.Where the materiality threshold does not bite, either individually or cumulatively for a change, then it may be acceptable for the model owner (who is accountable for the model) to simply authorise the change subject to being satisfied that relevant controls have been applied (e.g., documentation and review of changes).

4.22.TAS 100 defines materiality in the context of influencing significant or relevant decisions. For model changes, it is helpful to have specific materiality criteria linked to the nature of the model, for example, based on funding level, investment value at risk, capital requirements, solvency, profit metrics, asset and / or liability and / or net asset valuations, or price as relevant. Such criteria are helpful in both determining triggers for change and assessing the materiality of the change in model outputs.
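
The following Python sketch illustrates one way such materiality criteria might be applied to a proposed model change; the metrics and threshold values are hypothetical.

```python
# Minimal sketch: assessing the materiality of a proposed model change
# against illustrative thresholds on relevant metrics. All names and
# figures are hypothetical.

materiality_thresholds = {
    "funding_level_pct_points": 0.5,   # absolute change in funding level
    "capital_requirement_pct": 1.0,    # relative change in capital requirement
    "liability_valuation_pct": 0.25,   # relative change in liability value
}

# Impact of the proposed change, e.g. measured by re-running the model.
change_impact = {
    "funding_level_pct_points": 0.2,
    "capital_requirement_pct": 1.4,
    "liability_valuation_pct": 0.1,
}

breached = [
    metric
    for metric, threshold in materiality_thresholds.items()
    if abs(change_impact[metric]) >= threshold
]

if breached:
    print("Material change - formal validation and approval indicated:", breached)
else:
    print("Below materiality thresholds - model owner authorisation may suffice.")
```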

Validation

4.23.TAS 100 defines model validation as:

The processes and actions verifying that a model is performing as expected and is fit for purpose.

4.24.The aim of validation is to provide an unbiased opinion on the adequacy of a model. An unbiased view of a model is best achieved through independence between the developers and validators. This can take many forms: from use of another organisation to perform the validation to having the validation performed by another team / individual within the organisation in a separate division / unit / reporting line. The greater the separation the greater the independence.

4.25.In larger entities there may typically be 'three lines of defence', with a second line risk function overseeing the first line and a third line internal audit function overseeing the first and second lines. In these circumstances, validation responsibilities often reside with the second line. In smaller entities there may not be the opportunity for a three-lines model and validation may be carried out within the same team.

4.26.The following processes are commonly used to verify that the model is performing as expected:

  • Assessing the design and operational effectiveness of the model controls, including the application of the model governance framework.
  • Assessing the quality of data inputs.
  • Reviewing the completeness and quality of the model documentation.
  • Assessing the model methodology for compliance against relevant regulations.
  • Evaluating the appropriateness of the methodology by comparison with alternative methodologies.
  • Evaluating the appropriateness of the assumptions and methodology by backtesting the model and benchmarking.
  • Assessing the stability and limitations of the model through sensitivity testing and scenario analysis, including extreme event scenarios where appropriate.
  • Evaluating the performance of the model, including model convergence where relevant.
  • Assessing the accuracy of the model by reproducing output in part or in full using an alternative model.

4.27.Validation is an important process for confirming that a model is performing as expected, and for identifying model limitations and interdependencies with other models. In particular, sensitivity testing and scenario analysis assist in understanding a model's reliable range with regards to economic, market and demographic circumstances.
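
As an illustration of one of the validation techniques listed in paragraph 4.26, the following Python sketch backtests a model by comparing its output on historical inputs with observed outcomes; all figures are hypothetical.

```python
# Minimal sketch: backtesting a model by comparing its output on historical
# inputs with observed outcomes. The projected and actual values below are
# hypothetical.

projected = [102.0, 98.5, 110.2, 95.0, 101.3]   # model output on past inputs
actual = [100.0, 99.0, 108.0, 97.5, 103.0]      # observed outcomes

errors = [p - a for p, a in zip(projected, actual)]
mean_error = sum(errors) / len(errors)
mean_abs_error = sum(abs(e) for e in errors) / len(errors)

print(f"Mean error (indication of bias): {mean_error:+.2f}")
print(f"Mean absolute error:             {mean_abs_error:.2f}")

# A persistent sign in the mean error may indicate consistent over- or
# under-estimation, which is also relevant to P5.3 on model bias.
```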

5. Model limitations and bias

Model limitations

5.1.TAS 100 P5.1 and P5.4 require practitioners to understand model limitations and consider the implications of material limitations.

5.2.A model limitation may be regarded as a known issue with a model, as compared with a model risk, which is a potential issue, i.e. a chance of something happening that may have an adverse impact.

5.3.Models used in technical actuarial work cannot typically be designed to fully capture the impact of all elements that may affect an actual outcome. Limitations therefore arise from approximations inherent in the model design. Limitations are also a consequence of assumptions underlying a model that may limit the scope of application to a specific set of circumstances.

5.4.Model design approximations frequently arise as a result of trade-offs between model accuracy, complexity, efficiency and cost. Some examples of common limitations are as follows:

  • Classes of assets or liabilities, or features of classes, may not be captured in a model.
  • Data model points may be used rather than a full dataset, for example for capacity reasons and runtimes.
  • Asset and liability proxy models may be deployed rather than full valuation models.
  • Assumptions may be calibrated to proxy data and / or by expert judgement and / or to data that does not adequately capture future uncertainty.
  • Some risks may not be captured in the modelling.
  • Limitations may arise in stochastic modelling from the choice of statistical approach or scenario generator, or from the number of simulations.

5.5.Limitations may be identified as a consequence of design decisions in the development phase of a model. Validation may further highlight limitations using validation techniques such as backtesting, sensitivity and scenario testing, and comparisons with alternative models or methodologies.

5.6.It may also be the case that limitations emerge as a model is used over time, if the environment or portfolios change, or as errors are identified, or if a model is used for a different purpose than its original intent. It is important that all limitations are recorded, whatever the source.

5.7.Practitioners may wish to quantify a limitation through re-running or estimation, and the impact may then be compared with any triggers and materiality thresholds established as part of any model change process.

5.8.However, it is not always straightforward to assess the materiality of a limitation without considerable effort, for example where the limitation relates to unmodelled features of assets or liabilities. In these cases, a practitioner may exercise judgement as to whether a qualitative assessment can be undertaken based on an understanding of the risk factors underpinning the limitation.

5.9.For limitations assessed as material, when considering the implications as required by TAS 100 P5.4, practitioners may wish to consider an appropriate course of action to mitigate the limitation. For example:

  • Scaling data inputs for incomplete data.
  • Re-fitting proxy models where the fit is inadequate.
  • Re-calibrating assumptions where the calibration data and / or economic or market or demographic circumstances have changed.
  • Applying a different methodology, for example, choice of statistical approach.
  • Building new features into the model.
  • Applying an out-of-model adjustment3 (OOMA) to address limitations.
  • Restricting the use of the model in circumstances where the limitations bite.
  • Abandoning the model.

5.10.The choice of action may have regard to the nature of the limitation. It may also have regard to an aggregate view of the limitations and the associated actions. For example, applying an OOMA may be reasonable to address a limitation considered in isolation, but potentially not if the OOMAs for that model are already significant in number and / or magnitude, in which case model change(s) or restricting the model's use may need to be considered.

5.11.Practitioners may wish to consider whether the assessment of materiality and the determination of an appropriate course of action for a limitation constitute a material judgement. If so, in compliance with TAS 100 P2.3, the judgement should be reviewed to ensure that it remains appropriate over time.

5.12.More generally, practitioners may wish to consider reviewing the materiality assessments periodically to ensure they remain accurate, particularly where the exposure to the limitations changes. For example, a limitation in respect of an unmodelled feature may be assessed as being of low materiality, but the projected or actual exposure may grow (e.g., as a result of sales or a change in investment strategy or economic movements) and modelling the feature may need to be re-considered.

Model bias

5.13.TAS 100 P5.3 requires practitioners to 'identify the extent of any material biases within the models that are used', where bias is defined in TAS 100 as a 'disproportionate weight in favour of or against something'. In accordance with the application statements (TAS 100 A5.2 and A5.3), practitioners are expected to:

  • Consider whether the model leads to consistent overestimation or underestimation.
  • Consider whether the model contains systematic error leading to results that are not representative of the intended design.
  • Improve the model to reduce the impact of material bias.

5.14.Bias is a type of model limitation and may therefore be considered as part of the wider consideration of limitations as outlined above, including adjusting for material bias.

5.15.Some common sources of model bias are as follows:

  • Data – data bias may arise if the methods of sampling data have inherent bias and / or if the data itself reflects historical bias and / or if attributes that may help provide an unbiased prediction of matters being modelled are omitted from the data.
  • Algorithm – the design of the model's algorithm may cause model bias, for example, if an algorithm is written in a way that erroneously gives more weight to some data than others, excludes variables that might otherwise improve unbiased estimation, or systematically rounds imputed values such that rounding error is introduced into the model output.
  • Assumptions – the data used to calibrate model assumptions may be biased per above, or judgement may be applied to the calibration and this judgement may be biased, for example, as a result of cognitive bias.
  • Outputs – bias may arise as a consequence of the way in which model output is designed, perhaps as a consequence of the designer downplaying or overemphasising results seen as less or more desirable respectively.

5.16.Practitioners may typically be familiar with how to evaluate the bias of methods and models through comparisons of estimated values versus actual values and statistical metrics (e.g., Chi-squared test). Practitioners may also wish to consider the possibility of bias across each element of their models, for example, by comparing model outputs using different data sets or methods or assumptions. This may include consideration of the ethical and reputational impact of generating biased output.
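
The following Python sketch illustrates the kind of actual-versus-expected comparison referred to above, using a Chi-squared statistic as one possible metric; the claim counts are hypothetical.

```python
# Minimal sketch: comparing actual outcomes with model-expected outcomes
# using a Chi-squared statistic. The figures are hypothetical claim counts
# by age band; in practice a statistical package would supply the p-value.

expected = [120.0, 250.0, 310.0, 180.0, 60.0]   # model-expected claims
actual = [135.0, 240.0, 330.0, 160.0, 75.0]     # observed claims

chi_squared = sum((a - e) ** 2 / e for a, e in zip(actual, expected))
degrees_of_freedom = len(expected) - 1

print(f"Chi-squared statistic: {chi_squared:.2f} on {degrees_of_freedom} degrees of freedom")

# A statistic well above the critical value for the chosen significance level
# would suggest the model's estimates deviate systematically from experience,
# prompting further investigation of possible bias.
```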

5.17.As set out in TAS 100 A5.3, where material biases are identified, practitioners should seek to address them. Data bias may be mitigated by eliminating problematic data or removing / adding specific components of data. Potentially biased judgements made in the algorithmic design or assumption calibration may be mitigated through seeking a broader range of views. An adjustment may also be made to model outputs to remove bias.

5.18.AI / ML models could be complex and can be self-learning systems (they recognise patterns in the data, learn from it and become better at connecting inputs to outputs over time). As with any models, such models are at risk of bias, and the use of potentially complex and opaque models makes the risk of algorithm and data bias especially significant.

6. Modelled actions

6.1.TAS 100 P5.5 requires that 'where key stakeholders such as management, sponsors, trustees and regulators require the model to incorporate effects of material actions, practitioners must consider the implications of these actions.'

6.2.Examples of such actions include:

  • Discretionary pension increases.
  • With-profits management actions.
  • Future charges / fees.
  • Terms for member / customer options.
  • Cost management.
  • Discretionary pay.
  • Future employment / payroll levels.
  • Dividends.
  • Capital-raising.
  • Re-structuring and risk mitigation activities.
  • Investment strategy (e.g., modelling a scheme journey plan allowing for rebalancing of the investment portfolio following a period of under or outperformance).

6.3.To incorporate the effects of actions, practitioners may wish to understand:

  • The nature of the actions being modelled and the timeframe over which they are assumed to be implemented.
  • The context and why the actions are being incorporated; in particular, whether stakeholders intend to implement the actions and under what circumstances these actions would be implemented (e.g., normal or worst / best case).
  • How and where the model outputs are being used.

6.4.Practitioners may also wish to be satisfied that consideration has been given or is being given to:

  • How realistic / viable the actions are (e.g., if there is an action to manage costs through outsourcing, whether there are third parties able to provide the administration and at what cost).
  • Any legal implications arising from the actions (e.g., if modelling actions to change customer terms in specified circumstances).
  • Whether the actions are consistent with any applicable regulations.
  • Whether the actions are in line with customer / member contract terms and conditions or Scheme rules.
  • Whether the actions are aligned with the entity's internal policies and standards.
  • Any new risks arising or any existing risks becoming more / less material as a consequence of the actions.
  • The procedures for implementing the actions.

6.5.Where the regulatory rules are prescriptive in the area of assumptions about future management actions in regulatory valuations, it may be appropriate for practitioners to familiarise themselves with those rules.

6.6.TAS 100 P5.5 requires practitioners to consider the implications of material actions. The considerations above should be commensurate with the materiality, and the implications of material modelled actions are expected to be described in communications to intended users as set out in A7.6 e) of TAS 100.

7. Proportionality as applied to Principle 5

7.1.As set out in TAS 100 paragraph 1.5, practitioners are encouraged to have regard to the guidance on proportionality to inform how they will comply with TAS 100. This includes TAS 100 Principle 5.

7.2.The 'TAS 100 Guidance – Proportionality' states that 'the requirements of the TAS 100 should be met in a way that is proportionate to the nature, scale and complexity of the decision or assignment to which the technical actuarial work relates and the benefit that the intended user would be expected to obtain from the work'.

7.3.Practitioners may wish to consider a proportionate approach to complying with Principle 5 based on a risk assessment of the model. Such an approach may build on the risk identification processes described in Section 4 of this guidance, considering the model risks, the materiality of the model and the extent / nature of its use.

7.4.It is good practice for entities to establish an entity-wide approach to categorising models by risk rating and to periodically assess models according to this categorisation. This may then be used to inform the application of TAS 100 Principle 5. For example, a reasonable outcome from this approach may be a risk-based sliding scale of model governance. We provide an illustration of this as an example in Appendix 1.
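
The following Python sketch illustrates one way such an entity-wide categorisation might be expressed; the scoring scale, rating rule and mapping to governance levels are purely illustrative.

```python
# Minimal sketch: deriving a model risk rating from illustrative scores
# (1 = low, 3 = high) to inform a proportionate level of model governance.
# The scoring scale and rating rule are hypothetical.

def risk_rating(materiality: int, complexity: int, extent_of_use: int) -> str:
    """Map simple 1-3 scores to an overall rating; the rule is illustrative."""
    total = materiality + complexity + extent_of_use
    if materiality == 3 or total >= 7:
        return "high"    # e.g., fuller governance per the right-hand column of Appendix 1
    if total >= 5:
        return "medium"
    return "low"         # e.g., lighter-touch governance per the left-hand column

models = {
    "funding valuation model": (3, 2, 2),
    "expense allocation helper": (1, 1, 1),
}

for name, scores in models.items():
    print(f"{name}: {risk_rating(*scores)} risk")
```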

8. Appendix 1: Risk-based model governance illustration

|  | Very low risk model | Very high risk model |
| --- | --- | --- |
| Model risk framework | Entity framework – identification and reporting (taxonomy, measurement basis, appetite, tolerances, triggers) | Entity framework – identification and reporting (taxonomy, measurement basis, appetite, tolerances, triggers) |
| Policies | Entity policies – model development / testing, data quality, model change, validation | Entity policies – model development / testing, data quality, model change, validation |
| Inventory | Model inventory – model reference, owner, purpose, risk assessment | Model inventory – model reference, owner, purpose, risk assessment |
| Documentation | Model purpose / use / roles / version history / limitations; description of methodology; instructions | Model purpose / use / roles / version history / limitations; methodology documentation; technical / functional specification; user guide |
| Defined model success criteria | No | Yes, aligned with nature of model (e.g., could be in relation to model fit and / or model risk measurements) |
| Controls | Checklist of actions with doer / reviewer | Documented control framework (risks / controls); evidence of completion of controls |
| Change process | Model owner authorisation subject to the changes being recorded and tested | Trigger framework; material changes validated; formal approval of changes and validation |
| Validation | Review by model owner | Maximum independence (e.g., through 2nd line); full suite of validation tests; results reported to senior management |
| Monitoring | Model risk incidents only | Yes – model success criteria, model risk measurements, model change triggers |

9. Appendix 2: Scenarios

Scope – indirect models

Example 1

9.1.A practitioner is setting expense assumptions to recommend to the senior management of an insurance company. The activity involves projecting costs, allowing for policyholder demographics and economic assumptions, and exercising judgment in the demographic and economic assumptions as well as in other areas (e.g., the allowance for new business).

9.2.The activity of expense assumption-setting is technical actuarial work, and the projection model is used directly in the work. The practitioner must therefore comply with TAS 100, and the projection model is within the scope of Principle 5.

9.3.A key input to the projection model is the output from a cost allocation model, which allocates all of the entity's costs by type (e.g., initial, renewal, fixed, variable, recurring / non-recurring) and line of business. The practitioner is the intended user of the output.

9.4.The cost allocation model is used indirectly by the practitioner in their technical actuarial work through use of the model output. The practitioner is not therefore responsible for Principle 5 compliance of the cost allocation model (unless the practitioner is also responsible for the cost allocation activity, and it constitutes technical actuarial work). The practitioner does, however, have to comply with Principle 3 Data of TAS 100 in respect of the allocated cost inputs.

Example 2

9.5.A practitioner (A) is performing a funding valuation for a pension scheme. This involves using a valuation model to project member benefits allowing for demographic and other assumptions.

9.6.The activity of performing a funding valuation is technical actuarial work, and the valuation model is used directly in the work. Practitioner A must therefore comply with TAS 100, and the valuation model is within the scope of Principle 5.

9.7.A key input to the valuation model is the mortality assumption. This is provided by another practitioner (B) within the same entity who has used a model to analyse the scheme mortality experience and fitted it to (a variation of) a CMI model.

9.8.Practitioner A is therefore the intended user of the mortality assumption produced by practitioner B. Practitioner A's TAS 100 Principle 5 compliance responsibility is in respect of the funding valuation model only, not the mortality model for which practitioner B has TAS 100 Principle 5 compliance responsibility. Practitioner A does, however, have to comply with Principle 4 Assumptions in respect of the mortality assumption inputs.

Scope – third-party models

Example 3

9.9.A practitioner develops an Economic Scenario Generator (ESG) for a client for use in modelling guarantees. The contractual arrangement is such that the client is permitted to place reliance on the model, but the practitioner has not been engaged to provide advice or assurance in relation to the client's subsequent use of the model.

9.10.The practitioner's work meets the technical actuarial work definition and the actuarial information provided to the client is the model (and associated documentation). As such, the practitioner is obligated to comply with TAS 100 in respect of the model, including Principle 5, and should be prepared to provide evidence of compliance to the client on request. The subsequent use of the model and its output by the client is not, however, within the scope of the practitioner's technical actuarial work.

9.11.The client runs the ESG model to generate scenarios for a pricing model to cost the guarantees for a potential new product in order to make a recommendation to the Pricing Committee.

9.12.The client's pricing work meets the technical actuarial work definition. The client therefore has a separate TAS 100 obligation to the intended users, the Pricing Committee. The client is using the ESG model in this work and so the ESG model comes within the scope of Principle 5. The client may exercise judgement on the extent to which reliance can be placed on the TAS 100 compliance by the practitioner providing the ESG model.

Example 4

9.13.A firm providing actuarial consulting services to defined benefit pension schemes develops a tool for the use of trustees, corporate sponsors and investment managers of UK schemes. The output from the tool is intended to facilitate discussions between the trustees, sponsors and investment managers and support decisions on asset allocations and contribution levels.

9.14.The tool allows users (typically with no actuarial expertise) to project the scheme funding position under various scenarios, by varying key inputs such as asset and liability data, asset allocations, contribution levels, asset returns, inflation and liability discount rates. The tool is intended to be used autonomously without ongoing support from the consulting firm and the setting of the inputs is at the discretion of the users of the tool and not the consulting firm.

9.15.The development of the tool constitutes technical actuarial work and the trustees, sponsors and investment managers who will purchase it are the intended users as it is their decisions that the tool is aiming to assist. The practitioner responsible for the development of the tool at the consulting firm is therefore obligated to comply with TAS 100, including Principle 5 in respect of the tool.

9.16.Given the direct use of the model by a wide group of users, the practitioner may wish to consider, in particular, the documentation and communication of the intended uses of the model and any material limitations (TAS 100 P6.1 and A7.6).

9.17.The practitioner may also wish to pay close consideration to the communication of the scope and purpose of their technical actuarial work (TAS 100 P7.4), notably that the setting of inputs by the intended users is outside of their scope but that the output from the tool, given the inputs, is within scope and intended users can place reliance on the tool and its outputs for any valid set of input assumptions.

Model risk and model governance

Example 5

9.18.A practitioner at an insurance company has worked with a third-party consultancy firm and internal stakeholders to develop a stress testing model. The model will be used by the company to analyse the impact on solvency and profit of a range of economic, expense and demographic stresses.

9.19.The practitioner and the company's Chief Finance Officer (CFO) have agreed that the practitioner will be the model owner. The practitioner's team will operate the model. The output from the model will be used in actuarial information provided by the practitioner for business planning, hedging, risk management, and reporting. The development project is sponsored by the CFO, who has executive responsibility for risks arising from models owned within the Finance division.

9.20.The practitioner considers the model will be used to carry out technical actuarial work by virtue of the techniques used in the calculations, the judgement exercised and the actuarial information which will be communicated. As such, the practitioner will be required to comply with Principle 5 of TAS 100 when using the model in the work.

9.21.The practitioner is considering how to ensure that the model has in place an appropriate level of model governance as required by TAS 100 P5.2. To assist in determining what is appropriate, the practitioner undertakes a model risk identification exercise on the model. The practitioner concludes that the model risk exposure is high. This is due to the large number of inputs required, the complexity of its design, its wide usage in important areas and the need for validation by use including statutory entity considerations within the insurance group of companies. If the model is inaccurate or misused, it will result in unexpected solvency / profit movements and / or reputational damage. The practitioner decides that strong model governance is necessary.

9.22.The practitioner assesses the quality of the model documentation to be very good but considers there to be gaps in other areas of model governance such as a documented change and authorisations process and independent validation of the model.

9.23.The practitioner considers that this is a matter for the CFO's attention, both as sponsor of the model development and the executive responsible for risks arising from models owned within the Finance division. The practitioner therefore makes a recommendation to the CFO, setting out the results of the model risk assessment, the current gaps in model governance and the actions needed to close the gaps in order to facilitate compliance with TAS 100 P5.2 when the model is used in future technical actuarial work.

10. Appendix 3: Application statements

A5.1 In ensuring models are appropriate for their intended use, practitioners should consider whether the model has sufficient regard to extreme events or outliers.

A5.2 In identifying whether models include any material bias, the practitioner should consider whether:

  1. The model leads to consistent overestimation or underestimation.
  2. The model contains systematic error, leading to results that are not representative of the aspect of the world that it is designed to model.

A5.3 If material biases are identified, the practitioner should seek to improve the model, by adjusting it, if appropriate, to reduce the impact of this bias. Where model bias gives rise to material limitations in actuarial information, the practitioner should assess the implications.

Financial Reporting Council, 8th Floor, 125 London Wall, London EC2Y 5AS. +44 (0)20 7492 2300

www.frc.org.uk

Follow us on Twitter @FRCnews or LinkedIn


  1. Work performed for the intended user where the use of principles and/or techniques of actuarial science is central to the work and which involves the exercise of judgement, or which the intended user could reasonably regard as technical actuarial work by virtue of the manner of its communication. 

  2. An out-of-model adjustment is an adjustment which is made to the output of the model.

  3. An adjustment which is made to the output of the model.
