
Annual evaluation processes ISQM (UK) 1

The FRC does not accept any liability to any party for any loss, damage or costs howsoever arising, whether directly or indirectly, whether in contract, tort or otherwise from any action or decision taken (or not taken) as a result of any person relying on or otherwise using this document or arising from any omission from it.

© The Financial Reporting Council Limited 2025. Financial Reporting Council, 13th Floor, 1 Harbour Exchange Square, London E14 9GE.

1. Introduction

Why is the annual evaluation process important for a firm's System of Quality Management (“SoQM”)?

Performing a robust evaluation drives firms to plan and undertake monitoring and to periodically identify, assess and remediate gaps in their SoQMs, which supports and enables audit quality. It also requires the individual(s) assigned ultimate responsibility to exercise their oversight of, and responsibility for, the SoQM. The outcomes of the evaluation also provide an important input into the appraisals of those assigned responsibilities within the SoQM, supporting their accountability.

What does the standard say about the annual evaluation process?

ISQM (UK) 1 requires those with ultimate responsibility to undertake an evaluation of the SoQM at least annually. They are required to conclude, as at a point in time and on behalf of the firm, whether the SoQM:

  • provides the firm with reasonable assurance that its quality objectives are being achieved; or
  • provides the firm with reasonable assurance that its quality objectives are being achieved, except for matters related to deficiencies with a severe but not pervasive effect on the system's design, implementation, and operation; or
  • does not provide the firm with reasonable assurance that its quality objectives are being achieved.

The firm is also required to prepare documentation that includes the basis for this conclusion. If either of the latter two conclusions is reached, the firm is required to take prompt and appropriate action and to communicate the conclusion, as appropriate, within the firm and to external parties. The firm is also required to undertake periodic performance evaluations of those assigned ultimate and operational responsibilities within the SoQM, taking into account the evaluation conclusion reached. The expected process for the evaluation is:

This diagram illustrates a three-step process for evaluation:

  1. Identifying findings (from monitoring processes)

    • From results of monitoring over responses to risks
    • From monitoring other sources of information
      → (arrow points to step 2)
  2. Identifying deficiencies from findings

    • Assess the significance of each finding and identify themes
    • Aggregate per risk/objective
      → (arrow points to step 3)
  3. Assessing deficiencies

    • Assess severity/pervasiveness individually and in aggregate
    • Assess extent/effectiveness of actions to remediate/mitigate

Below these steps, a box indicates: Phased review and involvement by those with operational and ultimate responsibility. Robust evidence of the procedures and conclusions.

What does the standard say about the annual evaluation process (continued)?

The conclusion for the evaluation can then be determined (using the framework in the IAASB first-time implementation guidance):

This diagram presents a flowchart for determining the evaluation conclusion based on deficiencies:

Start: "Are there any deficiencies?"

  • If "None severe":

    • Result: "Reasonable assurance objectives are achieved"
  • If "Severe but not pervasive":

    • Branch 1: "Effect corrected and effective remediating actions" → "Reasonable assurance objectives are achieved"
    • Branch 2: "Effect not corrected or effective remediating actions not completed" → "Reasonable assurance except for matters that are severe but not pervasive"
  • If "Severe and pervasive":

    • Branch 1: "Effect corrected and effective remediating actions" → "Reasonable assurance objectives are achieved"
    • Branch 2: "Effect not corrected or effective remediating actions not completed" → "SoQM does not provide reasonable assurance objectives are achieved"

Scope of this piece

This piece will share some examples of good practice and common pitfalls seen in our supervision of firms during the stages of identifying findings, identifying deficiencies from findings, assessing deficiencies, and structuring the review and involvement of leadership, including how these stages are robustly documented. The examples of good practice demonstrate ways that firms can approach the requirements of the standard, but not all will be appropriate for all firms, and we would not expect to see all these practices at any one firm. Firms' processes and evidencing for their annual evaluation will vary based on their size and complexity. This variation is expected to be particularly visible with regards to:

  • The extent and formality of the evidencing of the process, as it is likely firms with smaller and/or less complex audit portfolios will need fewer individuals and fewer formal processes to reach an annual evaluation conclusion.
  • The formality and structure of leadership's review, as the leadership of smaller firms will have more ongoing involvement in the performance and review of monitoring activities and assessment of deficiencies. Therefore, they are more embedded in the operation of the annual evaluation processes and so would not need the reporting and oversight process to be as formally structured to support their annual evaluation conclusion.

2. Good practice and common pitfalls

Identifying findings

Good practice:

  • Considering a wide range of inputs, such as audit inspection findings, RCA, ethics breaches, trends in prior period adjustments, internal audit reports, staff feedback and speak-up, regulatory findings, claims, complaints, training data, compliance and CPD records.
  • Assessing in-progress quality initiatives and planned changes to quality risks and responses for potential gaps as at the annual evaluation date.
  • Where large teams are used to monitor responses, developing guidance to support consistent and robust assessments of exceptions, to conclude where responses are ineffective.
  • Assessing all historic findings and deficiencies for potential recurrence.
  • Critically comparing the timeliness of varied information to identify findings as at the evaluation date.

Common pitfalls:

  • Considering a narrow range of information and excluding sources despite indicators of findings.
  • Lack of evidence of how information was reviewed and why potential indicators did not give rise to findings, including reviewing information solely through internal discussions without retaining robust minutes.
  • Ignoring indicators of findings because related responses are effective, without considering why these might contradict each other.
  • Lack of clarity on how sources were weighed up against each other to conclude if findings are present.

Identifying deficiencies from findings

Good practice:

  • Where there are a range of findings arising from separate monitoring processes, preparing a holistic assessment paper, at the level of components, objectives or risks, to consider the aggregate impact.
  • Identifying the causal factors for a wide range of findings and categorising findings to identify themes, e.g. as relating to unclear understanding of roles or failure to capture formal evidence.
  • Identifying more significant findings and assessing these in greater depth to challenge the completeness of deficiencies.
  • Identifying which quality risks are linked to more than one finding to assess the aggregate impact on each risk.

Common pitfalls:

  • Not identifying why ethics breaches or audit inspection findings arose, to identify where in the system deficiencies exist.
  • Lack of assessment of whether aggregations of findings imply concerns on tone from the top or sufficiency of investment.
  • Overlooking the potential significance of similar findings arising across different areas.
  • Lack of assessment of aggregate impact from multiple findings in an area.
  • Assuming mitigating actions are effective without evidence.
  • Lack of consideration of where findings in one response may affect the operation of other responses or mitigating actions.

Assessing deficiencies

Good practice:

  • Developing a framework of set questions to assess severity and pervasiveness by considering a range of factors on a sliding scale, considering the guidance in A163 of the standard.
  • Setting indicative thresholds for findings or deficiencies in areas such as prevalence of root causes, file review results or compliance testing.
  • Monitoring the effectiveness of remediating actions to identify fully, partially and un-remediated matters.
  • Additional monitoring over mitigating or remediated responses to assess if a deficiency still exists as at the evaluation date.
  • Performing additional testing/analysis to determine the scope and impact of deficiencies.
  • Where monitoring teams are larger, involving a moderation panel to drive consistent review and challenge of judgements.

Common pitfalls:

  • Not identifying the causal factors for all identified deficiencies when assessing severity and pervasiveness.
  • Lack of evidence to support that deficiencies have been remediated or that mitigating responses are effective and able to sufficiently address relevant risks.
  • Not considering if recurrence increases severity or pervasiveness.
  • Where a deficiency is identified, not considering if it indicates that other, similar issues may have arisen, and assuming ethics or audit quality issues are isolated without sufficient analysis.
  • Not considering the range of impacted areas when determining aggregate pervasiveness of deficiencies.
  • Assuming a deficiency is only pervasive if it affects all, or almost all, audits, or that it affects all areas of the SoQM equally.

Review and involvement

Good practice:

  • Iterative reporting to leadership during the monitoring and evaluation process to enable ongoing engagement.
  • Reporting all findings to support assessing if deficiencies are complete.
  • Minuting meetings with component leads, to capture key judgements and challenges, and with leadership, to capture their review of reporting.
  • Capturing the key challenges raised by leadership and any additional information and analysis they requested.
  • Evidencing leadership's review of each deficiency and close call.

Common pitfalls:

  • Leadership only receiving reporting when the annual evaluation conclusion is due to be reached.
  • Brief and high-level reporting so that it is unclear if leadership were aware of all findings identified.
  • Failure to evidence the basis for leadership's conclusion, including their review and challenge of the results of monitoring.

These examples will be more relevant to larger firms where leadership is less involved in the underlying annual evaluation processes.


