
EY and ACCA urge stronger AI checks to bridge trust gaps

August 5, 2025

The policy paper, AI Assessments: Enhancing Confidence in AI, jointly published by EY and ACCA, argues that effective evaluations of AI systems can help organisations deploy technology that is safe, reliable and compliant.

The report comes as global AI adoption accelerates and incidents linked to AI systems continue to rise — OECD data shows adverse AI incidents grew almost twentyfold between November 2022 and January 2025.


Three types of AI assessments

The guidance categorises AI evaluations into three types:

Governance assessments, which examine whether internal structures and policies adequately manage AI risks and reliability.

Conformity assessments, which determine compliance with laws, regulations and standards.

Performance assessments, which measure AI outputs against predefined quality and performance criteria.

EY and ACCA stress that these assessments, whether voluntary or mandatory, are essential to corporate governance, risk management and public confidence.

However, current frameworks vary widely by jurisdiction, methodology and scope, creating inconsistency and gaps in assurance.

Challenges and recommendations

The report identifies challenges undermining current AI assessments, including unclear objectives, ambiguous terminology, a lack of qualified providers and the rapid pace of AI development outstripping technical standards.

To address these, the paper calls for clearer definitions, standardised methodologies, and greater professional accountability.

Marie-Laure Delarue, EY Global Vice-Chair, Assurance, said:

“AI has been advancing faster than many of us could have imagined, and it now faces an inflection point, presenting incredible opportunities as well as complexities and risks. Rigorous assessments are an important tool to help build confidence in the technology, and confidence is the key to unlocking AI’s full potential as a driver of growth and prosperity.”

Helen Brand, ACCA Chief Executive, said:

“As AI scales across the economy, the ability to trust the technology is vital for the public interest. This is an area where we need to bridge skills gaps and build trust in the AI ecosystem as part of driving sustainable business.”

Global policy momentum

The paper notes that policymakers in nearly 70 countries have proposed more than 1,000 AI policy initiatives, with 204 enacted into law, according to Stanford University.

Frameworks such as the EU AI Act, the UK AI Assurance Toolkit, and the US AI Action Plan are driving regulatory activity, but standards and requirements remain fragmented.

EY and ACCA recommend that policymakers clearly define the purpose and criteria of assessments, align standards internationally to reduce costs, and build capacity to provide high-quality evaluations.

For business leaders, the report urges consideration of voluntary assessments to enhance governance and stakeholder confidence even in the absence of regulatory mandates.

[Accountancy Age]
