Karachi: Businesses that effectively assess their artificial intelligence systems can better harness the technology’s potential to drive innovation, productivity, and growth, according to a policy paper released by the Association of Chartered Certified Accountants (ACCA) and global professional services firm EY.
The report, titled “AI Assessments: Enhancing Confidence in AI,” delves into the emerging field of AI assessments, which includes a wide array of evaluations ranging from technical, governance, and compliance assessments to traditional assurance and audits.
The paper emphasizes the role these assessments play in determining whether AI systems are well governed, comply with relevant laws and regulations, and meet the expectations of business leaders and other stakeholders. It argues that effective AI assessments can lead to the deployment of AI systems that are more reliable and trustworthy.
The report also identifies current challenges associated with AI assessments and highlights key elements needed to make these evaluations robust and meaningful. As the adoption and deployment of AI accelerate globally, businesses, investors, insurers, and policymakers are increasingly considering AI assessments to build and enhance trust in the technology.
The publication comes at a time when the policy landscape for AI assessment is evolving. Notably, the Trump administration has released an AI Action Plan emphasizing the importance of rigorous evaluations in defining and measuring AI reliability and performance in regulated industries.
The report categorizes AI assessments into three types: governance assessments, which evaluate internal governance structures; conformity assessments, which verify compliance with laws and standards; and performance assessments, which measure systems against predefined metrics.
To address challenges in AI assessment frameworks, the report recommends setting well-specified objectives, following clearly defined methodologies, and involving competent, objective, and professionally accountable assessment providers.
The paper offers actionable recommendations for business leaders and policymakers to enhance AI assessments. Business leaders are encouraged to consider voluntary assessments to improve governance and risk management, while policymakers should define the purpose and methodology of assessments, support international standards, and build market capacity for high-quality evaluations.
Helen Brand, chief executive of ACCA, emphasized the importance of public trust in AI as the technology scales across the economy. Marie-Laure Delarue, EY Global Vice-Chair of Assurance, noted that rigorous assessments are crucial for ensuring safe and effective AI adoption, unlocking AI’s potential as a growth driver.