AI audit washing: Why algorithmic audits need more clarity to bring AI accountability

Auditing could be a robust means of holding AI systems to account, but much of today's AI auditing looks more like “audit washing”
 
Mousume Roy
APAC Reporter, HCLTech
6 min read

According to a recent report from the German Marshall Fund (GMF), inadequate and ill-defined algorithmic auditing processes are being used to mask problematic or illegal practices with artificial intelligence, a practice known as “audit washing”. The report said that while algorithmic audits can help correct for the opacity of AI systems, poorly designed or executed audits are at best meaningless and at worst can deflect attention from, or even excuse, the harms they are supposed to mitigate.

Published under the GMF think-tank’s Digital Innovation and Democracy Initiative, the report notes that many of the tech industry’s current auditing practices provide false assurance: companies either conduct their own self-assessments or, when outside checks do occur, are assessed against their own goals rather than against third-party standards.

The explosion of ethical frameworks for artificial intelligence has brought little change in how the technology is developed and deployed, leaving experts, civil society and governments questioning the tech industry’s commitment to making it a positive social force.

Commenting on the report, Phil Hermsen, Solutions Director, Data Science & AI, at HCLTech, says: “To develop trustworthy AI systems, policies, governance, traceability, algorithms and security protocols are needed, along with ethics and human rights. Organizations should be ready to build algorithms and attributes open to inspection and establish processes for handling issues and inconsistencies if they arise—and a foolproof audit helps achieve that.”

The rise of the algorithmic auditing industry

Artificial intelligence is now central to today’s most critical challenges, such as healthcare, climate disruption and social inequality. That significance requires stakeholders to exercise greater regulation and governance over AI solutions and to hold AI systems accountable for their potential harms, including bias, discriminatory impact, opacity, error, insecurity and privacy violations.

Where social media platforms and AI systems are concerned, the algorithmic audit remains a vague and ill-defined practice. There is a high risk that inadequate audits will obscure problems with algorithmic systems and create a permission structure around poorly designed or implemented AI.

A poorly executed audit is meaningless and can even excuse the very harms it claims to mitigate. Without clear standards, this “audit washing” provides false assurance of compliance with norms and laws. As with greenwashing, the audited entity can claim credit without doing the work.
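By contrast, a well-specified audit pins down measurable checks in advance. Purely for illustration (the GMF report does not prescribe any particular metric), the sketch below computes a disparate impact ratio, one common fairness check, over a hypothetical model’s decisions; the 0.8 threshold echoes the “four-fifths rule” used in US employment-discrimination practice.

```python
from collections import defaultdict

def disparate_impact_ratio(predictions, groups, positive_label=1):
    """Ratio of the lowest to the highest positive-outcome rate across
    groups; values below ~0.8 are commonly flagged for review."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == positive_label)
        counts[group][1] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical loan-approval decisions for two groups
preds  = [1, 0, 1, 1, 1,  1, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "A",  "B", "B", "B", "B", "B"]
ratio, rates = disparate_impact_ratio(preds, groups)
print(f"approval rates: {rates}")  # A: 0.80, B: 0.60
print("four-fifths check:", "PASS" if ratio >= 0.8 else "FAIL")  # FAIL (ratio 0.75)
```

The point is not the specific metric but that the check, the data and the threshold are fixed before the audit runs, so the result cannot be reframed after the fact.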

In response to these risks, calls for audits to expose and mitigate the harms of algorithmic decision systems are increasing, and audit provisions are coming into force. The EU’s Digital Services Act sets an unprecedented new standard for the accountability of online platforms regarding illegal and harmful content, aiming to better protect internet users and their fundamental rights.

Research organizations working on technology accountability have called for ethics and/or human rights auditing of algorithms, and an artificial intelligence (AI) audit industry is developing rapidly, signified by organizations such as KPMG and Deloitte marketing their services.

Making algorithmic audits a reliable AI accountability mechanism

The report assesses the effectiveness of various auditing regimes and proposes guidelines for creating trustworthy auditing systems. To make algorithmic audits a reliable AI accountability mechanism, it identifies four basic questions that need answering (sketched as a simple record schema after the list):

  1. Who: Includes the person or organization conducting the audit, with clearly defined qualifications, conditions for data access, and guardrails for internal audits.
  2. What: Includes the type and scope of audit, including its position within a larger sociotechnical system.
  3. Why: Includes audit objectives, whether narrow legal standards or broader ethical goals, essential for audit comparison.
  4. How: Includes a clear articulation of audit standards, an important baseline for the development of audit certification mechanisms and to guard against audit-washing.
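As a purely illustrative sketch (the field names below are hypothetical, not taken from the report), these four questions could be captured in a machine-readable record that travels with each audit, making audits comparable and certifiable:

```python
from dataclasses import dataclass, field

@dataclass
class AuditRecord:
    """Hypothetical schema capturing the report's four questions."""
    # Who: auditor identity, qualifications, and data-access terms
    auditor: str
    auditor_qualifications: list[str]
    data_access_terms: str
    internal: bool                 # internal audits need extra guardrails
    # What: type and scope of the audit
    audit_type: str                # e.g. "bias", "security", "compliance"
    system_under_audit: str
    sociotechnical_context: str
    # Why: the objective, whether a legal standard or an ethical goal
    objective: str
    # How: the named standard audited against, not self-set goals
    standard: str
    findings: list[str] = field(default_factory=list)

record = AuditRecord(
    auditor="Example Assurance Ltd (hypothetical)",
    auditor_qualifications=["ML evaluation", "data protection law"],
    data_access_terms="read-only access to model outputs and training metadata",
    internal=False,
    audit_type="bias",
    system_under_audit="loan-approval model v2",
    sociotechnical_context="consumer credit decisions",
    objective="conformity with a named fairness standard",
    standard="hypothetical third-party fairness standard v1",
)
```

Making the “how” field a named third-party standard, rather than the audited company’s own goals, is precisely the guardrail against audit washing that the report calls for.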

Stanford University’s 2022 AI Audit Challenge lists the benefits of AI auditing across verification, performance and governance: audits allow public officials or journalists to verify the statements companies make about the efficacy of their algorithms, thereby reducing the risk of fraud and misrepresentation.
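As a minimal sketch of that verification idea (all names and figures below are hypothetical), an auditor could re-measure a vendor’s claimed accuracy on an independently drawn sample rather than taking the published number on trust:

```python
def verify_accuracy_claim(predict, samples, labels, claimed, tolerance=0.02):
    """Re-measure a claimed accuracy figure on an independent sample.
    Flags the claim if measured accuracy falls more than `tolerance`
    below the published number."""
    correct = sum(predict(x) == y for x, y in zip(samples, labels))
    measured = correct / len(labels)
    return measured, measured < claimed - tolerance

# Hypothetical audit: the vendor publishes 95% accuracy.
samples = list(range(10))
labels = [x % 2 for x in samples]   # stand-in ground truth

def vendor_model(x):                # stand-in for the audited system
    return 0 if x < 8 else x % 2

measured, flagged = verify_accuracy_claim(vendor_model, samples, labels, claimed=0.95)
print(f"measured accuracy: {measured:.0%}, claim flagged: {flagged}")
# -> measured accuracy: 60%, claim flagged: True
```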

Auditing also improves competition on the quality and accuracy of AI systems, and it could allow governments to set high-level objectives without being overly prescriptive about the means of achieving them.


In its report Rethinking data and rebalancing digital power, the Ada Lovelace Institute argued for greater public participation in the scrutiny of data and algorithms, which could help overcome some of these issues.

“Panels or juries of citizens could be coordinated by specialized civil society organizations to provide input on the audit and assessment of datasets and algorithms that have significant societal impacts and effects,” it said, adding that “participatory co-design or deliberative assemblies” could also help bake public interest considerations into the design process.
