White Paper / August 2022

AI’s Redress Problem: Recommendations to Improve Consumer Protection from Artificial Intelligence


For all its potential benefits, artificial intelligence (AI) can cause an array of harms, from racial discrimination to injury or even death. Yet existing legal frameworks do not provide sufficient means for affected individuals to seek redress when they have been harmed by AI-based technologies, which often are not transparent or explainable.

A new CLTC White Paper explores this issue and provides recommendations for policymakers, corporations, and civil society organizations to create pathways for affected individuals or groups to seek redress when they are adversely affected by AI. The paper, AI’s Redress Problem: Recommendations to Improve Consumer Protection from Artificial Intelligence, was authored by Ifejesu Ogunleye, a graduate of the Master of Development Practice program at UC Berkeley, who conducted the research as a graduate researcher at the Center for Long-Term Cybersecurity’s AI Security Initiative.

“With AI systems increasingly being deployed across vital sectors such as finance, healthcare, criminal justice, and recruitment, it is important that redress mechanisms are established and maintained to ensure that consumers, data subjects, or users of AI systems have access to a range of effective redress options in the event that they suffer harm,” Ogunleye writes.


In the paper, Ogunleye explains how the question of redress is addressed in emerging regulations related to data protection — such as Europe’s General Data Protection Regulation (GDPR) and the California Privacy Rights Act (CPRA) — as well as in guidelines related to artificial intelligence, such as the EU Artificial Intelligence Act and the Federal Trade Commission’s AI and Algorithms Guidance. While these regulations provide some language related to redress, she explains, “there has been no adoption of extensive mechanisms to provide effective redress for harms caused by deployed AI systems.”

AI is different from other technologies, Ogunleye writes, because AI-based systems often operate largely in the background, and consumers may not even be aware of how they have been affected, for example, if they are denied a loan based on an algorithm. The decisions made by AI systems are in many cases unexplainable, with even the system’s developers unaware of how a particular decision was reached. “Existing rules that set out liability as a result of negligence or design defect may prove hard to enforce in complex situations where, for instance, it is difficult to ascertain what design feature specifically led to the harm or who was responsible for it,” she writes.

The paper provides a set of recommendations for different stakeholders to establish redress mechanisms, such as ombudsman services specifically dedicated to reviewing complaints from employees, consumers, and other groups. Following are the recommendations provided in the paper for regulators, corporations, and civil society organizations:

Recommendations for Regulators

  • Ensure that individuals harmed by the deployment of AI systems are able to make a regulatory complaint or pursue legal action in court.
  • Establish a dedicated AI ombudsman service that reviews disputes or complaints between individuals and companies in an independent and impartial manner.
  • Empower groups or communities of people who have suffered systemic or widespread harm from the development and/or deployment of AI systems to collectively seek redress for such harms.
  • Empower civil society organizations to represent consumers in seeking redress or making general interest complaints against companies using AI systems that are harmful.

Recommendations for Corporations

  • Establish internal ombudsman services to receive and review complaints from stakeholders, including employees or consumers.
  • Engage with external stakeholders, such as academic researchers or consumer advocacy groups, to identify and address issues of bias, discrimination, or unfairness that may exist in AI models.

Recommendations for Civil Society Organizations

  • Engage with underserved or marginalized individuals or communities to identify harmful impacts and seek redress.
  • Ensure that findings from engagement with communities, audits, or research are made publicly available.

“Although various regulatory frameworks in effect include some redress mechanisms, the peculiarities of AI systems often reduce the effectiveness of such mechanisms and make them insufficient to address harms or risks caused by deployed AI systems,” Ogunleye concludes. “It is therefore important for AI regulatory frameworks to create redress mechanisms capable of addressing harms that arise from AI systems in an effective and consumer-centric manner. Failure to do so may further exacerbate issues of inequality and exclusion for certain demographic groups and individuals.”

Download the White Paper