The White House’s AI Bill of Rights Has Bark — but Where’s the Bite?

Patrick K. Lin
Oct 5, 2022


This article was originally published on LinkedIn.

The White House Office of Science and Technology Policy (OSTP) released its “Blueprint for an AI Bill of Rights” yesterday. The blueprint is the product of collaboration and input from AI auditing startups, human rights groups, the general public, and even companies like Microsoft, Uber, and Palantir (a data-mining company with seed money from the CIA’s venture capital firm, In-Q-Tel). Between Clearview AI hoarding billions of photos of our faces, predictive policing software targeting Black and Latinx neighborhoods while avoiding predominantly white neighborhoods, and Facebook’s role in facilitating genocide in Myanmar, a document outlining the relationship between AI and our civil liberties is more important than ever.

To that end, the AI Bill of Rights covers five principles:

  1. Safe and Effective Systems: you should be protected from ineffective or unsafe algorithms.
  2. Algorithmic Discrimination Protections: you should not be discriminated against by unfair algorithms.
  3. Data Privacy: you should have control over how your data is used.
  4. Notice and Explanation: you should know when, how, and why AI is making a decision about you.
  5. Human Alternatives, Consideration, and Fallback: you should be able to opt out of automated decision-making in favor of a human alternative.

Because it has no enforcement mechanism, the blueprint can only urge companies and governments at all levels to put these principles into practice. “Simply put, systems should work, they shouldn’t discriminate, they shouldn’t use data indiscriminately,” AI Bill of Rights co-writer Suresh Venkatasubramanian wrote in a Twitter thread. “The AI Bill of Rights reflects, as befits the title, a consensus, broad, and deep American vision of how to govern the automated technologies that impact our lives.”

However, unlike the U.S. Bill of Rights it draws inspiration from, the AI Bill of Rights will not have the force of law. Instead, it is a nonbinding white paper joining a sea of similar documents released by companies, academic institutions, governments, and think tanks. These are well-intentioned efforts that use the right words — like transparency, explainability, and trustworthiness — but they only put forth suggestions and are too vague to make a meaningful difference in people’s everyday lives.

While the White House’s blueprint attempts to translate these principles into practice, there is still a ways to go. “We too understand that principles aren’t sufficient,” Alondra Nelson, OSTP deputy director for science and society, said. “This is really just a down payment. It’s just the beginning and the start.”

The lack of teeth in the White House’s AI Bill of Rights is especially disappointing given the more rights-protective AI regulations being developed in the European Union. As it debates amendments to the AI Act, the European Parliament is considering not only public disclosure requirements but outright bans on certain uses. For example, some members of the European Parliament argue predictive policing should be banned because it “violates the presumption of innocence as well as human dignity.” Last week, the European Commission proposed a new law, the AI Liability Directive, that would allow people treated unfairly by AI to file lawsuits in civil court.

If the White House’s AI Bill of Rights wishes to lead by example, it can take a page out of the EU’s playbook. After all, often the most rights-protective action with respect to AI and technology is not using it in the first place.

===

You can read the full “Blueprint for an AI Bill of Rights” here. You can also check out the companion fact sheet detailing existing and new efforts across federal agencies that are attempting to implement the principles outlined in the blueprint.


Written by Patrick K. Lin

Patrick K. Lin is a New York City-based author focused on researching technology law and policy, artificial intelligence, surveillance, and data privacy.
