FTC warns the AI industry: Don't discriminate, or else



  • Written by Ryan Calo, Professor of Law, University of Washington

The U.S. Federal Trade Commission just fired a shot across the bow of the artificial intelligence industry. On April 19, 2021, a staff attorney at the agency, which serves as the nation’s leading consumer protection authority, wrote a blog post about biased AI algorithms that included a blunt warning: “Keep in mind that if you don’t hold yourself accountable, the FTC may do it for you.”

The post, titled “Aiming for truth, fairness, and equity in your company’s use of AI[1],” was notable for its tough and specific rhetoric about discriminatory AI. The author observed that the commission’s authority to prohibit unfair and deceptive practices “would include the sale or use of – for example – racially biased algorithms” and that industry exaggerations regarding the capability of AI to make fair or unbiased hiring decisions could result in “deception, discrimination – and an FTC law enforcement action.”

Bias seems to pervade the AI industry. Companies large and small are selling demonstrably biased systems[2], and their customers are in turn applying them in ways that disproportionately affect the vulnerable and marginalized, in areas including health care, criminal justice and hiring[3].

Whatever they say or do, companies seem unable or unwilling to rid their data sets and models of the racial, gender and other biases[4] that suffuse society. Industry efforts to address fairness and equity have come under fire as inadequate or poorly supported by leadership, sometimes collapsing entirely[5].

As a researcher who studies law and technology[6] and a longtime observer of the FTC, I took particular note of the not-so-veiled threat of agency action. Agencies routinely use formal and informal policy statements to put regulated entities on notice that they are paying attention to a particular industry or issue. But such a direct threat of agency action – get your act together, or else – is relatively rare for the commission.

What the FTC can do – but hasn’t done

The FTC’s approach on discriminatory AI stands in stark contrast to, for instance, the early days of internet privacy. In the 1990s, the agency embraced a more hands-off, self-regulatory paradigm[7], becoming more assertive only after years of privacy and security lapses.

Tech industry critic Lina Khan’s nomination to be a commissioner on the FTC is further evidence of the Biden administration’s intention to use the agency to regulate the industry. Graeme Jennings/Pool via AP[8]

How much should industry or the public read into a blog post by one government attorney? In my experience, FTC staff generally don’t go rogue. If anything, that a staff attorney apparently felt empowered to use such strong rhetoric on behalf of the commission confirms a broader basis of support within the agency for policing AI.

Can a federal agency, or anyone, define what makes AI fair or equitable? Not easily. But that’s not the FTC’s charge[9]. The agency only has to determine whether the AI industry’s business practices are unfair or deceptive – a standard the agency has almost a century of experience enforcing[10] – or otherwise in violation of laws that Congress has asked the agency to enforce.

Shifting winds on regulating AI

There are reasons to be skeptical of a sea change. The FTC is chronically understaffed[11], especially with respect to technologists. The Supreme Court recently dealt the agency a setback[12], adding hurdles the FTC must clear before it can seek monetary restitution from violators of the FTC Act.

But there is also wind in the commission’s sails. Public concern over AI[13] is growing. Current and incoming commissioners – there are five, with three Democratic appointees – have been vocally skeptical[14] of the technology industry, as has President Biden[15]. The same week as that Supreme Court decision, the commissioners found themselves before the U.S. Senate[16] answering the Commerce Committee’s questions about how the agency could do more for American consumers.

I don’t expect the AI industry to change overnight in response to a blog post. But I would be equally surprised if this blog post were the agency’s last word on discriminatory AI.


Authors: Ryan Calo, Professor of Law, University of Washington

Read more https://theconversation.com/ftc-warns-the-ai-industry-dont-discriminate-or-else-159622

Metropolitan republishes selected articles from The Conversation USA with permission

Visit The Conversation to see more