FTC warns it could act against biased AI


The US Federal Trade Commission has warned companies against using biased artificial intelligence, saying that doing so could violate consumer protection laws. A new blog post notes that AI tools can reflect “troubling” racial and gender bias. If those tools are applied to areas like housing or employment, falsely advertised as unbiased, or trained on data collected in a misleading manner, the agency says it can intervene.

“If you’re in a hurry to embrace new technology, make sure you don’t over-promise what your algorithm can deliver,” writes FTC attorney Elisa Jillson – especially when promising decisions that don’t reflect racial or gender bias. “The result could be deception, discrimination – and an FTC law enforcement action.”

As Protocol points out, FTC Acting Chairwoman Rebecca Slaughter recently called algorithm-driven bias “a matter of economic justice.” Slaughter and Jillson both note that companies can be prosecuted under the Equal Credit Opportunity Act or the Fair Credit Reporting Act for biased and unfair AI-powered decisions, and that unfair and deceptive practices may also fall under Section 5 of the FTC Act.

“It is important to hold yourself accountable for the performance of your algorithm. Our recommendations for transparency and independence can help you with this. But remember, if you don’t hold yourself accountable, the FTC can do it for you,” Jillson writes.

Artificial intelligence has the potential to reduce human bias in processes such as recruiting, but it can also reproduce or amplify that bias, especially if it is trained on data that reflects it. Facial recognition, for example, produces less accurate results for Black subjects, potentially leading to false identifications and arrests when police use it. In 2019, researchers found that a popular healthcare algorithm made Black patients less likely to receive important medical care, reflecting pre-existing disparities in the system. Automated gender recognition technology can rely on simplistic methods that misclassify transgender and nonbinary people. And automated processes, which are often proprietary and secret, can create “black boxes” in which it is difficult to understand or dispute erroneous results.

The European Union recently indicated that it could take a stronger stance on some AI applications, potentially banning their use for “indiscriminate surveillance” and social credit scoring. With these latest statements, the FTC has expressed its interest in tackling specific harmful applications.

But this enforcement push is still in its infancy, and critics have questioned whether the agency can meaningfully enforce its rules against major tech companies. In a Senate hearing statement today, FTC Commissioner Rohit Chopra complained that “time and again, when large corporations are blatantly breaking the law, the FTC is unwilling to take meaningful accountability action,” urging Congress and his fellow commissioners to turn the page on the FTC’s perceived powerlessness. In the world of AI, that could mean scrutinizing companies like Facebook, Amazon, Microsoft, and Google, all of which have invested significant resources in powerful systems.