The federal government will “not hesitate to crack down” on harmful business practices involving artificial intelligence, the head of the Federal Trade Commission warned Tuesday in a message partly directed at the developers of widely used AI tools such as ChatGPT.
FTC Chair Lina Khan joined top officials from U.S. civil rights and consumer protection agencies to put businesses on notice that regulators are working to track and stop illegal behavior in the use and development of biased or deceptive AI tools.
Much of the scrutiny has been on those who deploy automated tools that amplify bias in decisions about who to hire, how worker productivity is monitored, or who gets access to housing and loans.
But amid a fast-moving race between tech giants such as Google and Microsoft in selling more advanced tools that generate text, images, and other content resembling the work of humans, Khan also raised the possibility of the FTC wielding its antitrust authority to protect competition.
“We all know that in moments of technological disruption, established players and incumbents may be tempted to crush, absorb or otherwise unlawfully restrain new entrants in order to maintain their dominance,” Khan said at a virtual press event Tuesday. “And we already can see these risks. A handful of powerful firms today control the necessary raw materials, not only the vast stores of data, but also the cloud services and computing power that startups and other businesses rely on to develop and deploy AI products.”
She added that “if AI tools are being deployed to engage in unfair, deceptive practices or unfair methods of competition, the FTC will not hesitate to crack down on this unlawful behavior.”
Khan was joined by Charlotte Burrows, chair of the Equal Employment Opportunity Commission; Rohit Chopra, director of the Consumer Financial Protection Bureau; and Assistant Attorney General Kristen Clarke, who leads the civil rights division of the Department of Justice.
As lawmakers in the European Union negotiate passage of new AI rules, and some have called for similar laws in the U.S., the top U.S. regulators emphasized Tuesday that many of the most harmful AI products might already run afoul of existing laws protecting civil rights and preventing fraud.
“There is no AI exemption to the laws on the books,” Khan said.