The FTC is investigating whether the artificial intelligence app caused injury to users by disseminating false information.
The Federal Trade Commission is investigating whether OpenAI’s ChatGPT has harmed individuals by publishing false information about them, posing a potential legal threat to the popular app that can generate eerily human-sounding content using artificial intelligence.
In a subpoena to the company made public on Thursday, the FTC said its investigation of ChatGPT focuses on whether OpenAI “engaged in unfair or deceptive practices relating to the risks of harm to consumers, including reputational harm.”
One question asks the company to “describe in detail the extent to which you have addressed or mitigated the risk that your large language model products could generate false, misleading, or disparaging statements about real individuals.”
The new FTC investigation under the leadership of Chairwoman Lina Khan represents a significant expansion of the federal government’s role in regulating emerging technologies.
Khan, who testified before the House Judiciary Committee on Thursday, stated that the agency is concerned that ChatGPT and other AI-powered applications do not have any controls over the data they can mine.
“We’ve heard reports of sensitive information appearing in response to an inquiry from another party,” Khan stated. “We’ve heard of emerging libel, defamatory statements, and blatantly false claims. This is the type of deception and deceit that concerns us.”
For detractors of the FTC, the investigation represented yet another foray into uncharted territory for an agency whose antitrust enforcement efforts have recently suffered legal setbacks.
“Does the FTC have jurisdiction over instances in which ChatGPT may have harmed a person’s reputation by publishing false information about them?” asked Adam Kovacevich, founder of Chamber of Progress, an industry trade organization, who said he did not believe the answer to be at all obvious.
Such matters “fall more within the realm of speech, making speech regulation beyond their authority,” he said.
OpenAI did not respond to requests for comment.
Marc Rotenberg, president of the Center for AI and Digital Policy, said it is uncertain whether the FTC has jurisdiction over defamation. However, he said, “misleading advertising clearly falls within the FTC’s purview.” The FTC has said that disinformation regarding commercial practices already falls under its jurisdiction.
In March, Rotenberg’s organization filed a complaint with the FTC regarding ChatGPT, labeling it “biased, deceptive, and a risk to privacy and public safety,” and arguing that it does not comply with any of the FTC’s guidelines for AI use.
The FTC has broad authority to police unfair and deceptive business practices that can harm consumers, as well as unfair competition. But critics say Khan has occasionally overstepped her authority, pointing to a federal judge’s decision this week to dismiss the FTC’s attempt to block Microsoft’s acquisition of Activision Blizzard.
At Thursday’s House committee hearing, Khan was criticized for her agency’s investigation of Twitter’s consumer privacy protections. The investigation, Republicans said, was prompted by progressives’ anger over Elon Musk’s acquisition of Twitter and his easing of content moderation policies. Also on Thursday, Twitter asked a federal court to terminate a 2022 settlement with the FTC over alleged privacy violations, citing a “burdensome and vexatious enforcement investigation.”
Khan responded that the agency was only concerned with user privacy and that “we are doing everything possible to ensure Twitter complies with the order.”
In its civil subpoena to OpenAI, the FTC inquired at length about the company’s data-security procedures. It referenced an incident in 2020 in which the company disclosed a flaw that allowed users to view other users’ conversations and payment-related information.
The FTC subpoena also covers the company’s marketing efforts, its practices for training AI models, and its management of users’ personal information. The FTC investigation was first reported by the Washington Post.
The Biden administration has begun scrutinizing whether artificial intelligence tools such as ChatGPT require oversight. In April, as a first step toward prospective regulation, the Commerce Department issued a formal request for public feedback on accountability measures.
The Office of Science and Technology Policy of the White House is also developing strategies to address both the benefits of AI, such as the possibility of using it to expand access to government services, and the harms of AI, such as increased hacking capabilities, discriminatory decisions by AI systems, and the possibility of AI-generated content disrupting elections.
In the current Congress, lawmakers from both parties, led by Senate Majority Leader Chuck Schumer (D., N.Y.), have made artificial intelligence regulation a priority.
In addition to potential reputational risks, legislators express concern that AI tools could be abused to manipulate voters with misinformation, discriminate against minority groups, perpetrate complex financial crimes, displace millions of workers, or cause other harms. Legislators have been particularly concerned about the dangers posed by so-called deepfake videos, which fraudulently depict real people engaging in embarrassing behavior or uttering humiliating statements.
However, new legislation or other measures are likely months or even years away. In what is shaping up to be a crucial competition between China and the United States to dominate the markets for AI tools, lawmakers are wary that any significant action they take risks stifling U.S. innovation.
Even the creators of ChatGPT have advocated for greater government supervision of AI development.
Sam Altman, the chief executive officer of OpenAI, urged Congress in a May hearing to create licensing and safety standards for advanced artificial-intelligence systems, as legislators begin a bipartisan drive to regulate the powerful new consumer tools.
“We comprehend that people are concerned about how this will affect our way of life. So are we,” Sam Altman said of AI technology during the Senate subcommittee hearing. “If this technology fails, it can fail severely.”
Altman has traveled the globe speaking about the promise and dangers of artificial intelligence, including meetings with world leaders such as French President Emmanuel Macron and Indian Prime Minister Narendra Modi.