OpenAI’s ChatGPT faces U.S. FTC complaint, call for European regulators to step in


Authorities in the U.S. and Europe should act quickly to protect people against threats posed by OpenAI’s GPT and ChatGPT artificial intelligence models, civil society groups have urged in a coordinated pushback against the technology's rapid proliferation.

On Thursday the U.S.’s Center for AI and Digital Policy (CAIDP) filed a formal complaint with the Federal Trade Commission, calling on the agency to “halt further commercial deployment of GPT by OpenAI” until safeguards have been put in place to stop ChatGPT from deceiving people and perpetuating biases.

CAIDP’s complaint came just one day after the release of a much-publicized open letter calling for a six-month moratorium on the development of next-generation A.I. models. Although the complaint references that letter, the group had signaled 10 days earlier that it would urge the FTC to investigate OpenAI and ChatGPT and “establish a moratorium on the release of further commercial versions of GPT until appropriate safeguards are established.”

As CAIDP’s complaint landed with the FTC, the European Consumer Organisation (BEUC) called on European regulators, at both the EU and national levels, to launch investigations into ChatGPT.

“For all the benefits A.I. can bring to our society, we are currently not protected enough from the harm it can cause people,” said BEUC deputy director general Ursula Pachl. “In only a few months, we have seen a massive take-up of ChatGPT and this is only the beginning.”

CAIDP, which advocates for a societally just rollout of A.I., also asked the FTC to force OpenAI to submit to independent assessments of its GPT products before and after they launch, and to make it easier for people to report incidents in their interactions with GPT-4, the latest version of OpenAI’s large language model.

“The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices,” said CAIDP president Marc Rotenberg in a statement. “We believe that the FTC should look closely at OpenAI and GPT-4.”

Concerns over ChatGPT, and other chat interfaces such as Microsoft's OpenAI-powered Bing and Google's Bard, include the systems' tendency to make up information—a phenomenon known in the A.I. industry as "hallucination"—and to amplify the biases present in the material on which these large language models have been trained.

EU lawmakers are already planning to regulate the A.I. industry through an Artificial Intelligence Act that the European Commission first proposed nearly two years ago. However, some of the proposal’s measures are beginning to look outdated given rapid advances in the field and the highly competitive rollout of new services, and the EU’s institutions are now scrambling to modernize the bill so it will adequately tackle services like ChatGPT.

“Waiting for the A.I. Act to be passed and to take effect, which will happen years from now, is not good enough as there are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people,” Pachl said.

A BEUC spokesperson told Fortune the organization hoped to see a variety of authorities spring into action, including those regulating product safety, data protection, and consumer protection.

However, Berlin technology lawyer Niko Härting said there was "no chance" of EU-level regulators taking action against OpenAI and ChatGPT while the A.I. Act was still being negotiated.

OpenAI had not responded to a request for comment at the time of publication. However, some have responded to Wednesday’s open letter—which was signed by over 1,000 people, including Elon Musk and Apple co-founder Steve Wozniak—by saying fears about A.I. are overblown and development should not be paused.

Others agreed with the letter's call for governments to act quickly to regulate the technology, but took issue with its rationale, which focused more on the potential of future A.I. systems to exceed human intelligence and less on the harms posed by today's systems in areas such as misinformation, bias, cybersecurity, and the outsized environmental cost of the computing power and electricity needed to train and run them.

“The sky is not falling, and Skynet is not on the horizon,” wrote Daniel Castro and Emily Tavenner, of the pro-Big Tech Center for Data Innovation think tank, on Wednesday.

OpenAI’s own CEO, Sam Altman, recently argued that his company places safety limits on its A.I. models that rivals do not, and said he worried such models could be used for “large-scale disinformation” and “offensive cyberattacks.” He has also said the worst-case scenario for A.I.’s future trajectory is “lights-out for all of us.”

This article was updated on March 30 to include Härting's comment.

