A U.S. AI policy group has accused OpenAI of violating consumer protection rules by releasing its GPT-4 model and asked the U.S. Federal Trade Commission to immediately block OpenAI from launching its new GPT model and conduct an independent review.
The group, called the Center for AI and Digital Policy (CAIDP), filed a complaint today arguing that the AI text generation tool introduced by OpenAI is “biased, deceptive and a risk to public safety” and violates consumer protection rules.
The complaint follows a high-profile public letter calling for a moratorium on large-scale generative AI experiments. CAIDP chairman Marc Rotenberg is among the letter's signatories, alongside a group of AI researchers and OpenAI co-founder Elon Musk. Like the letter, the complaint calls for slowing the development of generative AI models and for stricter government regulation.
CAIDP points to potential threats posed by OpenAI’s GPT-4 generative text model, which was announced in mid-March. These include GPT-4’s ability to generate malicious code, as well as bias in its training data that can perpetuate stereotypes or lead to unfair discrimination on the basis of race or gender, for example in hiring. The complaint also points to serious privacy issues with OpenAI’s product interface — such as a recently discovered bug that exposed ChatGPT users’ chat histories, and possibly payment information, to other users.
OpenAI has previously acknowledged in public the potential threats posed by AI text generation, but CAIDP argues that GPT-4 crosses the line into consumer harm and should trigger regulatory action. CAIDP seeks to hold OpenAI accountable for violating Section 5 of the Federal Trade Commission Act, which prohibits unfair and deceptive trade practices. The complaint alleges that “OpenAI made GPT-4 available to the public for commercial purposes with full knowledge of these risks.” CAIDP also argues that generative models’ tendency to confidently fabricate facts that do not exist is itself a form of deception.
In its complaint, CAIDP asks the Federal Trade Commission to halt the commercial deployment of any further GPT models and to require an independent evaluation before any future models are introduced. It also calls for a publicly accessible reporting tool, similar to the one consumers use to file fraud complaints, and seeks explicit FTC rulemaking for generative AI systems to build on the agency’s ongoing, but still relatively informal, research and evaluation of AI tools.
The U.S. Federal Trade Commission has previously expressed an interest in regulating AI tools. The commission has warned that biased AI systems could trigger enforcement actions, and at an event with the Justice Department this week, FTC Chairwoman Lina Khan said the agency would look for signs that large incumbent tech companies were trying to edge out competition.