WASHINGTON (Reuters) - Tech ethics group the Center for Artificial Intelligence and Digital Policy is asking the U.S. Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, which has wowed some users and caused distress for others with its quick and human-like responses to queries.
In a complaint to the agency on Thursday, the Center for Artificial Intelligence and Digital Policy called GPT-4 "biased, deceptive, and a risk to privacy and public safety." OpenAI, which is backed by Microsoft Corp and based in California, says the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program has excited users by engaging them in human-like conversations, composing songs, and summarizing lengthy documents.
The formal complaint to the FTC follows an open letter signed by Elon Musk, artificial intelligence experts, and industry executives that called for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, citing potential risks to society.
The complaint said that OpenAI's GPT-4 does not meet the FTC's standard of being "transparent, explainable, fair and empirically sound while fostering accountability." "The FTC has a duty to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4," said Marc Rotenberg, president of CAIDP and a veteran privacy advocate.
Rotenberg was among the more than 1,000 signatories to the letter urging a pause in AI experiments.
The group urged the FTC to open an investigation into OpenAI, to enjoin further commercial releases of GPT-4, and to establish necessary guardrails to protect consumers, businesses, and the commercial marketplace.