US watchdog probes ChatGPT creator OpenAI over consumer protection issues
14 July 2023, 08:14 | Updated: 25 July 2023, 11:46
The US Federal Trade Commission (FTC) has launched an investigation into ChatGPT creator OpenAI and whether the artificial intelligence (AI) company violated consumer protection laws by scraping public data and publishing false information through its chatbot.
The agency sent OpenAI a 20-page letter requesting detailed information on its AI technology, products, customers, privacy safeguards and data security arrangements.
An FTC spokesperson had no comment on the investigation, which was first reported by the Washington Post on Thursday.
The FTC document published by the Post told OpenAI the agency was investigating whether it had “engaged in unfair or deceptive privacy or data security practices” or practices harming consumers.
OpenAI founder Sam Altman tweeted disappointment that the investigation was disclosed in a “leak”, noting that the move would “not help build trust”, but added that the company will work with the FTC.
He said: “It’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law.
“We protect user privacy and design our systems to learn about the world, not private individuals.”
The FTC’s move represents the most significant regulatory threat so far to the nascent but fast-growing AI industry, although it is not the only challenge facing these companies.
Comedian Sarah Silverman and two other authors have sued both OpenAI and Facebook parent Meta for copyright infringement, claiming that the companies’ AI systems were illegally “trained” by exposing them to datasets containing illegal copies of their works.
Mr Altman has emerged as a global AI ambassador of sorts following his testimony before Congress in May and a subsequent tour of European capitals where regulators were putting final touches on a new AI regulatory framework.
Mr Altman himself has called for AI regulation, although he has tended to emphasise difficult-to-evaluate existential threats such as the possibility that super-intelligent AI systems could one day turn against humanity.
Some argue that focusing on a far-off “science fiction trope” of super-powerful AI could make it harder to take action against already existing harms that require regulators to dig deep on data transparency, discriminatory behaviour and potential for trickery and disinformation.