OpenAI chief says new rules are needed to guard against AI risks
OpenAI’s chief executive Sam Altman called on US lawmakers to regulate fast-advancing artificial intelligence technology, outlining his fears over the potential for it to provide “interactive disinformation” ahead of the US elections next year.
During a hearing before a US Senate subcommittee on privacy, technology and the law on Tuesday, Altman, whose company created AI chatbot ChatGPT, welcomed regulation of the emerging technology, calling for independent audits, a licensing regime and warnings akin to nutritional labels on food.
Altman was also asked by senators about AI’s ability to predict and influence public opinion in relation to the upcoming election.
“The more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation . . . given that we’re going to face an election next year and these models are getting better. I think this is a significant area of concern,” he said.
He urged lawmakers to create “guidelines about what’s expected in terms of disclosure” for companies that offer this technology. Still, he noted the general public would quickly grow to understand its power.
“When Photoshop came on to the scene a long time ago, for a while people were really quite fooled by Photoshopped images and then pretty quickly developed an understanding that images might be Photoshopped. This will be like that, but on steroids.”
The hearing comes as regulators and governments around the world step up their examination of the technology, which is also being developed by Silicon Valley groups such as Google and Microsoft, amid growing concerns about its potential abuses.
EU lawmakers last week agreed on a tough set of rules over the use of AI, including restrictions on chatbots such as ChatGPT. Earlier this month, the US Federal Trade Commission and the UK competition watchdog fired warning shots at the industry. The FTC said it was “focusing intensely on how companies may choose to use AI technology”, while the UK’s Competition and Markets Authority plans to launch a review of the AI market.
The US Congress is also looking into how to craft regulations to govern the technology, and plans to speak to more sources from the industry in the coming months. During Tuesday’s hearing, Richard Blumenthal, the Democratic senator from Connecticut who chairs the privacy subcommittee, suggested putting limitations on the use of AI “where the risk of AI is so extreme that we ought to impose restrictions, or even ban their use, especially when it comes to commercial invasions of privacy for profit, and decisions that affect people’s livelihoods”.
The polite and cordial exchanges with lawmakers — including Blumenthal’s acknowledgment that Altman appeared to “care deeply and intensely” about potential risks from AI — stood in contrast with more combative exchanges that frequently emerged during other tech executives’ appearances before Congress.
Altman said collaboration was needed between industry and lawmakers to craft effective rules. “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening,” he said.
Altman acknowledged that systems such as GPT-4, the model underlying ChatGPT, would “entirely automate away some jobs” but argued that it would create new ones that OpenAI believed “will be much better”.
Blumenthal said his “biggest nightmare” related to the emerging technology was “the looming new industrial revolution, the displacement of millions of workers [and] the loss of huge numbers of jobs”.
He referenced the failure to act quickly to regulate social media and a desire to “not repeat our past mistakes”.
The rapid development of generative AI, which can produce convincing humanlike writing, over the past six months has raised alarm among some AI ethicists.
In March, Twitter chief Elon Musk and more than 1,000 tech researchers and executives signed a letter calling for a six-month pause on training AI language models more powerful than GPT-4. Earlier this month, AI pioneer Geoffrey Hinton quit Google after a decade in order to speak freely about the risks of the technology, which he warned would amplify societal divides and could be used by bad actors.
Christina Montgomery, vice-president and chief privacy and trust officer at IBM, and Gary Marcus, a professor emeritus at New York University, also testified at the hearing.
Additional reporting by Madhumita Murgia