The California bill requires AI groups in the state to guarantee they will not develop models with ‘a hazardous capability’, such as creating biological or nuclear weapons or aiding cyber security attacks © Reuters

Artificial intelligence heavyweights in California are protesting against a state bill that would force technology companies to adhere to a strict safety framework, including creating a “kill switch” to turn off their powerful AI models, in a growing battle over regulatory control of the cutting-edge technology.

The Californian legislature is considering proposals that would introduce new restrictions on tech companies operating in the state, including the three largest AI start-ups, OpenAI, Anthropic and Cohere, as well as large language models run by Big Tech companies such as Meta.

The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability”, such as creating biological or nuclear weapons or aiding cyber security attacks.

Developers would be required to report on their safety testing and introduce a so-called kill switch to shut down their models, according to the proposed Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act.

But the law has become the focus of a backlash from many in Silicon Valley because of claims it will force AI start-ups to leave the state and prevent platforms such as Meta from operating open-source models.

“If someone wanted to come up with regulations to stifle innovation, one could hardly do better,” said Andrew Ng, a renowned computer scientist who led AI projects at Alphabet’s Google and China’s Baidu, and who sits on Amazon’s board. “It creates massive liabilities for science-fiction risks, and so stokes fear in anyone daring to innovate.”

The rapid growth and huge potential of AI has prompted concerns about the safety of the technology, with billionaire Elon Musk, an early investor in ChatGPT-maker OpenAI, calling it an “existential threat” to humanity last year. This week, a group of current and former OpenAI staffers published an open letter warning that “frontier AI companies” do not have sufficient oversight from governments and pose “serious risks” to humanity.

The Californian bill was co-sponsored by the Center for AI Safety (CAIS), a San Francisco-based non-profit run by computer scientist Dan Hendrycks, who is the safety adviser to Musk’s AI start-up, xAI. CAIS has close ties to the effective altruism movement, which was made famous by jailed cryptocurrency executive Sam Bankman-Fried.

Democratic state Senator Scott Wiener, who introduced the legislation, said: “Fundamentally I want AI to succeed and innovation to continue, but let’s try and get out ahead of any safety risks.”

He added it was a “light-touch bill . . . that simply asks developers training huge models to perform basic safety evaluations to identify large risks and to take reasonable steps to mitigate those risks”.

But critics have accused Wiener of being overly restrictive and placing a costly compliance burden on developers, particularly at smaller AI companies. Opponents also claim the bill focuses on hypothetical risks and places an “extreme” liability burden on founders.

Among the fiercest criticisms is that the bill will harm open-source AI models — in which developers make source code freely available to the public, allowing others to build on top of it — such as Meta’s flagship LLM, Llama. The bill would make developers of open models potentially liable for bad actors that manipulate their models to cause harm.

Arun Rao, lead product manager for generative AI at Meta, said in a post on X last week that the bill was “unworkable” and would “end open source in [California]”.

“The net tax impact by destroying the AI industry and driving companies out could be in the billions, as both companies and highly paid workers leave,” he added.

Wiener said of the criticism: “This is the tech sector, it doesn’t like to have any regulation, so it’s not surprising to me at all that there would be pushback.”

Some of the responses were “not fully accurate”, he said, adding he was planning to make amendments to the bill that would clarify its scope.

The proposed amendments state open-source developers will not be liable for models “that undergo lots of fine-tuning”, meaning that if an open-source model is then sufficiently customised by a third party, it is no longer the responsibility of the group that made the original model. They also state the “kill switch” requirement will not apply to open-source models, he said.

Another amendment states the bill will only apply to large models “that cost at least $100mn to train”, and would therefore not impact most smaller start-ups.

“There are these competitive pressures that are affecting these AI organisations that basically incentivise them to cut corners on safety,” CAIS’s Hendrycks said, adding that the bill was “realistic and reasonable” with most people wanting “some basic oversight”.

Yet a senior Silicon Valley venture capitalist said they were already fielding queries from founders asking if they would need to leave the state as a result of the potential legislation.

“My advice to everyone that asks is we stay and fight,” the person said. “But this will put a chill on open source and the start-up ecosystem. I do think some founders will elect to leave.”

Governments around the world have been taking steps to regulate AI over the past year as the technology has boomed in popularity.

US President Joe Biden introduced an executive order in October that aimed to set new standards for AI safety and national security, protect citizens against AI privacy risks, and combat algorithmic discrimination. The UK government in April outlined plans to craft new legislation to regulate AI.

Critics are perplexed about the pace at which the Californian AI bill emerged and passed through the Senate, shepherded by CAIS.

The majority of funding for CAIS comes from Open Philanthropy, a San Francisco-based charity with its roots in the effective altruism movement. It gave grants worth about $9mn to CAIS between 2022 and 2023, in line with its “focus area of potential risks from advanced artificial intelligence”. The CAIS Action Fund, a division of the non-profit that was established last year, registered its first lobbyists in Washington, DC, in 2023 and has spent roughly $30,000 on lobbying this year.

Wiener has received funding from wealthy venture capitalist Ron Conway, managing partner of SV Angel and an investor in AI start-ups, over a number of election cycles.

Rayid Ghani, professor of AI at Carnegie Mellon University’s Heinz College, said there was “some overreaction” to the bill, adding that any legislation should focus specifically on use cases of the technology rather than regulating the development of models.

Letter in response to this article:

Task of AI regulation is to get the balance right / From Faisal Al Bannai, Adviser to the UAE President on Strategic Research and Advanced Technology Affairs, Dubai, UAE

Copyright The Financial Times Limited 2024. All rights reserved.