
Why is regulation of AI needed?

Regulators around the world have found no shortage of issues to worry about with the rise of artificial intelligence.

Should they intervene in algorithms that could bias or distort decisions that affect the everyday lives of billions? What about the risk that chatbots, such as ChatGPT, will supercharge the production of online misinformation, or lead to the misuse of vast amounts of personal data? And what should they do about warnings that computers could soon reach such a level of intelligence that they escape the control of their makers — with potentially dire consequences for humanity?

The technology is moving so fast — and the potential risks are, in some cases, so poorly understood — that there is little agreement yet on a regulatory agenda.

What AI issues are regulators looking at first?

The European Union was well on its way to finalising a first-of-its-kind AI Act that would control, or even ban, supposedly “high-risk” AI systems — such as those used to make decisions on job or loan applications or health treatments. Then ChatGPT mania exploded — a surge of public interest in OpenAI’s freely available generative AI chatbot.

[Line chart: AI stocks surge on ChatGPT hype — share prices and index, rebased]

Lawmakers quickly adjusted their plans to include so-called foundation models, such as the large language model that sits behind ChatGPT.

The EU rules would force companies to disclose what data these models have been trained on, and could make them liable for misuse of the technology, even when they do not control the ultimate applications.

However, Patrick Van Eecke, co-chair of law firm Cooley’s global cyber, data and privacy practice, believes Brussels has moved too soon to try to regulate a technology that is still “a moving target”, reflecting a cultural bias towards knee-jerk regulation. “We like to regulate reality even before it becomes reality,” he says — echoing a view widely held in the AI world.

Many US tech executives have a different explanation, though. They see it as a deliberate protectionist move by the EU, slapping limitations on a group of mainly American companies that dominate the AI industry.

Will the EU’s AI regulation become a model for the rest of the world?

That is what happened with the bloc’s data protection legislation, the GDPR — and it is a prospect that worries US tech companies. The EU Act’s backers say it will be enforced flexibly to reflect changing standards and technology advances. But critics say experience shows Brussels takes a more dogmatic approach — and that rules baked in now could limit the technology’s evolution.

Some European companies agree. In a letter to the European Commission in June, 150 large European concerns warned that the law could hamper the bloc’s economy by preventing companies there from freely using important AI technology.

A final version of the law needs to be hammered out in a three-way negotiation between the Parliament, Commission, and Council of member states — which means there is still a chance of changes.

Aren’t AI companies asking for regulation?

The AI industry has learned from the backlash against social media that it does not pay to duck regulation on technologies that can have significant social and political impact.

But that does not mean they like what the EU has planned. Sam Altman, head of OpenAI and a voluble supporter of AI regulation, told the FT that his company might have to pull out of the EU altogether if the final rules on AI are too stringent. The furore his words provoked led him to backtrack quickly but, behind the scenes, the concerns of US companies remain undimmed.

The readiness of big tech companies to call for regulation has also provoked suspicions that they see it as a way to entrench their hold on the AI market. Higher costs and bureaucracy could make it harder for new competitors to break in. 

What’s the alternative to the EU approach?

Before deciding on new laws, many countries are taking a close look at how their existing regulations apply to applications that are powered by AI.

In the US, for example, the Federal Trade Commission has opened an investigation into ChatGPT, using its existing powers. One of its concerns is that ChatGPT is sucking up personal data and sometimes using it to regurgitate false and damaging information about ordinary people.

Unlike their European counterparts, US lawmakers have also embarked on a broad review of AI that explicitly tries to balance the benefits of the technology against its potential harms. US Senate majority leader Chuck Schumer has called for a series of expert briefings and forums for the most important Senate committees, to help them decide which aspects of AI might need regulating.

Holly Fechner, co-chair of the technology industry group at law firm Covington & Burling, has said: “Significant bipartisanship in Congress on US competition with China” makes Schumer’s approach “a winning message — and signals that the US is moving in a different direction than Europe.”

If governments don’t regulate now, won’t the AI race become a dangerous free-for-all?

Many tech companies say that the development of AI should mirror the early days of the internet: regulators held off then, letting innovation flourish, and only stepped in later, as needed.

There are already signs that new industry standards and agreements about best practices in AI are starting to take hold, even without explicit regulation. In the US, for example, the industry has been working with the National Institute of Standards and Technology on codifying the best ways to design, train, and deploy AI systems.

There have also been pledges to disclose more about how large models work and what data has been used to train them — although it is unclear how forthcoming companies will be. Meanwhile, there is no shortage of proposals for new “watermarking” systems to verify whether online content has been produced by a generative AI system rather than a person.

Failure to make headway on these and other fronts would greatly intensify calls for regulation.

Some people developing AI say it could destroy humanity — is that not a reason to regulate immediately?

No one in the tech industry thinks today’s AI systems present an existential threat to humanity and there is no agreement on when — if ever — the technology could reach that point. But, earlier this year, an open letter signed by many technologists called for a six-month moratorium on work on the most advanced systems, to allow time to come up with new safety protocols.

While governments have started to consider this issue, it would take new international agreements to try to control the spread of dangerous AI. Even then, such efforts might be impractical, given the wide availability of the computing resources and data sets needed to train AI systems.

For now, the same companies that are leading the charge into AI claim they are also at the forefront of trying to rein it in. OpenAI said in July that it was creating an internal team to start researching ways to control “superintelligent” computers, which it thinks could come this decade. Its proposed solution: a new AI system to control the AI.
