Hawking in his library at home

Since the dawn of civilisation, mankind has been obsessed by the possibility that it will one day be extinguished. The impact of an asteroid on earth and the spectre of nuclear holocaust are the most prevalent millenarian fears of our age. But some scientists are increasingly of the view that a new nightmare must be added to the list. Their concern is that intelligent computers will eventually develop minds of their own and destroy the human race.

The latest warning comes from Professor Stephen Hawking, the renowned astrophysicist. He told an interviewer this week that artificial intelligence could “outsmart us all” and that there is a “near certainty” of technological catastrophe. Most non-experts will dismiss his claims as a fantasy rooted in science fiction. But the pace of progress in artificial intelligence, or AI, means policymakers should already be considering the social consequences.

The idea that machines might one day be capable of thinking like people has been discussed loosely since the birth of computing in the 1950s. The huge sums being poured into AI research by US technology companies, together with the exponential growth in computing power, mean that startling predictions are now being made.

According to a recent survey, half the world’s AI experts believe human-level machine intelligence will be achieved by 2040 and 90 per cent say it will arrive by 2075. Several AI experts talk about the possibility that the human brain will eventually be “reverse engineered”. Some prominent tech leaders, meanwhile, warn that the consequences are unpredictable. Elon Musk, the pioneer of electric cars and private space flight at Tesla Motors and SpaceX, has argued that advanced computer technology is “potentially more dangerous than nukes”.

Western governments should be taking the ethical implications of the development of AI seriously. One concern is that nearly all research in this field is conducted privately by US-based technology companies. Google has made some of the most ambitious investments, ranging from its work on quantum computing to its purchase this year of the British AI start-up DeepMind. But although Google set up an ethics panel following the DeepMind acquisition, outsiders have no idea what the company is doing – nor how much resource goes into controlling the technology rather than developing it as fast as possible. As these technologies advance, that lack of public oversight may become a serious concern.

That said, the risk that computers might one day pose a challenge to humanity should be kept in perspective. Scientists cannot yet say with certainty when, or if, machines will match or outperform human intelligence.

But before the world gets to that point, the combination of human and computer intelligence will almost certainly help to tackle pressing problems that cannot otherwise be solved. The growing ability of computers to crunch enormous quantities of data, for example, will play a huge role in helping humanity tackle climate change and disease over the next few decades. It would be folly to arrest the development of computer technology now – and forgo those benefits – because of risks that lie much further in the future.

There is every reason to be optimistic about AI research. Nothing yet suggests that scientists will be unable to control computers, even at their most advanced stage. But this is a sector in which pioneers must tread carefully – and with their eyes open to the enduring ability of science to surprise us.

Copyright The Financial Times Limited 2024. All rights reserved.