
At 4ft-nothing, with orb-like eyes, SoftBank’s humanoid robot, Pepper, is designed to look friendly. But imagine if Pepper — a powerful machine crammed with cameras, sensors and motors — hurtled towards you at top speed, or stood in your home, secretly recording your life.

In 2017, Lucas Apa and Cesar Cerrudo, security researchers with the consultancy IOActive, showed that version 2.5.5 of Pepper could be hacked through vulnerabilities in its software that were exposed when the robot was connected to a network. They demonstrated that the robot could be controlled remotely, its limbs manipulated and its cameras used to spy on users.

Yet more than a year later, SoftBank has not patched the software, according to an analysis of its change logs by Mr Apa. He told the FT that the Japanese conglomerate had told him it could not fix the problem.

He says: “We were very disappointed by this answer, but we understand that with any new technology it is very hard for manufacturers to get the attention or investment [they need].”

SoftBank says that users were asked to maintain Wi-Fi network security and set robot passwords correctly. “We will continue to improve our security measures on Pepper, so we can counter any risks we may face,” the company says.

Pepper is just one of several robots that Mr Apa and Mr Cerrudo tested last year. They found that others, including those manufactured by UBTech Robotics, Robotis, Universal Robots, Rethink Robotics and Asratec Corp, could be hacked too.

The matter has also been raised by Bundesnetzagentur, the telecoms watchdog in Germany, which last year told parents to destroy a talking doll called Cayla because hackers could use its unsecured Bluetooth connection to make the toy reveal personal data.

Instances of robot hacking are few — in part because autonomous machines have yet to be widely deployed outside controlled areas such as factories, where they are connected to local networks. However, as robots become more powerful, concerns that weak security standards could allow them to be weaponised have increased. 

“Most of the devices being produced today have rudimentary security built into them, and that’s something we’re going to have to correct as we build more and more robotic systems,” says Toby Walsh, a computer science professor at the University of New South Wales and research group leader at Data61, Australia’s data innovation network.

He is urging manufacturers to sign a pledge to “neither participate in nor support the development, manufacture, trade or use of lethal autonomous weapons”.

Though autonomous weapons are developed separately from industrial or consumer robots, Mr Walsh says that the similarities could increase as everyday robots, such as driverless cars, become more powerful and more commonplace. “Regulators and governments have to step in because we are developing technology that [at times] needs to be withdrawn from the market, it is so unsafe,” he says.

For companies that build robots, the challenges are significant as hackers become more sophisticated.

“The smarter the tech becomes, the smarter the hackers become,” says David Hanson, founder of Hong Kong-based Hanson Robotics. “The more the world becomes automated, the more opportunity there is for mischief and calamity.”

He says that Hanson Robotics is securing its robots, for example by ensuring that data are processed on the robot itself rather than on company servers. Apple, the smartphone maker, uses a similar strategy, processing most user data on devices, which it argues protects privacy. 

Data transferred from Hanson Robotics machines, like its humanoid Sophia, are put through a mathematically irreversible transformation so that they cannot be de-anonymised. But Mr Hanson adds: “It will never be completely safe.”
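The “mathematically irreversible transformation” Mr Hanson describes is, in general terms, a one-way function. The sketch below is not Hanson Robotics’ actual pipeline — the company has not published its method — but illustrates the underlying idea with a keyed hash: an identifier is mapped to a fixed token that cannot feasibly be reversed without the secret key. The key name and values here are purely illustrative.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice this would be stored securely,
# never hard-coded or committed to source control.
SECRET_KEY = b"example-secret-key"

def anonymise(identifier: str) -> str:
    """Map an identifier to an irreversible token using a keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The same input always yields the same token, so records can still be
# linked to one another, but the original identifier cannot be recovered.
token = anonymise("user-1234")
assert token == anonymise("user-1234")
assert token != anonymise("user-5678")
```

A deterministic keyed hash like this preserves the ability to correlate records from the same source while making recovery of the raw identifier computationally infeasible, which matches the de-anonymisation resistance the article describes in broad strokes.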

Some researchers say autonomous robots are no more dangerous than other machines. They say fear has been fuelled by a misunderstanding about the limits of artificially intelligent machines. Humans are still physically stronger and more dangerous than most consumer robots and can also be manipulated, says David Wright, head of the robotics team for Deloitte’s global business services.

He adds that automation could even reduce the risk of hacking by removing human unpredictability and failure. “Individuals could do something off-script, but the bot will only do what it was programmed to do,” he says.

But sceptics argue the replacement of mechanical processes with digital systems has already led to an increase in attacks. Hackers have demonstrated that aircraft can be controlled through in-flight entertainment systems, while cars have been hacked through software.

According to Harri Valpola, founder of Curious AI, a Finnish start-up, robotics is simply the latest iteration of the hacking process.

“Back in the day there were physical [parts],” he says. “Now, more and more of these controls that humans have are not direct, they go through software — and it is always possible to hack.”

Copyright The Financial Times Limited 2024. All rights reserved.