Proceeding with caution: consultants and their clients are increasingly aware of the risks posed by AI tools © Jonathan Raa/NurPhoto via Getty Images

As companies across all sectors experiment with artificial intelligence, consultants are under pressure to master the new technology that their clients are investing in. And, given the technology’s ability to automate tasks, optimise systems and boost productivity, they say the AI era will also mark a strategic shift in their own ways of working.

“AI will likely make firms more efficient and profitable,” says Fiona Czerniawska, chief executive of consulting sector analyst Source Global, not least by helping them save time on data gathering and analysis. Then, there is the market opportunity in terms of the services consultants can provide. “There is interest from clients on having external support in areas like ethical questions and regulation and legal liability,” she notes.

Large consulting firms have already begun to develop tools and train staff on the use of AI.

McKinsey has rolled out Lilli, an in-house platform offering a “search and synthesis of the consultancy firm’s vast stores of knowledge”, including past case studies and research, freeing up time for consultants to spend with clients.

EY has put $1.4bn into its AI platform, which combines a proprietary large language model with other technologies and human expertise to help clients adopt AI — along with an in-house chat tool and training for staff. KPMG, meanwhile, is working with Microsoft to develop AI tools for its employees and clients.

So far, consultancies have — for the most part — been experimenting with these tools, says Czerniawska. But they will need to be clear about their policies for using AI, she stresses.

This could involve creating an “AI charter” or manual, for example, to guide staff on how to engage with AI in line with the firm’s ethics and values. Consultancies say they are building such frameworks to help their own organisations navigate the regulatory and reputation risks — and could do the same for clients.

It is becoming a more pressing task as governments move to strengthen their regulation of AI. The UK has sought to become a global leader in the technology and has launched a consultation on an AI regulation white paper, which sets out “future-proof” and “pro-innovation” regulatory proposals.

In the EU, in December, member states agreed on the terms of a landmark AI act that, when it comes into effect — probably in 2026 — will impose strict rules on tech companies whose tools are used in Europe.

In the US, President Joe Biden released an executive order on AI last year with a similar aim.

“The consulting industry isn’t always very transparent about the way it works but this is an area where being transparent [about how you use AI] will be very important,” says Czerniawska.

Using AI tools to search for information is one thing. But how upfront should firms be about the use of the technology to write client reports, for example? According to research by Source Global published in December, 35 per cent of clients said they wanted firms to be open about which parts of a report had been produced by AI. And when asked if they saw any regulatory issues when buying AI-generated work from consultancies, 43 per cent also cited concerns over data protection and privacy.

Companies are alive to the risk of flawed programs that could make costly errors. In January, for instance, parcel delivery company DPD’s chatbot composed a poem to a customer about how “useless” the delivery service was, before it was disabled.

Emile Naus, a UK partner at consultancy firm BearingPoint, says companies are currently testing a wide range of large language model applications.

Broadly, there are “generic” chatbots and “specialised” chatbots, he explains. The DPD blunder stemmed partly from the chatbot being trained on a wide information sample, as generic chatbots are; a specialised bot, trained on data selected in-house, is safer, says Naus.

Catriona Campbell, EY’s UK and Ireland client technology and innovation officer, says: “Like any new technology, AI products pose risks — many of which are not unique to AI, but are consistent across an organisation’s technology stack, such as third-party risks and cyber risks.”

The more AI-specific risks arise from the adaptive nature of the algorithmic techniques underlying the tools, she says, such as “when bias is introduced during the modelling stage through incorrect correlations drawn from data”.

To mitigate these, some consultancies are moving towards a principles-based approach. Campbell says EY’s strategy is to assess “security challenges and the varied regulatory landscape”, alongside ethical risks.

Copyright The Financial Times Limited 2024. All rights reserved.