Could this article one day be written by a “robo-journalist”, edited by an AI assistant, and distributed by algorithms alone? 

Rapidly evolving artificial intelligence (AI) is unlikely to fully replace traditional media roles, but it is fast shaking up the industry. It is set to have a profound impact on the responsibilities and processes of journalists, broadcasters, creatives and advertisers, bringing much-desired speed and efficiency.

It is also becoming available just as media companies, particularly news outlets, are being forced to tighten their belts and enact waves of lay-offs: the rise of digital advertising groups such as Meta and Google has been one of the factors in a global decline in revenues at newspaper groups.

Many appear to see the technology as a solution: media groups have been ramping up their investment in AI even as they have been forced to make those cost cuts. Games company Blizzard Entertainment, entertainment group Walt Disney, and news organisation the New York Times are all putting money into it. In 2022, the global market for AI in media and entertainment was estimated at nearly $15bn and was projected to grow at a compound annual growth rate of 18.4 per cent between 2023 and 2030, according to Grand View Research.

“Whilst we’ve had AI explosions and ‘winters’ before, this time is different,” says Michelle Sally, partner at UK law firm TLT. “In a competitive industry, with various outlets — including [social media platform owners] Meta and ByteDance — changing how we consume news, media companies are utilising AI tools to stand out and remain relevant in an ever-changing market.” 

Use cases and jobs lost: journalism, film, gaming

In news media, the main use cases for AI include the generation of text and images, as well as editing and AI-assisted research. 

Experts suggest it will be most commonly deployed to optimise production processes and take on laborious tasks — for example, identifying the main points of an article to write a headline or using speech-to-text technology to save time in providing on-screen captions or translations. 
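
As a concrete illustration of the headline task, here is a minimal sketch assuming access to OpenAI's Python client; the model name and prompt are illustrative only, not a description of any newsroom's actual pipeline.

```python
# A minimal sketch of AI-assisted headline writing. Assumes the `openai`
# Python package and an OPENAI_API_KEY in the environment; the model name
# and prompt are illustrative, not any newsroom's real workflow.
from openai import OpenAI

client = OpenAI()

def suggest_headline(article_text: str) -> str:
    """Ask an LLM to distil an article's main points into one headline."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": ("You are a sub-editor. Reply with a single "
                         "headline of at most ten words.")},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content.strip()
```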

“Jobs that are more likely to be replaced or negatively impacted include editing and copywriting jobs, as generative AI is already relatively strong in these domains,” suggests Ravit Dotan, an AI ethics adviser and researcher. 

[Image: a journalist working on a computer in a newsroom. Caption: AI is prone to inaccuracies and lacks the human touch when generating news stories © Shutterstock]

Meanwhile, developments in generative AI and increasingly powerful large language models (LLMs) — systems trained on huge sets of data that can, among other things, generate humanlike text — mean that some media outlets have explored AI-powered news generation and distribution. Such “automated journalism” has the potential to disrupt the traditional journalist role.

“AI is already being widely used when it comes to article writing,” says Oliver Lock, associate at London lawyers Farrer & Co. He points to articles published by Sports Illustrated. In instances where structured data is available — for example, with sports statistics or financial results — AI can easily transform this into a news narrative, says Danielle Chazen of Verbit, an AI-based video and audio transcription company.
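
To make that concrete, the sketch below shows the template-driven approach long used in automated sports and results journalism; the fixture data and wording are invented for illustration, and commercial systems are considerably more sophisticated.

```python
# A minimal sketch of template-driven "automated journalism" from
# structured match data. The data and phrasing are invented examples.
def football_report(match: dict) -> str:
    home, away = match["home"], match["away"]
    if match["home_goals"] > match["away_goals"]:
        verdict = f"{home} beat {away}"
    elif match["home_goals"] < match["away_goals"]:
        verdict = f"{away} won away against {home}"
    else:
        verdict = f"{home} and {away} drew"
    return (f"{verdict} {match['home_goals']}-{match['away_goals']} "
            f"at {match['venue']} on {match['date']}, in front of "
            f"{match['attendance']:,} fans.")

print(football_report({
    "home": "Rovers", "away": "United",
    "home_goals": 2, "away_goals": 1,
    "venue": "Park Lane", "date": "Saturday",
    "attendance": 24500,
}))
# Rovers beat United 2-1 at Park Lane on Saturday, in front of 24,500 fans.
```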

However, where on-the-ground newsgathering or more complex and nuanced storytelling is required, AI is an imperfect solution — an aid to existing roles, rather than a replacement. Rajvinder Jagdev, partner at specialist IP litigation firm Powell Gilbert LLP, cites a segment broadcast recently by Sky News, in which a reporter tried using a generative AI tool to plan, script and create a TV news slot: the results were subpar and a team of human journalists was needed to finalise the work.

Jagdev believes that “in the short-to-medium term, AI tools are likely to be used to complement existing workflows rather than to work independently as ‘robo-journalists’ — but this may be where things end up in the future.”

In filmmaking, gaming and advertising, generative AI is increasingly present in creative processes. It is also being used to improve the customer experience, making personalised news recommendations or, in advertising, helping to show ads at the right time to the right user.

[Image: people playing online games in an internet cafe. Caption: Gaming, like filmmaking and advertising, is already using AI in its creative processes © Jeremy Suyker/Bloomberg]

Meta and Google already offer tools that can help marketers generate and better target ads at social media users in real time. There are fears that this could reduce or eliminate the need for advertising creatives, as well as agency staffers who advise on how to place ads effectively. 
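
As a rough sketch of the underlying idea, the toy epsilon-greedy selector below trades off showing the ad with the best click-through rate so far against occasionally trying alternatives; real platform tools model user and context features rather than a single running average, and this is not any platform's actual system.

```python
# A toy epsilon-greedy ad selector: mostly show the best-performing ad,
# occasionally explore others. Purely illustrative of automated
# real-time ad selection, not any platform's actual system.
import random

class AdSelector:
    def __init__(self, ad_ids, epsilon=0.1):
        self.epsilon = epsilon                      # exploration rate
        self.shows = {ad: 0 for ad in ad_ids}       # impressions per ad
        self.clicks = {ad: 0 for ad in ad_ids}      # clicks per ad

    def _ctr(self, ad):
        """Observed click-through rate for an ad (0 if never shown)."""
        return self.clicks[ad] / self.shows[ad] if self.shows[ad] else 0.0

    def choose(self):
        if random.random() < self.epsilon:          # explore occasionally
            return random.choice(list(self.shows))
        return max(self.shows, key=self._ctr)       # otherwise exploit

    def record(self, ad, clicked):
        self.shows[ad] += 1
        self.clicks[ad] += int(clicked)

selector = AdSelector(["ad_a", "ad_b", "ad_c"])
ad = selector.choose()
selector.record(ad, clicked=True)
```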

“Imagine a level of personalisation similar to TikTok but extended across a wider range of verticals and industries,” says Joel Hellermark, founder and chief executive of AI co-pilot and learning platform Sana. 

Challenges and jobs created: fact-checkers and ethics managers

But the implementation of AI in media still faces many challenges. Generative AI technology remains prone to “hallucinations”: that is, generating inaccurate or false information. The use of outdated or biased data sets to train LLMs can deepen the potential for misinformation. AI tools might also be wielded maliciously to create deepfake videos of people, or to manipulate opinion.

“If the problem of inaccuracy persists, there may be an increased demand for fact-checkers, and their work may be more challenging and important than ever as the internet gets flooded with more and more AI-generated false information,” says Dotan. “Companies and governments should require media outlets to mark their content in a way that readers can confirm what truly comes from them,” she adds. 
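
One way a publisher could implement such marking is with ordinary digital signatures, as in the minimal sketch below; it assumes the third-party Python `cryptography` package, and real industry schemes, such as C2PA content credentials, are far more elaborate.

```python
# A minimal sketch of provenance marking: the outlet signs each article
# with a private key and readers verify against its published public key.
# Assumes the third-party `cryptography` package; real schemes (for
# example, C2PA content credentials) are far more elaborate.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

publisher_key = Ed25519PrivateKey.generate()   # kept secret by the outlet
public_key = publisher_key.public_key()        # published for readers

article = "Rovers beat United 2-1 at Park Lane on Saturday.".encode()
signature = publisher_key.sign(article)

try:
    public_key.verify(signature, article)      # raises if content altered
    print("Article verified as the publisher's own.")
except InvalidSignature:
    print("Content does not match the publisher's signature.")
```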

Some argue that tie-ups with news organisations themselves could help solve the problem. “From a social interest perspective, it could make sense for [developers of] LLMs to arrive at some kind of agreement with large media publishers that would allow the software companies to use trusted content to train their systems,” says Lock. He notes that Le Monde and Prisa Media have struck one such deal with OpenAI, while the New York Times is suing the AI group to prevent it from training its LLMs on the newspaper’s data.

[Image: exterior of the New York Times office building. Caption: The New York Times is suing OpenAI to prevent it from training its large language models on the newspaper’s data © Leonardo Munoz/Corbis via Getty Images]

Generative AI’s evolution, meanwhile, raises authorship questions — and corresponding concerns about copyright protection and intellectual property ownership of AI-generated content. “As AI tools become more sophisticated and are able to generate content without any prompting, what happens then?” says Jagdev. “Is the author the person who first initialised the AI tool, or perhaps the creator of the AI tool — for example OpenAI, Microsoft, Google — or perhaps even the AI tool itself?”

Such grey areas may give rise to new roles, such as ethics managers responsible for ensuring that AI-generated content adheres to ethical norms, says Hellermark.

Rafi Azim-Khan, head of digital law, Europe, at lawyers Crowell & Moring, says AI represents an “existential threat” to the media sector. However, he adds, “it is fair to say, it will be an opportunity for those who adapt well to it and use it as a positive tool, and a threat to those who do not adapt, who are perpetrators or victims of its misuse, or who ignore the new powerful laws and sanctions being introduced”.

Copyright The Financial Times Limited 2024. All rights reserved.