[Photo: From left, Chris Lehane, Sam Altman and Anna Makanju, part of OpenAI’s leadership team © FT montage/Bloomberg/dpa/AP/AFP/Getty Images]

OpenAI is building an international team of lobbyists as it seeks to influence the politicians and regulators who are increasing their scrutiny of powerful artificial intelligence.

The San Francisco-based start-up told the Financial Times it has expanded the number of staff on its global affairs team from three at the start of 2023 to 35. The company aims to build that up to 50 by the end of 2024.

The push comes as governments explore and debate AI safety legislation that risks constraining the start-up’s growth and the development of its cutting-edge models, which underpin products such as ChatGPT.

“We are not approaching this from a perspective of we just need to get in there and quash regulations . . . because we don’t have a goal of maximising profit; we have a goal of making sure that AGI benefits all of humanity,” said Anna Makanju, OpenAI’s vice-president of government affairs, referring to artificial general intelligence, or the point at which machines have cognitive abilities equivalent to humans.

Though it accounts for only a small share of OpenAI’s 1,200 employees, the global affairs department is the company’s most international unit, strategically positioned in locations where AI legislation is advanced, with staff stationed in Belgium, the UK, Ireland, France, Singapore, India, Brazil and the US.

However, OpenAI remains behind its Big Tech rivals in this outreach. According to public filings in the US, Meta spent a record $7.6mn engaging with the US government in the first quarter of this year, while Google spent $3.1mn and OpenAI $340,000. On AI-specific advocacy, Meta has named 15 lobbyists, Google five and OpenAI just two.

“Walking in the door, [ChatGPT had] 100mn users [but the company had] three people to do public policy,” said David Robinson, head of policy planning at OpenAI, who joined the company in May last year after a career in academia and consulting for the White House on its AI policy.

“It was literally to the point where there would be somebody high level who would want a conversation, and there was nobody who could pick up the phone,” he added.

OpenAI’s global affairs unit does not deal with some of the most fraught regulatory cases, however. That task goes to its legal team, which handles issues related to UK and US regulators’ review of its $13bn alliance with Microsoft; the US Securities and Exchange Commission investigation into whether chief executive Sam Altman misled investors during his brief ousting by the board in November; and the US Federal Trade Commission’s consumer protection probe into the company.

Instead, OpenAI’s lobbyists focus on the spread of AI legislation. The UK, the US and Singapore are among many countries dealing with how to govern AI and consulting closely with OpenAI and other tech companies on proposed regulations.

The company was involved in the discussions around the EU’s AI Act, approved this year and one of the most advanced pieces of legislation anywhere seeking to regulate powerful AI models.

OpenAI was among the AI companies that argued some of its models should not be classed as posing a “high risk” under early drafts of the act, a designation that would have subjected them to tougher rules, according to three people involved in the negotiations. Despite this push, the company’s most capable models will fall under the remit of the act.

OpenAI also argued against the EU’s push to examine all data given to its foundation models, according to people familiar with the negotiations.

The company told the FT that pre-training data — the data sets used to give large language models a broad understanding of language or patterns — should be outside the scope of regulation as it was a poor way of understanding an AI system’s outputs. Instead, it proposed the focus should be on post-training data used to fine-tune models for a particular task.
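
For illustration (not part of the FT’s reporting): in practice, “post-training” usually means fine-tuning an already pre-trained base model on a small, task-specific data set. Below is a minimal sketch of that step using OpenAI’s public fine-tuning API via the openai Python SDK; the file name and model choice are placeholders.

```python
# Minimal sketch of post-training (fine-tuning) via OpenAI's public API.
# Assumes the openai Python SDK (v1.x) and OPENAI_API_KEY set in the
# environment; "support_examples.jsonl" is a hypothetical file of
# chat-formatted examples, one JSON object per line, e.g.
# {"messages": [{"role": "user", "content": "..."},
#               {"role": "assistant", "content": "..."}]}
from openai import OpenAI

client = OpenAI()

# Upload the small, task-specific data set (the "post-training" data).
training_file = client.files.create(
    file=open("support_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Fine-tune on top of a base model whose broad language ability came
# from pre-training on vast corpora -- the data OpenAI argued should
# sit outside the scope of regulation.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```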

The EU decided that, for high-risk AI systems, regulators can still request access to the training data to ensure it is free of errors and bias.

Since the EU’s law was approved, OpenAI has hired Chris Lehane, who worked for President Bill Clinton and on Al Gore’s presidential campaign and was Airbnb’s policy chief, as vice-president of public works. Lehane will work closely with Makanju and her team.

OpenAI also recently poached Jakob Kucharczyk, a former competition lead at Meta. Sandro Gianella, head of European policy and partnerships, joined in June last year after working at Google and Stripe, while James Hairston, head of international policy and partnerships, joined from Meta in May last year.

The company was also recently involved in a series of discussions with policymakers in the US and other markets over its Voice Engine model, which can clone and create custom voices. OpenAI narrowed the model’s release plans after concerns were raised over the risk of its misuse in the context of this year’s global elections.

The team has been running workshops in countries facing elections this year, such as Mexico and India, and publishing guidance on misinformation. In autocratic countries, OpenAI grants one-to-one access to its models to “trusted individuals” in areas where it deems it is not safe to release the products.

One government official who worked closely with OpenAI said a different concern for the company was ensuring that any rules would be flexible in future and not become outdated with new scientific or technological advancements.

OpenAI hopes to address some hangovers from the social media age, which Makanju said has led to a “general distrust of Silicon Valley companies”.

“Unfortunately, people are often seeing AI with the same lens,” she added. “We spend a lot of time making sure people understand that this technology is quite different, and the regulatory interventions that make sense for it will be very different.”

However, some industry figures are critical of OpenAI’s lobbying expansion.

“Initially, OpenAI recruited people deeply involved in AI policy and specialists, whereas now they are just hiring run-of-the-mill tech lobbyists, which is a very different strategy,” said one person who has directly engaged with OpenAI on creating legislation.

“They’re just wanting to influence legislators in ways that Big Tech has done for over a decade.”

Robinson, OpenAI’s head of policy planning, said the global affairs team has more ambitious goals. “The mission is safe and broadly beneficial, and so what does that mean? It means creating laws that not only let us innovate and bring beneficial technology to people but also end up in a world where the technology is safe.”

Additional reporting by Madhumita Murgia in London

Letter in response to this article:

Task of AI regulation is to get the balance right / From Faisal Al Bannai, Adviser to the UAE President on Strategic Research and Advanced Technology Affairs, Dubai, UAE
