This is an audio transcript of the FT News Briefing podcast episode: ‘OpenAI’s “red team”’

Sonja Hutson
Good morning from the Financial Times. Today is Friday, April 14th, and this is your FT News Briefing.

[MUSIC PLAYING]

BP has fired up a massive new crude oil platform in the Gulf of Mexico. Global equities have bounced back from the banking crisis — but is it for real?

Katie Martin
We’re in this kind of dreamy netherworld where, you know, equity valuations still don’t make sense. And at some point something’s got to go snap.

Sonja Hutson
Plus, the artificial intelligence chatbot ChatGPT keeps getting smarter and more dangerous. I’m Sonja Hutson, in for Marc Filippino, and here’s the news you need to start your day.

[MUSIC PLAYING]

Oil giant BP yesterday started pumping crude oil through a new $9bn offshore platform in the US Gulf of Mexico. It’s the first one the company has opened in the region since the disastrous Deepwater Horizon explosion 13 years ago. And it’s a sign of BP’s decision to slow down its transition away from fossil fuels. Here’s our Houston correspondent, Justin Jacobs.

Justin Jacobs
Yes. So this project has been in the works for a number of years at BP, but it comes at a really consequential moment for the company, because what we’re seeing here is real evidence of the shift in the conversation around energy security and climate change in the aftermath of the war in Ukraine. And so for a company like BP, you know, that has meant in the short run there’s a lot more focus on producing fossil fuels than investing in clean energy, which it is still doing. But it has certainly shifted the balance between those two.

Sonja Hutson
Justin Jacobs is the FT’s Houston correspondent.

[MUSIC PLAYING]

Global equities have bounced back to the levels they were at before banking turmoil roiled markets in March. To find out whether all that volatility is really in the rear-view mirror, I’m joined now by the FT’s markets editor, Katie Martin. Hey, Katie.

Katie Martin
Hey, how you doing?

Sonja Hutson
Doing good. So let’s start with the good news. Markets seem to have calmed down since all the volatility following the failures of Silicon Valley Bank, Credit Suisse and other lenders. Are you breathing easier now?

Katie Martin
I am breathing easier. Yes. That was quite a moment that we had sort of, in the sort of early part of March. There was this whole flurry of bank failures on both sides of the Atlantic, and it just gave everyone these kind of horrible flashbacks to (chuckle) 2008. And people thought, oh, God, is this the big one? Is it coming back? It looks like, no. I mean, if you look at a chart of global stocks or of US stocks, it’s like, you know, banking crisis? What banking crisis? The obvious question is whether this is some sort of horrible calm before the storm. And literally every investor that I speak to, large or small, is super alert to the risks here and is very worried. That’s the pattern that we saw in 2008. And look, 2023 doesn’t look like 2008, but we all thought that the crisis was over in 2008 when JPMorgan bought Bear Stearns. And it took months for, you know, for markets to really unravel and for Lehman to happen. So people are just very alert to the fact that they feel like they’ve seen this movie before and that it could all go horribly wrong. But right now, the markets are telling you no need to panic.

Sonja Hutson
Mm-hmm. So what are investors worrying about? What could potentially be the other shoe that drops?

Katie Martin
Multiple shoes. How many shoes can you wear at once? (Laughter) They can all fall, they can all fall off at once. Inflation remains the number one kind of preoccupation of investors globally. But there’s a good number of second-round effects, if you like, that can still come out of the US regional banking crisis. It makes you think, well, you know, in the long run, what’s this going to mean for small businesses, for example? Are they going to have to shed some jobs? Are they going to have to cut back on investments that they were going to make if they’re not able to get hold of money as readily as they previously were, because their lender either doesn’t exist anymore or their lender that does still exist suddenly decided to really tighten up lending standards? So this is definitely a big risk hanging over the US economy and therefore over markets.

Sonja Hutson
So going back to inflation, how much of a risk is that right now?

Katie Martin
Inflation continues to pull back. But again, you know, investors are humble and they know that they’ve got inflation very, very wrong before, and they’re not necessarily prepared to call the all clear on this now. And one potential risk for equity investors is that one of the reasons why equities, like stock markets, have done pretty well since the US regional banks all got themselves into trouble is precisely because the bond market, the US government bond market in particular, went absolutely bonkers around the time that these banks failed. It started pricing in a huge number of interest rate cuts or, at the very least, an immediate pause to interest rate rises. If that has been an overreaction, and my guess is it has been and it’s going to unravel, then will that pull equities back down with it again, too? So that is definitely another thing that analysts are starting to talk about, that, you know, we’re in this kind of dreamy netherworld where, you know, equity valuations still don’t make sense. And at some point something’s got to go snap.

Sonja Hutson
Katie Martin is the FT’s markets editor. Thanks, Katie.

Katie Martin
Pleasure.

[MUSIC PLAYING]

Sonja Hutson
The newest version of the artificial intelligence chatbot ChatGPT is so powerful it’s creating new fears about how it can be used. So the company behind it has tried to counter those concerns by creating a team of experts to test out what the chatbot can do. The FT’s artificial intelligence editor Madhumita Murgia has been reporting on OpenAI’s “red team”. She joins me now. Hi, Madhu.

Madhumita Murgia
Hi.

Sonja Hutson
Can you tell me a little bit about this team that OpenAI put together to look into dangerous uses of GPT-4?

Madhumita Murgia
So I spent quite a long time reporting this piece that we have out on the “red team”, which is essentially a term for people who test something out in advance of it being released. But what they’re really trying to do is break it. They’re trying to push it to its limits, see all the bad things that it could say by asking it lots of different types of questions in their areas of expertise. And they were a really eclectic bunch. So there was a writing teacher at a community college, there was a chemist, you know, who worked on designing drugs, all these different types of people who look at safety in different aspects. And, you know, when they tested GPT-4 out, they were all really worried by the kind of outputs they were seeing.

Sonja Hutson
But now that GPT-4 is out in the real world, how do they feel about it? Do they feel like the company addressed the concerns that they had?

Madhumita Murgia
So I think now that it’s out in the real world, they continue to be concerned, because while some of what they fed back was taken on board and improved in the final version of GPT-4, they feel that there are lots of ways in which you can still elicit harmful responses, whether that’s biased responses that are discriminatory towards certain, you know, marginalised groups or genders. There are ways in which you can elicit how-tos for cybercrimes. The chemist that I spoke to, Andrew White, was really concerned because he managed to find a way to get GPT-4 not only to suggest an entirely new type of chemical that could be harmful to human health, but also to find where he could order it on the internet.

Sonja Hutson
So what makes this newest version of ChatGPT more concerning than previous versions?

Madhumita Murgia
So I don’t think it’s necessarily more nefarious. It’s just better, right? It’s more powerful. And all of the people I interviewed for my piece had tested both versions, and they just found that, you know, GPT-4 was capable of more nuance. So the real difference is that it’s a larger model, it’s a more powerful model. And along with that come more potential misuses as well.

Sonja Hutson
Madhu, what’s your takeaway here? How worried should we be about what GPT-4 can do?

Madhumita Murgia
So having spoken to quite a lot of people who’ve been thinking about the downsides, I have come away concerned about how this tool could be exploited. There’s a whole community of people out there, you know, the open-source community of coders and developers, who are kind of breaking open models like GPT-4, adapting them and changing them and, you know, connecting them to the internet, for example, and allowing them to perform autonomous actions, kind of breaking them out of the box that they currently sit in. And that really reduces our human oversight of these types of technologies. And then we don’t really know what it is they’re doing or who they’re communicating with. And that’s what we need to worry about. That’s already happening.

Sonja Hutson
Madhumita Murgia is the FT’s artificial intelligence editor. Thanks, Madhu.

Madhumita Murgia
Thank you.

[MUSIC PLAYING]

Sonja Hutson
You can read more on all these stories at FT.com. This has been your daily FT News Briefing. Make sure you check back tomorrow for the latest business news.
