Current, former OpenAI employees warn company not doing enough to control dangers of AI (2024)

Geoff Bennett:

A group of current and former OpenAI employees has issued a public letter warning that the company and its rivals are building artificial intelligence with undue risk and without sufficient oversight.

They're calling on leading artificial intelligence companies to be more transparent and provide stronger protections for whistle-blowers. It comes after OpenAI disbanded its team focused on long-term A.I. risks, and two leaders of that group resigned.

We're joined now by NPR technology correspondent Bobby Allyn, who's been covering all of these developments and more.

Bobby, thanks for being with us.

So tell us more about who is behind this open letter and what specifically they're asking for.

Bobby Allyn, Business and Technology Reporter, NPR: Yes, it's a number of current and former OpenAI employees.

I actually spoke to one of them just today. And what they're saying is really loud and clear. They think OpenAI is too aggressively pursuing profits and market share and is not focused on responsibly developing A.I. products.

And, remember, this is really important, Geoff, because OpenAI started as a nonprofit research lab. Its aim when it was founded was to develop A.I. products differently than, say, Meta or Microsoft or Amazon, which are these huge publicly traded companies that are competing with one another, right?

OpenAI was supposed to be a nonprofit answer to big tech. And these employees say, look, it looks like you're operating just like big tech. You're pushing out products too quickly and society just isn't ready for them.

Bobby Allyn:

It sounds pretty dire, doesn't it?

And it goes back to this kind of nerdy phrase that A.I. researchers like citing known as P(doom), P meaning what's the probability and doom being — well, we know what doom means. And they like bringing this up because the theory is, if A.I. gets really smart, if it becomes super intelligent and can exceed the skills and brainpower of humanity, maybe one day it will turn on us.

Now, again, this is kind of a theoretical academic exercise at this point, that these sorts of killer robots would be marching around cities and at war with humanity. I don't think we're anywhere near that. But they are underscoring this, because, look, that's sort of a hypothetical risk.

But we're seeing real risks play out every single day, whether it's the rise of deepfakes, whether it's A.I. being used to impersonate people, whether it's A.I. being used to supercharge dangerous misinformation around the Web. There are real risks that, according to these former employees, OpenAI doesn't care enough about and isn't doing much to mitigate.

Bobby Allyn:

Yes, OpenAI has publishers by the scruff of their neck.

OpenAI systems were trained on the corpus of the entire Internet, and that includes every large broadcaster and newspaper you can think of. And there, as you mentioned, are two camps emerging now. In the one camp are the publishers who say, you know what, let's strike licensing deals, let's try to bring some revenue in, let's play nice with OpenAI, because we have no choice. This is the future. OpenAI is moving ruthlessly in this direction. Let's try to make some money here.

And then you have newspapers like The New York Times who are in the other camp and have chosen the other direction, which is, no, no, no, OpenAI. You took all of our articles without consent, without payment. Now you're making lots of money off of the knowledge and reporting and original work that goes into, say, a New York Times article. We don't want to strike a licensing deal with you. In fact, your systems are based on material that was stolen from us, so you owe us a lot of money and we do not want to play nice.

So, the way it's really going to shake out, I think, is, you know, some publishers are striking these deals. Others will join The New York Times' crusade to go after OpenAI. But it's a really, really interesting time, because, no matter what, they have this material, right, Geoff?

I mean, ChatGPT, every time you ask it a question, it is spitting out answers that are based in part on New York Times' articles, Associated Press articles, NPR articles, you name it. So that's just the future. So the question is, do you strike a deal or do you take them to court? And we're just seeing different sort of strategies here.
