OpenAI's CEO says The New York Times' content is not needed for AI training. Plus, he says the company's current alignment techniques will not scale to much more powerful systems.

Deborah Yao, Editor

January 18, 2024

10 Min Read

At a Glance

  • At the World Economic Forum in Davos, OpenAI CEO Sam Altman said AI will have to outperform humans by a big margin to earn trust.
  • On the NYT lawsuit: OpenAI does not need to train on the Times' content. It wants to link out to the articles when users query.
  • On his firing: It was 'ridiculous.' As AI gets closer to AGI, everyone gets +10 'crazy points' and the tension level rises.

OpenAI CEO Sam Altman, the AI community’s wunderkind du jour, today joined an esteemed conference panel of corporate heavyweights at the World Economic Forum in Davos, Switzerland.

Davos, a small alpine town, gets the global spotlight once a year when global leaders and corporate elites converge to discuss the world’s most pressing issues.

Here is Altman opining on a range of issues, edited for clarity:

Question: I think most people are worried about two kinds of opposite things about AI. One is, is it going to end humankind as we know it? And the other is, why can't AI drive my car?

Sam Altman: A very good sign about this new tool is that even with its very limited current capability, and its very deep flaws, people are finding ways to use it for great productivity gains, or other gains, and understand the limitations − so a system that is sometimes right, sometimes creative, often totally wrong. Actually, I do not want it to drive your car. But I am happy for it to help you brainstorm what to write about or help you with code. …

Why can’t AI drive my car? Well, there are great self-driving car systems (such as) Waymo around San Francisco; there are a lot of them and people love them. What I meant is that the sort of OpenAI-style model is good at some things, but not good at a life and death situation. …

AI has been somewhat demystified because people really use it now. And that is, I think, always the best way to walk forward with a new technology.

The thing that people worry about is the ability to trust AI. At what level can you say, ‘I'm really okay with the AI doing it, whether it is driving the car, writing the paper, filling out the medical form?’ … (Or) are we at some level just going to have to trust the black box?

I think humans are pretty forgiving of other humans making mistakes, but not really at all forgiving of computers making mistakes. So people who say things like, ‘self-driving cars are already safer than human-driven cars,’ (the car) probably has to be safer by a factor of, I would guess, between 10 and 100 before people will accept it, maybe even more. …

In some sense, the hardest part is when it is right 99.999% of the time and you let your guard down.

(Consider that) I actually cannot look in your brain and look at the 100 trillion synapses and try to understand what has happened in each one. … But what I can ask you to do is explain to me your reasoning … and I can decide if that sounds reasonable to me or not.

Our AI systems will also be able to do the same thing: They will be able to explain to us in natural language the steps from A to B and we can decide whether we think those are good steps.

What is left for human beings to do if the AI can out-analyze and out-calculate them? A lot of people then say that means we will be left with … our emotional intelligence. … Do you think AI could do that better than us as well?

Chess was one of the first ‘victims’ of AI − Deep Blue beat Kasparov, whenever that was, a long time ago. And all the commentators said, ‘this is the end of chess now that a computer can beat the human. No one is going to bother to watch chess again or play chess.’ (But) chess has never been more popular than it is right now. And if (players) cheat with AI, that is a big deal. And no one, or almost no one, watches two AIs play each other.

I admit (that the AI revolution compared to past technological disruptions) does feel different this time. General purpose cognition feels so close to what we all treasure about humanity. … (As such,) everyone's job will be different. … We will all operate at a little bit higher level of abstraction, we will all have access to a lot more capability. And we will still make decisions; they may trend more towards curation over time, but we will make decisions about what should happen in the world.

You have always taken a relatively benign view of AI. But people like Elon Musk, and sometimes Bill Gates, and other very smart people … are very, very worried. Why do you think they are wrong?

I do not think they are guaranteed to be wrong. … This is a technology that is clearly very powerful and we cannot say with certainty exactly what is going to happen. That is the case with all new major technological revolutions. But it is easy to imagine with this one, it is going to have massive effects on the world, and that it could go very wrong.

The technological direction that we have been trying to push it in is one that we think we can make safe, and that includes a lot of things. We believe in iterative deployment. We put this technology out into the world … so people get used to it. We have time, as a society, or institutions have time, to have these discussions to figure out how to regulate this, how to put some guardrails in place.

Can you technically put guardrails and layer a kind of constitution for an AI system? Would that work?

If you look at the progress from GPT-3 to GPT-4 about how well it can align itself to a set of values, we have made massive progress there. Now, there is a harder question than the technical one, which is, ‘who gets to decide what those values are, what the defaults are, and what the bounds are? How does it work in this country versus that country? What am I allowed to do with it versus not?’ That is a big societal question, one of the biggest.

But from a technological approach, there is room for optimism, although I do not think the alignment techniques we have now will scale all the way to much more powerful systems, (so) we are going to need to invent new things. I think it is good that people are afraid of the downsides of this technology. I think it is good that we are talking about it. I think it is good that we and others are being held to a high standard. …

I have a lot of empathy for the general discomfort of the world towards companies like us. … Why is our future in their hands? And … why are they doing this? Why did they get to do this? … I think the world now believes that the benefit here is so tremendous, that we should go do this.

But I think it is on us to figure out a way to get the input from society, about how we are going to make these decisions, not only about what the values of the system are, but what the safety thresholds are, and what kind of global coordination we need to ensure that stuff that happens in one country does not super negatively impact another.

The New York Times is suing you and claims that OpenAI uses its articles as input that allows it to make the language predictions that it makes. … Shouldn’t the people who wrote that get compensated?

We are open to training on The New York Times, but it is not a priority. We actually do not need to train on their data. This is something that people do not understand − any one particular training source does not move the needle for us that much.

What we want to do with content owners like The New York Times − and in the deals we have done with many other publishers, and we will do more over time − is, when a user says, ‘Hey ChatGPT, what happened at Davos today?’ we would like to display content, link out, and show the brands of places like The New York Times, or The Wall Street Journal, or any other great publication, and say, ‘here's what happened today, here's this real-time information,’ and then we would like to pay for that, we would like to drive traffic. But it is displaying information when the user queries, not using it to train the model.

Now we could also train the model on it, but it is not a priority. We are happy not to. … One thing that I expect to start changing is these models will be able to take smaller amounts of higher quality data during their training process and think harder about it and learn more. You do not need to read 2,000 biology textbooks to understand high school level biology. Maybe you need to read one, maybe three, but 2,000 … is certainly not going to help you much. And as our models begin to work more that way, we will not need the same massive amounts of training data.

But what we want in any case is to find new economic models that work for the whole world, including content owners. … If we are going to teach someone else physics, using your textbook and using your lesson plans, we would like to find a way for you to get paid for that. If you teach our models, if you help provide the human feedback, I would love to find new models for you to get paid based off the success of that.

So I think there is a great need for new economic models. The current conversation is focused a little bit at the wrong level. I think what it means to train these models is going to change a lot in the next few years.

You were involved in what is perhaps the most widely publicized boardroom scandal in recent decades. What lesson did you learn from that?

At some point, you just have to laugh. It just gets so ridiculous. …

We had known that our board had gotten too small, and we knew that we did not have the level of experience we needed. But last year was such a wild year for us in so many ways that we sort of just neglected it.

I think one more important thing, though, is as the world gets closer to AGI, the stakes, the stress, the level of tension, that is all going to go up. And for us, this was a microcosm of it, but probably not the most stressful experience we will ever face. One thing that I have observed for a while is that with every one step we take closer to very powerful AI, everybody's character gets +10 crazy points. It is a very stressful thing, and it should be, because we are trying to be responsible about very high stakes.

The best thing I learned throughout this, by far, was about the strength of our team. When the board first asked me, the day after firing me, if I was thinking about coming back, my immediate response was no, because I was just very (upset) by a lot of things about it. And then I quickly came to my senses and realized I did not want to see all the value we built get destroyed − all these wonderful people who put their lives into this, and all of our customers. But I did also know … the company would be fine without me.


About the Author(s)

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.

