Sundar Pichai was on the Hard Fork podcast today and was asked directly by Kevin Roose and Casey Newton about the FLI letter as well as long-term AI risk. I pulled out some of Pichai’s answers here:
The FLI letter proposal:
kevin roose
[...] What did you think of that letter, and what do you think of this idea of slowing down the development of big models for six months?
sundar pichai
Look, in this area, I think it’s important to hear concerns. I mean, there are many thoughtful people, people who have thought about AI for a long time. I remember talking to Elon eight years ago, and he was deeply concerned about AI safety then. And I think he has been consistently concerned.
And I think there is merit to be concerned about it. So I think while I may not agree with everything that’s there in the details of how you would go about it, I think the spirit of it is worth being out there. I think you’re going to hear more concerns like that.
This is going to need a lot of debate. No one knows all the answers. No one company can get it right. We have been very clear about responsible AI — one of the first companies to put out AI principles. We issue progress reports.
AI is too important an area not to regulate. It’s also too important an area not to regulate well. So I’m glad these conversations are underway. If you look at an area like genetics in the ’70s, when the power of DNA and recombinant DNA came into being, there were things like the Asilomar Conference.
Paul Berg from Stanford organized it. And a bunch of the leading experts in the field got together and started thinking about voluntary frameworks as well. So I think all those are good ways to think about this.
Game theory:
kevin roose
And just one more thing on this letter calling for this six-month pause. Are you willing to entertain that idea? I know you haven’t committed to it, but is that something you think Google would do?
sundar pichai
So I think in the actual specifics of it, it’s not fully clear to me. How would you do something like that, right, today?
kevin roose
Well, you could send an email to your engineers and say, OK, we’re going to take a six-month break.
sundar pichai
No, no, no — but how would you do that if others aren’t doing it? So what does that mean? I’m talking about how you would effectively —
kevin roose
It’s sort of a collective action problem.
sundar pichai
To me at least there is no way to do this effectively without getting governments involved.
casey newton
Yeah.
Long-term AI x-risk:
kevin roose
Yeah, so on the question of AGI, or the more long-term concerns, what would you say is the chance that a more advanced AI could lead to the destruction of humanity?
sundar pichai
There is a spectrum of possibilities. And what you’re describing is in one of those possibility ranges, right? And so if you look at even the current debate about where AI is today or where LLMs are, you see people who are strongly opinionated on either side.
There are a set of people who believe these LLMs are just not that powerful. They are statistical models which are —
kevin roose
They’re just fancy autocomplete.
sundar pichai
Yes, that’s one way of putting it, right. And there are people who are looking at this and saying, these are really powerful technologies. You can see emergent capabilities — and so on.
We could hit a wall two iterations down. I don’t think so, but that’s a possibility. They could really progress in a two-year time frame. And so we have to really make sure we are vigilant and working with it.
One of the things that gives me hope about AI, like climate change, is it affects everyone. And so these are both issues that have similar characteristics in the sense that you can’t unilaterally get safety in AI. By definition, it affects everyone. So that tells me the collective will come over time to tackle all of this responsibly.
So I’m optimistic about it because I think people will care and people will respond. But the right way to do that is by being concerned about it. At least for me, I would never dismiss any of the concerns, and I’m glad people are taking it seriously. We will.
A reason for optimism:
kevin roose
I hear you saying that what gives you hope for the future when it comes to AI is that other people are concerned about it — that they’re looking at the risks and the challenges. So on one hand, you’re saying that people should be concerned about AI. On the other hand, you’re saying the fact that they are concerned about AI makes you less concerned. So which is —
sundar pichai
Sorry, I’m saying that the way you get things wrong is by not worrying about them. So if you don’t worry about something, you’re just going to get completely surprised. So to me, it gives me hope that there are a lot of people — important people — who are very concerned, and rightfully so.
Am I concerned? Yes. Am I optimistic and excited about all the potential of this technology? Incredibly. I mean, we’ve been working on this for a long time. But I think the fact that so many people are concerned gives me hope that we will rise over time and tackle what we need to do.