This is great context. With Eliezer being brought up in White House Press Corps meetings, it looks like a flood of people might soon enter the AI risk discourse. Tyler Cowen has been making some pretty bad arguments on AI lately, but I thought this quote was spot on:
“This may sound a little harsh, but the rationality community, EA movement, and the AGI arguers all need to radically expand the kinds of arguments they are able to process and deal with. By a lot. One of the most striking features of the “six-month Pause” plea was how intellectually limited and non-diverse — across fields — the signers were. Where were the clergy, the politicians, the historians, and so on? This should be a wake-up call, but so far it has not been.”
Nearly all reasonable discussions of AI x-risk have taken place in the peculiar cultural bubble of rationality and EA. These past efforts could be multiplied by new interest from mainstream folks in AI, policy, philosophy, economics, and other fields. Or they could be misunderstood and discarded in favor of distractions that claim the mantle of AI safety. Hopefully we can find ways of communicating with new people in other disciplines that will lead to productive conversations on AI x-risk.
The problem is that for those groups to show up, they’d first need to think there is a problem, and that it’s this specific problem. In my bubble, from more humanities-minded people, I have seen reactions such as “the tech people have scared themselves into a frenzy because they believe their own hype”, or even “they’re doing this on purpose to send the message that the AI they sell is powerful”. The association with longtermists also seems harmful, in an “if they’re involved, there must be something shady about it” way.
I think that if one wants involvement from those parts of the public, there needs to be a way to overcome the barrier of denial about AI capabilities, and a reframing of the problem in more human terms (for example, how developing dangerous AI is another case of corporations privatising profits and socialising costs, just as with many forms of environmental damage).
This is already a solved problem for smaller-stakes communication: send irrevocably costly signals.
For example, a startup founder mortgages their aging parents’ house and puts that money into their startup to show skin in the game and attract serious investors.
And investors find that pretty reliable because they can call up the mortgage lender and confirm it’s real.
The question is what kinds of irrevocably costly signals can be coordinated around that counterparties can also reliably confirm, and how many folks would actually pay the price.
One would imagine that companies saying “our product might kill everyone” and suggesting an industry-wide halt to operations for six months would be one such cost: companies are usually loath to admit the dangers of their products, let alone such extreme ones. Yet some people have worked themselves up into believing that it’s only marketing.
Which companies have said this on the record?
None explicitly, but the argument that “AI companies drum up the idea that AI can kill everyone for marketing” has come up a lot since the moratorium open letter, given how many signatories are in one way or another involved in the industry. There are also plenty of examples of employees at least admitting that existential risk from AI is a thing.
That’s the issue, though: “the clergy, the politicians, the historians” have not heard of these people, so in their view it’s barely better than totally random people saying it.
If a major company said this on the record, that would be different, because everyone’s heard of Microsoft or Google, and their corporate credibility, reputation, etc. is in aggregate literally worth millions of times more than that of even the most influential individual who has signed on so far.
I mean, the open letter made front-page news, and there have been a few mainstream news stories about the topic, like this one from CBS.
How does that relate to the perception by “the clergy, the politicians, the historians”?
“One of the most striking features of the “six-month Pause” plea was how intellectually limited and non-diverse — across fields — the signers were.” [...] Nearly all reasonable discussions of AI x-risk have taken place in the peculiar cultural bubble of rationality and EA.
Some counter-examples that come to mind: Yoshua Bengio, Geoffrey Hinton, Stephen Hawking, Bill Gates, Steve Wozniak. Looking at the Pause Giant AI Experiments open letter now, I also see several signatories from fields like history and philosophy, and some signers identifying as teachers, priests, librarians, psychologists, etc.
(Not that I broadly disagree with your point that the discussion has been heavily weighted toward the rationality and EA communities.)
My clergy spouse wishes to remind people that there are some important religious events this week, so many clergy are rather busy. I’m quite hopeful that there will be a strong religious response to AI risk, as there is already to climate risk.