Where were the clergy, the politicians, the historians, and so on? This should be a wake-up call, but so far it has not been.
The problem is that for those groups to show up, they’d first need to believe there is a problem at all, and this specific problem in particular. In my bubble, from more humanities-minded people, I have seen reactions such as “the tech people have scared themselves into a frenzy because they believe their own hype”, or even “they’re doing this on purpose to pass the message that the AI they sell is powerful”. The association with longtermists also seems harmful, in a “if they’re involved, there must be something shady about it” way.
I think that if one wants involvement from those parts of the public, there needs to be a way to overcome the barrier of denial of AI capabilities, and a reframing of the problem in more human terms (for example, developing dangerous AI as another form of corporations privatising profits and socialising costs, just as with many forms of environmental damage).
This is already a solved problem for smaller-stakes communication: send irrevocably costly signals.
For example, a startup founder mortgages their aging parents’ house and puts that money into their startup, showing skin in the game to attract serious investors.
And investors find that pretty reliable because they can call up the mortgage lender and confirm it’s real.
The question is what kind of irrevocably costly signals can be coordinated around, ones that the counter-parties can also reliably confirm. And how many people would actually pay the price.
One would imagine that companies saying “our product might kill everyone” and suggesting a six-month industry-wide halt to operations would be one such cost: companies are usually loath to admit the dangers of their products, let alone such extreme ones. Yet some people have worked themselves up into believing it’s only marketing.
Which companies have said this on the record?
None explicitly, but the argument that “AI companies drum up the idea that AI can kill everyone for marketing” has come up a lot since the moratorium open letter, given how many signatories are in one way or another involved in the industry. There are also plenty of examples of employees at least admitting that existential risk from AI is a thing.
That’s the issue, though: “the clergy, the politicians, the historians” have not heard of these people, so in their view it’s barely better than totally random people saying it.
If a major company said this on the record, that would be different, because everyone has heard of Microsoft or Google, and their corporate credibility, reputation, and so on are in aggregate worth literally millions of times more than even the most influential individual who has signed on so far.
I mean, the open letter made front-page news. And there have been a few mainstream news stories about the topic, like this one from CBS.
How does that relate to the perception by “the clergy, the politicians, the historians”?