If we could travel back in time and prevent information hazards such as “AI can be very powerful” from ever going mainstream, that would probably be a good thing to do. But we live in a world where ChatGPT is the fastest-growing app to have ever existed, and where the company behind it publicly states that it wants to build AGI because it can transform all of our lives. Billions are invested in this domain. This meme is already mainstream.
The meme “AI might be disastrous” is luckily also already mainstream. Over 80% of people worry that AI might cause catastrophic outcomes. The meme “Slowing down progress is good” is mainstream, too. Over 70% of people are in favor of slowing down AI development, and over 60% would support a ban on AI smarter than humans.
So we’re actually on the right track. Advocacy is the thing that got us here—not just talking to the in-crowd about the risks. Geoffrey Hinton quitting, the FLI pause letter, quotes from Elon—these are the things that ended up going mainstream and got people to worry more about all of this. It’s not just a small group of LW folks now.
But we’re still not there. We need the next memes to become mainstream, too:
There is not just one tiny risk; there is a large number of risks, some of which could be default outcomes. Even if you don’t believe in AI takeover, you should still consider all the other ways in which AGI could go horribly wrong.
Pausing is possible. It’s not easy, but there is nothing inevitable about a small number of AI labs racing towards AGI. It’s not as if molecules automatically assemble themselves into GPUs. We can and should regulate strictly, at an international level, and it needs to happen fast. Polls show that ordinary people already agree this should happen, but our politicians will not act unless they are thoroughly pushed.
Act. We should not wait for things to go wrong; we need to act. Speak up, be honest, and make people understand what needs to happen for us all to be safe. Most people here (myself included) are biased towards thinking, doubting, and discussing things. That is what got us to consider these risks in the first place, but it also means we’re very prone to not doing anything about them. IMO the largest risk we’re facing right now is dying from a lack of sensible action.
However, some forms of advocacy are net-harmful. Violent protests, for example, have been shown to diminish support for a cause. This is why we strictly organise peaceful protests, which have been shown to have positive effects on public support.
So if you ask me, PauseAI advocacy can be a great way to be productive and mitigate the very worst outcomes, but we’ll always need to consider the specific actions themselves.
Disclaimer: I’m the guy who founded PauseAI