If we could travel back in time and prevent information hazards such as “AI can be very powerful” from ever going mainstream, that would probably be a good thing to do. But we live in a world where ChatGPT is the fastest-growing app to have ever existed, where the company behind it publicly states it wants to build AGI because it can transform all of our lives. Billions are being invested in this domain. This meme is already mainstream.
The meme “AI might be disastrous” is luckily also already mainstream. Over 80% of people worry that AI might cause catastrophic outcomes. The meme “Slowing down progress is good” is mainstream, too. Over 70% of people are in favor of slowing down AI development. Over 60% would support a ban on AI smarter than humans.
So we’re actually on the right track. Advocacy is the thing that got us here—not just talking to the in-crowd about the risks. Geoffrey Hinton quitting, the FLI pause letter, quotes from Elon—these are the things that ended up going mainstream and got people to worry more about all of this. It’s not just a small group of LW folks now.
But we’re still not there. We need the next memes to become mainstream, too:
There is not just one tiny risk; there are many risks, some of which could be default outcomes. Even if you don’t believe in AI takeover, you should still consider all the other ways in which AGI could end horribly.
Pausing is possible. It’s not easy, but there is nothing inevitable about a small number of AI labs racing towards AGI. It’s not as if molecules automatically assemble themselves into GPUs. We can and should regulate strictly, on an international level, and it needs to happen fast. Polls show that normal people already agree that this should happen, but our politicians will not act unless they are thoroughly pushed.
Act. We should not wait for things to go wrong; we need to act. Speak up, be honest, and make people understand what needs to happen for us all to be safe. Most people here (myself included) are biased towards thinking, doubting, and discussing things. This is what got us to consider these risks in the first place, but it also means we’re very prone to not doing anything about it. IMO the largest risk we’re facing right now is dying due to a lack of sensible action.
However, some forms of advocacy are net-harmful. Violent protests, for example, have been shown to diminish support for a cause. This is why we strictly organise peaceful protests, which have been shown to have positive effects on public support.
So if you ask me, PauseAI advocacy can be a great way to be productive and mitigate the very worst outcomes, but we’ll always need to consider the specific actions themselves.
Disclaimer: I’m the guy who founded PauseAI
governments know now, though. there’s no changing that.
I don’t think it’s a binary; they could still pay less attention!
(plausibly there’s a bazillion things constantly trying to grab their attention, so they won’t “lock on” if we avoid bringing AI to their attention too much)
to clarify: governments have already put some of their agentic capability towards figuring out the most powerful ways to use ai, and there is plenty of documentation already as to what those are. the documentation is the fuel, and the “being used to design war devices” fire has already caught.
the question is how they respond. it’s not likely they’ll respond well, regardless, of course. I’m more worried about pause regulation itself changing the landscape in a way that causes net acceleration than about advocacy for it independent of the enactment of the regulation, which I expect to do relatively little. individual human words mean little next to the might of “hey chatgpt” suddenly being a thing that exists.
I don’t think governments have yet committed to trying to train their own state-of-the-art foundation models for military purposes, probably partly because they (sensibly) guess that they would not be able to keep up with the private sector. That means that government interest/involvement has relatively little effect on the pace of advancement of the bleeding edge.
It directly contributed to the founding and initial funding of DeepMind, OpenAI and Anthropic.
I think it was net harmful.
I think posts like this are net harmful: they discourage people from joining those doing good things without providing an alternative, and so waste energy on meaningless rumination that doesn’t culminate in any useful action.
Tamsin—interesting points.
I think it’s important for the ‘Pause AI’ movement (which I support) to help politicians, voters, and policy wonks understand that ‘power to do good’ is not necessarily correlated with ‘power to deter harm’ or ‘power to do indiscriminate harm’. So, advocating for caution (‘OMG AI is really dangerous!’) should not be read as a claim that AI offers ‘power to do good’ or ‘power to deter harm’, which could incentivize gov’ts to pursue AI despite the risks.
For example, nuclear weapons can’t really do much good (except maybe blasting incoming asteroids) and have some power to deter use of nuclear weapons by others, but they also have a lot of power to do indiscriminate harm (e.g. global thermonuclear war).
Whereas engineered pandemic viruses would have virtually no power to do good and no power to deter harm; they offer only power to do indiscriminate harm (e.g. a global pandemic).
Arguably, ASI might have a LOT more power to do indiscriminate harm than power to deter harm or power to do good.
If we can convince policy-makers that this is a reasonable viewpoint (ASI offers mostly indiscriminate harm, not good or deterrence), then it might be easier to achieve a helpful pause, and also to reduce the chance of an AI arms race.
I’m interested in what people think are the best ways of doing advocacy in a way that gives more weight to the risks than to the (supposed) benefits.
Talking about all the risks? Focusing on the expert polls instead of the arguments?