An AI crash is our best bet for restricting AI

If there is a fast take-off, or corporations start earning billions on large models in the coming years, we’ll get locked into a trajectory toward extinction.

Now, I don’t think either will happen. I think the AI market will crash in the next few years. But my credence here is beside the point.

My point is that the period after an AI crash would be an unusually high-leverage window for finally getting robust, enforceable restrictions in place. Short of some ‘warning shot’, where a badly designed AI system causes or comes close to causing a catastrophe, I can’t think of a better opportunity.

So even if you think a crash is highly unlikely, preparing for the possibility is worth doing.

As funding dries up and the media turns against AI corporations, their executives will be distracted and lose political sway. For a short period, AI Safety and other concerned communities can pack a real punch.

That is when we can start enforcing all the laws already on the books (and put more on the books!) to prevent corporations from recklessly developing and releasing AI models.

Compute limits! Anti-data-scraping! Worker protections! Product liability!

It will be harder to put regulations in place against the risks of AGI specifically, because the public will have turned skeptical that AGI could be a thing.

But that’s okay. Enough people are sick of AI corporations and just want to restrict the heck out of them. Environmentalists, the creative industry, exploited workers and whistleblowers, experts fighting deepfakes and disinformation – each has a bone to pick.

There are plenty of robust restrictions we can build consensus around that would make it hard for multi-domain-processing models to be commercially developed.

I’m preparing for this moment.

If you are a funder, keep the possibility of an AI crash in mind. When the time comes, talk with me. Happy to share the information and funding leads I have.

Crossposted from EA Forum