I think the idea that we’ll be able to precisely steer government policy to achieve nuanced outcomes is dead on arrival—we’ve been failing at that forever. What’s in our favor this time is that there are many more ways to cripple progress than to accelerate it, so a push that is merely directionally right may be enough to slow things down (with a lot of collateral damage).
Our inner game policy efforts are already bearing fruit. We can’t precisely define exactly what will happen, but we can certainly push for more nuance via this route than we would be able to through the public outreach route.
I can see why you would be a lot more positive on advocacy if you thought that crippling progress is a way out of our current crisis. Unfortunately, I fear that would just result in AI being built by whichever country or actor cares the least about safety. So I think we need more nuance than this.