I agree that there is a trade-off here, however:
a) Dumbing down the message will cost us support from ML engineers and researchers.
b) If the message is dumbed down too much, then the public is unlikely to create pressure towards the kinds of actions that will actually help as opposed to pressuring politicians to engage in shallow, signaling-driven responses.
I think the idea that we’re going to be able to precisely steer government policy to achieve nuanced outcomes is dead on arrival—we’ve been failing at that forever. What’s in our favor this time is that there are many more ways to cripple progress than to accelerate it, so it may be enough for the push to be simply directionally right for things to slow down (with a lot of collateral damage).
Our inner game policy efforts are already bearing fruit. We can’t define exactly what will happen, but we can certainly push for more nuance via this route than we would be able to through the public outreach route.
I can see why you would be a lot more positive on advocacy if you thought that crippling progress were a way out of our current crisis. Unfortunately, I fear that would just result in AI being built by whichever country or actor cares the least about safety. So I think we need more nuance than this.