For the general public considered as a unit, I think “We don’t know if AGI is near, but it could be.” is much too subtle. I don’t know how to handle that, but I think the right way to talk about it is “this is an environment that does not support enough nuance for this true statement to be heard, how do we handle that?”, not “pretend it can handle more than it can.”[1]
I think this is one reason doing mass advocacy is costly and should not be done lightly. There are a lot of advantages to staying in arenas that don’t render a wide swath of true things unsayable. But I don’t think it’s correct to totally rule out participating in those arenas either.
And yes, I do think the same holds for vegan advocacy in the larger world. I think simplifying to “veganism is totally healthy* (*if you do it right)” is fine enough for pamphlets and slogans, as long as it’s followed up with more nuanced information later and not used to suppress equally true information.
I think this is one reason doing mass advocacy is costly and should not be done lightly.
This is also why I deeply disagree with Eliezer’s choice to break open the Overton window, and with FLI’s choice to argue for a pause in order to open the Overton window: I believe the nuances of the situation, especially the ML nuances, are critically important for questions like “Will AI be safe?”, “Will AI generalize?”, and so on.
See my original answer for why I think picking such short messages is necessary. Tl;dr: most people aren’t paying attention and round off details, so in some contexts you have to communicate with the shortest possible message that can’t be rounded off further. Your proposed message will be rounded off to “we don’t know”, which seems unlikely to me to inspire the correct actions at this point in time.
You are turning this into a hypothetical scenario where your only communication options are “AGI is near” and “AGI is not near”.
“We don’t know if AGI is near, but it could be.” would seem short enough to me.