I think lots of people would say that all three examples you gave are more about signalling than about genuinely attempting to accomplish a goal.
I wouldn’t say that. Signalling, the way you seem to have used it, implies deception on their part, but each of these instances could just be a skill issue on their end: an inability to construct the right causal graph with sufficient resolution.
For what it’s worth, whatever this pattern is pointing at also applies to how badly most of us misjudged the AI box problem, i.e., that some humans by default would just let the damn thing out without needing to be persuaded.
How would one even distinguish between those who don’t actually care about solving the problem and only want to signal that they care, and those who care but are too stupid to realize that intent is not magic? I believe that both do exist in the real world.
I would probably start by charitably assuming stupidity, and try to explain. If the explanations keep failing mysteriously, I would gradually update towards them not actually wanting to achieve the declared goal.