No, they’re trying to avoid generalizing from fictional evidence.
Why would you generalize from a cherry-picked example to begin with? The fact that you can find some pretty example to illustrate your point is hardly any evidence in its favor; and if your cherry-picked example turns out to be invalid, that reflects your lack of attention or clarity about how the example was supposed to work, not any actual difficulty in coming up with examples.
It seems to me like you’re agreeing that people are reading this fable as Bayesian evidence in favor of some view, even though it’s obviously cherry-picked and therefore shouldn’t be evidence in favor of anything even if it were true. In that case, why did you say “no”?
If his chosen example of this phenomenon is not in fact a good example of the phenomenon, then one might reasonably be less inclined to believe that the phenomenon is as common and as important as he suggests it is, and/or less inclined to believe what he says about the phenomenon.
Then why not ask him how prevalent he thinks the phenomenon actually is? I agree it’s unfortunate that his example isn’t as good as it might seem at first, but that’s honestly pretty weak evidence against his general point in the context of alignment.
I think people are reading this as intended to be Bayesian evidence in favour of some view.
So I probably shouldn’t have said “no, …”; my apologies.