I’ll repeat here, then, the original question(s) which prompted that comment: how careful should one be to avoid generalizing from fictional evidence [described as a fallacy here, but I’d interpret it as a bias as well—which raises another potentially interesting question: how much overlap is there between fallacies and biases]? When writing about artificial intelligence, for instance, would it be acceptable to mention Metamorphosis of Prime Intellect as a fictional example of an AI whose “morality programming” breaks down when conditions shift to ones its designer had not anticipated? Or would it be better to avoid fictional examples entirely and stick purely to the facts?