I think Kaj has a good point. In a current paper I’m discussing the Fermi paradox and the possibility of self-replicating interstellar killing machines. Should I mention Saberhagen’s berserkers? In this case my choice was pretty easy, since beyond the basic concept his novels don’t contain that much of actual relevance to my paper, so I just credit him with the concept and move on.
The example of Metamorphosis of Prime Intellect seems deeper, since it would be an example of something that can be described entirely theoretically but becomes more vivid and clearly understandable in the light of a fictional example. But I suspect the problem here is the vividness: it would produce a bias towards increasing risk estimates for that particular problem as a side effect of making the problem itself clearer. Sometimes that might be worth it, especially if the analysis is strong enough to rein in wild risk estimates, but quite often it might be counterproductive.
There is also a variant of absurdity bias in referring to sf: many people tend to regard the whole argument as sf if there is an sf reference in it. I noticed that some listeners to my talk on berserkers indeed did not take the issue of whether there are civilization-killers out there very seriously, while they might be concerned about other “normal” existential risks (and of course, many existential risks are regarded as sf in the first place).
Maybe a rule of thumb is to limit fiction references to cases where 1) they say something directly relevant, 2) there is a valid reason for crediting them, and 3) the biasing effects do not reduce the ability to think rationally about the argument too much.